questioning FP
Sun, 2011-12-11, 19:21
Re: Re: questioning FP
Laziness is not "handy" as much as it is critical to getting anything
useful done. I have no idea why you think "strictness is the only way to
get reasonable performance", except perhaps, lots of people say it
(therefore it is true?).
On 10/12/11 20:09, Jesper Nordenberg wrote:
> While laziness is handy is some cases, I just don't think it's a good
> default on todays computers where strictness is the only way to get
> reasonable performance.
>
> Given unlimited memory and computing power things comes in a new
> perspective.
>
> /Jesper Nordenberg
>
> nicolas.oury@gmail.com skrev 2011-12-10 10:16:
>> On Tue, Dec 6, 2011 at 6:52 AM, Tony Morris wrote:
>>> I had lunch with SPJ today. Laziness is the correct default.
>>>
>>
>> The problem with the lazy by default choice is that you don't have a
>> few interesting types, like Lists...
>>
>> In Haskell, [a] is a type of a coList. It is not bad per se but it
>> breaks a bit of modularity.
>> A function of type f :: a -> [a] can either return a finite or
>> infinite coList.
>>
>> Meaning that if somebody else wrote that function, I have to trust
>> documentation, ask him or read the implementation
>> before writing length(f "foo").
>>
>> This is because the type system does not allow to restrict [a] to only
>> contain actual lists....
>>
>> This link calls that paucity of types and gives interesting points on
>> the subject:
>>
>> http://existentialtype.wordpress.com/2011/04/24/the-real-point-of-laziness/
>>
>>
>> Best regards,
>>
>> Nicolas.
>>
>
>
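Nicolas's point about [a] carries over directly to Scala's Stream, which likewise admits both finite and infinite values under a single type. A small sketch (the values and names are mine) of why the type alone tells the caller nothing:

```scala
// Two values of the same type, Stream[Int]; the type does not reveal
// which one is finite.
val finite: Stream[Int]   = Stream(1, 2, 3)
val infinite: Stream[Int] = Stream.from(0)   // 0, 1, 2, 3, ...

// Demanding a bounded prefix is safe for either value:
val prefix = infinite.take(3).toList   // List(0, 1, 2)

// But anything that must traverse the whole value, like length, only
// terminates on the finite one -- infinite.length would never return.
val n = finite.length   // 3
```

This is exactly the modularity problem described above: before calling `length` on someone else's `Stream`-returning function, you have to know more than its type tells you.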
Sun, 2011-12-11, 20:01
Re: Re: questioning FP
On Sat, Dec 10, 2011 at 2:49 PM, Tony Morris wrote:
> Laziness is not "handy" as much as it is critical to getting anything
> useful done.
Could you explain why?
Sun, 2011-12-11, 20:21
Re: questioning FP
I certainly don't think laziness is a requirement to getting anything
useful done. Do you also think it's impossible to get anything useful
done in a total language?
As for performance, please read
http://www.haskell.org/haskellwiki/Performance/Strictness. A lazy
language without strictness analysis would have terrible performance.
/Jesper Nordenberg
Tony Morris skrev 2011-12-10 15:49:
> Laziness is not "handy" as much as it is critical to getting anything
> useful done. I have no idea why you think "strictness is the only way to
> get reasonable performance", except perhaps, lots of people say it
> (therefore it is true?).
>
> On 10/12/11 20:09, Jesper Nordenberg wrote:
>> While laziness is handy is some cases, I just don't think it's a good
>> default on todays computers where strictness is the only way to get
>> reasonable performance.
>>
>> Given unlimited memory and computing power things comes in a new
>> perspective.
>>
>> /Jesper Nordenberg
>>
>> nicolas.oury@gmail.com skrev 2011-12-10 10:16:
>>> On Tue, Dec 6, 2011 at 6:52 AM, Tony Morris wrote:
>>>> I had lunch with SPJ today. Laziness is the correct default.
>>>>
>>>
>>> The problem with the lazy by default choice is that you don't have a
>>> few interesting types, like Lists...
>>>
>>> In Haskell, [a] is a type of a coList. It is not bad per se but it
>>> breaks a bit of modularity.
>>> A function of type f :: a -> [a] can either return a finite or
>>> infinite coList.
>>>
>>> Meaning that if somebody else wrote that function, I have to trust
>>> documentation, ask him or read the implementation
>>> before writing length(f "foo").
>>>
>>> This is because the type system does not allow to restrict [a] to only
>>> contain actual lists....
>>>
>>> This link calls that paucity of types and gives interesting points on
>>> the subject:
>>>
>>> http://existentialtype.wordpress.com/2011/04/24/the-real-point-of-laziness/
>>>
>>>
>>> Best regards,
>>>
>>> Nicolas.
>>>
>>
>>
>
>
Sun, 2011-12-11, 20:41
Re: Re: questioning FP
I seem to remember Conor McBride saying the evaluation strategy
should be an effect reflected in the types. That could be the ideal
solution.
Paul
On Sun, Dec 11, 2011 at 20:15, Jesper Nordenberg wrote:
> I certainly don't think laziness is a requirement to getting anything useful
> done. Do you also think it's impossible to get anything useful done in a
> total language?
>
> As for performance, please read
> http://www.haskell.org/haskellwiki/Performance/Strictness. A lazy language
> without strictness analysis would have terrible performance.
>
> /Jesper Nordenberg
>
> Tony Morris skrev 2011-12-10 15:49:
>
>> Laziness is not "handy" as much as it is critical to getting anything
>> useful done. I have no idea why you think "strictness is the only way to
>> get reasonable performance", except perhaps, lots of people say it
>> (therefore it is true?).
>>
>> On 10/12/11 20:09, Jesper Nordenberg wrote:
>>>
>>> While laziness is handy is some cases, I just don't think it's a good
>>> default on todays computers where strictness is the only way to get
>>> reasonable performance.
>>>
>>> Given unlimited memory and computing power things comes in a new
>>> perspective.
>>>
>>> /Jesper Nordenberg
>>>
>>> nicolas.oury@gmail.com skrev 2011-12-10 10:16:
>>>>
>>>> On Tue, Dec 6, 2011 at 6:52 AM, Tony Morris
>>>> wrote:
>>>>>
>>>>> I had lunch with SPJ today. Laziness is the correct default.
>>>>>
>>>>
>>>> The problem with the lazy by default choice is that you don't have a
>>>> few interesting types, like Lists...
>>>>
>>>> In Haskell, [a] is a type of a coList. It is not bad per se but it
>>>> breaks a bit of modularity.
>>>> A function of type f :: a -> [a] can either return a finite or
>>>> infinite coList.
>>>>
>>>> Meaning that if somebody else wrote that function, I have to trust
>>>> documentation, ask him or read the implementation
>>>> before writing length(f "foo").
>>>>
>>>> This is because the type system does not allow to restrict [a] to only
>>>> contain actual lists....
>>>>
>>>> This link calls that paucity of types and gives interesting points on
>>>> the subject:
>>>>
>>>>
>>>> http://existentialtype.wordpress.com/2011/04/24/the-real-point-of-laziness/
>>>>
>>>>
>>>> Best regards,
>>>>
>>>> Nicolas.
>>>>
>>>
>>>
>>
>>
>
>
Sun, 2011-12-11, 20:51
Re: Re: questioning FP
On Sun, Dec 11, 2011 at 11:35 AM, Paul Brauner wrote:
> I seem to remember Connor Mc Bride saying the evaluation strategy
> should be an effect reflected in the types. That could be the ideal
> solution.
hm. i seem to remember a few discussions on
comp.lang.{haskell,functional} where people bitch about the fact that
turning on various flavours of optimization in ghc leads to widely
different runtime behaviour, mostly in non-obvious, non-predictable
ways.
so i wonder how that would be reflected in the types.
and how you'd refactor your entire program if you decided you wanted
to dial in a different optimization trade-off.
?
Sun, 2011-12-11, 23:31
Re: questioning FP
It basically boils down to whether bottom is a member of all types.
/Jesper Nordenberg
Paul Brauner skrev 2011-12-11 20:35:
> I seem to remember Connor Mc Bride saying the evaluation strategy
> should be an effect reflected in the types. That could be the ideal
> solution.
>
> Paul
>
> On Sun, Dec 11, 2011 at 20:15, Jesper Nordenberg wrote:
>> I certainly don't think laziness is a requirement to getting anything useful
>> done. Do you also think it's impossible to get anything useful done in a
>> total language?
>>
>> As for performance, please read
>> http://www.haskell.org/haskellwiki/Performance/Strictness. A lazy language
>> without strictness analysis would have terrible performance.
>>
>> /Jesper Nordenberg
>>
>> Tony Morris skrev 2011-12-10 15:49:
>>
>>> Laziness is not "handy" as much as it is critical to getting anything
>>> useful done. I have no idea why you think "strictness is the only way to
>>> get reasonable performance", except perhaps, lots of people say it
>>> (therefore it is true?).
>>>
>>> On 10/12/11 20:09, Jesper Nordenberg wrote:
>>>>
>>>> While laziness is handy is some cases, I just don't think it's a good
>>>> default on todays computers where strictness is the only way to get
>>>> reasonable performance.
>>>>
>>>> Given unlimited memory and computing power things comes in a new
>>>> perspective.
>>>>
>>>> /Jesper Nordenberg
>>>>
>>>> nicolas.oury@gmail.com skrev 2011-12-10 10:16:
>>>>>
>>>>> On Tue, Dec 6, 2011 at 6:52 AM, Tony Morris
>>>>> wrote:
>>>>>>
>>>>>> I had lunch with SPJ today. Laziness is the correct default.
>>>>>>
>>>>>
>>>>> The problem with the lazy by default choice is that you don't have a
>>>>> few interesting types, like Lists...
>>>>>
>>>>> In Haskell, [a] is a type of a coList. It is not bad per se but it
>>>>> breaks a bit of modularity.
>>>>> A function of type f :: a -> [a] can either return a finite or
>>>>> infinite coList.
>>>>>
>>>>> Meaning that if somebody else wrote that function, I have to trust
>>>>> documentation, ask him or read the implementation
>>>>> before writing length(f "foo").
>>>>>
>>>>> This is because the type system does not allow to restrict [a] to only
>>>>> contain actual lists....
>>>>>
>>>>> This link calls that paucity of types and gives interesting points on
>>>>> the subject:
>>>>>
>>>>>
>>>>> http://existentialtype.wordpress.com/2011/04/24/the-real-point-of-laziness/
>>>>>
>>>>>
>>>>> Best regards,
>>>>>
>>>>> Nicolas.
>>>>>
>>>>
>>>>
>>>
>>>
>>
>>
>
Mon, 2011-12-12, 11:11
Re: Re: questioning FP
The simplest example I know of in the standard library is the Stream.foldRight function, which is too strict in its arguments and its arguments' arguments. As a result, you cannot write map (and a zillion other useful functions) using foldRight; instead you have to repeat the body of foldRight in the map implementation. This code repetition is solely a consequence of strictness and its adverse impact on composition. Unfortunately, the situation becomes worse as I further "detrivialise" the example.
On Sat, Dec 10, 2011 at 2:49 PM, Tony Morris <tonymorris@gmail.com> wrote:
> Laziness is not "handy" as much as it is critical to getting anything
> useful done.
Could you explain why?
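Tony's foldRight complaint can be made concrete with a sketch. Below, `foldRightLazy` and `mapViaFold` are hypothetical names, not the standard library's API, and the rest of the fold is passed as an explicit thunk where a by-name parameter would read more naturally:

```scala
// Hypothetical lazier foldRight for Stream: the rest of the fold is a
// thunk, so the combining function decides whether the tail is ever
// evaluated at all.
def foldRightLazy[A, B](s: Stream[A])(z: => B)(f: (A, () => B) => B): B =
  if (s.isEmpty) z else f(s.head, () => foldRightLazy(s.tail)(z)(f))

// With that signature, map no longer has to duplicate the fold's body,
// and it even works on infinite streams, because #:: takes its tail lazily.
def mapViaFold[A, B](s: Stream[A])(g: A => B): Stream[B] =
  foldRightLazy(s)(Stream.empty[B])((a, rest) => g(a) #:: rest())

val squares      = mapViaFold(Stream.from(1))(x => x * x)
val squarePrefix = squares.take(3).toList   // only the demanded prefix runs
```

With the standard, strict `foldRight`, the second argument would be fully evaluated before `f` ever ran, so this definition of `map` would loop forever on `Stream.from(1)`.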
Mon, 2011-12-12, 11:21
Re: Re: questioning FP
Yes, Conor and I have discussed this on IRC. I would subscribe to this newsletter if you wrote it. I don't know a way of reflecting evaluation in the types in both a useful and meaningful way. Are you going to be the smart guy whose idea I steal!? :)
On Dec 12, 2011 6:35 AM, "Paul Brauner" <polux2001@gmail.com> wrote:
I seem to remember Connor Mc Bride saying the evaluation strategy
should be an effect reflected in the types. That could be the ideal
solution.
Paul
On Sun, Dec 11, 2011 at 20:15, Jesper Nordenberg <megagurka@yahoo.com> wrote:
> I certainly don't think laziness is a requirement to getting anything useful
> done. Do you also think it's impossible to get anything useful done in a
> total language?
>
> As for performance, please read
> http://www.haskell.org/haskellwiki/Performance/Strictness. A lazy language
> without strictness analysis would have terrible performance.
>
> /Jesper Nordenberg
>
> Tony Morris skrev 2011-12-10 15:49:
>
>> Laziness is not "handy" as much as it is critical to getting anything
>> useful done. I have no idea why you think "strictness is the only way to
>> get reasonable performance", except perhaps, lots of people say it
>> (therefore it is true?).
>>
>> On 10/12/11 20:09, Jesper Nordenberg wrote:
>>>
>>> While laziness is handy is some cases, I just don't think it's a good
>>> default on todays computers where strictness is the only way to get
>>> reasonable performance.
>>>
>>> Given unlimited memory and computing power things comes in a new
>>> perspective.
>>>
>>> /Jesper Nordenberg
>>>
>>> nicolas.oury@gmail.com skrev 2011-12-10 10:16:
>>>>
>>>> On Tue, Dec 6, 2011 at 6:52 AM, Tony Morris<tmorris@tmorris.net>
>>>> wrote:
>>>>>
>>>>> I had lunch with SPJ today. Laziness is the correct default.
>>>>>
>>>>
>>>> The problem with the lazy by default choice is that you don't have a
>>>> few interesting types, like Lists...
>>>>
>>>> In Haskell, [a] is a type of a coList. It is not bad per se but it
>>>> breaks a bit of modularity.
>>>> A function of type f :: a -> [a] can either return a finite or
>>>> infinite coList.
>>>>
>>>> Meaning that if somebody else wrote that function, I have to trust
>>>> documentation, ask him or read the implementation
>>>> before writing length(f "foo").
>>>>
>>>> This is because the type system does not allow to restrict [a] to only
>>>> contain actual lists....
>>>>
>>>> This link calls that paucity of types and gives interesting points on
>>>> the subject:
>>>>
>>>>
>>>> http://existentialtype.wordpress.com/2011/04/24/the-real-point-of-laziness/
>>>>
>>>>
>>>> Best regards,
>>>>
>>>> Nicolas.
>>>>
>>>
>>>
>>
>>
>
>
Mon, 2011-12-12, 12:41
Re: Re: questioning FP
On Sun, Dec 11, 2011 at 10:22 PM, Tony Morris wrote:
> The simplest example I know of in the standard library is Stream.foldRight
> function, which is too strict in its arguments and its argument's arguments,
> resulting in consequences like, you cannot write map (and a zillion other
> useful functions) using foldRight, requiring you to instead repeat the body
> of foldRight in the map implementation. This code repetition is solely a
> consequence of strictness and its adverse impact on composition.
> Unfortunately, the situation becomes worse as I further "detrivialise" the
> example.
Interesting point.
Of course, with this kind of thing, it is always hard to separate
a fundamental problem from a mere difference in habits.
For example, map on Stream can be defined by an unfold (and it seems
to me to make more sense to define it that way).
But maybe this means that having a by-name variant of foldRight in the
library is a good idea for a strict language.
Best regards,
Nicolas.
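Nicolas's unfold formulation might look like the following sketch, where `unfold` and `mapU` are hypothetical helpers of mine, not standard library functions:

```scala
// A minimal unfold: grow a Stream from a seed. step returns
// Some((element, nextSeed)) to continue, or None to stop.
def unfold[S, A](seed: S)(step: S => Option[(A, S)]): Stream[A] =
  step(seed) match {
    case Some((a, next)) => a #:: unfold(next)(step)   // tail stays lazy
    case None            => Stream.empty
  }

// map expressed as an unfold over the not-yet-consumed remainder.
def mapU[A, B](s: Stream[A])(g: A => B): Stream[B] =
  unfold(s)(rest =>
    if (rest.isEmpty) None else Some((g(rest.head), rest.tail)))

val doubled = mapU(Stream.from(1))(_ * 2).take(3).toList
```

Because `#::` defers its tail, this definition also handles infinite streams, which is what makes unfold a natural producer for coinductive data in a strict language.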
Mon, 2011-12-12, 12:51
Re: Re: questioning FP
I think, more generally, Hughes makes the point in "Why Functional
Programming Matters" that laziness brings modularity. The foldR
example is one instance of that general principle, if I'm not mistaken.
Paul
On Mon, Dec 12, 2011 at 12:34, nicolas.oury@gmail.com
wrote:
> On Sun, Dec 11, 2011 at 10:22 PM, Tony Morris wrote:
>> The simplest example I know of in the standard library is Stream.foldRight
>> function, which is too strict in its arguments and its argument's arguments,
>> resulting in consequences like, you cannot write map (and a zillion other
>> useful functions) using foldRight, requiring you to instead repeat the body
>> of foldRight in the map implementation. This code repetition is solely a
>> consequence of strictness and its adverse impact on composition.
>> Unfortunately, the situation becomes worse as I further "detrivialise" the
>> example.
>
> Interesting point.
> Of course, with this kind of things, it is always hard to separate
> what is a fundamental problem from
> different habits.
>
> For example, map on Stream can be defined by an unfold. (and it seems
> to me to make more sense to define it that way).
>
> But maybe this means that having a by-name variant of foldRight in the
> library is a good idea for a strict language.
>
> Best regards,
>
> Nicolas.
Mon, 2011-12-12, 13:01
Re: Re: questioning FP
On Mon, Dec 12, 2011 at 12:44 PM, Paul Brauner <polux2001@gmail.com> wrote:
> I think more generally Hughes makes the point in "Why functional
> programming matters" that lazyness brings modularity. The foldR
> example is one instance of that general principle if I'm not mistaken.
Yes. But note that you can do Hughes' example even with plain iterators! When he wrote that, it was the high tide of laziness; everybody believed you needed to be lazy. Now the pendulum has swung back.
Cheers
-- Martin
Mon, 2011-12-12, 13:31
Re: Re: questioning FP
On Mon, Dec 12, 2011 at 11:44 AM, Paul Brauner wrote:
> I think more generally Hughes makes the point in "Why functional
> programming matters" that lazyness brings modularity. The foldR
> example is one instance of that general principle if I'm not mistaken.
>
> Paul
>
If I remember correctly, most of the article is about separating the generator
from the selector of generated data.
This can be done in a strict language that lets you define coinductive types.
Other solutions have also been found for the same kind of
problem; delimited continuations, for example.
Best,
Nicolas.
Mon, 2011-12-12, 13:41
Re: Re: questioning FP
On Mon, Dec 12, 2011 at 13:23, nicolas.oury@gmail.com
wrote:
> On Mon, Dec 12, 2011 at 11:44 AM, Paul Brauner wrote:
>> I think more generally Hughes makes the point in "Why functional
>> programming matters" that lazyness brings modularity. The foldR
>> example is one instance of that general principle if I'm not mistaken.
>>
>> Paul
>>
>
> If I remember well, most of the article is about separating generator
> and selector on generated data.
> This can be done in a strict language that allows to define coinductive types.
Yes, sure, just as you can have foldR generate a stream. But I think
the point was that if you get a function from someone who didn't think of
using a coinductive output type, then you're screwed (it's like
"virtual" not being the default in C++).
>
> Some other solutions have also been found for the same kind of
> problem, delimited continuations, for example.
>
Yeah, that one could do I guess.
> Best,
>
> Nicolas.
Mon, 2011-12-12, 19:51
Re: Re: questioning FP
Dear Lazily Strict and Strictly Lazy alike,
i love lazy for expressiveness. i'm less excited about it from a computational complexity perspective. i nearly always screw up my estimation of the computational complexity of an algorithm i express lazily. However, i also note that complexity seems brittle in the face of laziness: very small changes can lead to huge (and unexpected) differences in resource consumption. Anecdotally, i've noticed over the years that Haskell mailing list conversations regarding unexpected turns in complexity nearly always need recourse to a guru, rather than simple step-by-step reasoning. It gives me the impression that there is a tension amongst intuition, complexity and laziness.
Best wishes,
--greg
On Mon, Dec 12, 2011 at 4:33 AM, Paul Brauner <polux2001@gmail.com> wrote:
On Mon, Dec 12, 2011 at 13:23, nicolas.oury@gmail.com
<nicolas.oury@gmail.com> wrote:
> On Mon, Dec 12, 2011 at 11:44 AM, Paul Brauner <polux2001@gmail.com> wrote:
>> I think more generally Hughes makes the point in "Why functional
>> programming matters" that lazyness brings modularity. The foldR
>> example is one instance of that general principle if I'm not mistaken.
>>
>> Paul
>>
>
> If I remember well, most of the article is about separating generator
> and selector on generated data.
> This can be done in a strict language that allows to define coinductive types.
Yes sure, as well as you can have foldR generate a stream. But I think
the point was if you get a fonction from someone that didn't think of
using a coinductive output type, then you're screwed (it's like
"virtual" not by default in C++).
>
> Some other solutions have also been found for the same kind of
> problem, delimited continuations, for example.
>
Yeah, that one could do I guess.
> Best,
>
> Nicolas.
--
L.G. Meredith
Managing Partner
Biosimilarity LLC
7329 39th Ave SW, Seattle, WA 98136
+1 206.650.3740
http://biosimilarity.blogspot.com
Wed, 2011-12-14, 15:21
Re: Re: questioning FP
What you call a guru, I call "a regular person who is able to shift their approach to problem-solving readily and effectively such that its application yields a result without external hindrances e.g. tools."
Yes it may be hard, but the benefits are there even if they are not easily recognised. The solution is to harden up, not throw your hands in the air.
On Dec 13, 2011 4:45 AM, "Meredith Gregory" <lgreg.meredith@gmail.com> wrote:
Dear Lazily Strict and Strictly Lazy alike,
i love lazy for expressiveness. i'm less excited about it from a computational complexity perspective. i nearly always screw up my estimation the computational complexity of an algorithm i express lazily. However, i also note that complexity seems brittle in the face of laziness. Very small changes can lead to huge (and unexpected) differences in resource consumption. Anecdotally, i've noticed over the years that Haskell mailing list conversations regarding unexpected turns in complexity nearly always need recourse to a guru, rather than simple step-by-step reasoning. It gives me the impression that there is a tension amongst intuition, complexity and laziness.
Best wishes,
--greg
On Mon, Dec 12, 2011 at 4:33 AM, Paul Brauner <polux2001@gmail.com> wrote:
On Mon, Dec 12, 2011 at 13:23, nicolas.oury@gmail.com
<nicolas.oury@gmail.com> wrote:
> On Mon, Dec 12, 2011 at 11:44 AM, Paul Brauner <polux2001@gmail.com> wrote:
>> I think more generally Hughes makes the point in "Why functional
>> programming matters" that lazyness brings modularity. The foldR
>> example is one instance of that general principle if I'm not mistaken.
>>
>> Paul
>>
>
> If I remember well, most of the article is about separating generator
> and selector on generated data.
> This can be done in a strict language that allows to define coinductive types.
Yes sure, as well as you can have foldR generate a stream. But I think
the point was if you get a fonction from someone that didn't think of
using a coinductive output type, then you're screwed (it's like
"virtual" not by default in C++).
>
> Some other solutions have also been found for the same kind of
> problem, delimited continuations, for example.
>
Yeah, that one could do I guess.
> Best,
>
> Nicolas.
--
L.G. Meredith
Managing Partner
Biosimilarity LLC
7329 39th Ave SW, Seattle, WA 98136
+1 206.650.3740
http://biosimilarity.blogspot.com
Wed, 2011-12-14, 15:51
Re: Re: questioning FP
Strictness should be the default in a general purpose language, and laziness a feature that can be turned on when the situation calls for it. Of course you can have a different opinion, but then I really cannot take you seriously.
Cheers,
Steven
PS: Yep, if there is a response to this, I can imagine what it looks like :-D
On 12.12.2011, at 23:28, Tony Morris wrote:
What you call a guru, I call "a regular person who is able to shift their approach to problem-solving readily and effectively such that its application yields a result without external hindrances e.g. tools."
Yes it may be hard, but the benefits are there even if they are not easily recognised. The solution is to harden up, not throw your hands in the air.
On Dec 13, 2011 4:45 AM, "Meredith Gregory" <lgreg.meredith@gmail.com> wrote:
Dear Lazily Strict and Strictly Lazy alike,
i love lazy for expressiveness. i'm less excited about it from a computational complexity perspective. i nearly always screw up my estimation the computational complexity of an algorithm i express lazily. However, i also note that complexity seems brittle in the face of laziness. Very small changes can lead to huge (and unexpected) differences in resource consumption. Anecdotally, i've noticed over the years that Haskell mailing list conversations regarding unexpected turns in complexity nearly always need recourse to a guru, rather than simple step-by-step reasoning. It gives me the impression that there is a tension amongst intuition, complexity and laziness.
Best wishes,
--greg
On Mon, Dec 12, 2011 at 4:33 AM, Paul Brauner <polux2001@gmail.com> wrote:
On Mon, Dec 12, 2011 at 13:23, nicolas.oury@gmail.com
<nicolas.oury@gmail.com> wrote:
> On Mon, Dec 12, 2011 at 11:44 AM, Paul Brauner <polux2001@gmail.com> wrote:
>> I think more generally Hughes makes the point in "Why functional
>> programming matters" that lazyness brings modularity. The foldR
>> example is one instance of that general principle if I'm not mistaken.
>>
>> Paul
>>
>
> If I remember well, most of the article is about separating generator
> and selector on generated data.
> This can be done in a strict language that allows to define coinductive types.
Yes sure, as well as you can have foldR generate a stream. But I think
the point was if you get a fonction from someone that didn't think of
using a coinductive output type, then you're screwed (it's like
"virtual" not by default in C++).
>
> Some other solutions have also been found for the same kind of
> problem, delimited continuations, for example.
>
Yeah, that one could do I guess.
> Best,
>
> Nicolas.
--
L.G. Meredith
Managing Partner
Biosimilarity LLC
7329 39th Ave SW, Seattle, WA 98136
+1 206.650.3740
http://biosimilarity.blogspot.com
Wed, 2011-12-14, 16:01
Re: Re: questioning FP
It does seem that debugging lazy programs is a skill programmers must acquire; it's the complaint I've seen most often against Haskell. In my opinion you can write useful code in either a lazy-by-default or an eager-by-default language. It'd be nice if you could opt into one or the other without a terrible amount of syntactic noise.
I also agree with Greg. From what I've seen of lazy programs, they are an invaluable tool, but can get very tricky in the details.
A special note: Haskell's runtime is *optimised* for laziness. Scala runs on the JVM, which is designed with imperative programs in mind. I think Scala has chosen the right default for its platform and Haskell has chosen the right default for itself.
- Josh
On Mon, Dec 12, 2011 at 6:28 PM, Tony Morris <tmorris@tmorris.net> wrote:
What you call a guru, I call "a regular person who is able to shift their approach to problem-solving readily and effectively such that its application yields a result without external hindrances e.g. tools."
Yes it may be hard, but the benefits are there even if they are not easily recognised. The solution is to harden up, not throw your hands in the air.
On Dec 13, 2011 4:45 AM, "Meredith Gregory" <lgreg.meredith@gmail.com> wrote:Dear Lazily Strict and Strictly Lazy alike,
i love lazy for expressiveness. i'm less excited about it from a computational complexity perspective. i nearly always screw up my estimation the computational complexity of an algorithm i express lazily. However, i also note that complexity seems brittle in the face of laziness. Very small changes can lead to huge (and unexpected) differences in resource consumption. Anecdotally, i've noticed over the years that Haskell mailing list conversations regarding unexpected turns in complexity nearly always need recourse to a guru, rather than simple step-by-step reasoning. It gives me the impression that there is a tension amongst intuition, complexity and laziness.
Best wishes,
--greg
On Mon, Dec 12, 2011 at 4:33 AM, Paul Brauner <polux2001@gmail.com> wrote:
On Mon, Dec 12, 2011 at 13:23, nicolas.oury@gmail.com
<nicolas.oury@gmail.com> wrote:
> On Mon, Dec 12, 2011 at 11:44 AM, Paul Brauner <polux2001@gmail.com> wrote:
>> I think more generally Hughes makes the point in "Why functional
>> programming matters" that lazyness brings modularity. The foldR
>> example is one instance of that general principle if I'm not mistaken.
>>
>> Paul
>>
>
> If I remember well, most of the article is about separating generator
> and selector on generated data.
> This can be done in a strict language that allows to define coinductive types.
Yes sure, as well as you can have foldR generate a stream. But I think
the point was if you get a fonction from someone that didn't think of
using a coinductive output type, then you're screwed (it's like
"virtual" not by default in C++).
>
> Some other solutions have also been found for the same kind of
> problem, delimited continuations, for example.
>
Yeah, that one could do I guess.
> Best,
>
> Nicolas.
--
L.G. Meredith
Managing Partner
Biosimilarity LLC
7329 39th Ave SW, Seattle, WA 98136
+1 206.650.3740
http://biosimilarity.blogspot.com
Wed, 2011-12-14, 17:31
Re: Re: questioning FP
> Strictness should be the default in a general purpose language, and
> laziness a feature that can be turned on when the situation calls for
> it.
Or, you know, the other way around. Once you are strict, it is too late to be lazy. You can always force a lazy thunk, but it's impossible to unforce a strict one.
In a very real way, what you're saying is analogous to: "untyped should be the default in a general purpose language, and types are a feature that can be turned on when the situation calls for it". Think about it.
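The asymmetry Runar describes is directly observable with Scala's lazy val. A minimal sketch (the names are mine), using a side effect to show exactly when evaluation happens:

```scala
// A side effect lets us observe the moment of evaluation.
var evaluated = false
lazy val thunk = { evaluated = true; 42 }   // evaluation deferred

val before = evaluated   // still false: nothing has been forced yet
val forced = thunk       // forcing a lazy value is always available...
val after  = evaluated   // ...but once it runs, there is no way back

// A strict val, by contrast, runs its right-hand side immediately;
// no caller can later ask for it "unevaluated".
var ranEagerly = false
val strictVal = { ranEagerly = true; 1 }
```

Forcing the lazy side is a one-liner; recovering laziness from `strictVal` is impossible, which is the whole argument for laziness as the default.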
Wed, 2011-12-14, 17:41
Re: Re: questioning FP
Oh, this might actually be exactly what I am saying. Take a look at my language Babel-17 (www.babel-17.com). ;-)
Cheers,
Steven
On 14.12.2011, at 16:22, Runar Bjarnason wrote:
> > Strictness should be the default in a general purpose language, and laziness a feature that can be turned on when the situation calls for it.
>
> Or, you know, the other way around. Once you are strict, it is too late to be lazy. You can always force a lazy thunk, but it's impossible to unforce a strict one.
>
> In a very real way, what you're saying is analogous to: "untyped should be the default in a general purpose language, and types are a feature that can be turned on when the situation calls for it". Think about it.
>
Wed, 2011-12-14, 17:51
Re: Re: questioning FP
Even in a statically typed language, that's certainly the case: parser combinators, for example, go from unstructured data to structured data. That said, I love type systems as much as I like laziness. I'd love to see the JVM support better lazy primitives and such, but for now, I blend styles.
- Josh
On Wed, Dec 14, 2011 at 11:22 AM, Runar Bjarnason <runarorama@gmail.com> wrote:
> Strictness should be the default in a general purpose language, and laziness a feature that can be turned on when the situation calls for it.
Or, you know, the other way around. Once you are strict, it is too late to be lazy. You can always force a lazy thunk, but it's impossible to unforce a strict one.
In a very real way, what you're saying is analogous to: "untyped should be the default in a general purpose language, and types are a feature that can be turned on when the situation calls for it". Think about it.
Wed, 2011-12-14, 18:01
#222222
Re: Re: questioning FP
Hi Runar,
Although in the logical sense I see why you equate untyped with strict evaluation, I think in the cognitive sense it is the other way around. If something is untyped I have, by default, a lack of knowledge about what operations make sense with it, just as in a lazy-by-default language I have, by default, a lack of knowledge about whether the value has been evaluated yet. It's this lack of knowledge that makes it harder to intuitively reason about lazy code. When the context changes, different things go from waiting to be computed to being computed, and this can generally only be reasoned through starting at the outermost context and working inward. This makes it extremely hard to reason about lazy code in a modular, monotonic manner.
Matthew
On 14 December 2011 16:22, Runar Bjarnason <runarorama@gmail.com> wrote:
--
Dr Matthew Pocock
Integrative Bioinformatics Group, School of Computing Science, Newcastle University
mailto: turingatemyhamster@gmail.com
gchat: turingatemyhamster@gmail.com
msn: matthew_pocock@yahoo.co.uk
irc.freenode.net: drdozer
skype: matthew.pocock
tel: (0191) 2566550
mob: +447535664143
Wed, 2011-12-14, 18:21
#232323
Re: Re: questioning FP
On Wednesday, December 14, 2011 11:45:13 AM UTC-5, Matthew Pocock wrote:
Hi Runar,
Although in the logical sense I see why you equate untyped with strict evaluation, I think in the cognitive sense it is the other way around. If something is untyped I have, by default, a lack of knowledge about what operations make sense with it, just as in a lazy-by-default language I have, by default, a lack of knowledge about whether the value has been evaluated yet. It's this lack of knowledge that makes it harder to intuitively reason about lazy code. When the context changes, different things go from waiting to be computed to being computed, and this can generally only be reasoned through starting at the outermost context and working inward. This makes it extremely hard to reason about lazy code in a modular, monotonic manner.
I don't follow very well what you have just said, except maybe that laziness and side-effects don't work well together. That is certainly true. If you want to remain lazy, you also have to remain pure. But reasoning about the execution of lazy pure code is as simple as possible. You just substitute equals for equals.
Then again, working with side-effects is pretty difficult even in strict code. Consider:
if (true) "hooray" else error("oops")
What is the result of this? Well, if "if" weren't lazy then it would always generate an error. Turns out that you need to introduce laziness to do anything useful.
We don't really need the "if" construct (or other control constructs) if we have laziness:
sealed trait Bool {
def apply[A](t: => A, f: => A): A
}
case object False extends Bool {
def apply[A](t: => A, f: => A) = f
}
case object True extends Bool {
def apply[A](t: => A, f: => A) = t
}
We can write a strict "if":
def strictIf[A](b: Bool, a1: A, a2: A) = b(a1, a2)
But if Bool.apply were already strict, it would be impossible to write the lazy version. Ergo, the only way to get anything useful done in strict languages is by adding special forms like "if" that introduce non-strictness.
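For a concrete check, here is a self-contained version of the encoding (the `LazyIfDemo` object and `branch` helper are my own additions) showing that only the selected branch is ever forced:

```scala
// Church-encoded booleans with non-strict (by-name) branches:
// True selects its first argument, False its second.
sealed trait Bool {
  def apply[A](t: => A, f: => A): A
}
case object True extends Bool {
  def apply[A](t: => A, f: => A) = t
}
case object False extends Bool {
  def apply[A](t: => A, f: => A) = f
}

object LazyIfDemo {
  var forced = List.empty[String]
  // records which branches actually got evaluated
  def branch(name: String, v: Int): Int = { forced ::= name; v }

  // Only the "then" branch is forced; the "else" thunk is never run.
  val r = True(branch("then", 1), branch("else", 2))
}
```

With strict parameters instead of `=> A`, both `branch` calls would run before `apply` was even entered, which is exactly the point being argued.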
Wed, 2011-12-14, 18:31
#252525
Re: Re: questioning FP
This should work:
sealed trait Bool {
def apply[A](t: => A, f: => A): Unit => A
}
case object False extends Bool {
def apply[A](t: => A, f: => A) = x => f
}
case object True extends Bool {
def apply[A](t: => A, f: => A) = x => t
}
Wed, 2011-12-14, 18:41
#272727
Re: Re: questioning FP
Please, forgive/forget my preceding email.
I meant this one:
def strictIf[A](b: Bool, a1: Unit => A, a2: Unit => A) = b(a1, a2) ()
It should work with Bool.apply being strict.
Wed, 2011-12-14, 18:41
#292929
Re: Re: questioning FP
On Wed, Dec 14, 2011 at 15:30, nicolas.oury@gmail.com
wrote:
> Please, forgive/forget my preceeding email.
>
> I meant this one:
> def strictIf[A](b: Bool, a1: Unit => A, a2: Unit => A) = b(a1, a2) ()
>
> It should work with Bool.apply being strict.
Only because b doesn't use a1 or a2 in any way. In other words, this
work-around only works for special cases.
Wed, 2011-12-14, 18:51
#2b2b2b
Re: Re: questioning FP
On 14 December 2011 17:15, Runar Bjarnason <runarorama@gmail.com> wrote:
I don't follow very well what you have just said, except maybe that laziness and side-effects don't work well together.
No, it had nothing to do with side-effects. It's about having knowledge about whether something has already been evaluated. In a lazy language, a value can be in one of two states: evaluated, or waiting to be evaluated. As someone looking at the code, your knowledge about its state is one of three values: you know that it will have been evaluated, you know that it will not have been evaluated, or you do not know whether it has or has not been evaluated.
Then again, working with side-effects is pretty difficult even in strict code. Consider:
if (true) "hooray" else error("oops")
What is the result of this? Well, if "if" weren't lazy then it would always generate an error. Turns out that you need to introduce laziness to do anything useful.
Sure, but change this to:
def something(x: Boolean) = if (x) "hooray" else error("oops")
Now, we don't know if this terminates or not until we know the value of x. This isn't really the sort of case I was thinking of though. Perhaps a list with lazy evaluation of head and tail would be better. As someone who has been handed a List[X], I have no idea what the complexity of head or tail is on it, as I have no idea if the head or tail has been previously evaluated, and indeed have no idea what the complexity of the operation to be evaluated would be. I know this isn't legal scala, but:
trait List[+A] {
def head: A
def tail: List[A]
}
case object Nil extends List[Nothing] {
def head: Nothing = throw new NoSuchElementException("No head on the Nil list")
def tail: List[Nothing] = throw new NoSuchElementException("No tail on the Nil list")
}
case class Cons[A](lazy val head: A, lazy val tail: List[A]) extends List[A]
Now, if I give you an instance x: List[A] using the above definition, what is the complexity of calling x.head or x.tail? You have no way of knowing without also knowing the intimate details of how x was built and whether x.head or x.tail have previously been invoked. To reason about the complexity of any part of your program, you have to know about an ever-wider, previously evaluated portion of it. As you showed with `lazy if`, potentially we can't even hope to reason about lower-bound estimates on complexity without running portions of the program to know which branch it takes.
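A legal-Scala rendering of that sketch is possible with by-name constructor parameters cached in lazy vals (the same trick the standard `Stream` uses); the names here are my own:

```scala
// Lazy cons cells: the head/tail expressions are by-name constructor
// parameters, forced and cached on first access via lazy vals. The cost
// of head/tail thus depends on evaluation history, which is the point.
sealed trait LList[+A] {
  def head: A
  def tail: LList[A]
}
case object LNil extends LList[Nothing] {
  def head = throw new NoSuchElementException("head of empty list")
  def tail = throw new NoSuchElementException("tail of empty list")
}
final class LCons[A](h: => A, t: => LList[A]) extends LList[A] {
  lazy val head: A = h       // forced (and cached) on first access
  lazy val tail: LList[A] = t
}

object LListDemo {
  var headForced = false
  val xs: LList[Int] = new LCons({ headForced = true; 42 }, LNil)
  val before = headForced    // constructing xs forced nothing
  val value = xs.head        // first access pays, and caches, the cost
}
```

Whether `xs.head` is O(1) or arbitrarily expensive at a given call site depends entirely on whether someone forced it earlier.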
Matthew
--
Dr Matthew Pocock
Integrative Bioinformatics Group, School of Computing Science, Newcastle University
mailto: turingatemyhamster@gmail.com
gchat: turingatemyhamster@gmail.com
msn: matthew_pocock@yahoo.co.uk
irc.freenode.net: drdozer
skype: matthew.pocock
tel: (0191) 2566550
mob: +447535664143
Wed, 2011-12-14, 18:51
#2d2d2d
Re: questioning FP
On 2011-12-14 11:15:00 -0600, Runar Bjarnason said:
> But if Bool.apply were already strict, it would be impossible to write
> the lazy version. Ergo, the only way to get anything useful done in
> strict languages is by adding special forms like "if" that introduce
> non-strictness.
Smalltalk does this by simply passing blocks into ifTrue:ifFalse:
condition ifTrue: [block to be evaluated if true]
ifFalse: [block to be evaluated if false]
The blocks respond to value: and can be evaluated when needed. So:
cond ifTrue: [ 2 + 3 ] ifFalse: [ 4 + 5 ] will only evaluate one +
while
cond ifTrue: (2 + 3) ifFalse: (4 + 5) will evaluate both +'s
This seems to have worked fine for Smalltalk, no laziness built in.
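The Smalltalk block-passing trick transliterates directly to Scala using by-name parameters; a small sketch (all names here are mine):

```scala
// Scala analogue of Smalltalk's ifTrue:ifFalse: with blocks: the branches
// are by-name parameters, so only the taken branch is evaluated.
object BlockIf {
  def ifTrueIfFalse[A](cond: Boolean)(ifTrue: => A)(ifFalse: => A): A =
    if (cond) ifTrue else ifFalse

  var log = List.empty[String]
  def plus(name: String, a: Int, b: Int): Int = { log ::= name; a + b }

  // Like `cond ifTrue: [2 + 3] ifFalse: [4 + 5]`: only one "+" runs.
  val r = ifTrueIfFalse(true)(plus("then", 2, 3))(plus("else", 4, 5))
}
```

Passing already-evaluated values instead of blocks (the `(2 + 3)` form in the Smalltalk example) corresponds to dropping the `=>`, at which point both branches run before the call.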
Wed, 2011-12-14, 19:01
#2f2f2f
Re: Re: questioning FP
On Wed, Dec 14, 2011 at 15:38, Matthew Pocock
wrote:
>
>> Then again, working with side-effects is pretty difficult even in strict
>> code. Consider:
>>
>> if (true) "hooray" else error("oops")
>>
>> What is the result of this? Well, if "if" weren't lazy then it would
>> always generate an error. Turns out that you need to introduce laziness to
>> do anything useful.
>
>
> Sure, but change this to:
>
> something(x: Boolean) = if(x) "horray" else error("oops")
>
> Now, we don't know if this terminates or not until we know the value of x.
Only if there's non-strictness. If "if" wasn't non-strict, it would
always generate an error, irrespective of the value of x. A strict
"if" would work like this:
def something(x: Boolean) = {
val ifTrue = "hooray"
val ifFalse = error("oops")
if (x) ifTrue else ifFalse
}
Replace "val" with "lazy val" and you have lazy behavior. Replace
"val" with "def" and you have another kind of non-strictness, more in
line with how "if" actually works.
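The three substitutions above (`val`, `lazy val`, `def`) can be made observable with a counter; a sketch with names of my own choosing:

```scala
// Strict `val` evaluates once, eagerly; `lazy val` evaluates once, on
// first use (call-by-need); `def` re-evaluates on every use (call-by-name).
object EvalStrategies {
  var count = 0
  def expr: Int = { count += 1; 42 }

  val strict = expr          // evaluated right here
  lazy val byNeed = expr     // not evaluated yet
  def byName = expr          // not evaluated yet

  val afterDecls = count     // only the strict val has run: 1
  val need = byNeed + byNeed // first use evaluates once, then cached
  val name = byName + byName // every use re-evaluates
  val finalCount = count     // 1 (strict) + 1 (byNeed) + 2 (byName) = 4
}
```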
Wed, 2011-12-14, 19:01
#282828
Re: Re: questioning FP
On Wed, Dec 14, 2011 at 8:22 AM, Runar Bjarnason <runarorama@gmail.com> wrote:
You can always force a lazy thunk, but it's impossible to unforce a strict one.
Can't you just wrap it in a function that will return that thunk when evaluated? Now you have laziness.
While I like the theoretical appeal of laziness, my main problem with it is reasoning about its complexity. It's pretty easy to see that an algorithm making use of eager constructs is O(n^2) but good luck doing this math for any non-trivial piece of lazy code.
-- Cédric
Wed, 2011-12-14, 19:11
#2a2a2a
Re: Re: questioning FP
Laziness is thunkification + caching so, you can always emulate its behaviour in cbv using closures but you'all miss the truly lazy part.
On Dec 14, 2011 6:45 PM, "Sophie" <itsme213@hotmail.com> wrote:On 2011-12-14 11:15:00 -0600, Runar Bjarnason said:
But if Bool.apply were already strict, it would be impossible to write the lazy version. Ergo, the only way to get anything useful done in strict languages is by adding special forms like "if" that introduce non-strictness.
Smalltalk does this by simply passing blocks into ifTrue:ifFalse:
condition ifTrue: [block to be evaluated if true]
ifFalse: [block to be evaluated if false]
The blocks respond to value: and can be evaluated when needed. So:
cond ifTrue: [ 2 + 3 ] ifFalse: [ 4 + 5 ] will only evaluate one +
while
cond ifTrue: (2 + 3) ifFalse: (4 + 5) will evaluate both +'s
This seems to have worked fine for Smalltalk, no laziness built in.
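The "thunkification + caching" characterization can be spelled out by hand; this is roughly what `lazy val` does for you, minus the synchronization the real thing adds (class and field names are mine):

```scala
// A by-hand lazy cell: a suspended computation (thunk) plus a memo slot.
// Forcing runs the thunk at most once; later forces return the cache.
final class Lazy[A](thunk: () => A) {
  private[this] var cache: Option[A] = None
  def force: A = cache match {
    case Some(a) => a
    case None =>
      val a = thunk()      // run the suspension once
      cache = Some(a)      // ...and memoize the result
      a
  }
}

object LazyDemo {
  var runs = 0
  val cell = new Lazy(() => { runs += 1; 21 * 2 })
  val a = cell.force       // evaluates the thunk
  val b = cell.force       // served from the cache; runs stays at 1
}
```

A bare closure gives only the thunkification half; without the memo slot every force re-runs the computation, which is the distinction drawn downthread between call-by-name and call-by-need.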
Wed, 2011-12-14, 19:21
#2c2c2c
Re: Re: questioning FP
s/all/ll
Crappy phone...
Laziness is thunkification + caching so, you can always emulate its behaviour in cbv using closures but you'all miss the truly lazy part.
On Dec 14, 2011 6:45 PM, "Sophie" <itsme213@hotmail.com> wrote:On 2011-12-14 11:15:00 -0600, Runar Bjarnason said:
But if Bool.apply were already strict, it would be impossible to write the lazy version. Ergo, the only way to get anything useful done in strict languages is by adding special forms like "if" that introduce non-strictness.
Smalltalk does this by simply passing blocks into ifTrue:ifFalse:
condition ifTrue: [block to be evaluated if true]
ifFalse: [block to be evaluated if false]
The blocks respond to value: and can be evaluated when needed. So:
cond ifTrue: [ 2 + 3 ] ifFalse: [ 4 + 5 ] will only evaluate one +
while
cond ifTrue: (2 + 3) ifFalse: (4 + 5) will evaluate both +'s
This seems to have worked fine for Smalltalk, no laziness built in.
Wed, 2011-12-14, 19:31
#2d2d2d
Re: Re: questioning FP
Guys,
It's not an either or, us vs. them debate.
In Haskell you can force strict evaluation, and in Scala you can use lazy evaluation. It's kind of a continuum.
While Haskell's runtime is optimized for thunks and laziness, the JVM is not, and laziness is sort of kludged on top. This is what I mean by Scala choosing the right default. We swing the continuum towards strict for our runtime. If the JVM had a better laziness story, maybe things would be different. However, we're trying to make optimal use of HotSpot.
On Dec 14, 2011 12:55 PM, "Cédric Beust ♔" <cedric@beust.com> wrote:
On Wed, Dec 14, 2011 at 8:22 AM, Runar Bjarnason <runarorama@gmail.com> wrote:You can always force a lazy thunk, but it's impossible to unforce a strict one.
Can't you just wrap it in a function that will return that thunk when evaluated? Now you have laziness.
While I like the theoretical appeal of laziness, my main problem with it is reasoning about its complexity. It's pretty easy to see that an algorithm making use of eager constructs is O(n^2) but good luck doing this math for any non-trivial piece of lazy code.
-- Cédric
Wed, 2011-12-14, 19:31
#2f2f2f
Re: Re: questioning FP
On Wed, Dec 14, 2011 at 09:55:31AM -0800, Cédric Beust ♔ said
> While I like the theoretical appeal of laziness, my main problem with it is
> reasoning about its complexity. It's pretty easy to see that an algorithm
> making use of eager constructs is O(n^2) but good luck doing this math for
> any non-trivial piece of lazy code.
On the other hand, lazy evaluation is essentially required for efficient
implementation of persistent data structures; cf. Okasaki's Purely
Functional Data Structures.
Wed, 2011-12-14, 19:41
#313131
Re: Re: questioning FP
I was just stating that it's no surprise you can emulate the
observational behaviour of laziness using thunks, but that doing so doesn't
implement graph reduction semantics. Of course, once you have
references you can implement the graph reduction semantics.
Now, I totally agree with Josh. The situations are symmetric: you can
express the same things with one or the other as the default and the
remaining one as an option. And it totally makes sense to choose CBV as
the default on the JVM.
Paul
2011/12/14 Josh Suereth :
> Guys,
>
> It's not an either or, us vs. them debate.
> In Haskell you can force strict evaluation, and in scala you can use lazy
> evaluation. It's kind of a continuum.
>
> While Haskell s runtime is optimized for thunks and laziness, the jvm is not
> and laziness is sort of kludged on top. This is what I mean by scala
> choosing the right default. We swing the continuum towards strict for our
> runtime. If the JVM had a better laziness story, maybe things would be
> different. However we're trying to make optimal use of HotSpot.
>
> On Dec 14, 2011 12:55 PM, "Cédric Beust ♔" wrote:
>>
>>
>> On Wed, Dec 14, 2011 at 8:22 AM, Runar Bjarnason
>> wrote:
>>>
>>> You can always force a lazy thunk, but it's impossible to unforce a
>>> strict one.
>>
>>
>> Can't you just wrap it in a function that will return that thunk when
>> evaluated? Now you have laziness.
>>
>> While I like the theoretical appeal of laziness, my main problem with it
>> is reasoning about its complexity. It's pretty easy to see that an algorithm
>> making use of eager constructs is O(n^2) but good luck doing this math for
>> any non-trivial piece of lazy code.
>>
>> --
>> Cédric
>>
>>
>>
>>
>
Wed, 2011-12-14, 19:41
#333333
Re: Re: questioning FP
On Wed, Dec 14, 2011 at 7:28 PM, Geoff Reedy wrote:
> On Wed, Dec 14, 2011 at 09:55:31AM -0800, Cédric Beust ♔ said
>> While I like the theoretical appeal of laziness, my main problem with it is
>> reasoning about its complexity. It's pretty easy to see that an algorithm
>> making use of eager constructs is O(n^2) but good luck doing this math for
>> any non-trivial piece of lazy code.
>
> On the other hand, lazy evaluation is essentially required for effecient
> implementation of persistent data structures; cf. Okasaki's Purely
> Functional Data Structures.
Maybe for some of them. But Scala's persistent vectors and hash tries
do just fine with strict evaluation. Would be interesting to see some
performance comparisons.
Wed, 2011-12-14, 19:51
#353535
Re: Re: questioning FP
2011/12/14 Josh Suereth :
> Guys,
>
> It's not an either or, us vs. them debate.
> In Haskell you can force strict evaluation, and in scala you can use lazy
> evaluation. It's kind of a continuum.
>
> While Haskell s runtime is optimized for thunks and laziness, the jvm is not
> and laziness is sort of kludged on top. This is what I mean by scala
> choosing the right default. We swing the continuum towards strict for our
> runtime. If the JVM had a better laziness story, maybe things would be
> different. However we're trying to make optimal use of HotSpot.
>
And, of course, laziness really requires purity, which Scala does not
have. But I agree, implementing lazy languages on the JVM looks like a
challenge.
Cheers
Wed, 2011-12-14, 20:01
#373737
Re: Re: questioning FP
On Wed, Dec 14, 2011 at 16:28, Geoff Reedy wrote:
> On Wed, Dec 14, 2011 at 09:55:31AM -0800, Cédric Beust ♔ said
>> While I like the theoretical appeal of laziness, my main problem with it is
>> reasoning about its complexity. It's pretty easy to see that an algorithm
>> making use of eager constructs is O(n^2) but good luck doing this math for
>> any non-trivial piece of lazy code.
>
> On the other hand, lazy evaluation is essentially required for effecient
> implementation of persistent data structures; cf. Okasaki's Purely
> Functional Data Structures.
???
I suspect ML people would heartily disagree with this...
Wed, 2011-12-14, 20:11
#393939
Re: Re: questioning FP
Laziness helps Okasaki use amortization arguments for certain
functional data structures. He uses laziness to maintain reasonable
runtime efficiencies when considering operations across different
persistent states of a functional data structure. For example, you
could have a queue where the next dequeue involves reversing a list.
If you had many copies of this queue and dequeue on all of them is
O(n), your amortization argument is ruined. With laziness, only the
first call to dequeue is O(n). Subsequent calls on copies of the
queue are O(1).
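That queue can be sketched along the lines of Okasaki's banker's queue, much simplified; the names (`LazyQueue`, `Q`, `balance`, `concat`) are mine. The pending reversal of the rear list is kept as a suspension at the end of a `Stream`, so the O(n) reverse runs at most once and its result is shared by all persistent copies:

```scala
object LazyQueue {
  // Concatenation whose second argument stays a suspension until the
  // first stream is exhausted; Stream cons cells memoize what they force.
  private def concat[A](xs: Stream[A], ys: => Stream[A]): Stream[A] =
    if (xs.isEmpty) ys else xs.head #:: concat(xs.tail, ys)

  def empty[A]: Q[A] = Q(0, Stream.empty, 0, Nil)

  final case class Q[A](lenF: Int, front: Stream[A], lenR: Int, rear: List[A]) {
    def enqueue(a: A): Q[A] = balance(lenF, front, lenR + 1, a :: rear)
    def dequeue: (A, Q[A]) =
      (front.head, balance(lenF - 1, front.tail, lenR, rear))
  }

  // Invariant: rear never longer than front; when violated, the reverse
  // is suspended at the end of the front stream rather than run eagerly.
  private def balance[A](lf: Int, f: Stream[A], lr: Int, r: List[A]): Q[A] =
    if (lr <= lf) Q(lf, f, lr, r)
    else Q(lf + lr, concat(f, r.reverse.toStream), 0, Nil)
}
```

Because the `Stream` memoizes, many copies of the same queue that hit the suspended reverse pay for it only once between them, which is what rescues the amortized O(1) bound in the persistent setting.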
On Wed, Dec 14, 2011 at 10:45 AM, Daniel Sobral wrote:
> On Wed, Dec 14, 2011 at 16:28, Geoff Reedy wrote:
>> On Wed, Dec 14, 2011 at 09:55:31AM -0800, Cédric Beust ♔ said
>>> While I like the theoretical appeal of laziness, my main problem with it is
>>> reasoning about its complexity. It's pretty easy to see that an algorithm
>>> making use of eager constructs is O(n^2) but good luck doing this math for
>>> any non-trivial piece of lazy code.
>>
>> On the other hand, lazy evaluation is essentially required for effecient
>> implementation of persistent data structures; cf. Okasaki's Purely
>> Functional Data Structures.
>
> ???
>
> I suspect ML people would heartily disagree with this...
>
> --
> Daniel C. Sobral
>
> I travel to the future all the time.
Wed, 2011-12-14, 20:21
#323232
Re: Re: questioning FP
You might also be interested in this from his thesis:
"The above implementation of queues is asymptotically optimal: you can't ask for better bounds than O(1) time per operation. However, in practice, it tends to be fairly slow. There are at least two reasons for this. First, lazy evaluation is slower than strict evaluation, because of the need to create and memoize suspensions. Compilers for lazy languages recognize this fact and use strictness analysis to turn lazy evaluation into strict evaluation whenever possible. However, when lazy evaluation serves an algorithmic purpose, as it does here, it will never be eliminated by strictness analysis. But even if we need lazy evaluation, maybe we don't need so much of it."
Bottom line: it depends on how we are defining "efficient".
On Wed, Dec 14, 2011 at 11:04 AM, Michael Schmitz
wrote:
> Laziness helps Okasaki use amortization arguments for certain
> functional data structures. He uses laziness to maintain reasonable
> runtime efficiencies when considering operations across different
> persistent states of an functional data structure. For example, you
> could have a queue where the next dequeue involves reversing a list.
> If you had many copies of this queue and dequeue on all of them is
> O(n), your amortization argument is ruined. With laziness, only the
> first call to dequeue is O(n). Subsequent calls on copies of the
> queue are O(1).
>
> On Wed, Dec 14, 2011 at 10:45 AM, Daniel Sobral wrote:
>> On Wed, Dec 14, 2011 at 16:28, Geoff Reedy wrote:
>>> On Wed, Dec 14, 2011 at 09:55:31AM -0800, Cédric Beust ♔ said
>>>> While I like the theoretical appeal of laziness, my main problem with it is
>>>> reasoning about its complexity. It's pretty easy to see that an algorithm
>>>> making use of eager constructs is O(n^2) but good luck doing this math for
>>>> any non-trivial piece of lazy code.
>>>
>>> On the other hand, lazy evaluation is essentially required for effecient
>>> implementation of persistent data structures; cf. Okasaki's Purely
>>> Functional Data Structures.
>>
>> ???
>>
>> I suspect ML people would heartily disagree with this...
>>
>> --
>> Daniel C. Sobral
>>
>> I travel to the future all the time.
Wed, 2011-12-14, 22:31
#343434
Re: Re: questioning FP
2011/12/14 Cédric Beust ♔ :
>
> On Wed, Dec 14, 2011 at 8:22 AM, Runar Bjarnason
> wrote:
>>
>> You can always force a lazy thunk, but it's impossible to unforce a strict
>> one.
>
>
> Can't you just wrap it in a function that will return that thunk when
> evaluated? Now you have laziness.
That's not lazy. That's evaluation on demand.
scala> val a = { () => println("eval"); 5 }
a: () => Int = <function0>
scala> a()
eval
res2: Int = 5
scala> a()
eval
res3: Int = 5
If that made things actually lazy, eval would only be printed once.
>
> While I like the theoretical appeal of laziness, my main problem with it is
> reasoning about its complexity. It's pretty easy to see that an algorithm
> making use of eager constructs is O(n^2) but good luck doing this math for
> any non-trivial piece of lazy code.
>
> --
> Cédric
>
>
>
>
Wed, 2011-12-14, 23:31
#363636
Re: Re: questioning FP
On Wed, Dec 14, 2011 at 07:31:21PM +0100, martin odersky said
> Maybe for some of them. But Scala's persistent vectors and hash tries
> do just fine with strict evaluation. Would be interesting to see some
> performance comparisons.
True, it's only needed to ensure amortized performance bounds. If the
worst case bounds are good enough, which they are for the persistent
vectors and hash tries, then laziness is indeed unnecessary.
Thu, 2011-12-15, 01:11
#373737
Re: Re: questioning FP
Dear Matthew,
Thanks for this clear and concise example! The list example clearly illustrates, in a lazy setting, the difficulty of reasoning about complexity in a modular fashion. Without a modular approach to reasoning about complexity, the computational complexity of code is a brittle phenomenon, with small changes leading to changes in computational complexity that are neither expected -- nor could they be.
The list example means that someone can check in, in a completely different part of the codebase, a resource -- one that even conforms to some type class -- and a client of the collection code suddenly goes completely wonky from a complexity perspective. This is reminiscent of the untyped dynamic setting, such as Ruby, where someone can monkey-patch in a completely different part of the codebase and code I am working on suddenly breaks.
The situation is not without hope -- I think. One recourse is to move resource usage into the type system. Linear types and other disciplines offer this, and it seems to me not inconceivable to combine linear and (some modified form of) lazy computation. Still, we are a certain distance away from being able to plug resource-tracking type disciplines into industrial languages -- even advanced languages like Scala.
Best wishes,
--greg
On Wed, Dec 14, 2011 at 9:38 AM, Matthew Pocock <turingatemyhamster@gmail.com> wrote:
--
L.G. Meredith
Managing Partner
Biosimilarity LLC
7329 39th Ave SW, Seattle, WA 98136
+1 206.650.3740
http://biosimilarity.blogspot.com
Thu, 2011-12-15, 01:31
#393939
Re: Re: questioning FP
2011/12/14 Runar Bjarnason <runarorama@gmail.com>
> Strictness should be the default in a general purpose language, and laziness a feature that can be turned on when the situation calls for it.
Or, you know, the other way around. Once you are strict, it is too late to be lazy. You can always force a lazy thunk, but it's impossible to unforce a strict one.
Both are needed, for sure. One is great for designing modular abstractions, and the other permits control over execution. Now, I think a programming language must be able to compile lazy expressions directly to the bare metal without incurring a high runtime overhead. This offers tight control over the execution (strict) of high-level abstractions (lazy). Although they are not perfect and there is still a lot of ground to be covered, strict programming languages like ML and Scala show that it is possible without dedicated runtime support for laziness.
Cheers,
Sébastien
Thu, 2011-12-15, 02:51
#3b3b3b
Re: questioning FP
On 14 dec, 22:26, Richard Wallace wrote:
> 2011/12/14 Cédric Beust ♔ :
>
>
>
> > On Wed, Dec 14, 2011 at 8:22 AM, Runar Bjarnason
> > wrote:
>
> >> You can always force a lazy thunk, but it's impossible to unforce a strict
> >> one.
>
> > Can't you just wrap it in a function that will return that thunk when
> > evaluated? Now you have laziness.
>
> That's not lazy. That's evaluation on demand.
The idea of lazy evaluation is that an expression is evaluated on demand.
>
> scala> val a = { () => println("eval"); 5 }
> a: () => Int =
The expression is partially evaluated: the block `println("eval"); 5` is lifted into a thunk.
This is "call by name" lazy evaluation. Every time you apply `a` as a
function, the whole expression is evaluated.
>
> scala> a()
> eval
> res2: Int = 5
>
> scala> a()
> eval
> res3: Int = 5
>
> If that made things actually lazy, eval would only be printed once.
Probably you mean "It is not memoized". Memoization is a technique to speed up
evaluation by caching results in a lookup table.
If you memoize it too, then it is called "call by need".
REPL
====
scala> val a = { () => println("eval"); 5 }
a: () => Int = <function0>
Now a is "call by name" lazy evaluation: the expression has been lifted into a
thunk.
scala> lazy val b = a()
b: Int = <lazy>
Now I have made the expression memoized (i.e. cached) too. If I didn't use
the lazy keyword, the expression would be evaluated immediately.
This is "call by need" lazy evaluation.
scala> b
eval
res28: Int = 5
scala> b
res29: Int = 5
You can do this also inside a method
call by need lazy evaluation (memoized)
=======================================
scala> def m(f: => Int) = { lazy val a = f ; a + a + a}
m: (f: => Int)Int
scala> val b = m({println("eval");5})
eval
b: Int = 15
call by name lazy evaluation (non-memoized)
===========================================
scala> def m(f: => Int) = { f + f + f }
m: (f: => Int)Int
scala> val b = m({println("eval");5})
eval
eval
eval
b: Int = 15
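The contrast above can be put side by side in one compiled snippet, adding ordinary call by value for comparison and counting evaluations of the same expression (the names byValue/byName/byNeed are mine, for illustration only):

```scala
// The three evaluation strategies, counting how often the argument
// expression actually runs.
var count = 0
def expr(): Int = { count += 1; 5 }

def byValue(x: Int): Int   = x + x + x                      // strict: evaluated once, before the call
def byName(x: => Int): Int = x + x + x                      // call by name: evaluated at every use
def byNeed(x: => Int): Int = { lazy val v = x; v + v + v }  // call by need: evaluated once, on first use

count = 0; val a = byValue(expr()); println(count)  // 1
count = 0; val b = byName(expr());  println(count)  // 3
count = 0; val c = byNeed(expr());  println(count)  // 1
```

All three return 15; only the number of evaluations of the argument differs.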
Thu, 2011-12-15, 03:51
#3d3d3d
Re: Re: questioning FP
That is what I meant. The point was that it's not quite as simple to make
things lazy as was made out, even in the simplest case. It requires
quite a lot of work to do it everywhere, since you can't return a
lazily evaluated value from a function, so you need to wrap it
everywhere. And then there is no strictness analysis, so you have to do
work that the compiler could be doing for you.
Strict by default makes sense for a language on the JVM. But that,
along with other factors, just makes me question the usefulness of the
JVM.
Thu, 2011-12-15, 04:31
#3f3f3f
Re: Re: questioning FP
On Wed, Dec 14, 2011 at 6:47 PM, Richard Wallace <rwallace@thewallacepack.net> wrote:
That is what I meant. The point was it's not quite as simple to make
things lazy as was made out, even in the simplest case. But it
requires quite a lot of work to do it everywhere since you can't
return a lazily evaluated value from a function.
It still gets you very close to the kind of laziness we want. Actually, Tony made the same mistake in this post where he says that you can't add laziness to Java but you can in Scala, and to illustrate it, he shows a simple example of a boolean short circuit. However, during his demonstration, he silently modifies this signature:
def lazyAnd(a: Boolean, b: Boolean)
to
def lazyAnd(a: Boolean, b: => Boolean)
Well, if you allow this kind of rewriting of the original problem, then you can do the same kind of thing in Java (although it will be more verbose and with slightly different semantics, but the idea remains) and in pretty much any strict language that offers even the most minimalistic support for lambdas.
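For reference, here is a minimal sketch of both encodings in Scala; the explicit-thunk version is the shape a Java-with-lambdas port would take. The names are illustrative, not taken verbatim from Tony's post:

```scala
// Short-circuiting `and`, two ways: a by-name parameter (the rewritten
// signature), and an explicit thunk (what a language with only lambdas
// would use).
def lazyAnd(a: Boolean, b: => Boolean): Boolean         = if (a) b else false
def lazyAndThunk(a: Boolean, b: () => Boolean): Boolean = if (a) b() else false

var forced = false
val r = lazyAnd(false, { forced = true; true })
println(r)      // false
println(forced) // false: the second argument was never evaluated
```

The difference is purely syntactic at the call site: the by-name version hides the thunk wrapping that the explicit version forces the caller to write.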
So you need to wrap
it everywhere. And then there is no strictness evaluation, so you
have to do more work that the compiler could be doing for you.
Strict by default makes sense for a language on the JVM.
It makes sense in a much broader ecosystem than the JVM alone (native, LLVM, .net, most interpreted languages, etc...) and there is even a growing part of the Haskell community which thinks that lazy is not the right default, as we already discussed.
But that, along with other factors, just makes me question the usefulness of the JVM.
It depends on the "usefulness for what", I guess. It might not be the right technology for you, but seems to work fine for millions of developers on the planet.
-- Cédric
Thu, 2011-12-15, 09:21
#414141
Re: Re: questioning FP
On 12/15/2011 11:44 AM, Dave wrote:
> The idea of lazy evaluation is that an expression is evaluated on
> demand
The "idea of" (definition of/mechanical test for) lazy evaluation is
such that f(bottom) != bottom. The rest is a consequence or
implementation detail.
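Concretely, in Scala: bottom is any expression that never produces a value (below it throws, standing in for non-termination). A strict parameter propagates bottom; a by-name parameter that is never demanded does not. The `const` names here are mine, for illustration:

```scala
// Tony's test, f(bottom) != bottom, made concrete.
def bottom: Nothing = throw new RuntimeException("bottom")

def constStrict[A, B](a: A, b: B): A  = a   // forces b: constStrict(42, bottom) is bottom
def constLazy[A, B](a: A, b: => B): A = a   // never demands b: constLazy(42, bottom) != bottom

println(constLazy(42, bottom))  // 42: lazy by Tony's definition
// constStrict(42, bottom) would throw before the body even runs
```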
Thu, 2011-12-15, 14:31
#434343
Re: questioning FP
On Wednesday, December 14, 2011, Tony Morris <tonymorris@gmail.com> wrote:
> On 12/15/2011 11:44 AM, Dave wrote:
>> The idea of lazy evaluation is that an expression is evaluated on
>> demand
> The "idea of" (definition of/mechanical test for) lazy evaluation is
> such that f(bottom) != bottom. The rest is a consequence or
> implementation detail.
> --
> Tony Morris
> http://tmorris.net/
Thu, 2011-12-15, 14:41
#3c3c3c
Re: questioning FP
I think I know what you mean, but I think once you understand it, you don't need to know what it means, and if you don't understand it, f(bottom) != bottom doesn't help at all :)
-------- Original Message --------
> Date: Thu, 15 Dec 2011 07:27:11 -0600
> From: ejc
> To: "tmorris@tmorris.net"
> Cc: "scala-debate@googlegroups.com"
> Subject: Re: [scala-debate] questioning FP
> On Wednesday, December 14, 2011, Tony Morris wrote:
> > On 12/15/2011 11:44 AM, Dave wrote:
> >> The idea of lazy evaluation is that an expression is evaluated on
> >> demand
> > The "idea of" (definition of/mechanical test for) lazy evaluation is
> > such that f(bottom) != bottom. The rest is a consequence or
> > implementation detail.
I recall reading this post when it first came out and not being convinced that the comments did much to undermine the claims in the article. I just reread the comments and it's still not clear that Harper's fundamental criticisms are unfounded. Tony, can you elaborate?
--
Jim Powers