Re: Scala 2.9 Compiler Performance
Sat, 2011-06-11, 17:36
On Jun 11, 2011, at 10:16 AM, Paul Phillips wrote:
> On 6/9/11 5:26 PM, Tiark Rompf wrote:
>> There are more and some of them are not even in trunk yet (got intercepted by general compiler refactorings):
>>
>> https://github.com/TiarkRompf/scala-virtualized/commits/perfmaster
>
> Those changes look excellent. I invite you to commit them! Or just let me know and I can do it.
I rebased the changes on current trunk yesterday and I could commit them later today or tomorrow. If you would like to do it, feel free to go ahead though.
> A couple questions/observations:
>
> - that != NothingClass && (that isSubClass AnyRefClass))
> + that != NothingClass && ((that isSubClass AnyRefClass) ||
> + (that isSubClass ObjectClass)))
>
> I think that || clause should be behaviorally equivalent to (that isSubClass ObjectClass), but maybe you have a reason to do it this way?
The additional check for ObjectClass was necessary (I forgot why exactly) but I haven't tried whether the AnyRef check is still required.
> - val fromContains = from.toSet // avoiding repeatedly traversing from
> + val fromContains = (x: Symbol) => from.contains(x) //from.toSet <-- traversing short lists seems to be faster than allocating sets
>
> Bummer, there's nothing worse than optimizations which pessimize. Do you have any rule of thumb regarding when the number of elements and/or the number of lookups leans toward creating a set? Another factor which comes to mind is frequency of duplicates.
I guess there's no single rule of thumb that's valid for all cases. Here, the set allocations were costly because there are lots of them and all have to go through the builder and traversable indirection even for 1-element sets.
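A rough illustration of the trade-off (a standalone sketch with made-up names, not the compiler's actual code): when the source list is only a few elements long and lookups are frequent, a linear scan avoids the builder overhead of constructing a Set on every call.

```scala
object ContainsBench {
  // Crude timing helper for the comparison below.
  def time[A](label: String)(body: => A): A = {
    val start = System.nanoTime
    val result = body
    println(f"$label: ${(System.nanoTime - start) / 1e6}%.2f ms")
    result
  }

  def main(args: Array[String]): Unit = {
    val from = List(1, 2, 3) // substitution lists are typically this short
    val queries = Array.tabulate(1000000)(_ % 10)

    // Variant A: build a Set per call (pays builder/traversable overhead each time).
    val a = time("toSet") { queries.count(q => from.toSet.contains(q)) }

    // Variant B: linear scan over the short list (no allocation at all).
    val b = time("list ") { queries.count(q => from.contains(q)) }

    assert(a == b) // both variants agree on the answer
  }
}
```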
> Now that signatures are somewhat stable (if impoverished) we should make "noverify" the default and use settings.Xverify to enable.
I agree
> I really appreciate your work on this and I would like to keep performance pointed in the right direction. I wonder if we can dream up something to make it easier on future people (including our future selves, mine especially) to determine at the source level when some code has been written a particular way for performance reasons, that alternatives have been measured, and offer some means to verify that the rationale remains valid. The comments are a huge help, but it's impractical to cover too much that way, and personally I don't trust software backstops unless they involve machine verification.
Good point, I'm not sure what to do about it in the long run. I guess adding specific and verbose comments is all we can do right now ...
cheers,
- Tiark
Sat, 2011-06-11, 18:07
#2
Re: Re: Scala 2.9 Compiler Performance
One more thing: We know the compiler performance is largely memory bound. Generating lots of short-lived objects is a strain; generating long-lived objects is an even larger one. So, for instance, adding lots of new fields or objects to trees is almost certainly very bad for performance.
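As a source-level illustration of the short-lived-object concern (a sketch with made-up names, not compiler code): the same traversal can be written to allocate one small object per element, or none at all.

```scala
object AllocSketch {
  // Allocating version: one short-lived (Int, Int) tuple per element,
  // plus intermediate arrays from the two map passes.
  def sumPairsTupled(xs: Array[Int]): Int =
    xs.map(x => (x, x * 2)).map { case (a, b) => a + b }.sum

  // Allocation-free version: same result, no intermediate objects.
  def sumPairsDirect(xs: Array[Int]): Int = {
    var acc = 0
    var i = 0
    while (i < xs.length) { acc += xs(i) + xs(i) * 2; i += 1 }
    acc
  }

  def main(args: Array[String]): Unit = {
    val xs = Array(1, 2, 3)
    assert(sumPairsTupled(xs) == sumPairsDirect(xs)) // both 18
    println(sumPairsDirect(xs))
  }
}
```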
-- Martin
On Sat, Jun 11, 2011 at 6:53 PM, martin odersky <martin.odersky@epfl.ch> wrote:
On Sat, Jun 11, 2011 at 6:36 PM, Tiark Rompf <tiark.rompf@epfl.ch> wrote:
On Jun 11, 2011, at 10:16 AM, Paul Phillips wrote:
> On 6/9/11 5:26 PM, Tiark Rompf wrote:
>> There are more and some of them are not even in trunk yet (got intercepted by general compiler refactorings):
>>
>> https://github.com/TiarkRompf/scala-virtualized/commits/perfmaster
>
> Those changes look excellent. I invite you to commit them! Or just let me know and I can do it.
I rebased the changes on current trunk yesterday and I could commit them later today or tomorrow. If you would like to do it, feel free to go ahead though.
> A couple questions/observations:
>
> - that != NothingClass && (that isSubClass AnyRefClass))
> + that != NothingClass && ((that isSubClass AnyRefClass) ||
> + (that isSubClass ObjectClass)))
>
> I think that || clause should be behaviorally equivalent to (that isSubClass ObjectClass), but maybe you have a reason to do it this way?
The additional check for ObjectClass was necessary (I forgot why exactly) but I haven't tried whether the AnyRef check is still required.
I'd assume not.
> - val fromContains = from.toSet // avoiding repeatedly traversing from
> + val fromContains = (x: Symbol) => from.contains(x) //from.toSet <-- traversing short lists seems to be faster than allocating sets
>
> Bummer, there's nothing worse than optimizations which pessimize. Do you have any rule of thumb regarding when the number of elements and/or the number of lookups leans toward creating a set? Another factor which comes to mind is frequency of duplicates.
I guess there's no single rule of thumb that's valid for all cases. Here, the set allocations were costly because there are lots of them and all have to go through the builder and traversable indirection even for 1-element sets.
Well, under 4 elements both do a linear search, so the added overhead of actually building the set is probably not worth it. On the other hand, once we are over 15-20 elements the set abstraction is almost certainly better. That's just non-empirically backed up rules of thumb.
Yes. Also, when I first wrote the compiler, I was generally performance conscious everywhere, because I knew that compilers tend to have very large hot areas. So, if it's original code, the argument (I am not saying proof; that's too hard) that something does not matter should be on the one who makes the change.
> Now that signatures are somewhat stable (if impoverished) we should make "noverify" the default and use settings.Xverify to enable.
I agree
> I really appreciate your work on this and I would like to keep performance pointed in the right direction. I wonder if we can dream up something to make it easier on future people (including our future selves, mine especially) to determine at the source level when some code has been written a particular way for performance reasons, that alternatives have been measured, and offer some means to verify that the rationale remains valid. The comments are a huge help, but it's impractical to cover too much that way, and personally I don't trust software backstops unless they involve machine verification.
Good point, I'm not sure what to do about it in the long run. I guess adding specific and verbose comments is all we can do right now ...
Cheers
-- Martin
--
----------------------------------------------
Martin Odersky
Prof., EPFL and CEO, Typesafe
PSED, 1015 Lausanne, Switzerland
Tel. EPFL: +41 21 693 6863
Tel. Typesafe: +41 21 691 4967
Sat, 2011-06-11, 18:17
#3
Re: Re: Scala 2.9 Compiler Performance
Possible to hoist closures and elide allocations in source?
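One shape such a source-level transformation could take (a sketch; the names are illustrative): lift a closure that captures nothing loop-local out of the hot path, so it is allocated once instead of once per iteration.

```scala
object HoistSketch {
  // Hoisted: one closure allocated for the whole run, instead of a fresh
  // anonymous function instance on every loop iteration.
  private val isEven: Int => Boolean = _ % 2 == 0

  def countEvens(rows: Array[Array[Int]]): Int = {
    var acc = 0
    var i = 0
    while (i < rows.length) {
      // Writing `_ % 2 == 0` inline here would allocate per iteration;
      // referencing the hoisted val does not.
      acc += rows(i).count(isEven)
      i += 1
    }
    acc
  }

  def main(args: Array[String]): Unit =
    println(countEvens(Array(Array(1, 2), Array(3, 4)))) // 2
}
```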
On Jun 11, 2011 6:56 PM, "martin odersky" <martin.odersky@epfl.ch> wrote:
> One more thing: We know the compiler performance is largely memory bound.
> Generating lots of short-lived objects is a strain; generating long-lived
> objects is an even larger one. So, for instance, adding lots of new fields
> or objects to trees is almost certainly very bad for performance.
>
> -- Martin
Sat, 2011-06-11, 21:37
#4
Re: Scala 2.9 Compiler Performance
On 6/11/11 9:36 AM, Tiark Rompf wrote:
> I rebased the changes on current trunk yesterday and I could commit
> them later today or tomorrow. If you would like to do it, feel free
> to go ahead though.
I shipped them, including the small tweaks discussed.
> Good point, I'm not sure what to do about it in the long run. I guess
> adding specific and verbose comments is all we can do right now ...
It's a research project! If our IDEs integrated lots of performance
data, they could tint the color underneath the code based on how hot it
is known to run. If you have to don sunglasses then it's probably a bad
place to coast on convenience implicits.
Sun, 2011-06-12, 08:37
#5
Re: Re: Scala 2.9 Compiler Performance
Another observation regarding memory: The compiler caches almost all types ever created during a compiler run. Any operation that instantiates type params with something fresh or otherwise creates a type that has not been encountered so far will add to memory footprint. Adding more caching could actually bring memory use down by reducing the number of slightly different types (like the n-th instantiation of some type with fresh type vars).
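The caching idea amounts to hash-consing: intern structurally equal types so repeated instantiations share one canonical object. A standalone sketch, assuming a toy type language (the real compiler's Type hierarchy is far richer):

```scala
import scala.collection.mutable

object TyCache {
  // Toy stand-ins for compiler types.
  sealed trait Ty
  final case class TyCon(name: String, args: List[Ty]) extends Ty

  private val interned = mutable.HashMap.empty[Ty, Ty]

  // Hash-consing: structurally equal types share one canonical instance,
  // so the n-th instantiation of List[Int] costs a lookup, not an object.
  def intern(t: Ty): Ty = interned.getOrElseUpdate(t, t)

  def main(args: Array[String]): Unit = {
    val int = intern(TyCon("Int", Nil))
    val a   = intern(TyCon("List", List(int)))
    val b   = intern(TyCon("List", List(int)))
    println(a eq b) // true: one shared instance instead of two allocations
  }
}
```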
- Tiark
On Jun 11, 2011, at 6:55 PM, martin odersky wrote:
One more thing: We know the compiler performance is largely memory bound. Generating lots of short-lived objects is a strain; generating long-lived objects is an even larger one. So, for instance, adding lots of new fields or objects to trees is almost certainly very bad for performance.
-- Martin
Sun, 2011-06-12, 09:07
#6
Re: Scala 2.9 Compiler Performance
On Jun 11, 2011, at 10:36 PM, Paul Phillips wrote:
> On 6/11/11 9:36 AM, Tiark Rompf wrote:
>> I rebased the changes on current trunk yesterday and I could commit
>> them later today or tomorrow. If you would like to do it, feel free
>> to go ahead though.
>
> I shipped them, including the small tweaks discussed.
Thanks! I hope we can get the RefinedType check in isImpossibleSubType back in, this one saves a lot of time for pimp-my-library style code.
Oh, and the changes should probably be 'review by odersky' even though we have talked some of them through a while ago.
>> Good point, I'm not sure what to do about it in the long run. I guess
>> adding specific and verbose comments is all we can do right now ...
>
> It's a research project! If our IDEs integrated lots of performance
> data, they could tint the color underneath the code based on how hot it
> is known to run. If you have to don sunglasses then it's probably a bad
> place to coast on convenience implicits.
How far along is YourKit's integration with Eclipse?
- Tiark
Sun, 2011-06-12, 12:37
#7
Re: Re: Scala 2.9 Compiler Performance
On Sun, Jun 12, 2011 at 10:00 AM, Tiark Rompf wrote:
> On Jun 11, 2011, at 10:36 PM, Paul Phillips wrote:
>
>> On 6/11/11 9:36 AM, Tiark Rompf wrote:
>>> I rebased the changes on current trunk yesterday and I could commit
>>> them later today or tomorrow. If you would like to do it, feel free
>>> to go ahead though.
>>
>> I shipped them, including the small tweaks discussed.
>
> Thanks! I hope we can get the RefinedType check in isImpossibleSubType back in, this one saves a lot of time for pimp-my-library style code.
Here's a minimal program that doesn't compile with the RefinedType optimization.
object Minimal {
  trait NewType[X]
  class Foo(val a: Int)
  // change the return type to Foo and it compiles.
  implicit def Unwrap[X](n: NewType[X]): X = sys.error("")
  def test(f: NewType[Foo]) = f.a // relies on the implicit conversion
}
Oddly enough, I can now reinstate r25051 (inference of the singleton
type for objects), and compile both this and scalaz.
-jason
Mon, 2011-06-13, 07:07
#8
Re: Re: Scala 2.9 Compiler Performance
On 6/12/11 4:28 AM, Jason Zaugg wrote:
> Oddly enough, I can now reinstate r25051 (inference of the singleton
> type for objects), and compile both this and scalaz.
Head-shakingly confirmed. I don't think there's anything wrong in an
absolute sense with either r25051 or those two lines in
isImpossibleSubType. I wish I had time to figure this out, because
there's probably something worth fixing out there.
In other news, r25080 knocks another 15 seconds off quick.comp.
Thu, 2011-06-16, 14:17
#9
Re: Re: Scala 2.9 Compiler Performance
For isImpossibleSubtype, I propose the following tweak:
case RefinedType(_, decls) => decls.nonEmpty && !sym1.isNonClassType /* can't rule out abstract types */ && tp1.member(decls.head.name) == NoSymbol
(The old formulation ruled out the possibility that X <: { val a: ? }, where X is some type param.)
I don't understand the relevance of the r25051 revert:
1) Start with r25072
I presume there was: 1b) Notice scalaz doesn't build
2) Revert r25051
3) Notice scalaz still doesn't build
4) Check out Infer.scala from r25068, right before Tiark's commits
5) scalaz builds
6) Re-instate r25051
7) scalaz still builds
I don't see how this implies r25051 is relevant? (It never affects whether scalaz builds.)
My hypothesis is that this order of events is equally descriptive:
1) Start with r25072
2) Notice scalaz doesn't build
3) Fix the problem in isImpossibleSubType
4) scalaz builds
I'm about to try this now.
cheers
adriaan
On Mon, Jun 13, 2011 at 7:59 AM, Paul Phillips <paulp@improving.org> wrote:
On 6/12/11 4:28 AM, Jason Zaugg wrote:
> Oddly enough, I can now reinstate r25051 (inference of the singleton
> type for objects), and compile both this and scalaz.
Head-shakingly confirmed. I don't think there's anything wrong in an
absolute sense with either r25051 or those two lines in
isImpossibleSubType. I wish I had time to figure this out, because
there's probably something worth fixing out there.
In other news, r25080 knocks another 15 seconds off quick.comp.
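Adriaan's guard can be paraphrased: a refinement type may only be ruled out as a supertype when the candidate is a concrete class that lacks the refinement's first member; abstract types (such as type parameters) must never be ruled out. A toy model of that logic (a standalone sketch with invented names, not the compiler's Types.scala):

```scala
object ImpossibleSubtypeSketch {
  // Toy candidate type: either a class with known members, or an abstract
  // type (like a type parameter X) whose members we cannot know yet.
  final case class Candidate(members: Set[String], isAbstractType: Boolean)

  // Mirrors: decls.nonEmpty && !sym1.isNonClassType &&
  //          tp1.member(decls.head.name) == NoSymbol
  def isImpossibleSubtype(tp1: Candidate, refinementDecls: List[String]): Boolean =
    refinementDecls.nonEmpty &&
      !tp1.isAbstractType && // can't rule out abstract types: X <: { val a: ? } may hold
      !tp1.members.contains(refinementDecls.head)

  def main(args: Array[String]): Unit = {
    val foo = Candidate(Set("a"), isAbstractType = false)
    val bar = Candidate(Set("b"), isAbstractType = false)
    val x   = Candidate(Set.empty, isAbstractType = true)
    println(isImpossibleSubtype(foo, List("a"))) // false: foo has member a
    println(isImpossibleSubtype(bar, List("a"))) // true: bar can be ruled out early
    println(isImpossibleSubtype(x,   List("a"))) // false: never rule out a type param
  }
}
```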
Thu, 2011-06-16, 15:47
#10
Re: Re: Scala 2.9 Compiler Performance
On 6/16/11 6:10 AM, Adriaan Moors wrote:
> I don't see how this implies r25051 is relevant? (It never affects
> whether scalaz builds)
I'm going to assume I blew the description, but scalaz does not build at
r25050 and does at r25051. Jason confirmed this, and I'm sure anyway:
you too can confirm it. Then subsequent changes somehow made it irrelevant.
Thu, 2011-06-16, 16:07
#11
Re: Re: Scala 2.9 Compiler Performance
On Thu, Jun 16, 2011 at 4:39 PM, Paul Phillips <paulp@improving.org> wrote:
On 6/16/11 6:10 AM, Adriaan Moors wrote:
> I don't see how this implies r25051 is relevant? (It never affects
> whether scalaz builds)
I'm going to assume I blew the description, but scalaz does not build at
r25050 and does at r25051. Jason confirmed this, and I'm sure anyway:
you too can confirm it. Then subsequent changes somehow made it irrelevant.
ok, I see -- I'll dig a little deeper, but compiles take forever on my machine
compiling full scalaz with my tweaked isImpossibleSubtype revealed it was not tweaked enough, btw
adriaan
Sun, 2011-06-19, 11:57
#12
Re: Re: Scala 2.9 Compiler Performance
On Thu, Jun 16, 2011 at 5:02 PM, Adriaan Moors wrote:
> ok, I see -- I'll dig a little deeper, but compiles take forever on my
> machine
> compiling full scalaz with my tweaked isImpossibleSubtype revealed it was
> not tweaked enough, btw
I've setup a CI build [1] of Scalaz against Scala 2.10.0-SNAPSHOT to
provide feedback on the correctness and performance of further
refinements to the twisty maze of inference and implicit search.
I'm doing this each time:
rm -rf ./project/boot/scala-2.10.0-SNAPSHOT
./sbt ";++2.10.0-SNAPSHOT;clean;update;test;package;publish-local;publish"
-jason
[1] http://jenkins.scala-tools.org/view/scalaz/
Sun, 2011-06-19, 17:57
#13
Re: Re: Scala 2.9 Compiler Performance
Is that Scala 6.0 or Scalaz trunk?
On Sun, Jun 19, 2011 at 07:50, Jason Zaugg wrote:
> On Thu, Jun 16, 2011 at 5:02 PM, Adriaan Moors wrote:
>> ok, I see -- I'll dig a little deeper, but compiles take forever on my
>> machine
>> compiling full scalaz with my tweaked isImpossibleSubtype revealed it was
>> not tweaked enough, btw
>
> I've setup a CI build [1] of Scalaz against Scala 2.10.0-SNAPSHOT to
> provide feedback on the correctness and performance of further
> refinements to the twisty maze of inference and implicit search.
>
> I'm doing this each time:
>
> rm -rf ./project/boot/scala-2.10.0-SNAPSHOT
> ./sbt ";++2.10.0-SNAPSHOT;clean;update;test;package;publish-local;publish"
>
> -jason
>
> [1] http://jenkins.scala-tools.org/view/scalaz/
>
Sun, 2011-06-19, 18:07
#14
Re: Re: Scala 2.9 Compiler Performance
On Sun, Jun 19, 2011 at 5:51 PM, Daniel Sobral <dcsobral@gmail.com> wrote:
> Is that Scala 6.0 or Scalaz trunk?
I want some of that Scala 6.0. ;)
Best,
Ismael
Sun, 2011-06-19, 18:07
#15
Re: Re: Scala 2.9 Compiler Performance
On Sun, Jun 19, 2011 at 6:51 PM, Daniel Sobral wrote:
> Is that Scala 6.0 or Scalaz trunk?
>
> On Sun, Jun 19, 2011 at 07:50, Jason Zaugg wrote:
>> I've setup a CI build [1] of Scalaz against Scala 2.10.0-SNAPSHOT to
>> provide feedback on the correctness and performance of further
>> refinements to the twisty maze of inference and implicit search.
It builds trunk (well, master!), corresponding to 6.0.2-SNAPSHOT.
I assume you are suggesting that it would also be useful to build a
stable version of Scalaz against a varying build of scalac. That's a
good idea.
-jason
Sun, 2011-06-19, 18:17
#16
Re: Re: Scala 2.9 Compiler Performance
On 19 June 2011 18:02, Ismael Juma <ismael@juma.me.uk> wrote:
> On Sun, Jun 19, 2011 at 5:51 PM, Daniel Sobral <dcsobral@gmail.com> wrote:
>> Is that Scala 6.0 or Scalaz trunk?
> I want some of that Scala 6.0. ;)
Is that the one with the direct neural interface and quantum typing?
--
Kevin Wright
Sun, 2011-06-19, 18:27
#17
Re: Re: Scala 2.9 Compiler Performance
On Sun, Jun 19, 2011 at 14:05, Jason Zaugg wrote:
> On Sun, Jun 19, 2011 at 6:51 PM, Daniel Sobral wrote:
>> Is that Scala 6.0 or Scalaz trunk?
>>
>> On Sun, Jun 19, 2011 at 07:50, Jason Zaugg wrote:
>>> I've setup a CI build [1] of Scalaz against Scala 2.10.0-SNAPSHOT to
>>> provide feedback on the correctness and performance of further
>>> refinements to the twisty maze of inference and implicit search.
>
> It builds trunk (well, master!), corresponding to 6.0.2-SNAPSHOT.
>
> I assume you are suggesting that it would also be useful to build a
> stable version of Scalaz against a varying build of scalac. That's a
> good idea.
Actually, I presumed you were building a stable version of Scalaz
against a varying build of scalac, and was suggesting that it might be
useful to have trunk -- ok, master -- as well. :-)