Re: Super slow for-comprehension when emulating basic for-loop (about 200k iterations/second).
Fri, 2010-08-27, 15:20
On Fri, Aug 27, 2010 at 4:15 PM, Lex <lexn82@gmail.com> wrote:
I knew the reason before I posted this. But that is not the point. The point is having simple code failing so badly. The point is having for loops (the most basic construct in a language) that behave not just badly, but unexpectedly badly. The point is having to write while loops in the language that is supposed to reduce LOC count.
I understand that for-comprehensions cannot be optimized to while loops in general. However, the cases of for(i <- a until b) and for(item <- collection) are both the most common and ubiquitous. They deserve to be optimized to proper while loops.
Simple things should be simple... Given how ubiquitous for loops are, I do not understand why the simple forms cannot be compiled as efficient while loops. for(i <- a until b) and for(item <- collection) are the most common cases.
How do you determine the implementation of foreach and whether or not you can while-ify it?
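For concreteness, a minimal, self-contained sketch of the two shapes under discussion -- what the simple for over an integer range desugars to today, and the while loop it could be rewritten to (the loop body is just illustrative):

object SimpleForms {
  def main(args: Array[String]): Unit = {
    val (a, b) = (0, 5)

    // `for (i <- a until b) println(i)` is rewritten by the compiler to:
    (a until b).foreach(i => println(i))

    // The requested translation for this common shape would be the equivalent while loop:
    var i = a
    while (i < b) {
      println(i)
      i += 1
    }
  }
}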
On Fri, Aug 27, 2010 at 10:07 AM, Ismael Juma <mlists@juma.me.uk> wrote:
On Fri, Aug 27, 2010 at 2:56 PM, Lex <lexn82@gmail.com> wrote:
> So JVM is not at fault here.
It helps to understand what is going on before making statements like
this. We said that the reason that the code runs much slower is the
lack of optimisation during static initialisation. Of course, this
means that low-level code suffers less from this issue than high-level
code. The latter does more work that would benefit from said JVM
optimisations, which are not being done.
Best,
Ismael
--
Viktor Klang,
Code Connoisseur
Work: www.akkasource.com
Code: github.com/viktorklang
Follow: twitter.com/viktorklang
Read: klangism.tumblr.com
Fri, 2010-08-27, 15:57
#2
Re: Super slow for-comprehension when emulating basic for-loop
re: performance of for loops on Ints, there has long been a ticket on
it, which y'all could go vote for:
https://lampsvn.epfl.ch/trac/scala/ticket/1338
Fri, 2010-08-27, 16:07
#3
Re: Super slow for-comprehension when emulating basic for-loop
On Fri, Aug 27, 2010 at 3:56 PM, Seth Tisue wrote:
> re: performance of for loops on Ints, there has long been a ticket on
> it, which y'all could go vote for:
> https://lampsvn.epfl.ch/trac/scala/ticket/1338
Yeah, it's amazing how this subject keeps being discussed here as if
it's something new.
Best,
Ismael
Fri, 2010-08-27, 16:07
#4
Re: Super slow for-comprehension when emulating basic for-loop
On Fri, Aug 27, 2010 at 9:59 AM, Nils Kilden-Pedersen <nilskp@gmail.com> wrote:
On Fri, Aug 27, 2010 at 9:49 AM, Randall R Schulz <rschulz@sonic.net> wrote:
On Friday August 27 2010, Nils Kilden-Pedersen wrote:
> ...
> >
> > Don't be so sure as to which is faster. Last time I benchmarked
> > something similar, the for comprehension was faster than the while
> > loop I was trying to replace it with.
>
> I just tried this again and, at least on an IndexedSeq, the while
> loop is infinitely slower, probably due to the apply(i) call being
> expensive.
If it's infinitely slower, you should not yet have the results...
I kid you not, it's still running.
For reference, this is the code:
val range = 1 to 1000000
val start = System.nanoTime
var total = 0L
//for (i <- range)
// total += i
val len = range.length
var i = 0
while (i < len)
total += range(i)
val time = (System.nanoTime - start) / 1000
println("Took %,d micros".format(time))
Fri, 2010-08-27, 16:07
#5
Re: Super slow for-comprehension when emulating basic for-loop
On Fri, Aug 27, 2010 at 11:00 AM, Nils Kilden-Pedersen <nilskp@gmail.com> wrote:
For reference, this is the code:
val range = 1 to 1000000
val start = System.nanoTime
var total = 0L
//for (i <- range)
// total += i
val len = range.length
var i = 0
while (i < len)
total += range(i)
val time = (System.nanoTime - start) / 1000
println("Took %,d micros".format(time))
This is a perfect example of why we want auto-conversion of for to while.
You forgot i += 1.
It is infinitely slower that way.
--Rex
Fri, 2010-08-27, 16:17
#6
Re: Super slow for-comprehension when emulating basic for-loop
On Fri, Aug 27, 2010 at 10:00 AM, Nils Kilden-Pedersen <nilskp@gmail.com> wrote:
On Fri, Aug 27, 2010 at 9:59 AM, Nils Kilden-Pedersen <nilskp@gmail.com> wrote:
On Fri, Aug 27, 2010 at 9:49 AM, Randall R Schulz <rschulz@sonic.net> wrote:
On Friday August 27 2010, Nils Kilden-Pedersen wrote:
> ...
> >
> > Don't be so sure as to which is faster. Last time I benchmarked
> > something similar, the for comprehension was faster than the while
> > loop I was trying to replace it with.
>
> I just tried this again and, at least on an IndexedSeq, the while
> loop is infinitely slower, probably due to the apply(i) call being
> expensive.
If it's infinitely slower, you should not yet have the results...
I kid you not, it's still running.
For reference, this is the code:
val range = 1 to 1000000
val start = System.nanoTime
var total = 0L
//for (i <- range)
// total += i
val len = range.length
var i = 0
while (i < len)
total += range(i)
val time = (System.nanoTime - start) / 1000
println("Took %,d micros".format(time))
Hahaha, I just realized that i is not incremented. No wonder :-)
Fri, 2010-08-27, 16:17
#7
Re: Super slow for-comprehension when emulating basic for-loop
On Fri, Aug 27, 2010 at 5:03 PM, Rex Kerr <ichoran@gmail.com> wrote:
On Fri, Aug 27, 2010 at 11:00 AM, Nils Kilden-Pedersen <nilskp@gmail.com> wrote:
For reference, this is the code:
val range = 1 to 1000000
val start = System.nanoTime
var total = 0L
//for (i <- range)
// total += i
val len = range.length
var i = 0
while (i < len)
total += range(i)
val time = (System.nanoTime - start) / 1000
println("Took %,d micros".format(time))
This is a perfect example of why we want auto-conversion of for to while.
You forgot i += 1.
It is infinitely slower that way.
Ouch, that was harsh
--Rex
--
Viktor Klang,
Code Connoisseur
Work: www.akkasource.com
Code: github.com/viktorklang
Follow: twitter.com/viktorklang
Read: klangism.tumblr.com
Fri, 2010-08-27, 16:17
#8
Re: Super slow for-comprehension when emulating basic for-loop
On Fri, Aug 27, 2010 at 11:07 AM, Viktor Klang <viktor.klang@gmail.com> wrote:
Ouch, that was harsh
Shoulda added a smiley. I forget the +=1 all the time. ("Crikey, I thought I optimized this pretty well? Why's it taking so long?")
But, seriously, it is one reason that I wish for loops would save me from myself.
--Rex
Fri, 2010-08-27, 16:27
#9
Re: Super slow for-comprehension when emulating basic for-loop
Thank you, Seth!
+1 for this ticket: https://lampsvn.epfl.ch/trac/scala/ticket/1338
On Fri, Aug 27, 2010 at 10:56 AM, Seth Tisue <seth@tisue.net> wrote:
re: performance of for loops on Ints, there has long been a ticket on
it, which y'all could go vote for:
https://lampsvn.epfl.ch/trac/scala/ticket/1338
--
Seth Tisue @ Northwestern University | http://tisue.net
lead developer, NetLogo: http://ccl.northwestern.edu/netlogo/
Fri, 2010-08-27, 16:27
#10
Re: Super slow for-comprehension when emulating basic for-loop
On Fri, Aug 27, 2010 at 10:11 AM, Rex Kerr <ichoran@gmail.com> wrote:
But, seriously, it is one reason that I wish for loops would save me from myself.
Isn't it good then that for loops are also faster? You get to eat your cake and have it too.
Fri, 2010-08-27, 16:27
#11
Re: Super slow for-comprehension when emulating basic for-loop
On Friday August 27 2010, Rex Kerr wrote:
> On Fri, Aug 27, 2010 at 10:47 AM, Nils Kilden-Pedersen wrote:
> > ...
> >
> > I just tried this again and, at least on an IndexedSeq, the while
> > loop is infinitely slower, probably due to the apply(i) call being
> > expensive.
>
> I'd forgotten how nonperformant the indexing of some IndexedSeqs was.
> With Vector[Int], it's about 2.8x slower to use something like
> ".sum" instead of using the while loop to compute it. It's 30%
> slower to use for than while.
>
> With ArrayBuffer[Int], both sum and for are 6-7x slower than while.
>
> With Array[Int], for is 16x slower and sum is 28x slower than while.
>
> With Range, I find that for is 4x slower (unlike Nils--I'm uncertain
> of the difference; maybe his code doesn't have time to get inlined?).
>
> --Rex
I assume we're all following the usual rules of microbenchmarking for
the JVM when performing these tests, right?
Randall Schulz
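For reference, a minimal sketch of the kind of warm-up-and-repeat harness those rules call for (structure and iteration counts are arbitrary, not a rigorous benchmark):

object Bench {
  // Run the body several times and keep the best time, so the JIT has a
  // chance to compile the hot code before the measurement that counts.
  def time(label: String, reps: Int)(body: => Long): Unit = {
    var best = Long.MaxValue
    var sink = 0L // keep results live so the work is not optimized away
    var r = 0
    while (r < reps) {
      val t0 = System.nanoTime
      sink += body
      best = math.min(best, System.nanoTime - t0)
      r += 1
    }
    println(label + ": " + best / 1000 + " micros (checksum " + sink + ")")
  }

  def main(args: Array[String]): Unit = {
    val range = 1 to 1000000
    time("while", 20) {
      var total = 0L
      var i = 0
      while (i < range.length) { total += range(i); i += 1 }
      total
    }
    time("for", 20) {
      var total = 0L
      for (i <- range) total += i
      total
    }
  }
}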
Fri, 2010-08-27, 16:27
#12
Re: Super slow for-comprehension when emulating basic for-loop
On Fri, Aug 27, 2010 at 10:17 AM, Rex Kerr <ichoran@gmail.com> wrote:
With Range, I find that for is 4x slower (unlike Nils--I'm uncertain of the difference; maybe his code doesn't have time to get inlined?).
It's certainly an imperfect benchmark, but these are the numbers I get (1st is while, 2nd is for):
C:\Users\Nils>scala test.scala
Total is 500000500000 and took 85,633 micros
C:\Users\Nils>scala test.scala
Total is 500000500000 and took 11,630 micros
Fri, 2010-08-27, 16:37
#13
Re: Super slow for-comprehension when emulating basic for-loop
Ok, running this code, while is about 7 times slower:
val range = 1 to 1000000
val start = System.nanoTime
var total = 0L
//for (i <- range)
// total += i
val len = range.length
var i = 0
while (i < len) {
  total += range(i)
  i += 1
}
val time = (System.nanoTime - start) / 1000
println("Took %,d micros".format(time))
Fri, 2010-08-27, 16:37
#14
Re: Super slow for-comprehension when emulating basic for-loop
On Fri, Aug 27, 2010 at 10:22 AM, Randall R Schulz <rschulz@sonic.net> wrote:
I assume we're all following the usual rules of microbenchmarking for
the JVM when performing these tests, right?
No, I didn't bother for two reasons:
1. There's a factor 7 difference.
2. The original poster was concerned about this for scripts.
Fri, 2010-08-27, 16:37
#15
Re: Super slow for-comprehension when emulating basic for-loop
Since no one has said it: are you using scalac with '-optimize'? Its effect
is currently quite limited, but for simple things it might make a
difference.
On Fri, Aug 27, 2010 at 5:25 PM, Nils Kilden-Pedersen wrote:
> On Fri, Aug 27, 2010 at 10:22 AM, Randall R Schulz
> wrote:
>>
>> I assume we're all following the usual rules of microbenchmarking for
>> the JVM when performing these tests, right?
>
> No, I didn't bother for two reasons:
> 1. There's a factor 7 difference.
> 2. The original poster was concerned about this for scripts.
>
>
Fri, 2010-08-27, 16:47
#16
Re: Super slow for-comprehension when emulating basic for-loop
On Fri, Aug 27, 2010 at 11:22 AM, Randall R Schulz <rschulz@sonic.net> wrote:
On Friday August 27 2010, Rex Kerr wrote:
> With Range, I find that for is 4x slower (unlike Nils--I'm uncertain
> of the difference; maybe his code doesn't have time to get inlined?).
>
> --Rex
I assume we're all following the usual rules of microbenchmarking for
the JVM when performing these tests, right?
I have a microbenchmarking library that makes sure that all the rules get followed, so my timings should be as reflective of reality as microbenchmarks ever are, unless there are bugs in my library code.
(Well, I didn't use my "warmup" method because that made the test not fit on one line, so the JIT compilation times are rolled in there too, but I had runtimes of seconds, so that should be close to negligible.)
--Rex
Fri, 2010-08-27, 16:47
#17
Re: Super slow for-comprehension when emulating basic for-loop
On Fri, Aug 27, 2010 at 10:30 AM, Johannes Rudolph <johannes.rudolph@googlemail.com> wrote:
Since no one has said it: are you using scalac with '-optimize'? Its effect
is currently quite limited, but for simple things it might make a
difference.
I'm running this as a script, but if this behaves the way I think, while just got even worse, with for being unaffected:
C:\Users\Nils>scala -optimize test.scala
Total is 500000500000 and took 103,779 micros
C:\Users\Nils>scala -optimize test.scala
Total is 500000500000 and took 11,775 micros
Fri, 2010-08-27, 16:57
#18
Re: Super slow for-comprehension when emulating basic for-loop
On Friday August 27 2010, Lex wrote:
> > Now, in terms of optimising for-expressions where possible. I am
> > all for this with integer range loops and the like. If that is all
> > you desire out of this, then we agree.
> >
> > If you are arguing that objects should not use static initializers
> > for their construction, I'd ask you to think through the problem a
> > bit more.
>
> It is possible to use lazy vals to initialize singleton objects on
> demand. While that would remedy the worst case, it would also slow
> down the overall performance. So no, I am not suggesting to avoid
> static initializer for singleton construction. One can always use a
> package object with lazy vals when absolutely necessary.
Singleton objects are essentially initialized on-demand, by the JVM's
class loader.
You can use lazy vals within an object, of course.
Randall Schulz
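A small illustration of the two initialization points being discussed (object and member names are made up for the example):

object Eager {
  val table = { println("Eager: building table"); (1 to 1000).toArray }
  def ping = "eager ok"
}

object Lazy {
  lazy val table = { println("Lazy: building table"); (1 to 1000).toArray }
  def ping = "lazy ok"
}

object InitDemo {
  def main(args: Array[String]): Unit = {
    println("before any access") // neither object has been initialized yet
    println(Eager.ping)          // loading Eager runs its static initializer, so table is built now
    println(Lazy.ping)           // Lazy's initializer runs, but the lazy val is not built yet
    println(Lazy.table.length)   // only this first access builds the table
  }
}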
Fri, 2010-08-27, 18:07
#19
Re: Super slow for-comprehension when emulating basic for-loop
On Fri, Aug 27, 2010 at 11:37 AM, Nils Kilden-Pedersen <nilskp@gmail.com> wrote:
On Fri, Aug 27, 2010 at 10:30 AM, Johannes Rudolph <johannes.rudolph@googlemail.com> wrote:
Since no one has said it: are you using scalac with '-optimize'? Its effect
is currently quite limited, but for simple things it might make a
difference.
I'm running this as a script, but if this behaves the way I think, while just got even worse, with for being unaffected:
C:\Users\Nils>scala -optimize test.scala
Total is 500000500000 and took 103,779 micros
C:\Users\Nils>scala -optimize test.scala
Total is 500000500000 and took 11,775 micros
This difference is mostly due to some sort of startup issue--things are rather weird with scripts this way, actually. (Try just cutting-and-pasting sections of code and all sorts of strange timings can start happening, not the max-2x-worse that you'd intuitively expect for duplication!)
For large numbers, this seems to plateau on my machine at about a 2x difference in timing, with the for loop having the advantage.
In the slightly different case I was running--where I did not call range(i) but implemented the increment myself in the while loop, and was in the REPL, which has trouble applying specialization at times (and apparently didn't manage to do a good job here)--it was 4x the other way.
In a more honest head-to-head with large numbers, compiled, while and for are dead even for this specific case, because of the specialization of Function1. Really quite impressive, actually!
But this is _only_ true because Range is fixed as an IndexedSeq[Int]. If it were any type that Function1 didn't specialize on (e.g. Short), or if it were generic and not specialized like most of the library, it wouldn't have this type of performance.
This does make me wonder if the library can be specialized in such a way that this performance will be preserved automatically. My own attempts have been less successful than I'd hoped (for example, I wrote a "fast C for loop" that falls victim to multiple dispatch slowdowns (failure of inlining)), but maybe either the Scala team can do better than I did, or the JDK7 JVM will be smarter than 6.
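For reference, a minimal, self-contained sketch of the desugaring described above: the for over a Range becomes a foreach call taking a Function1, and because Function1 is specialized on Int and Range is an IndexedSeq[Int], the apply call can avoid boxing (names are illustrative):

object ForeachDesugaring {
  def main(args: Array[String]): Unit = {
    val range = 1 to 1000000

    // The for-comprehension...
    var total = 0L
    for (i <- range) total += i

    // ...is compiled to a foreach call with a Function1[Int, Unit] argument:
    var total2 = 0L
    range.foreach((i: Int) => total2 += i)

    // With a non-specialized element type (say Short, or a generic collection),
    // every element would be boxed on its way into the closure.
    println(total == total2) // true
  }
}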
Anyway, if I take relatively large tests--summing numbers from 1 to 1,000,000,000 if using a non-explicit collection, or summing from 1 to 1,000,000 a total of 1,000 times if using an explicit collection, I get the following:
Collection       For Loop  While Loop  Faster       Notes
---------------  --------  ----------  -----------  ----------
Vector           7.8s      14.6s       for, 1.9x
ArrayBuffer      2.9s      3.2s        for, 1.1x
Array            3.2s      0.33s       while, 9.7x
Range            0.43s     0.69s       for, 1.6x    First 2
Range            1.7s      0.69s       while, 2.5x  Thereafter
var i=0 (range)            0.44s       (tie)
Range exhibited strange de-optimizing behavior. Other variations were small (<20%). I haven't figured out if there's a JVM flag that can stop the de-optimization. Ideas?
Anyway, for demanding computations, the Array answer at 0.33s and the for-Range and while-var at 0.44s are the numbers to aim for. The while-range might be tolerable (0.7s), but everything else is varying levels of embarrassingly slow.
--Rex
Fri, 2010-08-27, 19:17
#20
Re: Super slow for-comprehension when emulating basic for-loop
Rex,
What Scala version did you use? Iulian committed quite a few changes
recently that affect the optimizer and Range. It would be interesting
to see if trunk behaves differently.
Best,
Ismael
Fri, 2010-08-27, 23:07
#21
Re: Super slow for-comprehension when emulating basic for-loop
I used 2.8.0 final. I just ran the tests again with r22850, and everything is pretty much the same (with for maybe slightly slower on ArrayBuffer and Array, but that could be trial-to-trial variability depending on the processor that is running the thread if the memory accesses are not identical across cores), EXCEPT
for-Range now runs at 1.15s always (a win of 1.6x for while loops, even unfairly making them call Range.apply).
--Rex
On Fri, Aug 27, 2010 at 2:13 PM, Ismael Juma <mlists@juma.me.uk> wrote:
Rex,
What Scala version did you use? Iulian committed quite a few changes
recently that affect the optimizer and Range. It would be interesting
to see if trunk behaves differently.
Best,
Ismael
Sat, 2010-08-28, 10:47
#22
Type erasure question
Hello!
Can anyone explain to me why the following code has problems with type
erasure and whether there's a way around it? It looks like all the type
checking is dynamic and should work.
def filterList[T](list: List[Any]) =
list.filter(_.isInstanceOf[T])
Regards,
Jens
Sat, 2010-08-28, 11:17
#23
Re: Type erasure question
On Sat, Aug 28, 2010 at 11:40:58AM +0200, Jens Tinz wrote:
> Can anyone explain to me why the following code has problems with type
> erasure and whether there's a way around it? It looks like all the
> type checking is dynamic and should work.
>
> def filterList[T](list: List[Any]) =
> list.filter(_.isInstanceOf[T])
Have you googled type erasure? This is pretty close to the definitive
example of what you can't do. There is no "T" at the time the instance
check is performed: there is only Object, which everything is.
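To make the "only Object" point concrete, a small self-contained demonstration (using the filterList definition from the question):

object ErasureDemo {
  def filterList[T](list: List[Any]) = list.filter(_.isInstanceOf[T])

  def main(args: Array[String]): Unit = {
    // T is erased, so the check is effectively isInstanceOf[Object] and
    // every element passes (the compiler warns about exactly this):
    println(filterList[String](List(1, "two", 3.0))) // List(1, two, 3.0), not List(two)
  }
}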
Sat, 2010-08-28, 11:17
#24
Re: Type erasure question
This won't work because someone at Sun decided to throw away all
generic type information at runtime.
How to make it work:
def filterList[T](list: List[_])(implicit mf:Manifest[T]) =
list.filter(mf.erasure.isInstance)
On 28.08.2010 11:40, Jens Tinz wrote:
> Hello!
>
> Can anyone explain to me why the following code has problems with type
> erasure and whether there's a way around it? It looks like all the
> type checking is dynamic and should work.
>
> def filterList[T](list: List[Any]) =
> list.filter(_.isInstanceOf[T])
>
>
> Regards,
> Jens
>
>
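For what it's worth, a small self-contained example of the Manifest-based version in use (object and value names are made up):

object FilterListDemo {
  // Same idea as in the reply above: carry the runtime class via an implicit Manifest.
  def filterList[T](list: List[_])(implicit mf: Manifest[T]): List[T] =
    list.filter(x => mf.erasure.isInstance(x)).asInstanceOf[List[T]]

  def main(args: Array[String]): Unit = {
    val mixed: List[Any] = List(1, "one", 2.0, "two")
    println(filterList[String](mixed)) // List(one, two)
    // Caveat: list elements are boxed, so ask for the boxed type rather than Int,
    // whose Manifest erasure is the primitive int class and matches nothing here.
    println(filterList[java.lang.Integer](mixed)) // List(1)
  }
}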
Sat, 2010-08-28, 11:27
#25
Re: Type erasure question
The type of T is lost at runtime and isInstanceOf relies on it, so
you have to do this:
def filterList[T](list: List[Any])(implicit m: Manifest[T]) =
list.filter(x => m.erasure.isAssignableFrom(x.getClass))
It's a bit uglier, but on the caller side it will be equivalent.
Cheers
On Sat, Aug 28, 2010 at 5:40 AM, Jens Tinz <jens@gamemakers.de> wrote:
Hello!
Can anyone explain to me why the following code has problems with type erasure and whether there's a way around it? It looks like all the type checking is dynamic and should work.
def filterList[T](list: List[Any]) =
list.filter(_.isInstanceOf[T])
Regards,
Jens
Sat, 2010-08-28, 11:37
#26
Re: Type erasure question
Not quite... erasure is a purely compiler-side thing; the JVM isn't involved at all.
It was actually our very own Odersky who implemented erasure in the GJ compiler: http://lampwww.epfl.ch/gj/
This was done so that GJ could interoperate with existing Java collections and libraries that (at that time) were all about `Object`. Later, GJ became the basis of the Java 1.5 reference compiler, but for backward compatibility reasons it was decided not to go back and reify types in the JVM. It would have been a herculean effort with relatively little gain, as erasure was already seen to be working for 99% of scenarios.
Also, don't forget that you *can* still use reflection to query the reified type of method/constructor parameters and of member variables; it's just object instances where type parameters are fully erased. The technique is, however, quite heavyweight, as you often have to walk up the object hierarchy until you find a concrete type. The newer type conversion logic in Spring 3.x shows exactly how painful this can get: https://src.springframework.org/svn/spring-framework/trunk/org.springframework.core/src/main/java/org/springframework/core/GenericTypeResolver.java
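As a rough sketch of that reflection technique (class and member names are made up; the exact output depends on the compiler's emitted signatures):

import java.lang.reflect.ParameterizedType

class Holder {
  def names: List[String] = Nil
}

object ReifiedMemberTypes {
  def main(args: Array[String]): Unit = {
    // The generic signature of the member survives in the class file,
    // even though a List instance itself carries no element type.
    val m = classOf[Holder].getDeclaredMethod("names")
    m.getGenericReturnType match {
      case p: ParameterizedType =>
        println(p.getActualTypeArguments.toList) // e.g. List(class java.lang.String)
      case other =>
        println("no generic signature: " + other)
    }
  }
}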
On 28 August 2010 11:16, HamsterofDeath <h-star@gmx.de> wrote:
this won't work because someone at sun decided to throw away all
generic type information at runtime.
how to make it work:
def filterList[T](list: List[_])(implicit mf:Manifest[T]) =
list.filter(mf.erasure.isInstance)
On 28.08.2010 11:40, Jens Tinz wrote:
> Hello!
>
> Can anyone explain to me why the following code has problems with type
> erasure and whether there's a way around it? It looks like all the
> type checking is dynamic and should work.
>
> def filterList[T](list: List[Any]) =
> list.filter(_.isInstanceOf[T])
>
>
> Regards,
> Jens
>
>
--
Kevin Wright
mail/google talk: kev.lee.wright@gmail.com
wave: kev.lee.wright@googlewave.com
skype: kev.lee.wright
twitter: @thecoda
Sat, 2010-08-28, 11:57
#27
Re: Type erasure question
Silly me. It still feels like the compiler should generate code for
the type a function is called with. I guess I've been a C++ programmer
for too long.
val tst = filterList[Int](List(1, "a")) // T is known at compile time, isn't it?
On 28.08.2010 12:16, HamsterofDeath wrote:
> def filterList[T](list: List[_])(implicit mf:Manifest[T]) =
> list.filter(mf.erasure.isInstance)
This version compiles. Thank you very much.
Sat, 2010-08-28, 16:27
#28
Re: Type erasure question
T is known when the call site was compiled, but the method being called was already compiled long ago.
On Sat, Aug 28, 2010 at 7:37 AM, Jens Tinz <jens@gamemakers.de> wrote:
Silly me. It still feels like the compiler should generate code for the type a function is called with. I guess I've been a C++ programmer for too long.
val tst = filterList[Int](List(1, "a")) // T is known at compile time, isn't it?
On 28.08.2010 12:16, HamsterofDeath wrote:
def filterList[T](list: List[_])(implicit mf:Manifest[T]) =
list.filter(mf.erasure.isInstance)
This version compiles. Thank you very much.
--
Daniel C. Sobral
I travel to the future all the time.
Add an "Unwhileable" trait that specifically says that it's not safe to replace foreach with indexed iteration.
Though I have to imagine that a foreach that didn't work that way would be awfully confusing if it was a descendant of IndexedSeq. Especially IndexedSeqOptimized. It hardly deserves its name if it can't be foreached that way; that could just be part of the contract of IndexedSeqOptimized, and if you overrode foreach in a way that wouldn't work, you'd be breaking the contract.
Or you could have a "Whileable" trait that got mixed in to things that you wanted, and by default it would be mixed into Array and Range.
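A minimal sketch of what such a marker trait might look like -- all names are hypothetical, nothing like this exists in the library:

import scala.collection.IndexedSeq

// Collections mixing this in would promise that indexed access from
// 0 until length is a faithful substitute for their foreach.
trait Whileable[+A] extends IndexedSeq[A]

object WhileableRewrite {
  // The rewrite the compiler could apply whenever the receiver is statically
  // known to be Whileable: an index-driven while loop instead of foreach.
  def fastForeach[A](xs: Whileable[A])(f: A => Unit): Unit = {
    var i = 0
    val len = xs.length
    while (i < len) {
      f(xs(i))
      i += 1
    }
  }
}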
On Fri, Aug 27, 2010 at 10:24 AM, Andreas Flierl <andreas@flierl.eu> wrote:
Indeed--I've seen some of the conversations--but as someone who spends an inordinate amount of time writing
var i = start
while (i < end) {
  doWhateverInvolving(i)
  i += 1
}
instead of
for (i <- start until end) doWhateverInvolving(i)
because the former runs ten times faster on primitives, I have trouble understanding the supposedly greater weight of the cons. If one is not trying to do high-performance stuff routinely, one doesn't feel the weight of the pros, I guess.
It's not like the compiler is completely unaware of how to rewrite stuff (Manifest, (x,y), x => y, etc..).
On Fri, Aug 27, 2010 at 10:30 AM, Nils Kilden-Pedersen <nilskp@gmail.com> wrote:
In the case of primitives, it's amazingly slower. One _could_ apply the optimization only in the case of ranges and arrays of primitives, which are certainly quite slow. If specialization gets pushed all the way through, the penalty ought to go down to about 2x on average (and be equal in special cases) according to the tests I've done. At that point, it probably wouldn't be worth it.
But I'm not yet convinced that specialization can be pushed through the collections library as it stands without some unacceptable outcome, whether it be code bloat or everything slowing down because of having to handle special cases at runtime or insufferably complex logic or whatnot.
--Rex