
Re: Arrays: Disappointed about ridiculous slow scala code compared to Java

31 replies
Johannes Rudolph
Joined: 2008-12-17,
User offline. Last seen 29 weeks 20 hours ago.

On Wed, Nov 4, 2009 at 3:00 PM, Burkhard Ludwig
wrote:
> Victor,
>
>> 30% is not a magnitude.
>
> A factor of 20 is and that was what I consistently observed, and I did write
> it. I just checked.
> Sigh.
> Well, thanks for the hint to @specialized. After I found out how to trigger
> it, it helped.
> It brings the timings into a range where they belong. So the rule of thumb
> is: when you use an Array[T], don't forget to write "def f[@specialized
> T](...)"
> Greetings,
> Burkhard.

Your benchmark is flawed. If you used the bound (<: AnyRef)

def insert_sort[T <: AnyRef](h: Int, a: Array[T], first: Int, last: Int)(c: Comparator[T]) {
  var i = first + h
  while (i < last) {
    val pivot = a(i)
    var j = i
    while (j >= h && c.compare(pivot, a(j - h)) < 0) {
      a(j) = a(j - h)
      j -= h
    }
    a(j) = pivot
    i += h
  }
}

which is equivalent to Java's definition

static <T> void insert_sort(int h, T[] a, int first, int last,
                            Comparator<? super T> c)

the compiled methods are almost identical, as is the performance. The
underlying problem is that if you don't write the <: AnyRef bound,
some boxing happens which, in the case of the Java example, is done
before timing, in the initialization phase.
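To make that last point concrete, here is a minimal Java sketch of the generic insertion sort above (class and helper names are mine, not from the thread; assumes a modern JDK for Comparator.naturalOrder). Because the generic method can only take an Integer[], the ints are boxed while the array is filled, i.e. before any timing would start:

```java
import java.util.Comparator;

public class BoxingBeforeTiming {
    // Generic insertion sort in the style of the thread's Java version.
    static <T> void insertSort(int h, T[] a, int first, int last,
                               Comparator<? super T> c) {
        for (int i = first + h; i < last; i += h) {
            T pivot = a[i];
            int j = i;
            while (j >= h && c.compare(pivot, a[j - h]) < 0) {
                a[j] = a[j - h];
                j -= h;
            }
            a[j] = pivot;
        }
    }

    public static void main(String[] args) {
        Integer[] a = new Integer[1000];
        // Boxing happens here, during initialization...
        for (int i = 0; i < a.length; i++) a[i] = a.length - i;
        long t0 = System.nanoTime();   // ...not inside the timed region.
        insertSort(1, a, 0, a.length, Comparator.<Integer>naturalOrder());
        long dt = System.nanoTime() - t0;
        System.out.println(a[0] + ".." + a[a.length - 1]); // prints 1..1000
    }
}
```

With h = 1 this is plain insertion sort; the point is only that the comparator never sees a primitive, so the Java version pays its boxing cost outside the benchmark window.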

nilskp
Joined: 2009-01-30,
User offline. Last seen 1 year 27 weeks ago.
Re: Arrays: Disappointed about ridiculous slow scala code comp
Damn, I thought it might have been some cool new syntax, although I would have expected Array[Int | Long].

On Wed, Nov 4, 2009 at 8:03 AM, Ricky Clarkson <ricky.clarkson@gmail.com> wrote:
He meant "Array[Int] or Array[Long]".

2009/11/4 Nils Kilden-Pedersen <nilskp@gmail.com>:
> On Wed, Nov 4, 2009 at 7:29 AM, Viktor Klang <viktor.klang@gmail.com> wrote:
>>
>> either use @specialized
>
> I thought @specialized was for class definitions only?
>
>>
>> or Array[Int/Long] etc
>
> Is that valid syntax?
>



--
Ricky Clarkson
Java and Scala Programmer, AD Holdings
+44 1565 770804
Skype: ricky_clarkson
Google Talk: ricky.clarkson@gmail.com
Google Wave: ricky.clarkson@googlewave.com

Johannes Rudolph
Joined: 2008-12-17,
User offline. Last seen 29 weeks 20 hours ago.
Re: Arrays: Disappointed about ridiculous slow scala code comp

This is probably nothing new to you, but that's somewhat possible right now
(aside from the ugly prefix notation in the output)

On Wed, Nov 4, 2009 at 3:13 PM, Ricky Clarkson wrote:
> Array[Either[Int, Long]] works.  Though, yes, I'd prefer Either to be called |

scala> type |[A,B] = Either[A,B]
defined type alias $bar

scala> Array[Int | Long](Left(5),Right(12L))
res6: Array[|[Int,Long]] = Array(Left(5), Right(12))

and even

scala> implicit def makeLeft[A,B](a:A):Either[A,B] = Left(a)
makeLeft: [A,B](A)Either[A,B]

scala> implicit def makeRight[A,B](b:B):Either[A,B] = Right(b)
makeRight: [A,B](B)Either[A,B]

scala> Array[Int|String](5,"Wurst",123,2,6)
res8: Array[|[Int,String]] = Array(Left(5), Right(Wurst), Left(123),
Left(2), Left(6))

Ricky Clarkson
Joined: 2008-12-19,
User offline. Last seen 3 years 2 weeks ago.
Re: Arrays: Disappointed about ridiculous slow scala code comp

Array[Either[Int, Long]] works. Though, yes, I'd prefer Either to be called |

2009/11/4 Nils Kilden-Pedersen :
> Damn, I though it might have been some cool new syntax, although I would
> have expected Array[Int | Long].
>
> On Wed, Nov 4, 2009 at 8:03 AM, Ricky Clarkson
> wrote:
>>
>> He meant "Array[Int] or Array[Long]".
>>
>> 2009/11/4 Nils Kilden-Pedersen :
>> > On Wed, Nov 4, 2009 at 7:29 AM, Viktor Klang
>> > wrote:
>> >>
>> >> either use @specialized
>> >
>> > I thought @specialized was for class definitions only?
>> >
>> >>
>> >> or Array[Int/Long] etc
>> >
>> > Is that valid syntax?
>> >
>>
>>
>>
>> --
>> Ricky Clarkson
>> Java and Scala Programmer, AD Holdings
>> +44 1565 770804
>> Skype: ricky_clarkson
>> Google Talk: ricky.clarkson@gmail.com
>> Google Wave: ricky.clarkson@googlewave.com
>
>

Ricky Clarkson
Joined: 2008-12-19,
User offline. Last seen 3 years 2 weeks ago.
Re: Arrays: Disappointed about ridiculous slow scala code comp

I had an idea something like that was possible. Does that scale up to
X | Y | Z, and work for pattern-matching?

2009/11/4 Johannes Rudolph :
> I tell you probably nothing new but that's somewhat possible right now
> (aside from the ugly prefix notation in outputs)
>
> On Wed, Nov 4, 2009 at 3:13 PM, Ricky Clarkson wrote:
>> Array[Either[Int, Long]] works.  Though, yes, I'd prefer Either to be called |
>
> scala> type |[A,B] = Either[A,B]
> defined type alias $bar
>
> scala> Array[Int | Long](Left(5),Right(12L))
> res6: Array[|[Int,Long]] = Array(Left(5), Right(12))
>
> and even
>
> scala> implicit def makeLeft[A,B](a:A):Either[A,B] = Left(a)
> makeLeft: [A,B](A)Either[A,B]
>
> scala> implicit def makeRight[A,B](b:B):Either[A,B] = Right(b)
> makeRight: [A,B](B)Either[A,B]
>
> scala> Array[Int|String](5,"Wurst",123,2,6)
> res8: Array[|[Int,String]] = Array(Left(5), Right(Wurst), Left(123),
> Left(2), Left(6))
>
> --
> Johannes
>
> -----------------------------------------------
> Johannes Rudolph
> http://virtual-void.net
>

Johannes Rudolph
Joined: 2008-12-17,
User offline. Last seen 29 weeks 20 hours ago.
Re: Arrays: Disappointed about ridiculous slow scala code comp

On Wed, Nov 4, 2009 at 3:24 PM, Randall R Schulz wrote:
> On Wednesday November 4 2009, Viktor Klang wrote:
>> Sigh
>>
>> I'm tired of performance envy involving arrays.
>> Arrays are a shallow abstraction over an implementation detail
>> (contiguous memory).
>> They are also horridly implemented in the JVM (and in Java).
>
> I'd be happy to use any abstraction for integer-indexed collections
> whose performance rivals that of arrays. Short of that, it's hardly
> feasible to eschew arrays for production code.

Shouldn't specialization solve most of these problems? (In Burkhard's
benchmark it wasn't faster because the expensive step was the boxing,
and the boxing was necessary because Comparator, which isn't
specialized, was used.)

ebowman
Joined: 2009-04-13,
User offline. Last seen 1 year 30 weeks ago.
Re: Arrays: Disappointed about ridiculous slow scala code comp

Johannes Rudolph wrote:
> On Wed, Nov 4, 2009 at 3:24 PM, Randall R Schulz wrote:
>
>> On Wednesday November 4 2009, Viktor Klang wrote:
>>
>>> Sigh
>>>
>>> I'm tired of performance envy involving arrays.
>>> Arrays are a shallow abstraction over an implementation detail
>>> (contiguous memory).
>>> They are also horridly implemented in the JVM (and in Java).
>>>
>> I'd be happy to use any abstraction for integer-indexed collections
>> whose performance rivals that of arrays. Short of that, it's hardly
>> feasible to eschew arrays for production code.
>>
>
> Shouldn't specialization solve most of these problems? (In Burkhard's
> benchmark it wasn't faster because the expensive step was the boxing,
> and the boxing was necessary because Comparator, which isn't
> specialized, was used.)
>
>

I don't see any improvement from using @specialized in this case!

Dave Griffith
Joined: 2009-01-14,
User offline. Last seen 42 years 45 weeks ago.
Re: Arrays: Disappointed about ridiculous slow scala code comp

Wow. I can't tell if that's very, very right or very, very wrong. Also not
sure whether I would want those implicits in Predef, or far away from it.
Wow.

--Dave Griffith

Johannes Rudolph
Joined: 2008-12-17,
User offline. Last seen 29 weeks 20 hours ago.
Re: Arrays: Disappointed about ridiculous slow scala code comp

On Wed, Nov 4, 2009 at 3:28 PM, Ricky Clarkson wrote:
> I had an idea something like that was possible.  Does that scale up to
> X | Y | Z, and work for pattern-matching?
The type definition probably does. The implicit builders don't. As you
can imagine, they don't even work for types A and B where one type is
a subtype of the other.

Burkhard Ludwig
Joined: 2009-04-18,
User offline. Last seen 42 years 45 weeks ago.
Re: Arrays: Disappointed about ridiculous slow scala code comp
Hello everybody,
There is something funny about comments on performance; I have observed this for decades. Performance seems to be a kind of dirty word, or you need some sort of license to discuss it. But I finally got some advice.
I want to summarize a bit. (I picked just one arbitrary post for this purpose).
My starting point was that when I do what seems the obvious to me I get an unexpectedly slow code.
The "obvious" was
  • store some data in an Array[Int]
  • write a method which is slightly more general than needed at the moment, namely a 
       def f[T](a: Array[T], ...) instead of def f(a: Array[Int], ...)
  • use the method with an Array[Int]
The "unexpectedly slow" was to be quantified. BTW: It is unavoidable to compare apples and oranges in this context, simply because that is exactly the question: apple or orange. There was "Java" in the comparison since it provides the limit for what can be obtained. I would have won a bet that this causes annoyance.
  • I picked for an experiment (not a benchmark) something simple (shell sort) and implemented it in various ways. The result was that the bare array-diddling code is simply slow: compared to e.g. a dumb Java implementation, by an order of magnitude. And my Java skills are limited. For me it is totally unclear where the time penalty comes from. It probably has little to do with boxing; it looks more like reflection. See below.
There are a couple of solutions/habits
  • Do not use methods with type parameters: def f(a: Array[Int], ...). I did not try this in the experiment. Maybe I should have.
  • Do not use the obvious apple Array[Int], but use either an orange, Array[AnyRef], or GenericArray[Int]. This was unexpected for me, since I got the impression that GenericArrays are somewhat exotic. Both work best with bare Java code.
  • As Viktor pointed out: use @specialized for either all methods dealing with the Arrays or a "performance critical subset" thereof. What is unfortunate is the "code bloat", be it real or just a fear, and some maintenance issues: on what basis do I decide what to specialize and what not?
My bottom line is still: if I use simple techniques (Arrays and methods with type parameters) in a straightforward combination, I pay a high penalty for the type of nuts-and-bolts array code I exercised here. Not a least-surprise behavior, and very annoying. This might be less pronounced when you use an array just like any other sequence, but then, what are arrays good for in the first place?
The specialized trick helps. I have, admittedly subjective, objections against it, but it works. A vaguely similar technique in C++, namely explicit specialization, never really worked for me. It has the status of a last resort in my experience. It can make a big difference but is hard to maintain once one gets productive. We will see whether this carries over to @specialized or not.
Thanks for your help.
Burkhard.

PS.: Some more details:
  • The apple-orange numbers for the simple sorter: w/o @specialized: 13924ms vs. with @specialized: 631ms (two apples) vs. Java: 266ms (an orange). The Java speed is somewhat like the limit, w/o most of the abstraction and whatever costs, but still with the comparator.
  • Boxing: You can make the experiment with a more reduced code: reverse the array content in place. Just a loop, some indexing, and some element swaps. This gives w/o @specialized: 1117ms (eleven hundred) and with @specialized: 11ms (eleven). Here no boxing is involved at all. The difference in the generated general and specialized code is calls to "java/lang/reflect/Array.get" and "java/lang/reflect/Array.set" in "public void reverse(java.lang.Object, int, int)", where there are just plain loads and stores in "public void reverse$mIc$sp(int[], int, int)". One can read this as an example of the tremendous leverage of @specialized, or as a hint that something is wrong with the Arrays.
  • I tried to add manifests but it did not make any difference. I did this for no other reason than not to omit something.
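The reflective-vs-direct difference Burkhard describes can be reproduced in plain Java. A hedged sketch (class and method names are mine) of roughly what the unspecialized and specialized variants compile to: the generic path goes through java.lang.reflect.Array, which boxes every element it reads, while the specialized path is plain loads and stores.

```java
import java.lang.reflect.Array;

public class ReflectiveReverse {
    // Generic reverse via java.lang.reflect.Array — roughly what the
    // unspecialized method compiles to (Array.get boxes each int).
    static void reverseReflective(Object a, int first, int last) {
        int lo = first, hi = last - 1;
        while (lo < hi) {
            Object tmp = Array.get(a, lo);
            Array.set(a, lo, Array.get(a, hi));
            Array.set(a, hi, tmp);
            lo++; hi--;
        }
    }

    // Direct version — what the specialized variant boils down to.
    static void reverseDirect(int[] a, int first, int last) {
        int lo = first, hi = last - 1;
        while (lo < hi) {
            int tmp = a[lo]; a[lo] = a[hi]; a[hi] = tmp;
            lo++; hi--;
        }
    }

    public static void main(String[] args) {
        int[] x = {1, 2, 3, 4, 5};
        reverseReflective(x, 0, x.length);
        System.out.println(java.util.Arrays.toString(x)); // [5, 4, 3, 2, 1]
        reverseDirect(x, 0, x.length);
        System.out.println(java.util.Arrays.toString(x)); // [1, 2, 3, 4, 5]
    }
}
```

Both produce the same result; the reflective one pays an instanceof dispatch plus an Integer box/unbox per element access, which is consistent with the 1117ms vs. 11ms numbers above.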
James Iry
Joined: 2008-08-19,
User offline. Last seen 1 year 23 weeks ago.
Re: Arrays: Disappointed about ridiculous slow scala code comp
Just one quick clarification: @specialized is not like template specialization in C++.   In C++ when a programmer specializes a template then he or she creates source code that must be maintained.   Scala's @specialized tag works more like unspecialized C++ template instantiation: the compiler is in charge of expanding the "template."*  In other words, @specialized does not require or even allow the programmer to write custom specializations.  For more about @specialized see http://lamp.epfl.ch/~dragos/files/scala-spec.pdf.

* Even with @specialized it's not really correct to call these templates, but the idea is about the same
On Wed, Nov 4, 2009 at 8:50 AM, Burkhard Ludwig <ludwig.burkhard@googlemail.com> wrote:


The specialized trick helps. I have, admittedly subjective, objections against it but it works. A vaguely similar technique in C-++, namely explicit specialization never really worked for me. It has the status of a last resort in my experience. It can make a big difference but is hard to maintain once one gets productive. We will see wether this carries over to @specialized or not.
Johannes Rudolph
Joined: 2008-12-17,
User offline. Last seen 29 weeks 20 hours ago.
Re: Arrays: Disappointed about ridiculous slow scala code comp

Hi Burkhard,

sorry, I was too quick to dismiss your argument and didn't properly
explain what the problem with your measurements is.

On Wed, Nov 4, 2009 at 5:50 PM, Burkhard Ludwig
wrote:
> My starting point was that when I do what seems the obvious to me I get an
> unexpectedly slow code.

Yes, and it's ok that you pointed that out.

> The "obvious" was
> store some data in an Array[Int]
> write a method which is slightly more general than needed at the moment,
> namely a
>
>        def f[T](a: Array[T], ...) instead of def f(a: Array[Int], ...)
>
> use the method with an Array[Int]

> The "unexpectedly slow" was to be quantified. B.T.W: It is unavoidable to
> compare apples and oranges in this context, simply because that is exactly
> the question: Apple or Orange.

Not in this specific case...

> There was "Java" in the comparison since it provides the limit for what can
> be obtained. I would have won a bet that this causes annoyance.
That's reasonable.

> I picked for an experiment (not a benchmark) something simple (shellSort)
> and implemented it in varied ways. The result was that the bare
> array-diddling-code is simply slow. Compared to e.g. a dumb java
> implementation by an order of magnitude. And my Java skills are limited. For
> me it is totally unclear where the time penalty does come from. It has
> probably few to do with boxing. Looks more like reflection. See below.

Yes, the speed penalty comes from boxing, and the boxing seems to
dwarf the reflection penalty. The primary error in your implementation
(_both_ Scala and Java) is that you use a Comparator. The comparator
can only compare Objects, so every Int in the array has to be boxed.
In the Scala implementation (shell) you pass a primitive Int array
(int[]). The reflective Array.get boxes the result, which is finally
used with Comparator.compare. (Remember again that the boxing _must_
be done anyway because of Comparator.compare.) As you noticed
yourself, @specialized removes the reflective calls to Array.get, but
the performance doesn't change, because there are now explicit
boxing calls before the call to Comparator.compare.

Contrast that with the Java shell sort. It expects a T[], which is erased
to Object[]. So in this array all the integers are already boxed
before your benchmark timing even starts. This explains the vastly
better performance when actually sorting the integers: they only have
to be passed to Comparator.compare.

So whatever you want to argue about, it has to be about Comparator
only working with Objects.
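Johannes' point can be seen directly in Java: Comparator.compare's erased signature is compare(Object, Object), so primitive arguments are boxed at every call site. A tiny sketch (class name mine; assumes a modern JDK for Comparator.naturalOrder):

```java
import java.util.Comparator;

public class ComparatorBoxing {
    public static void main(String[] args) {
        Comparator<Integer> c = Comparator.naturalOrder();
        int x = 3, y = 7;
        // Each call auto-boxes both ints; it effectively compiles to
        // c.compare(Integer.valueOf(x), Integer.valueOf(y)).
        int r = c.compare(x, y);
        System.out.println(r < 0); // prints true (3 < 7)
    }
}
```

So no matter where the array's elements come from, a Comparator-based sort on ints cannot avoid working with boxed values.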

> My bottom line is still: If I use simple techniques (Arrays and methods with
> type parameters) in a straightforward combination, I pay a high penalty for
> the type of nuts-and-bolts array-code I did exercise here. Not a
> least-surprise behavior and very annoying.This might be less pronounced when
> you use an array just like any other sequence, but then, what are arrays
> good for in the first place?

Actually, the problem was already present in the Java code. So what we
have is the JVM, with its object and primitive types and erasure. We
have had this for years, and the traps have been there for years.
Finally, Scala's specialization helps to bridge the gap, so we can
probably be more optimistic than ever about these issues.

> Boxing: You can make the experiment with a more reduced code: reverse the
> array content in place. Just a loop and some indexing and some element
> swaps. This gives w/o @specialized: 1117ms (eleven hundred) and
> with @specialized: 11ms. (eleven). Here no boxing is involved at all. The
> difference in the generated general and specialized codes are calls to
> "java/lang/reflect/Array.get" and "java/lang/reflect/Array.set" in "public
> void reverse(java.lang.Object, int, int)"  where just some loads and stores
> are in "public void reverse$mIc$sp(int[], int, int)". One can read this as
> an example for the tremendous lever of @specialize or as a hint that
> something is wrong with the Arrays.
See above, JVM's objects and primitives are the cause.

Burkhard Ludwig
Joined: 2009-04-18,
User offline. Last seen 42 years 45 weeks ago.
Re: Arrays: Disappointed about ridiculous slow scala code comp
Johannes,
> As you noticed
> yourself, @specialized removes the reflective calls to Array.get but
> the performance doesn't change. That's because there are now explicit
> boxing calls before the call to Comparator.compare.

It did change, it did, a lot. Sigh.
> Yes, the speed penalty comes from boxing. And the boxing seems to dwarf the
> reflection penalty.
I'll go over my numbers once more.
Citing myself, now with bold face:
  • The apple-orange numbers for the simple sorter: w/o @specialized: 13924ms vs. with @specialized:  631ms (two apples) vs. Java: 266ms (an orange). The java speed is somewhat like the limit w/o most of the abstraction and whatever costs but still with the comparator.

> So whatever you want to argue about, it has to be about Comparator only working with Objects.
I agree that w/o a comparator it would be faster. But the comparator is a second-order effect. Maybe it accounts for the difference from 630 to 270, but it has little to do with the step from 13900 to 630. 630 vs. 270 is the price for an abstraction I made knowingly, but 13900 vs. 630 is something different. There is an example in the code that comes w/o a comparator and has the same characteristics. Again citing myself:
  • Boxing: You can make the experiment with a more reduced code: reverse the array content in place. Just a loop and some indexing and some element swaps. This gives w/o @specialized: 1117ms (eleven hundred) and with @specialized: 11ms. (eleven). Here no boxing is involved at all. The difference in the generated general and specialized codes are calls to "java/lang/reflect/Array.get" and "java/lang/reflect/Array.set" in "public void reverse(java.lang.Object, int, int)"  where just some loads and stores are in "public void reverse$mIc$sp(int[], int, int)". One can read this as an example for the tremendous lever of @specialize or as a hint that something is wrong with the Arrays.
That was probably too terse: in the test setup I used, there was this dumb little piece of code (Scala only), now with specialization:

def reverse[@specialized T](a: Array[T], first: Int, last: Int) {
  var lo = first
  var hi = last - 1
  while (lo < hi) {
    val tmp = a(lo)
    a(lo) = a(hi)
    a(hi) = tmp
    lo += 1
    hi -= 1
  }
}

This code is so simple that I can read the javap output a bit. Using -Yspecialize during compilation (and -optimise) produces two variants, namely a version which handles Objects, and a version which handles Array[Int] (I assume). The former uses calls to java.lang.reflect.Array.get and java.lang.reflect.Array.set within the loop, whereas the latter uses plain loads and stores. That is the only difference. (To be precise: the get and set can be seen directly only with the compiler option -optimise; otherwise e.g. scala.runtime.ScalaRunTime.array_apply is called, but this wraps a call to java.lang.reflect.Array.get.) The former method needs 1100 ms to do its swappery while the latter needs only a couple of milliseconds.
So I am still at reflection. I might be mistaken, but I thought that
java.lang.reflect.Array has to do with reflection. (No irony here, I am
really guessing.)
The issue might well be unresolvable, but for me it is no simple lemma of
the JVM.
Greetings
Burkhard
dcsobral
Joined: 2009-04-23,
User offline. Last seen 38 weeks 5 days ago.
Re: Arrays: Disappointed about ridiculous slow scala code comp

Burkhard, have you tried using a normal Array with the Java class? Have you noticed it doesn't work? That means Scala is infinitely faster than Java (which can't do it) when you pass an Array[Int] to a method which receives parameterized arrays.

When comparing the Java code to Scala code receiving the same data type, I get, from your code, about the same time. About 7% slower for Scala.

You are passing an Array[Int] to a method expecting a parameterized (generics) Array. Think of what that means... An Array[Int] is a contiguous memory space of 32-bit integers. A generic Array is an Array[Object], a contiguous memory space of references to objects. So Scala has to translate each access to that array into a function which will, depending on the array type, either return the reference, or return a (newly created) reference to an Int. I don't know exactly how Scala is doing that, but it can't be easy. At any rate, that's what the java.lang.reflect.Array calls do. For your generic Java function to be able to accept an int[], you'd have to make it call these very same functions.

Next, your sorting functions are really quite similar... except you arbitrarily made the Scala version curried, and the Java version works with an Array while the Scala one works with a GenericArray. Removing the currying may grant a bit of performance, as you remove a function call from the loop. But it depends on what the JIT is doing.

Making Scala work with arrays, just like the Java code, made it... faster. Well, in some runs. In others it was equal, which is what I expected.

I also played with @specialized, and got no gains whatsoever. I'm very curious about this result, as I find it very surprising.

At any rate, the source code with the additional cases follows. Burkhard, note how the arrayUncurried versions were written.

C:\Workset>scala tst.Test_28
*** warmup ***
*** gather ***
empty measurement: 0
empty loop: 15
reverse: 1328
shell: 17312
specialized: 16906
java shell generic: 641
generic shell: 734
generic shell uncurried: 703
array shell uncurried: 640
insert: 19625
insert specialized: 19969
java insert: 625
generic insert: 688
generic insert uncurried: 703
array insert uncurried: 594
*** done ***
On Wed, Nov 4, 2009 at 2:50 PM, Burkhard Ludwig <ludwig.burkhard@googlemail.com> wrote:
[...]



--
Daniel C. Sobral

Veni, vidi, veterni.
Burkhard Ludwig
Joined: 2009-04-18,
User offline. Last seen 42 years 45 weeks ago.
Re: Arrays: Disappointed about ridiculous slow scala code comp
James,
> Just one quick clarification: @specialized is not like template specialization in C++.
You are completely right, I should have read my Stroustrup before posting. I was referring to explicit template instantiation, which is completely different from specialization. The motivations and mechanisms are different (in C++ you want to reduce code size, in Scala you want some speed-up and sacrifice code size for it). But the activity is surprisingly similar: you have a class (or, in Scala, a method) with a type parameter and must decide whether you want an instance of the class (method) for a particular concrete type or not. The program you produce with such a specialization is supposed to behave the same way as without it.
The difference is that in Scala it shall run faster and is bigger, while in C++ it is probably just smaller (and sometimes uses less memory to compile; I really should look this up). The mechanisms differ too: in Scala only the primitive types are available for specialization and you have to make your decisions on the library side; in C++ this is all very different. But one task is surprisingly similar: you have to decide what to specialize and what not.
Coming back to Scala: if I am not mistaken, when you write def f[@specialized T] ... you get 9 methods instead of 1, and with def g[@specialized T, @specialized U] you get 9 × 9 = 81. That soon grows unmanageable, so you have to reduce the combinations; you will do that based on whatever policy, and you have to maintain the decision. This maintenance means that you have to check and recheck your decision. If the effect were minor, so what; but since the effect is large, you have a lot of opportunities to miss or expectations to match.
I am looking forward to the upcoming Scala library, to learn from its policy, e.g. which Functions and Tuples will be specialized, and what user habits this will produce. Will (Int, Int) => Boolean be a specialized type? Or (T) => Boolean? Which others?
This is what I meant with "let's see whether old experiences carry over".

Burkhard

2009/11/4 James Iry <jamesiry@gmail.com>
[...]

    Johannes Rudolph
    Joined: 2008-12-17,
    User offline. Last seen 29 weeks 20 hours ago.
    Re: Arrays: Disappointed about ridiculous slow scala code comp

    On Wed, Nov 4, 2009 at 7:34 PM, Burkhard Ludwig
    wrote:
    > Johannes,
    >> As you noticed
    >> yourself, @specialized removes the reflective calls to Array.get but
    >> the performance doesn't change. That's because there are now explicit
    >> boxing calls before the call to Comparator.compare.
    >
    > It did change, it did, a lot. Sigh.

    I meant in your original case.

    > a version which handles Objects, and a version which handles Array[Int] (I
    > assume). The former uses calls to java.lang.reflect.Array.get and calls to
    > java.lang.reflect.Array.set within the loop whereas the latter uses just
    > loads and stores of any kind. That is the only difference. (To be precise:
    > the get and set can be seen directly only with the compiler option
    > -optimise, otherwise e.g. scala.runtime.ScalaRunTime.array_apply is called,
    > but this wraps a call to java.lang.reflect.Array.get.) The former method needs
    > 1100 ms to do its swappery while the latter uses a couple of milliseconds.
    > So I am still at reflection. I might be mistaken, but I thought that
    > java.lang.reflect.Array has to do with reflection. (No irony here, I am
    > really guessing).

    Mmmh, I think we are now thinking the same but have a different point
    of view. So let's look at a possible implementation of Array.get [1]:

    public static Object get(Object array, int index) {
      if (array instanceof Object[])
        return ((Object[]) array)[index];
      else if (array instanceof int[])
        return Integer.valueOf(((int[]) array)[index]);
      // ... and so on for the remaining primitive types
    }

    So the main reason Array.get exists is to offer a general API for
    handling arrays. It only needs to do some instanceof checking. And it
    boxes primitive results. We can now argue about which is more
    expensive: the instanceof checks or the boxing...

    > The issue might well be unresolvable, but for me it is no simple lemma of
    > the JVM.

    Actually, the simple need for the @specialized annotation is evidence
    for this fact.
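    A tiny sketch (my own, not from the thread) of the two access paths being compared; java.lang.reflect.Array.get returns a boxed java.lang.Integer, while a plain index access does not box:

    ```scala
    // Illustrative sketch: the same element fetched directly and via the
    // reflective API that the generic code path ends up calling.
    val a = Array(1, 2, 3)

    val direct = a(1)                                   // plain primitive load, no boxing
    val reflective = java.lang.reflect.Array.get(a, 1)  // instanceof checks + boxing

    println(direct)
    println(reflective)
    ```

    Both print the same value; the difference is only in how the element is fetched, which is exactly the per-access overhead being discussed.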

    Johannes Rudolph
    Joined: 2008-12-17,
    User offline. Last seen 29 weeks 20 hours ago.
    Re: Arrays: Disappointed about ridiculous slow scala code comp

    With some implicits-foo I got an extended scheme working:

    type |[A,B] = Either[A,B]

    implicit def makeLeft[X,I,B](implicit p:X => I):X => Either[I,B] =
    x => Left(p(x))

    implicit def makeRight[X,I,A](implicit p:X => I):X => Either[A,I] =
    x => Right(p(x))

    If you now have a function which wants to accept the union type Int |
    Boolean | String you can define it like that:

    def eitherUser[X<%Int|Boolean|String](x:X) : Int|Boolean|String = x

    Using X with <% instead of the type directly lets us write:

    scala> eitherUser(4)
    res0: |[|[Int,Boolean],String] = Left(Left(4))

    scala> eitherUser("test")
    res1: |[|[Int,Boolean],String] = Right(test)

    scala> eitherUser(true)
    res2: |[|[Int,Boolean],String] = Left(Right(true))

    On Wed, Nov 4, 2009 at 3:28 PM, Ricky Clarkson wrote:
    > I had an idea something like that was possible.  Does that scale up to
    > X | Y | Z, and work for pattern-matching?

    How should it interact with pattern-matching?

    val x: Int | String | Boolean = //...

    // hypothetical
    x match { case i:Int =>
    // ...
    instead of

    x match { case Left(Left(i)) =>

    ?
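    For concreteness, a small sketch (the alias name and values are mine) of what the nested-Either match looks like today, without any special compiler support:

    ```scala
    // Sketch: Int | Boolean | String encoded as nested Eithers, matched
    // with the nested Left/Right patterns discussed above.
    type IntOrBoolOrString = Either[Either[Int, Boolean], String]

    def describe(x: IntOrBoolOrString): String = x match {
      case Left(Left(i))  => "Int: " + i
      case Left(Right(b)) => "Boolean: " + b
      case Right(s)       => "String: " + s
    }

    println(describe(Left(Left(4))))   // the encoding of 4
    println(describe(Right("test")))   // the encoding of "test"
    ```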

    Naftoli Gugenheim
    Joined: 2008-12-17,
    User offline. Last seen 42 years 45 weeks ago.
    Re: Arrays: Disappointed about ridiculous slow scala code comp
    Why not? Can't you have an implicit going back to the plain value, allowing to pattern match as you said?
    x match { case i:Int =>
      // ...
    instead of

    x match { case Left(Left(i)) =>




    Johannes Rudolph
    Joined: 2008-12-17,
    User offline. Last seen 29 weeks 20 hours ago.
    Re: Arrays: Disappointed about ridiculous slow scala code comp

    On Thu, Nov 5, 2009 at 2:54 AM, Naftoli Gugenheim wrote:
    > Why not? Can't you have an implicit going back to the plain value, allowing
    > to pattern match as you said?

    I have no clue how to do so. Perhaps it's because I haven't
    comprehended all intricacies of pattern matching yet...

    I tried to use a pair of extractors defined like that for the simple case:

    object Extract {
      def unapply[A,B](x: Either[A,B]): Option[A] = x match {
        case Left(a) => Some(a)
        case _ => None
      }
      def unapply[A,B](x: Either[A,B]): Option[B] = x match {
        case Right(b) => Some(b)
        case _ => None
      }
    }

    But those do not work because both methods seem to have the same
    type. (IMO the compiler could choose an overload when given such a
    piece of code:

    val x:Either[Int,String] = //...

    x match {
    case Extract(x:Int) => // ...
    case Extract(x:String) => // ...
    }

    because both the input and output types for the extractor are known.)

    Is there any pattern to overload and select an extractor by the inner type?

    Kevin Wright
    Joined: 2009-06-09,
    User offline. Last seen 49 weeks 3 days ago.
    Re: Arrays: Disappointed about ridiculous slow scala code comp

    I may be missing your point here, but what's wrong with:

    x match {
      case Left(x:Int) => // ...
      case Right(x:String) => // ...
    }


    Ricky Clarkson
    Joined: 2008-12-19,
    User offline. Last seen 3 years 2 weeks ago.
    Re: Arrays: Disappointed about ridiculous slow scala code comp

    The awkwardness involved when there are 3 or more types.


    Iulian Dragos 2
    Joined: 2009-02-10,
    User offline. Last seen 42 years 45 weeks ago.
    Re: Arrays: Disappointed about ridiculous slow scala code comp


    On Wed, Nov 4, 2009 at 7:51 PM, Daniel Sobral <dcsobral@gmail.com> wrote:
    You are passing an Array[Int] to a method expecting a parameterized (generic) Array. Think of what that means... An Array[Int] is a contiguous memory space of 32-bit integers. A generic Array is an Array[Object], a contiguous memory space of references to objects. So Scala has to translate each access to that array into a function which will, depending on the array type, either return the reference, or return a (newly created) reference to an Int. I don't know exactly how Scala does that, but it can't be easy.

    I understand the surprise of someone who does not know (and should not care) how Scala arrays are implemented, how generics are erased, and so on. Of course, your explanation is accurate, but the similarity between the Java code and the Scala code makes one expect similar performance. I would go as far as to issue a warning when a generic array is erased to plain Objects.
     
    I also played with specialized, and got no gains whatsoever. I'm very curious about this result, as I find it very surprising.

    Specialization is not enabled by default, so you need to pass -Yspecialize to scalac. If you did that, please file a ticket, it's a bug.

    thanks,
    iulian
     
    At any rate, the source code with the additional cases follows. Burkhard, note how the arrayUncurried versions were written.

    C:\Workset>scala tst.Test_28
    *** warmup ***
    *** gather ***
    empty measurement: 0
    empty loop: 15
    reverse: 1328
    shell: 17312
    specialized: 16906
    java shell generic: 641
    generic shell: 734
    generic shell uncurried: 703
    array shell uncurried: 640
    insert: 19625
    insert specialized: 19969
    java insert: 625
    generic insert: 688
    generic insert uncurried: 703
    array insert uncurried: 594
    *** done ***
    On Wed, Nov 4, 2009 at 2:50 PM, Burkhard Ludwig <ludwig.burkhard@googlemail.com> wrote:
    Hello everybody,
    There is something funny about comments on performance. I have observed this for decades. It seems to be a sort of dirty word, or you have to have a kind of license. But I finally got some advice.
    I want to summarize a bit. (I picked just one arbitrary post for this purpose).
    My starting point was that when I do what seems the obvious to me I get an unexpectedly slow code.
    The "obvious" was
    • store some data in an Array[Int]
    • write a method which is slightly more general than needed at the moment, namely a 
           def f[T](a: Array[T], ...) instead of def f(a: Array[Int], ...)
    • use the method with an Array[Int]
    The "unexpectedly slow" was to be quantified. B.T.W: It is unavoidable to compare apples and oranges in this context, simply because that is exactly the question: Apple or Orange. There was "Java" in the comparison since it provides the limit for what can be obtained. I would have won a bet that this causes annoyance.
    • I picked for an experiment (not a benchmark) something simple (shellSort) and implemented it in various ways. The result was that the bare array-diddling code is simply slow: slower than, e.g., a dumb Java implementation by an order of magnitude. And my Java skills are limited. For me it is totally unclear where the time penalty comes from. It probably has little to do with boxing. Looks more like reflection. See below.
    There are a couple of solutions/habits
    • Do not use methods with type parameters: def f(a: Array[Int], ...). I did not try this in the experiment. Maybe I should have.
    • Do not use obvious apple Array[Int] but use either orange: Array[AnyRef] or GenericArray[Int]. This was unexpected for me, since I got the impression that GenericArrays are somewhat exotic. Both work best with bare java-code.
    • As Victor pointed out: Use @specialized for either all methods dealing with the Arrays or a "performance critical subset" thereof.  What is unfortunate is the "code bloat", be it real or just a fear, and some maintenance issues: On what basis do I decide what to specialize and what not?
    My bottom line is still: If I use simple techniques (arrays and methods with type parameters) in a straightforward combination, I pay a high penalty for the type of nuts-and-bolts array code I exercised here. Not least-surprise behavior, and very annoying. This might be less pronounced when you use an array just like any other sequence, but then, what are arrays good for in the first place?
    The specialized trick helps. I have, admittedly subjective, objections against it but it works. A vaguely similar technique in C++, namely explicit specialization, never really worked for me. It has the status of a last resort in my experience. It can make a big difference but is hard to maintain once one gets productive. We will see whether this carries over to @specialized or not.
    Thanks for your help.
    Burkhard.

    PS.: Some more details:
    • The apple-orange numbers for the simple sorter: w/o @specialized: 13924ms vs. with @specialized:  631ms (two apples) vs. Java: 266ms (an orange). The java speed is somewhat like the limit w/o most of the abstraction and whatever costs but still with the comparator.
    • Boxing: You can make the experiment with more reduced code: reverse the array content in place. Just a loop, some indexing, and some element swaps. This gives w/o @specialized: 1117ms (eleven hundred) and with @specialized: 11ms (eleven). Here no boxing is involved at all. The difference between the generated general and specialized code is calls to "java/lang/reflect/Array.get" and "java/lang/reflect/Array.set" in "public void reverse(java.lang.Object, int, int)" where there are just loads and stores in "public void reverse$mIc$sp(int[], int, int)". One can read this as an example of the tremendous leverage of @specialized, or as a hint that something is wrong with the Arrays.
    • I tried to add manifests but it did not make any difference. I did this for no other reason than not to omit something.
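    The reverse experiment from the Boxing bullet can be sketched like this (my own minimal reconstruction, not Burkhard's actual code):

    ```scala
    // Minimal reconstruction of the in-place reverse: with @specialized(Int)
    // an Array[Int] call hits a primitive int[] method with plain loads and
    // stores instead of going through java.lang.reflect.Array.get/set.
    def reverse[@specialized(Int) T](a: Array[T], first: Int, last: Int): Unit = {
      var i = first
      var j = last - 1
      while (i < j) {
        val t = a(i); a(i) = a(j); a(j) = t
        i += 1; j -= 1
      }
    }

    val xs = Array(1, 2, 3, 4, 5)
    reverse(xs, 0, xs.length)
    println(xs.mkString(", "))
    ```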



    Kevin Wright
    Joined: 2009-06-09,
    User offline. Last seen 49 weeks 3 days ago.
    Re: Arrays: Disappointed about ridiculous slow scala code comp

    I think there's no easy way out here; the return type of the extractor
    has to be statically determined. You *can* create an extractor over
    Either[A,B], but it would have to return a common supertype of A and
    B. In this case the only common supertype of String and Int is Any.

    I've not tested it, but maybe something like this would work, using
    existential types:

    object Extract {
      def unapply(x: Either[_,_]): Option[Any] = x match {
        case Left(a) => Some(a)
        case Right(b) => Some(b)
        case _ => None
      }
    }

    val x: Either[Int,String] = //...

    x match {
      case Extract(x:Int) => // ...
      case Extract(x:String) => // ...
    }

    in which case "x:Int" and "x:String" are themselves extractors over
    the results of the "Extract" extractor.


    Iulian Dragos 2
    Joined: 2009-02-10,
    User offline. Last seen 42 years 45 weeks ago.
    Re: Arrays: Disappointed about ridiculous slow scala code comp


    On Wed, Nov 4, 2009 at 8:17 PM, Burkhard Ludwig <ludwig.burkhard@googlemail.com> wrote:
    Coming back to Scala: if I am not mistaken, when you write def f[@specialized T] ... you get 9 methods instead of 1; if you have def g[@specialized T, @specialized U] you get 81. That soon grows unmanageable, so you have to reduce the combinations, and you will do that based on whatever policy, and you have to maintain the decision. This maintenance means that you have to check and recheck your decision.  If the effect were minor, so what, but if the effect is large you have a lot of opportunities to miss or expectations to match.

    You are right, definition-site specialization has its downsides. However, it has its pluses too: separate compilation (with no classloaders involved) and the ability to mix specialized and generic code. As with anything in programming, there's a trade-off. In this case the burden is on the library writer. Time will tell if this flies.

    Note that you can specify a subset of primitive types for specialization, for instance specializing on anything less than Int might be unnecessary (though it would matter for arrays).
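    That subset syntax looks like this (a sketch with an illustrative method, not from the thread):

    ```scala
    // Sketch: specialization restricted to Int, Long and Double; other
    // primitives fall back to the generic (boxed) version.
    def firstOrElse[@specialized(Int, Long, Double) T](a: Array[T], default: T): T =
      if (a.nonEmpty) a(0) else default

    println(firstOrElse(Array(7, 8, 9), 0))
    println(firstOrElse(Array.empty[Double], -1.0))
    ```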
     
    I am looking forward to the upcoming Scala library to learn from their policy, e.g. which Functions and Tuples will be specialized, and what user habits this will produce. Will (Int, Int) => Boolean be a specialized type? Or (T) => Boolean? Which others?

    We'll specialize functions up to two arguments and probably the same for tuples. The hard decision will be on collections, most likely anything involving arrays and parallel collections.
     

    This is what I meant by "let's see whether old experiences carry over".

    Looking forward.

    cheers,
    iulian
     


    Burkhard





    --
    « Je déteste la montagne, ça cache le paysage »
    Alphonse Allais
    Paolo Losi
    Joined: 2009-11-05,
    User offline. Last seen 42 years 45 weeks ago.
    Re: Arrays: Disappointed about ridiculous slow scala code com

    Iulian Dragos wrote:

    > I would go as far as to issue a warning when a generic
    > array is erased to plain Objects.

    I think "performance" warnings of this kind (maybe triggered by an
    optional command line switch) would be very handy indeed.

    Paolo

    phkoester
    Joined: 2009-08-23,
    User offline. Last seen 42 years 45 weeks ago.
    Re: Arrays: Disappointed about ridiculous slow scala code comp

    Johannes Rudolph wrote:
    > I tell you probably nothing new but that's somewhat possible right now
    > (aside from the ugly prefix notation in outputs)
    >
    > scala> type |[A,B] = Either[A,B]
    > defined type alias $bar
    >
    > scala> Array[Int | Long](Left(5),Right(12L))
    > res6: Array[|[Int,Long]] = Array(Left(5), Right(12))
    >
    > and even
    >
    > scala> implicit def makeLeft[A,B](a:A):Either[A,B] = Left(a)
    > makeLeft: [A,B](A)Either[A,B]
    >
    > scala> implicit def makeRight[A,B](b:B):Either[A,B] = Right(b)
    > makeRight: [A,B](B)Either[A,B]
    >
    > scala> Array[Int|String](5,"Wurst",123,2,6)
    > res8: Array[|[Int,String]] = Array(Left(5), Right(Wurst), Left(123),
    > Left(2), Left(6))

    This looks interesting, but I lost track of the origin of this
    discussion. How would `Either', `Left' and `Right' need to be defined to
    make this compile and work?

    ---Ph.

    Tony Morris 2
    Joined: 2009-03-20,
    User offline. Last seen 42 years 45 weeks ago.
    Re: Arrays: Disappointed about ridiculous slow scala code comp

    See scala.Either, scala.Left and scala.Right.

    phkoester
    Joined: 2009-08-23,
    User offline. Last seen 42 years 45 weeks ago.
    Re: Arrays: Disappointed about ridiculous slow scala code comp

    > See scala.Either, scala.Left and scala.Right.

    Ah okay, I tried to write these classes myself. :)

    Now what I don't understand is this (taken from `Either.scala'):

    final case class Left[+A, +B](a: A) extends Either[A, B] {
      def isLeft = true
      def isRight = false
    }

    When I say

    implicit def makeLeft[A, B](a: A): Either[A, B] = Left(a)

    it works, but why does this work? What happens to the unspecified type
    param `B' here? My expectation would be that `Left' would remain
    abstract, but apparently this isn't so ...

    ---Ph.

    Stefan Langer
    Joined: 2009-10-23,
    User offline. Last seen 42 years 45 weeks ago.
    Re: Arrays: Disappointed about ridiculous slow scala code comp
    It is undefined until you actually use your implicit method. At this point you will have to specify the B as well or else the compiler will complain.

    Regards Stefan


    phkoester
    Joined: 2009-08-23,
    User offline. Last seen 42 years 45 weeks ago.
    Re: Arrays: Disappointed about ridiculous slow scala code comp

    > It is undefined until you actually use your implicit method. At this
    > point you will have to specify the B as well or else the compiler will
    > complain.

    Let's forget about the implicit method. When I say

    val l = Left(1)

    then `A' is `Int', and `B' is ...?

    ---Ph.

    Michal Politowski 2
    Joined: 2009-09-15,
    User offline. Last seen 42 years 45 weeks ago.
    Re: Arrays: Disappointed about ridiculous slow scala code compa

    On Fri, 6 Nov 2009 12:22:37 +0100, Philip Köster wrote:
    [...]
    > Let's forget about the implicit method. When I say
    >
    > val l = Left(1)
    >
    > then `A' is `Int', and `B' is ...?

    Whatever the compiler infers it to be, which in this case is:

    $ scala
    Welcome to Scala version 2.7.7final (Java HotSpot(TM) Server VM, Java 1.6.0_16).
    Type in expressions to have them evaluated.
    Type :help for more information.

    scala> val l = Left(1)
    l: Left[Int,Nothing] = Left(1)
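    Because Left is covariant in both parameters, the inferred Nothing is harmless; the value still conforms to any Either[Int, B]. A quick sketch (my own example):

    ```scala
    // Sketch: Left[Int, Nothing] widens to Either[Int, String] because
    // Nothing is a subtype of every type and Left is covariant in B.
    val l = Left(1)                 // inferred as Left[Int, Nothing]
    val e: Either[Int, String] = l  // compiles without any conversion

    println(e)
    ```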

    Tony Morris 2
    Joined: 2009-03-20,
    User offline. Last seen 42 years 45 weeks ago.
    Re: Arrays: Disappointed about ridiculous slow scala code comp

    l: Left[Int,Nothing] = Left(1)

    Copyright © 2012 École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland