
overloading vs implicit resolution

32 replies
Vladimir Kirichenko

object Test extends Application {
  def m(o: Object) = println("object!")

  def m(s: String) = println("string")

  case class A()

  implicit def a2s(a: A): String = a.toString

  m(A())
}

This prints: object!

So the implicit conversion is not applied here. Is it supposed to work this way?

> scala -version
Scala code runner version 2.7.7.final -- Copyright 2002-2009, LAMP/EPFL

Jesper Nordenberg
Re: overloading vs implicit resolution

Vladimir Kirichenko wrote:
> object Test extends Application {
> def m(o: Object) = println("object!")
>
> def m(s: String) = println("string")
>
> case class A
>
> implicit def a2s(a:A):String = a.toString
>
> m(A())
> }
>
> results to: object!
>
> So implicit conversion is not applied here. Is this supposed to be this way?

Yes, there's no reason for the compiler to apply an implicit conversion
here.

/Jesper Nordenberg

Vladimir Kirichenko
Re: Re: overloading vs implicit resolution

> Yes, there's no reason for the compiler to apply an implicit conversion
> here.

Maybe the availability of an implicit conversion to a more specific type
is a good enough reason for the compiler to apply it? It causes real pain
when integrating with Java libraries that have 'Object' methods.

Jesper Nordenberg
Re: overloading vs implicit resolution

Vladimir Kirichenko wrote:
>> Yes, there's no reason for the compiler to apply an implicit conversions
>> here.
>
> May be availability of implicit conversion to more specific type is a
> good enough reason for compiler to apply it?

No, implicit conversions are only applied if the expression isn't valid
using the original type.

/Jesper Nordenberg
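
Jesper's rule can be illustrated with a minimal, self-contained sketch (hypothetical names, assuming Scala 2.13): the same conversion is ignored when an overload already type-checks, and inserted only when nothing does.

```scala
import scala.language.implicitConversions

object ImplicitRuleSketch {
  case class A()

  implicit def a2s(a: A): String = "converted:" + a.toString

  def m(o: Object): String = "object"  // already accepts any reference type
  def m(s: String): String = "string"

  def onlyString(s: String): String = "string:" + s  // no Object fallback here

  def demo(): (String, String) =
    (m(A()),           // type-checks against m(Object): no implicit search
     onlyString(A()))  // does not type-check as-is: a2s is inserted
}
```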

Vladimir Kirichenko
Re: Re: overloading vs implicit resolution

Jesper Nordenberg wrote:
> Vladimir Kirichenko wrote:
>>> Yes, there's no reason for the compiler to apply an implicit conversions
>>> here.
>>
>> May be availability of implicit conversion to more specific type is a
>> good enough reason for compiler to apply it?
>
> No, implicit conversions are only applied if the expression isn't valid
> using the original type.

I understand this. The question is why? One use of implicit conversions
is type adaptation, and that feature has no use if there is an Any/Object
overload in the hierarchy. BTW, it causes a "dynamic language" kind of
problem: add this kind of overload to the base hierarchy, and existing
code is still "statically ok" but no longer works. So the predictability
of implicit application comes into question. Is there any reason (other
than "this is how it is now") to ignore implicit conversions to more
accurate types in favor of more common types?

ichoran
Re: Re: overloading vs implicit resolution
In general, you don't want to do an expensive implicit conversion from one class to another instead of a completely free use of a subclass in place of a superclass.  So allowing implicit conversions to work in this case is bad:
  class MyCollection extends BasicCollection { ... }
  class WeirdHighlyDerivedCollection extends BasicCollection { ... }
  implicit def mycol2weird(m:MyCollection):WeirdHighlyDerivedCollection = { ... }
  def foo(c:BasicCollection) { ... }
  def foo(c:WeirdHighlyDerivedCollection) { ... }
  val a = new MyCollection
  foo(a)  // Do you really, really want to make the weird collection?

However, there is some incentive to make java.lang.Object a special case to make interoperability with Java code easier.

On the other hand, you can request the type that you want something to be, and the implicit conversion will do it if it can:
  class A
  class B extends A
  class C extends A
  implicit def b2c(b:B) = { println("Turning B into C"); new C }
  val b = new B
  object O {
    def foo(a:A) { println("A") }
    def foo(c:C) { println("C") }
  }
  O.foo(b)  // Prints "A"
  O.foo((b:C))  // Prints "Turning B into C" , "C"

So you don't actually need to remember what the implicit conversion function is called, just the non-Object type that you want.  (This is handy, if depressingly verbose, for getting the right integer conversion into System.out.printf, if you use it.)

  --Rex
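
The printf remark above can be sketched concretely (assuming Scala 2.13, where Predef supplies the Int-to-java.lang.Integer boxing implicit): ascribing the target type selects the conversion without ever naming it.

```scala
object PrintfAscription {
  def demo(): String = {
    val x: Int = 42
    // The ascription (x: java.lang.Integer) makes the compiler insert
    // Predef's boxing implicit, so format's Object vararg gets the boxed value.
    String.format("%d", (x: java.lang.Integer))
  }
}
```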



Vladimir Kirichenko
Re: Re: overloading vs implicit resolution

Rex Kerr wrote:
> In general, you don't want to do an expensive implicit conversion from one
> class to another instead of a completely free use of a subclass in place of
> a superclass.

Your examples are both related to inheritance. I didn't mean inheritance
in the first place; I meant the Object/non-Object case, where the classes
are totally unrelated. This situation occurred when integrating with a
Java library that has two overloaded methods, one with an Object parameter
and the second one with, let's say, a parameter of type Matcher. The
implicit conversion is supposed to convert a more convenient Scala object
to that "Matcher".

So inheritance does not matter here; the real problem is with methods that
take parameters of type java.lang.Object. It's more of an integration
issue: there is no way to use an implicit conversion in this case, and
meanwhile compilation is ok. The explicit import of the implicit
conversion just doesn't matter :( And this kind of behavior does not
follow the principle of least surprise.

extempore
Re: Re: overloading vs implicit resolution

On Fri, Oct 30, 2009 at 01:12:31AM +0200, Vladimir Kirichenko wrote:
> The explicit import of implicit conversion just doesn't matter :( And
> this kind of behavior is not the least surprise.

I consider what you propose to be way more surprising.

Here is the surprise function: if it type checks, don't do any implicit
search. It type checks, so no search. If it's surprising to you that
no implicit conversion takes place in this case, which clearly type
checks, then you are applying an inadequate mental model.

I don't even know how I'd go about forming an intuition about what was
going to happen if implicits might kick in and start converting things
even when I'm making a perfectly valid well-typed method call. I guess
having no expectation whatsoever about what is going to happen next is
one way of never being surprised...

Vladimir Kirichenko
Re: Re: overloading vs implicit resolution

Paul Phillips wrote:
> On Fri, Oct 30, 2009 at 01:12:31AM +0200, Vladimir Kirichenko wrote:
>> The explicit import of implicit conversion just doesn't matter :( And
>> this kind of behavior is not the least surprise.
>
> I consider what you propose to be way more surprising.
>
> Here is the surprise function: if it type checks, don't do any implicit
> search. It type checks, so no search. If it's surprising to you that
> no implicit conversion takes place in this case, which clearly type
> checks, then you are applying an inadequate mental model.

The surprising moment is that my _explicit_ _import_ declaration is
silently ignored, and a ClassCastException is what I get in a "perfectly
valid well-typed method call". There should be at least a warning that
there is an implicit conversion (with more specific types) that is
ambiguous with the real types: "hey dude, what did you mean by importing
this kind of stuff?"

Jesper Nordenberg
Re: overloading vs implicit resolution

Vladimir Kirichenko wrote:
> Surprising moment is that my _explicit_ _import_ declaration is silently
> ignored and ClassCastException is what I get in "perfectly valid
> well-typed method call". There should be a least warning that there is
> implicit conversion (that have more specific types) that is ambiguous
> with real types "hey dude what did you mean importing this kind of stuff"?

There is no ambiguity. Having a warning that says that if an implicit
conversion was applied the call would have been made to another method
would be pretty darn annoying. The only reasonable warning the compiler
could give is that your import is unused, but that wouldn't help much in
general as most implicits are imported using wildcard syntax.

/Jesper Nordenberg

Tony Morris 2
Re: Re: overloading vs implicit resolution

It is impractical to use an implicit simply "because you can" when other
alternatives type-check. Better is to use an implicit "because you can't
(type check)."

What ClassCastException do you get for a perfectly valid well-typed
method call?


Vladimir Kirichenko
Re: Re: overloading vs implicit resolution

Tony Morris wrote:
> It is impractical to use an implicit simply "because you can" when other
> alternatives type-check. Better is to use an implicit "because you can't
> (type check)."

Why is it impractical? The one who writes 'import
some.kind.of.Implicits._' means something, doesn't he?

> What ClassCastException do you get for a perfect valid well-typed method
> call?

The real example is related to overloading the 'contains' method of an
implementation of java.util.Collection with more specific matching
parameters. The one in j.u.Collection has the signature Object -> boolean.
The overloaded one accepts a matching-strategy type. In Scala code this
could be expressed better than with a plain Java strategy object. The
idea was to write an implicit conversion and have fun. Reality bit back
with silently non-working code because of the basic Object -> boolean
implementation.

extempore
Re: Re: overloading vs implicit resolution

On Fri, Oct 30, 2009 at 01:32:22AM +0200, Vladimir Kirichenko wrote:
> Surprising moment is that my _explicit_ _import_ declaration is
> silently ignored and ClassCastException is what I get in "perfectly
> valid well-typed method call".

Wait, what? You did not share any code involving casts. There weren't
any imports either, so that's a pretty mysterious rebuttal. Here is the
code you sent, in case it's unavailable to you:

object Test extends Application {
  def m(o: Object) = println("object!")

  def m(s: String) = println("string")

  case class A()

  implicit def a2s(a: A): String = a.toString

  m(A())
}

> There should be a least warning that there is implicit conversion
> (that have more specific types) that is ambiguous with real types "hey
> dude what did you mean importing this kind of stuff"?

I'm sure your patch implementing this improbable heuristic will be given
due consideration.

extempore
Re: Re: overloading vs implicit resolution

On Fri, Oct 30, 2009 at 01:50:59AM +0200, Vladimir Kirichenko wrote:
> The reality bitten in the ass with silently not working code because
> of basic Object -> boolean implementation.

"Behaving as specified" is a funny definition of not working. Do you
blame the compiler for syntax errors? You chose a design which
fundamentally does not work. Reality does bite, but that wasn't reality
on this occasion; it was its distant cousin, self-infliction.

Vladimir Kirichenko
Re: Re: overloading vs implicit resolution

Paul Phillips wrote:
> On Fri, Oct 30, 2009 at 01:50:59AM +0200, Vladimir Kirichenko wrote:
>> The reality bitten in the ass with silently not working code because
>> of basic Object -> boolean implementation.
>
> "Behaving as specified" is a funny definition of not working.

The question was "why" is it specified this way, and might there be a
benefit in specifying it another way?

> You chose a design which
> fundamentally does not work.

I'm sorry for the j.u.Collection interface and its type parameters,
which allow any kind of crap into its remove, contains, containsAll and
retainAll methods. I'll be more careful some other day :)

Tony Morris 2
Re: Re: overloading vs implicit resolution

Vladimir Kirichenko wrote:
> Tony Morris wrote:
>
>> It is impractical to use an implicit simply "because you can" when other
>> alternatives type-check. Better is to use an implicit "because you can't
>> (type check)."
>>
>
> Why is it impractical? The one who writes 'import
> some.kind.of.Implicits._' means something, doesn't he?
>
I'm hesitant to explain why it is impractical. Yes, of course writing
imports means something, but that is a side issue to the one at hand.

>
>> What ClassCastException do you get for a perfect valid well-typed method
>> call?
>>
>
> Th real example related to overloading of method 'contains' of
> implementation of java.util.Collection with more specific matching
> parameters. The one in j.u.Collection has signature Object -> boolean.
> The overloaded one accepts matching strategy type. In the scala code it
> could be expressed better than with the use of plain java strategy
> object. The idea was to write implicit conversion and have fun. The
> reality bitten in the ass with silently not working code because of
> basic Object -> boolean implementation.
>
>
I've not seen such an example, but I will conjecture that "not working"
is an incorrect description.

Dave Griffith
Re: Re: overloading vs implicit resolution

Vladimir Kirichenko-2 wrote:
>
>
> Why is it impractical? The one who writes 'import
> some.kind.of.Implicits._' means something, doesn't he?
>
>

You would think so, but in the absence of IDEs that automate dead import
removal, this turns out not to be the case. The experience from pre-IDE
Java was that production code quickly evolves to the point where most of the
import statements in any given file are "dead", as code is added and deleted
(or worse cut-and-pasted) but corresponding imports are not pruned.

--Dave Griffith

Vladimir Kirichenko
Re: Re: overloading vs implicit resolution

The final one:

case class X()

// it's a very-far-away library, located almost where the star wars
// are supposed to happen

trait InAFarFarAwayLibrary {
  def foo( a: AnyRef ) = println( "oops!" )
}

// many different classes and objects located in different parts of the
// project, and maybe in different libraries

object A extends InAFarFarAwayLibrary {
  def foo( a: String ) = println( "yeah baby" )
}

object B {
  def foo( a: String ) = println( "yeah baby" )
}

object C {
  def foo( a: String ) = println( "yeah baby" )
}

class D {
  def foo( a: String ) = println( "yeah baby" )
}

// this is the magical one - the one on the bright side
object I {
  implicit def x2s( x: X ) = x.toString
}

// some usage of these classes, all in the same place - where
// the final battle happens

object Test extends Application {
  import I._ // aha, notice this one

  val x = X()
  A.foo( x )
  B.foo( x )
  C.foo( x )
  val d = new D
  d.foo( x )
  val z = new D with InAFarFarAwayLibrary
  z.foo( x )
}

================
oops!
yeah baby
yeah baby
yeah baby
oops!

If this is the perfect way to go and nothing can be improved here, then
what can I say.
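
The same setup also shows the workaround mentioned earlier in the thread: a type ascription at the call site forces the conversion even when an AnyRef overload would otherwise win for free. A sketch with the same shapes (assuming Scala 2.13):

```scala
import scala.language.implicitConversions

object AscriptionWorkaround {
  case class X()

  trait Far { def foo(a: AnyRef): String = "oops!" }
  object A extends Far { def foo(a: String): String = "yeah baby" }

  implicit def x2s(x: X): String = x.toString

  def demo(): (String, String) =
    (A.foo(X()),           // the AnyRef overload applies for free: "oops!"
     A.foo((X(): String))) // the ascription inserts x2s: "yeah baby"
}
```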

Jesper Nordenberg
Re: overloading vs implicit resolution

Looks fine to me. Instead of complaining about the current behavior, why
don't you write down the rules for how you want overloading and implicit
conversions to work, and exactly when the compiler should issue a
warning? Then we have something to discuss.

/Jesper Nordenberg


Vladimir Kirichenko
Re: Re: overloading vs implicit resolution

Jesper Nordenberg wrote:
> Looks fine to me. Instead of complaining about the current behavior, why
> don't you write down the rules on how you want overloading and implicit
> conversions to work and exactly when the compiler should issue a
> warning. Then we have something to discuss.

For the compiler: implicit conversions in scope should be used in type
checking. An implicit conversion should be applied where it offers a
conversion to a more specific type. Actually, this is consistent with
the regular resolution of overloaded methods, with just one addition:
the overloaded method should be chosen with the available implicit
conversions taken into account.

For the warning: emit a warning if there is an implicit conversion in
scope that, had it been applied, would have caused a different
overloaded method with a more specific type to be chosen. This is based
on the assumption that if someone imports an implicit conversion, he
expects it to be applied, and if it is not applied in the presence of a
"potentially matching overload", there is a potential bug in there.

As for me, the compiler proposal would be perfect as long as it does not
cause some greedy behavior in implicit checks that makes the whole idea
unusable.

The warning would be useful in the presence of a gazillion methods and
imports from different classes and libraries, one of which has a
"dangerous" AnyRef/j.l.Object overload that could cause code to compile
but not work as expected. There is no need to go far to find an example:
java.util.* is a good one, with lots of j.l.Object parameters.
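
The java.util hazard is easy to reproduce: for instance, java.util.List declares both remove(int) and remove(Object), and which overload runs is decided purely by the static argument type, with no implicit help.

```scala
object JavaOverloadTrap {
  import java.util.ArrayList

  def demo(): Boolean = {
    val xs = new ArrayList[Integer]()
    xs.add(10)
    xs.add(20)
    xs.remove(0)                    // remove(int): removes by index, leaving [20]
    xs.remove(Integer.valueOf(20))  // remove(Object): removes by value, leaving []
    xs.isEmpty                      // true
  }
}
```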

Tony Morris 2
Re: Re: overloading vs implicit resolution

FWIW, the current implementation is far from unusable. Scalaz is a
testament to this fact: it makes heavy use of implicits to emulate type
classes and is used by many different people in their specific
applications. Otherwise, I'm sitting this one out.


Vladimir Kirichenko
Re: Re: overloading vs implicit resolution

Tony Morris wrote:
> FWIW, The current implementation is far from unusable.

No one says it is. The proposed changes would not break anything. It's
just one more case of implicit use, which solves a specific (but not
rare) situation in a consistent and stable way. For example, there would
be no fear that adding some AnyRef-overloaded method somewhere will make
other code that relies on implicits behave differently without any
notice.

Tony Morris 2
Re: Re: overloading vs implicit resolution

Sorry I must have misread your previous statement.


ichoran
Re: Re: overloading vs implicit resolution
On Thu, Oct 29, 2009 at 7:32 PM, Vladimir Kirichenko <vladimir.kirichenko@gmail.com> wrote:
Surprising moment is that my _explicit_ _import_ declaration is silently
ignored and ClassCastException is what I get in "perfectly valid
well-typed method call". There should be a least warning that there is
implicit conversion (that have more specific types) that is ambiguous
with real types "hey dude what did you mean importing this kind of stuff"?

I think you're forgetting that everything inherits from Object = AnyRef.  This is why you can do hashing, equality, printing, etc. of arbitrary objects.

I think you are also underestimating how bad of an idea it is to use a "more specific type", and also how difficult it is to even decide what is a "more specific type".  For example,

class A
class B extends A
class C extends B
class D extends C
class E extends A
class F extends E
def foo(b:B) { println("Hi!") }
def foo(c:C) { println("Hello!") }
implicit def e2c(e:E) = new C
val e = new E
foo(e)

Should this print Hi! or Hello!?  It is true that C is a deeper node of the type tree than B, so perhaps e should be implicitly converted to C.  Then again, the tree-depth that you need to traverse is deeper for the E->C conversion (the common ancestor is at A) than the C->B one.  And should the answer change if you have a conversion to D instead?  What if it's F instead of E?

As an example of why automatically applying such an implicit is a really bad idea, consider

import scala.collection.mutable.ListBuffer
class M { val dataarray = new Array[String](1000000) ; def nicedata = dataarray.length }
class N extends M { override def nicedata = dataarray.length/2 }
class K
class L extends K { val datalist = new ListBuffer[String] }
implicit def n2l(n:N) = {
  val l = new L
  n.dataarray.foreach(l.datalist += _)
  l
}
def foo(m:M) { println(m.dataarray(0)) }
def foo(l:L) { println(l.datalist.head) }
val n = new N
foo(n)

Here, the implicit would give an exact match to the L overload, but you really don't want the conversion because of all the extra computation needed simply to print out the first element of a very long sequence.

  --Rex
 
Vladimir Kirichenko
Re: Re: overloading vs implicit resolution

Rex Kerr wrote:
> I think you are also underestimating how bad of an idea it is to use a "more
> specific type", and also how difficult it is to even decide what is a "more
> specific type".

Same rules as in regular overload resolution. Nothing new. Just an
application of the regular rules to implicits.

> class A
> class B extends A
> class C extends B
> class D extends C
> class E extends A
> class F extends E
> def foo(b:B) { println("Hi!") }
> def foo(c:C) { println("Hello!") }
> implicit def e2c(e:E) = new C
> val e = new E
> foo(e)
>
> Should this print Hi! or Hello!?

Hello. The same as for:

> def foo(b:B) { println("Hi!") }
> def foo(c:C) { println("Hello!") }

val c = new C
foo(c)

Nothing new - those are the regular overload resolution rules. Implicits
just have to take part in them.

Think about your example as:

val e = new E
foo(e2c(e))

No surprises here? Just plain overload resolution. That is actually what
the implicit conversion compiles to.

> It is true that C is a deeper node of the
> type tree than B, so perhaps e should be implicitly converted to C. Then
> again, the tree-depth that you need to traverse is deeper for the E->C
> conversion (the common ancestor is at A) than the C->B one. And should the
> answer change if you have a conversion to D instead? What if it's F instead
> of E?

e2c's result type is C. It's exact. There is no question here about
which method to apply.

ichoran
Re: Re: overloading vs implicit resolution
Sorry, I messed up my counterexample to Vladimir Kirichenko's implicits request while I was editing it.  I've fixed it here, in minimal form:
  class A
  class B extends A
  class C extends A
  def foo(a:A) { println("Hi") }
  def foo(c:C) { println("Hello") }
  implicit def b2c(b:B) = new C
  val b = new B
  foo(b)

There are two ways to interpret the call to foo.  Either:
  foo((b:A))
which requires no extra runtime work at all; it just tells the compiler that we want to view B as an A now (which is exactly what you promise is always okay when you write "B extends A").  Or:
  foo(b2c(b))
which invokes an additional method with who-knows-how-much additional overhead.

The former is clearly the more parsimonious interpretation, and thus should be automatically chosen first.

Fortunately, Scala allows a simple way to get the second, if an implicit exists:
  foo((b:C))
will see that you really want b to be acted upon as if it's a C, will realize that this is a problem because B does not extend C, will then find the implicit def b2c and recognize it as a solution to the problem, and will translate it (without you needing to remember what function does the conversion) to
  foo(b2c(b))

So I think the present design is exactly right in terms of language design: do the simplest thing, and allow the programmer to ask for the next-most-simple thing in a relatively painless way.

Now, it is true that if A is actually Object, and if you're using a pre-generic Java library that is perhaps not so well designed for Java 1.5 and later, all your Scala classes will happily stick themselves into the Object version of the method since that is a free operation, even though you know you want something else.  Painful, yes, but I don't think that making expensive implicit conversions supersede free superclass casting is the right solution.  I could envision some sort of annotation, at most, perhaps:
  @avoid(Object) myJavaClass.myOverloadedMethod(a)
so you don't even have to remember what the correct type is, just that you want to avoid Object unless there is no alternative, or perhaps
  myJavaClass.myOverloadedMethod(@implicit a)
which would direct the compiler to try to get an exact match to a with implicits before even trying a supertype.

But even then, I'm not sure it's worth the work.

  --Rex

Vladimir Kirichenko
Re: Re: overloading vs implicit resolution

Rex Kerr wrote:

> There are two ways interpret the call to foo. Either:
> foo((b:A))
> which requires no extra runtime work at all; it just tells the compiler that
> we want to view B as an A now (which is exactly what you promise is always
> okay when you write "B extends A"). Or:
> foo(b2c(b))
> which invokes an additional method with who-knows-how-much additional
> overhead.
>
> The former is clearly the more parsimonious interpretation, and thus should
> be automatically chosen first.

Why? You have explicitly defined the conversion b2c. Why did you do
that? Why should this declaration be ignored? If it should be, why did
you provide it? Don't provide it, and everything will work as you said.
Provide it, and overload selection will be refined by it.

> Fortunately, Scala allows a simple way to get the second, if an implicit
> exists:
> foo((b:C))

It's not such a simple way if the type name is a little longer than one
letter and has a couple of type parameters... And another thing: you
have to find all these occurrences first, because compilation works fine
in this case - it just doesn't do what the programmer wants. It is
actually a dynamic-vs-static-typing kind of problem. Example:

file a.scala:
===================
trait A {
  //def foo(a:AnyRef) = .....
  def foo(x:X) = .....
}
===================
somewhere else, z.scala:

=================
implicit def y2x(y:Y):X = ...

val some = ... with A

some.foo(new Y())
=================

The compilation result of z.scala depends significantly on whether the
commented line in trait A is commented out or not - without any warnings
or static typing errors. If, while adding this kind of overload, I have
no idea whether I'm breaking something somewhere else, and the compiler
is not going to tell me - that's a really scary thing. This is an
example of where statically typed languages should rule, not suck.
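The a.scala / z.scala scenario above can be condensed into one runnable file (X, Y, and y2x are the sketch's names; the bodies are filled in just so it compiles):

```scala
import scala.language.implicitConversions

object OverloadDemo {
  class X
  class Y

  trait A {
    def foo(a: AnyRef): String = "generic foo(AnyRef)" // the "commented line"
    def foo(x: X): String = "specific foo(X)"
  }

  implicit def y2x(y: Y): X = new X

  def main(args: Array[String]): Unit = {
    val some = new A {}
    // Y <: AnyRef, so foo(AnyRef) is applicable without any conversion
    // and wins. Remove foo(AnyRef) from A and the very same call line
    // still compiles, but now goes through y2x to foo(X) instead.
    println(some.foo(new Y))
  }
}
```

Running it prints "generic foo(AnyRef)", with no warning that y2x was bypassed, which is exactly the silent behavior change being discussed.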

> So I think the present design is exactly right in terms of language design:
> do the simplest thing, and allow the programmer to ask for the
> next-most-simple thing in a relatively painless way.

But I'm asking! :) I'm intentionally importing the conversion - what
else should I do - jump and dance? :)

ichoran
Joined: 2009-08-14,
User offline. Last seen 2 years 3 weeks ago.
Re: Re: overloading vs implicit resolution
On Fri, Oct 30, 2009 at 6:39 PM, Vladimir Kirichenko <vladimir.kirichenko@gmail.com> wrote:
Rex Kerr wrote:

> There are two ways to interpret the call to foo.  Either:
>   foo((b:A))
> which requires no extra runtime work at all; it just tells the compiler that
> we want to view B as an A now (which is exactly what you promise is always
> okay when you write "B extends A").  Or:
>   foo(b2c(b))
> which invokes an additional method with who-knows-how-much additional
> overhead.
>
> The former is clearly the more parsimonious interpretation, and thus should
> be automatically chosen first.

Why? You have explicitly defined the conversion b2c. Why did you do that?
Why should this declaration be ignored? If it should be, why did you
provide it? Don't provide it - everything will work as you said. Provide
it - and overload selection will be refined with it.

I've explicitly defined a conversion of b2c, yes, but I also defined a foo that takes A and by extension a B.  *That* is the part that is problematic.  The b2c conversion works on *any* method that needs a C.  In this particular case, however, I provided a foo that you can use with less work than the foo(c:C) version.  You don't want to use it, because the easy version doesn't do what you want.
 

> Fortunately, Scala allows a simple way to get the second, if an implicit
> exists:
>   foo((b:C))

it's not a simple way if the type name is a little longer than one
letter and has a couple of type parameters... And another thing - you
have to find all these occurrences first, because compilation works fine
in this case - it just doesn't do what the programmer wants.

Agreed--it's simpler than remembering the function (perhaps, though if it's your implicit you could make that simpler and call it explicitly), but not simple.

I'm assuming that it's also not an option to wrap the misbehaving Java class(es) in Scala class(es)?
 
> So I think the present design is exactly right in terms of language design:
> do the simplest thing, and allow the programmer to ask for the
> next-most-simple thing in a relatively painless way.

But I'm asking!:) I'm intentionally importing conversion - what else
should I do - jump and dance?:)

Well, one could add a @jumpanddance annotation :).  But, yes, if you make a complicated thing available and there's a simple alternative, I think you either need a way to make the simple thing go out of view or an additional directive to say that the complicated thing should be used preferentially.  That's why I'd suggest a call-site annotation, or perhaps a way to retype the class to deprecate methods you don't like:
  import my.java.Library.badClass { @deprecate badMethod(o:Object) }
so at least you'd only have to do it once ever.
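Short of new syntax, the wrap-once idea is already expressible with plain delegation; BadClass and GoodClass below are hypothetical stand-ins for the offending Java API. Hiding the AnyRef overload behind a wrapper makes the implicit fire again:

```scala
import scala.language.implicitConversions

class BadClass { // stand-in for a Java class with an Object-taking overload
  def badMethod(o: AnyRef): String = "generic"
  def badMethod(s: String): String = "specific: " + s
}

// Re-expose only the overload we want; with the AnyRef version gone,
// implicit conversions to String take part again.
class GoodClass(underlying: BadClass) {
  def badMethod(s: String): String = underlying.badMethod(s)
}

object WrapperDemo {
  case class A(n: Int)
  implicit def a2s(a: A): String = "A(" + a.n + ")"

  def main(args: Array[String]): Unit = {
    println((new BadClass).badMethod(A(1)))              // AnyRef overload wins: "generic"
    println(new GoodClass(new BadClass).badMethod(A(1))) // a2s applies: "specific: A(1)"
  }
}
```

The cost is one forwarding method per overload you care about, written once per misbehaving class.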

If Scala were *only* a language for interoperating with big Object-strewn Java libraries, I would perhaps agree with you that implicits should be changed.  But at least as it is, Scala makes it no harder than using Java in this case, and at that point I think making Scala a good language overall should take priority over making Scala a good language for using Java libraries that you don't want to use in Java.

  --Rex
 
Vladimir Kirichenko
Joined: 2009-02-19,
User offline. Last seen 42 years 45 weeks ago.
Re: Re: overloading vs implicit resolution

Rex Kerr wrote:

> The b2c conversion works on *any* method that needs a C.

in any method that needs C and where you *passed* B with a *presence* of
conversion. Looks like there was a lot of efforts to give compiler an
idea what programmer actually wants:)

> I'm assuming that it's also not an option to wrap the misbehaving Java
> class(es) in Scala class(es)?

Nope. But the question now arises in a more general way:

x.scala:

import impl.a2c;
//foo :: C -> ...
X.foo(new A())

y.scala:

import impl.a2c;
//foo :: C -> ...
Y.foo(new A())

Are these two pieces of code similar? Who knows - it depends on what
overloads X and Y have. If *object X* has *def foo(a:AnyRef) = ..* -
we're in trouble (/whispering/ but we don't find out until the code,
which compiled fine but doesn't work, is running - tadaa, surprise -
somewhere in the hierarchy...). This is where implicits are inconsistent
with the method resolution rules. While it's enough for y.scala to
import the conversion to make it work, it is not enough for the exact
same situation in x.scala, just because of the overloads.

ichoran
Joined: 2009-08-14,
User offline. Last seen 2 years 3 weeks ago.
Re: Re: overloading vs implicit resolution
On Fri, Oct 30, 2009 at 7:21 PM, Vladimir Kirichenko <vladimir.kirichenko@gmail.com> wrote:
Rex Kerr wrote:

> The b2c conversion works on *any* method that needs a C.

in any method that needs a C and where you *passed* a B with a
conversion *present*. Looks like there was a lot of effort to give the
compiler an idea of what the programmer actually wants :)

That is what some programmers want.  But consider this example.

class Data(val data : Array[Double]) {
  def dataExists = (data!=null && data.length>0)
  def canBePairs = (data.length&0x1)==0
  def goodData = dataExists && canBePairs
  def toList = new ListData(data.toList)
}
class ListData(val datalist : List[Double]) extends Data(new Array[Double](0)) {
  override def dataExists = (datalist!=null && datalist!=Nil)
  override def canBePairs = (datalist.length&0x1)==0
  override def toList = this
}
def readyForDispatch(d:Data) = d.goodData
def readyForDispatch(ld:ListData) = ld.dataExists  // Check len later, expensive!
def dispatchForProcessing(a:Actor,ld:ListData) = a ! ProcessMe(ld)

Fairly straightforward stuff: you have a mutable copy of the data and you can convert it to an immutable copy for use with actors so you don't have to worry about concurrent updates.  Plus you have some utility methods that defer some processing on the list until you're parallel.

Okay, now suppose you decide to extend this framework:

class SafeData(data:Array[Double]) extends Data(data) {
  require( super.goodData )
  override def dataExists = true
  override def canBePairs = true
}
implicit def safedata2listdata(sd:SafeData) = sd.toList
val sd = new SafeData(myData)
dispatchForProcessing(myActor,sd)  // Yay, implicit conversion!

Looks good so far, yes?

readyForDispatch(sd)

Now, readyForDispatch should be super-cheap for SafeData because it already knows the answer.  Instead, bizarrely, it takes forever, because it converts it to a list, even though there's a perfectly simple Data-taking readyForDispatch.  This is certainly not something I'd expect: sd already knows perfectly well how to tell if it's ready for dispatch, so why is it being converted to a different data type just to find out the exact same thing when passed into a function that merely asks sd to tell us if it's ready?
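Under the current rules the cheap path is in fact what happens. A stripped-down sketch of the example above (class and method names simplified, the conversion made observable) shows the conversion never running:

```scala
import scala.language.implicitConversions

object DispatchDemo {
  class Data { def check: String = "cheap Data check" }
  class ListData extends Data { def expensive: String = "expensive ListData check" }
  class SafeData extends Data

  var conversionRan = false
  implicit def safe2list(sd: SafeData): ListData = {
    conversionRan = true // would be the costly toList step
    new ListData
  }

  def readyForDispatch(d: Data): String = d.check
  def readyForDispatch(ld: ListData): String = ld.expensive

  def main(args: Array[String]): Unit = {
    // SafeData <: Data applies directly, so readyForDispatch(Data) is
    // chosen and safe2list is never invoked.
    println(readyForDispatch(new SafeData))
    println("conversion ran: " + conversionRan)
  }
}
```

This prints "cheap Data check" and "conversion ran: false"; under the proposed rule the implicit would win and the expensive path would run instead.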

> I'm assuming that it's also not an option to wrap the misbehaving Java
> class(es) in Scala class(es)?

Nope. But the question now arises in a more general way:

x.scala:

import impl.a2c;
//foo :: C -> ...
X.foo(new A())

y.scala:

import impl.a2c;
//foo :: C -> ...
Y.foo(new A())

Are these two pieces of code similar? Who knows - it depends on what
overloads X and Y have. If *object X* has *def foo(a:AnyRef) = ..* - we're in trouble

Fine, but if we adopt your suggestion, it depends on what implicit conversions are defined - and those, unlike X, where you know where to look for def foo (especially if you use scaladoc or an IDE that can do completion), could be anywhere upstream in the namespace.  So we get weird unwanted conversions for perfectly usable types.

The core of the problem is that declaring def foo(a:AnyRef) essentially throws away the type system.  You (or the library creator) told it not to help you out at all.  There is no good, type-safe solution to this.  If you make expensive implicits have priority, you may unexpectedly call costly code with a type that is not what you expect instead of the generic version (and this surprise type change may cause runtime errors for you later if you try to recover your original class).  If you make inexpensive widening to a superclass have priority, you may fail to use an implicit that you added exactly because you wanted to avoid the generic case (and again, you may have runtime errors).  Of the two, the latter seems less evil to me--at least it fails to do what was wished for without doing expensive computations.

I am sympathetic to the problems one encounters when one doesn't have a functioning type system due to library design, but not so sympathetic that I want to start invoking code that is tens or millions of times more computationally expensive (and unexpected!) because inheritance is interrupted by implicits.

Anyway, I think at this point I have either made myself clear or will continue to fail to do so with additional emails, so I shall leave the above as my opinion with justifications thereof.

  --Rex


Vladimir Kirichenko
Joined: 2009-02-19,
User offline. Last seen 42 years 45 weeks ago.
Re: Re: overloading vs implicit resolution

Rex Kerr wrote:
> Instead, bizarrely, it takes forever, because it converts
> it to a list, even though there's a perfectly simple Data-taking
> readyForDispatch. This is certainly not something I'd expect

So why have you made the conversion available in the scope? If you don't
want the conversion - just don't use it, don't import it.

Johannes Rudolph
Joined: 2008-12-17,
User offline. Last seen 29 weeks 20 hours ago.
Re: Re: overloading vs implicit resolution

From my experience using implicits I could imagine that implicit
resolution in the compiler is costly to do. So, it was restricted to
the cases where other possibilities fail. I acknowledge that there is
a use case for taking implicits into account in overload resolution,
but I'm not sure we can really foresee the consequences. Think of all
the implicits defined in Predef (or should they not apply because
those implicits are implicitly imported?) or that otherwise come into
scope. What would the scope be for implicits to be imported explicitly
enough to take part in overload resolution? What are the simple rules
to handle these cases?

Johannes

On Sat, Oct 31, 2009 at 4:49 AM, Vladimir Kirichenko
wrote:
> Rex Kerr wrote:
>> Instead, bizarrely, it takes forever, because it converts
>> it to a list, even though there's a perfectly simple Data-taking
>> readyForDispatch. This is certainly not something I'd expect
>
> So why have you made conversion available in the scope? If you don't
> want conversion - just don't use it, don't import it.
>
>
> --
> Best Regards,
> Vladimir Kirichenko
>
>

Vladimir Kirichenko
Joined: 2009-02-19,
User offline. Last seen 42 years 45 weeks ago.
Re: Re: overloading vs implicit resolution

Johannes Rudolph wrote:
> From my experience using implicits I could imagine that implicit
> resolution in the compiler is costly to do. So, it was restricted to
> the cases where other possibilities fail. I acknowledge that there is
> a use case for taking implicits into account in overload resolution,
> but I'm not sure we can really foresee the consequences. Think of all
> the implicits defined in Predef (or should they not apply because
> those implicits are implicitly imported?) or that otherwise come into
> scope. What would the scope be for implicits to be imported explicitly
> enough to take part in overload resolution? What are the simple rules
> to handle these cases?

Yes, but think of this as another method overload issue. Should they be
taken into account? Sure - we don't ask this question about regular
class imports. They are imported - so they are in scope.

The richness of Predef's implicits is another question. Whether or not
there will be the resolution rule I'm advocating for, the problem is
still there, and there have been issues before with Predef implicits
masking methods in other libraries. I'm pretty sure Scala will reach
the point where the growth of third-party libraries and frequent
conflicts with Predef will raise the question of whether we want all
these implicits to be in Predef, or whether a better solution would be
to put them in a package that is not imported by default.

The question about Predef is a good one, but I don't agree with the
position of "these will be imported but not used".

And this is not only an improvement case. As I mentioned before, a
situation where absolutely similar cases cannot be solved with the same
solution is not good, and a silent dependency of the compilation result
on the existence of an overload, where "good working code becomes
non-working", is no good either.

Copyright © 2012 École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland