
Absence of the subsumption rule

21 replies
DaveScala

I came across this paper "Is the Java Type System Sound?"
See 3.9.1 Absence of the subsumption rule in
http://pubs.doc.ic.ac.uk/JavaTypesSound/JavaTypesSound.pdf

The subsumption rule states that if an expression e has type T, and T
is a subtype of S, then e has type S as well.
If e: T and T <: S, then e: S.
See: http://gbracha.blogspot.com/2009/03/subsuming-packages-and-other-stories.html

I converted the program from the paper to Scala to see whether Scala's
(2.9.1.RC2) type inference system is missing the subsumption rule too,
and it is, just like JDK7. With the subsumption rule the answer should be A.

Is the absence of the subsumption rule still a valid issue, or are
there new insights in programming language theory?
It looks like the second method is chosen because of the class inheritance
implementation (subclass A1 is more specific than superclass A), not
because of a specific interface subtyping rule.

subsumptionrule.scala
=====================
package subsumptionrule

class A
class A1 extends A
class B {
  def f(x : A) : Char = 'A'
  def f(y : A1) : Int = 1
  def g(z: A1) { println(f(z)) }
}
object Main extends App {
  val a = new A
  val a1 = new A1
  val b = new B
  b.g(a1)
}

C:\scala-2.9.1.RC2\examples>scala subsumptionrule.Main
1

SR.java
=======
package subsumptionrule2;

class A {}
class A1 extends A {}
class B {
  char f(A x) { return 'A'; }
  int f(A1 y) { return 1; }
  void g(A1 z) { System.out.println(f(z)); }
}
class SR {
  public static void main(String[] args) {
    A a = new A();
    A1 a1 = new A1();
    B b = new B();
    b.g(a1);
  }
}

C:\scala-2.9.1.RC2\examples>java subsumptionrule2.SR
1

Alex Repain
Re: Absence of the subsumption rule



Indeed. If you remove the second definition of f, the first will be applied instead.

You can also check the intuitive subsumption rule with language utilities:

class A
class A1 extends A
val a1 = new A1
a1.isInstanceOf[A] // not safest way, but good enough for that use case

res2: Boolean = true


--
Alex REPAIN
ENSEIRB-MATMECA - student
TECHNICOLOR R&D - intern
BORDEAUX I      - master's student
SCALA           - enthusiast


daniel
Re: Absence of the subsumption rule

What you are testing here is not subsumption but overloading. Specifically, your test case demonstrates that Scala's overloading is statically resolved, just like Java's. This has nothing to do with subsumption though.

Like all properties of a type system, subsumption is something you test statically, not with a runtime check. It is very easy to see in the following one liner:

val a: AnyRef = "hello"

String <: AnyRef, therefore any value of type String should also have type AnyRef. If that were false, then the above line would fail to compile.

Daniel


DaveScala
Re: Absence of the subsumption rule

@Daniel
On 13 aug, 16:31, Daniel Spiewak wrote:
> What you are testing here is not subsumption but overloading.  Specifically, your test case demonstrates that Scala's overloading is statically resolved, just like Java's.  This has nothing to do with subsumption though.

That is the whole point. The type inferencer resolved it statically in
the wrong way, because of the absence of the subsumption rule.
The overloading is just a use case to see what the type inferencer is
choosing, because two expressions with a subtyping relation are needed.
It could also have been two other expressions with a subtyping relation.

It has two methods to choose from but it chooses the wrong one just
like in Java.

@Alex
> You can also check the intuitive subsumption rule with language
> utilities:
>
> class A
> class A1 extends A
> val a1 = new A1
> a1.isInstanceOf[A] // not safest way, but good enough for that use case

This is only checking a subtype relation, namely whether A1 <: A.
There are no two expressions to choose from.
It is not a real subsumption rule test.

See my post:
The subsumption rule states that if an expression e has type T, and T
is a subtype of S, then e has type S as well.
If e: T and T <: S, then e: S

so it is not only: is T <: S?

extempore
Re: Re: Absence of the subsumption rule

On 8/13/11 9:22 AM, Dave wrote:
> That is the whole point. The type inferencer resolved it statically in
> the wrong way, because of the absence of the subsumption rule.

You're right of course. It is interesting to me that this particular
overload is allowed, because it is intuitively problematic:

def f(x : A) : Char = 'A'
def f(y : A1) : Int = 1

I'm not suggesting we do anything about it, as I wouldn't have the first
idea how to spec it in more complicated situations, but here we have an
overload where the argument types have a subtype relationship but the
return types do not. And to me that screams recipe for unfortunate
behaviors.
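
Concretely, the kind of surprise I mean (a minimal sketch reusing the overloads from the example; the val r is only for illustration):

class A
class A1 extends A
class B {
  def f(x : A) : Char = 'A'
  def f(y : A1) : Int = 1
}

val r = (new B).f(new A1)   // today r is an Int (1); remove or change the A1 overload
                            // and r silently becomes a Char ('A'), with no error at this call site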

I imagine said behaviors arise by letting return types be overloaded
across unrelated types, regardless of argument types. I've never been
tempted to do this so I'm not hip to either what the useful
opportunities for that are or what the motivation is (if any) for
allowing it. I assume unless told otherwise it's Java compatibility and
nothing else.

Alex Repain
Re: Re: Absence of the subsumption rule


2011/8/13 Dave <dave.mahabiersing@hotmail.com>
> @Daniel
> On 13 aug, 16:31, Daniel Spiewak <djspie...@gmail.com> wrote:
> > What you are testing here is not subsumption but overloading.  Specifically, your test case demonstrates that Scala's overloading is statically resolved, just like Java's.  This has nothing to do with subsumption though.
>
> That is the whole point. The type inferencer resolved it statically in
> the wrong way, because of the absence of the subsumption rule.
> The overloading is just a use case to see what the type inferencer is
> choosing, because two expressions with a subtyping relation is needed.
> It could have also two other expression with a subtyping relation.
>
> It has two methods to choose from but it chooses the wrong one just
> like in Java.
>
> @Alex
> You can also check the intuitive subsumption rule with language
> utilities :
>
> class A
> > class A1 extends A
> > val a1 = new A1
> > a1.isInstanceOf[A] // not safest way, but good enough for that use case
>
> This only checking a subtype relation, namely whether A1 <: A.
> There are no two expressions to choose from.
> It is not a real subsumption rule test.
>
> See my post:
> The subsumption rule states that if an expression e has type T, and T
> is a subtype of S, then e has type S as well.
> If e: T and T <: S, e: S

That does not mean that e is ONLY of type S, which would break all OOP principles ever created. So we have e : S and e : T. The compiler is still free to choose which type to consider first. The logical order used until now in OOP is to go from specific types to parent types.

> so it is not only: Is T <: S ?

{new A1} is an expression, and a reference to the value a1 is also an expression... I think you are overthinking the subsumption rule. It is actually very intuitive and logical; you can't conceive of OOP without subsumption. That is why it is fundamental, but it does not mean it has to be complicated.



--
Alex REPAIN
ENSEIRB-MATMECA - student
TECHNICOLOR R&D - intern
BORDEAUX I      - master's student
SCALA           - enthusiast


ichoran
Re: Re: Absence of the subsumption rule
On Sat, Aug 13, 2011 at 12:22 PM, Dave <dave.mahabiersing@hotmail.com> wrote:
> @Daniel
> On 13 aug, 16:31, Daniel Spiewak <djspie...@gmail.com> wrote:
> > What you are testing here is not subsumption but overloading.  Specifically, your test case demonstrates that Scala's overloading is statically resolved, just like Java's.  This has nothing to do with subsumption though.
>
> That is the whole point. The type inferencer resolved it statically in
> the wrong way, because of the absence of the subsumption rule.

Could you explain exactly why that is the wrong way to resolve it, step-by-step?  The current behavior looks perfectly reasonable to me, and does not as far as I can tell violate the subsumption rule.  I don't think the paper is any clearer in explaining why this is problematic.

The input type is A1.  You _could_ call f(A) or f(A1), and therefore the return type is certainly going to be in the union type Char \/ Int.  This doesn't mean that the return type _ought to be_ Char \/ Int, and it certainly doesn't mean it should be Char!

  --Rex

extempore
Re: Re: Absence of the subsumption rule

On 8/13/11 12:21 PM, Rex Kerr wrote:
> Could you explain exactly why that is the wrong way to resolve it,
> step-by-step?

Oh, and I should clarify that I didn't intend to endorse the
characterization that it is "wrong" - I was saying it seems dangerous
for the compiler to choose anything at all (i.e. I believe in a blue sky
world I would prefer it was ambiguous, although maybe I'd feel otherwise
if I ever knowingly overloaded like that.)

Of course in a blue sky world, statically resolved overloading would be
one of the first things against the wall.

ichoran
Re: Re: Absence of the subsumption rule
On Sat, Aug 13, 2011 at 3:29 PM, Paul Phillips <paulp@improving.org> wrote:
> On 8/13/11 12:21 PM, Rex Kerr wrote:
> > Could you explain exactly why that is the wrong way to resolve it,
> > step-by-step?
>
> Oh, and I should clarify that I didn't intend to endorse the characterization that it is "wrong" - I was saying it seems dangerous for the compiler to choose anything at all (i.e. I believe in a blue sky world I would prefer it was ambiguous, although maybe I'd feel otherwise if I ever knowingly overloaded like that.)

I view overloading as syntactic sugar for pattern matching on single methods that take union types, where argument-tuple fusion is implemented.  That is,

  def f(s: String): Int = ...
  def f(b: Boolean, f: Float): Char = ...

means, in some sense,

  def f(x: String \/ (Boolean, Float)): Int \/ Char = x match {
    case s: String => ...
    case (b: Boolean, f: Float) => ...
  }

except the former is less flexible but _way_ more convenient.  The only question with the overloaded methods is which one to call when either will apply; there you have the same problem as in pattern matching, and the rule there is the order you wrote them.  Since overloaded methods are not typically restricted to being next to each other (maybe they should be?), order is more dangerous, so a more-to-less-specific rule makes sense to me.
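
A rough Scala analogue of that view, using Either to stand in for the union type (just a sketch, not how scalac actually treats overloading):

def f(x: Either[String, (Boolean, Float)]): Either[Int, Char] = x match {
  case Left(s)        => Left(s.length)              // plays the role of f(s: String): Int
  case Right((b, fl)) => Right(if (b) 'T' else 'F')  // plays the role of f(b: Boolean, f: Float): Char
}

f(Left("hello"))        // Left(5)
f(Right((true, 1.0f)))  // Right(T)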

In any case, I don't think one can make too strong an argument about ambiguous-looking overloaded methods without also arguing against union types with pattern matching.  (Including of the Either variety.)

  --Rex 

extempore
Re: Re: Absence of the subsumption rule

On 8/13/11 12:45 PM, Rex Kerr wrote:
> In any case, I don't think one can make too strong an argument about
> ambiguous-looking overloaded methods without also arguing against union
> types with pattern matching. (Including of the Either variety.)

You can, because pattern matching defers the test until runtime. The
problem with basing code path execution on static types is that it means
a change in type inference can completely alter the meaning of a
program. If you've got something like

def f(xs: Iterable[Int]): Int = ...
def f(xs: Seq[Int]): Int = ...

then you had better hope that your List(ListBuffer(1),
Vector(1)).flatten is doing the same thing today that it was yesterday.
I realize nobody sane would write an overload like the above and have
it matter much which was selected, but I'm sure you can construct
examples which do not require the introduction of insanity. (Not that
it ever hurts.)
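
A minimal sketch of that sensitivity, reusing the overload above (the Overloads wrapper and the return values are just illustrative markers):

object Overloads {
  def f(xs: Iterable[Int]): Int = 0
  def f(xs: Seq[Int]): Int = 1
}

val inferred = List(1, 2, 3)             // inferred as List[Int]
val declared: Iterable[Int] = inferred   // same value, wider static type

Overloads.f(inferred)   // 1: the Seq overload, because the static type is List[Int]
Overloads.f(declared)   // 0: the Iterable overload; same runtime value, different static type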

I know multimethods have their own problems, but multimethods have
always sounded like a better tradeoff to me.

ichoran
Re: Re: Absence of the subsumption rule
On Sat, Aug 13, 2011 at 4:44 PM, Paul Phillips <paulp@improving.org> wrote:
> On 8/13/11 12:45 PM, Rex Kerr wrote:
> > In any case, I don't think one can make too strong an argument about
> > ambiguous-looking overloaded methods without also arguing against union
> > types with pattern matching.  (Including of the Either variety.)
>
> You can, because pattern matching defers the test until runtime.  The problem with basing code path execution on static types is that it means a change in type inference can completely alter the meaning of a program.

I agree, given your example, that it's not exactly the same as the pattern matching/union type version.  (This reminds me that I would like static pattern matching on union types and sealed class hierarchies as a way to write specialized code, but that's another issue.)

But the problem with basing code path execution on dynamic types is that the compiler can't tell you what is actually going on, so you're liable to make mistakes.  You can change the underlying implementation (and thus the run-time type) without changing the interface at all.  And then, magically, you take the wrong code path.  How is this better?

You can always use explicit typing if you like re-using the same function name but are worried that you don't understand which types are which (or that someone else is going to change an interface in a way that will cause non-obvious problems).
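
For example, with the overloads from the original program, an explicit type at the call site makes the choice visible (a quick sketch):

class A
class A1 extends A
class B {
  def f(x : A) : Char = 'A'
  def f(y : A1) : Int = 1
}

val b  = new B
val a1 = new A1
b.f(a1)       // resolves to f(A1): 1
b.f(a1: A)    // the ascription re-uses the same name but selects f(A): 'A'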

  --Rex

odersky
Re: Re: Absence of the subsumption rule
Scala indeed does not have a subsumption rule. It can't have one, because of the problems with overloading that were mentioned in this thread. What Scala has instead is an on-demand widening when the target type of an expression is a supertype of its computed type. Check all occurrences of the term "expected to conform" in the spec to see where that happens.
Most OO languages in widespread use follow the same principle.
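
For example (a minimal sketch of that widening at work; h is just an arbitrary method taking an A):

class A
class A1 extends A

val x: A = new A1   // the expected type A is a supertype of the computed type A1, so the A1 is widened on demand
def h(a: A) {}
h(new A1)           // likewise at a call site: the argument is expected to conform to A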

Cheers

 -- Martin






--
Martin Odersky
Prof., EPFL and Chairman, Typesafe
PSED, 1015 Lausanne, Switzerland
Tel. EPFL: +41 21 693 6863
Tel. Typesafe: +41 21 691 4967

DaveScala
Re: Absence of the subsumption rule

> Could you explain exactly why that is the wrong way to resolve it,
> step-by-step?  The current behavior looks perfectly reasonable to me, and
> does not as far as I can tell violate the subsumption rule.  I don't think
> the paper is any clearer in explaining why this is problematic.
>
> The input type is A1.  You _could_ call f(A) or f(A1), and therefore the
> return type is certainly going to be in the union type Char \/ Int.  This
> doesn't mean that the return type _ought to be_ Char \/ Int, and it
> certainly doesn't mean it should be Char!
>
>   --Rex

It's not about the choice of return types but about the choice between
methods.
When g(z: A1) is used, an object of type A1 is denied calling f(x : A), while with
g(z: A) an object of type A1 is allowed to call f(x : A). The printing of the
return types is only for seeing which choice is being made, so it is not the
focus.

I try to explain it below:

With "in the wrong way" I mean it's solved not based on subtyping with
subsumption rule but based on an OO class inheritance scheme and
runtime pattern matching without taking in account the subtyping
relation. The correct subtyping scheme is IMO the subsumption rule.
It is now implementation based and not interface based and the types
are rather treated separately instead of with a subtyping relation.
An object A1 is denied access to f(x : A) which violates LSP because
it is a valid and equal second choice according to argument
contravariance.
If there is a method f(x : A) and f(y : A1) then to solve this problem
of equal choices is to subsume f(y : A1) into f(x : A) because A1 <: A
which is the subsumption rule.
So then you have one method f(x : A) that is statically checked and
valid for both objects of A1 and A.
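
In code, that reading would collapse the overloads into the single base-typed method (a sketch of the intended outcome, not of current Scala behaviour):

class A
class A1 extends A
class B {
  def f(x : A) : Char = 'A'           // the only f; accepts both A and A1
  def g(z: A1) { println(f(z)) }      // now unambiguously calls f(A) and prints A
}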

One well-known OO adage is "program to the interface, not the
implementation", which underlines the importance of subtyping; the OO
principle LSP also stems from subtyping.

I would intuitively choose the base type (first method), like the authors
of the paper, but the compiler must raise a
compile-time error in the example as a consequence of subsumption,
because of unreachable code: the second method is never reached
under the subsumption rule. So the developer must correct this.

I agree that the paper is vague. It also says that the Java type
system becomes non-deterministic under the subsumption rule.
Do they mean because of the library, because of the compiler, or because of the
unreachable code? Is it a backward compatibility problem or is it the
Java language itself? They don't explain why, only that someone is
fixing this for Java.

Step by step
============

package subsumptionrule

class A
class A1 extends A
class B {
  def f(x : A) : Char = 'A'
  def f(y : A1) : Int = 1
  def g(z: A1) { println(f(z)) }
}

object Main extends App {
  val a = new A
  val a1 = new A1
  val b = new B
  b.g(a1)
}

I think the type inferencer has 4 ways to resolve f(z)
1. calling f(x : A)
2. calling f(y : A1)
3. calling f(x : A) and f(y : A1) (both)
4. calling nothing

and they are all equal choices in a sense

Dictionary
==========
sub·sume
tr.v. sub·sumed, sub·sum·ing, sub·sumes
To classify, include, or incorporate in a more comprehensive category
or under a general principle.

To include or place within something larger or more comprehensive;
encompass as a subordinate or component element: "red, green, and
yellow are subsumed under the term 'color'".

In the above dictionary example, color is a base type and red, green and
yellow are specific colors, so subtypes.
So subsumption is inclusion under a base type (specific to
generic).

Recalling subsumption rule:
===========================
The subsumption rule states that if an expression e has type T, and T
is a subtype of S, then e has type S as well.
If e: T and T <: S, then e: S

I would therefore go for the expression with the base type, which is
x : A,
because the expression y : A1 in the other method subsumes, under A1 <: A,
into y : A, which is x : A.
Thus f(y : A1) favours f(x : A) too under the subsumption rule (it's as if
f(y : A1) merges into f(x : A)).
The other way around is not possible: x : A cannot subsume into x : A1, so
there is a
kind of built-in preference for the expression with the base type.

Another approach is maybe more intuitive:
def f(x : A) : Char = 'A'
can accept type A and type A1
but
def f(y : A1) : Int = 1
can only accept type A1
so the second method f is superfluous and unnecessary and will never
be reached.

A third approach is to look at these trees:
A, B, and C are traits or interfaces and thus base types.
These are the subtyping relations:
A22 <: A12, A11 <: A, A12 <: A, A13 <: A
B33 <: B23, B12 <: B, B23 <: B, B33 <: B
C11 <: C

A
  A11
  A12
    A22
  A13

B
  B11
  B12
  B13
  B23
    B33

C
  C11

Then consider a list filled with these types with an imaginary method
subsume:
List(A,A11,A12,A13,A22,B,B11,B12,B13,B33,C,C11).subsume

results in:
List(A,B,C)

This is my interpretation of subsuming (I think it is intuitive and
logical), and you can do it for expressions too, as in the subsumption
rule:
List(e:A,e:A11,e:A12,e:A13,e:A22,e:B,e:B11,e:B12,e:B13,e:B33,e:C,e:C11).subsume
subsumes into:
List(e:A,e:B,e:C)
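
A hypothetical subsume helper along those lines, keeping only the types that have no strict supertype elsewhere in the list (a sketch using java.lang.Class for the subtype test; the classes from the trees above are assumed to exist):

def subsume(types: List[Class[_]]): List[Class[_]] =
  types.filter(t => !types.exists(s => s != t && s.isAssignableFrom(t)))

// e.g. subsume(List(classOf[A], classOf[A11], classOf[A12], classOf[A22],
//                   classOf[B], classOf[B23], classOf[B33], classOf[C], classOf[C11]))
// keeps List(classOf[A], classOf[B], classOf[C])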

The 4th approach is with pattern matching.
I see the pattern match with its case expressions as a sort of
aggregated, algorithmic base type,
substituting for the base type.
So instead of A1 <: A
it is possible to write
A1 <: x: A match {
  case t: A1 => 1
  case s: A => 'A'
}

And then the subsumption rule
If e: T and T <: S, then e: S
reads like this:
If x: A1 and A1 <: A, x: A match {
  case t: A1 => 1
  case s: A => 'A'
}

but then
x: A must be equivalent to x: A match {
  case t: A1 => 1
  case s: A => 'A'
}

To also make the results fit the subsumption rule, all cases must return
the base-type case result:
x: A must be equivalent to x: A match {
  case t: A1 => 'A'
  case s: A => 'A'
}
If x: A1 and A1 <: A, x: A match {
  case t: A1 => 'A'
  case s: A => 'A'
}
which is actually
If x: A1 and A1 <: A, x: A match {
  case s: A => 'A'
}

In all these 4 approaches the subtype subsumes to the base type, not the
other way around, and there is also no disjunction between base type and
subtype, because that would violate A1 <: A.

Right now it doesn't recognize the subtyping relation A1 <: A in the
expressions, because there is no subsumption rule;
there is class inheritance from specific class to generic class
with pattern matching, and it acts as if they are all separately typed
expressions.
Thus there is now mutual exclusion of subtyped expressions instead of
inclusion (= subsumption). The same holds for pattern matching itself.
See below.

With the subsumption rule, one method is always favoured and the
method with the subtyped expression is thus never reached.
So that should generate a compile-time error, which makes sense:
under the subsumption rule, def f(y : A1) : Int = 1 is never reached.
The developer can then fix this to make the program correct and
typesafe.

You mentioned pattern matching as your view of overloading and there
is a second proof of the absence of the subsumption rule:

srpattern.scala
===============
package srpattern
class A
class A1 extends A
class B {
  def f(x: A) = x match {
    case t: A1 => 1
    case s: A => 'A'
  }
  def g(z: A1) { println(f(z)) }
}
object Main extends App {
  val a = new A
  val a1 = new A1
  val b = new B
  b.g(a1)
}

C:\scala-2.9.1.RC2\examples>scala srpattern.Main
1

srpattern2.scala
================
package srpattern2
class A
class A1 extends A
class B {
  def f(x: A) = x match {
    case s: A => 'A'
    case t: A1 => 1
  }
  def g(z: A1) { println(f(z)) }
}
object Main extends App {
  val a = new A
  val a1 = new A1
  val b = new B
  b.g(a1)
}

C:\scala-2.9.1.RC2\examples>scalac srpattern2.scala
srpattern2.scala:7: error: unreachable code
case t: A1 => 1
^
one error found

There is an unnecessary difference in behaviour between these two code
snippets, and the reason is the absence of the subsumption rule.
The only difference is the order of the case expressions with a
subtype relation.
With the subsumption rule the specific case expression could subsume
into the generic case expression, which makes the whole
thing more typesafe and independent of evaluation order.
IMO both snippets should have given a compile-time error for
unreachable code on case t: A1 => 1 under
the subsumption rule.

The simplest way to explain the incorrect behaviour is that the behaviour
of the class inheritance and pattern matching mechanism differs from
the behaviour based on the subsumption rule
in the given example.
The subsumption rule chooses the first method, and then the compiler should
IMO consequently generate an unreachable-code error for the second
method.
Class inheritance by pattern matching chooses the second method (for
the subtype object) and the first method for the base-type object and
runs it without hesitation, but it violates A1 <: A, so it is not correct:
objects of A1 and A don't operate according to A1 <: A. So it's a
type-safety issue, caused by incorrectness with respect to the broken
subtyping relation.

ichoran
Re: Re: Absence of the subsumption rule
On Sun, Aug 14, 2011 at 11:34 AM, Dave <dave.mahabiersing@hotmail.com> wrote:
> Could you explain exactly why that is the wrong way to resolve it,
> step-by-step?

> When g(z: A1) then an object A1 is denied calling f(x : A) while when
> g(z: A) an object A1 is allowed to call f(x : A).

It's not denied anything.  You can specify what you want:
  def g(z: A1) = f(z: A)
is perfectly valid, and gives you access to f(x: A).
 
With "in the wrong way" I mean it's solved not based on subtyping with
subsumption rule but based on an OO class inheritance scheme and
runtime pattern matching

Huh?  There's nothing about runtime properties in either what you said or what is done now.  It's all entirely static.
 
> An object A1 is denied access to f(x : A) which violates LSP because
> it is a valid and equal second choice according to argument
> contravariance.

It's valid; whether it is _equal_ isn't specified by LSP.
 
> If there is a method f(x : A) and f(y : A1) then to solve this problem
> of equal choices is to subsume f(y : A1) into f(x : A) because A1 <: A
> which is the subsumption rule.
> So then you have one method f(x : A) that is statically checked and
> valid for both objects of A1 and A.

Which is exactly what you have if you just don't write f(x: A1).  So you're really just arguing that overloading methods of direct descendants should be invalid.  For example,

  def print(a: Any) { ... }
  def print(i: Int) { ... }
 
is not an okay thing to do.

>    def f(x : A) : Char = 'A'
>    def f(y : A1) : Int = 1
>    def g(z: A1) {println(f(z))}
>
> I think the type inferencer has 4 ways to resolve f(z)
> 1. calling f(x : A)
> 2. calling f(y : A1)
> 3. calling f(x : A) and f(y : A1) (both)
> 4. calling nothing
>
> and they are all equal choices in a sense

(3) might make sense in Prolog, but otherwise I can't see how a single method call should ever be turned into two method calls.  (4) doesn't make sense in any context I can envision (unless you mean it's a compiler error (ambiguity)).  So I think we practically just have to decide between (1) and (2).
 

> Recalling subsumption rule:
> ===========================
> The subsumption rule states that if an expression e has type T, and T
> is a subtype of S, then e has type S as well.
>
> I would therefore go for the expression with the base type which is
> x : A
> because the expression y : A1 in the other method subsumes under A1 <:
> A into y : A which is x : A

You're kind of assuming your conclusion here: you assume that you want to subsume, and then when you see an opportunity to do so, you take it.  Then, noticing that you have taken it, you argue that you should.  This doesn't follow to me at all; as long as one _could_ call f(A) if one notated it appropriately (and one can), then LSP is preserved.  Whether subsumption is obeyed depends on whether you view it as a strong rule or a weak one (i.e. if you have types T and S, must the compiler be unable to distinguish you from something of type S?).
 
> Another approach is maybe more intuitive:
> def f(x : A) : Char = 'A'
> can accept type A and type A1
> but
> def f(y : A1) : Int = 1
> can only accept type A1
> so the second method f is superfluous and unnecessary and will never
> be reached.

You are again assuming your conclusion, but at least here it's clearest what the argument is: there should be one unique possible method to call given overloaded methods.

Of course, that's not a very useful stance to take with object oriented programming.  It basically says that you shouldn't ever override methods:

  class A {
    def parse(s: String): String = ...
  }

is equivalent to

  def parse(a: A, s: String): String = ...

(and in fact that's what happens in the bytecode).

Now if we

  class B extends A {
    override def parse(s: String): String = ...
  }

then we have

  def parse(b: B, s: String): String = ...

which violates subsumption.  So you can't override any method.  So much for .toString!


> There is an unnecessary different behaviour between these two code
> snippets and the reason is the absence of the subsumption rule.
> The only difference is the order of the case expressions with a
> subtype relation.
> So with the subsumption rule the specific case expresion can subsume
> into the generic case expression which makes the whole
> thing typesafer and evaluation order independent.

The difference isn't unnecessary; it's entirely sensible given that the set of children might be huge or unknown, yet it's useful to catch generic cases.  For example,

  try { blah } catch {
    case ioe: IOException => closeSomething()
    case iae: IllegalArgumentException => tellUserSomething()
    case e: Exception => closeGracefullyBecause(e)
  }

This obviously won't work if order is not important.  Instead we would need to do something like

  catch {
    case e: Exception =>
      e partialmatch {
        case ioe ...
        case iae ...
      } orElse {
        closeGracefullyBecause(e)
      }
  }

which I can't really interpret as an improvement; it's just more boilerplate for exactly the same functionality.

Order dependence is hardly new or dumbfounding.  Take this for example:

    5 - 3

It's not the same as 3 - 5!

I guess it really boils down to whether you want strong subsumption or overriding of methods.  I think the latter has some use-cases that are sufficiently powerful that the simplicity offered by the former is not a good enough trade-off.

  --Rex

DaveScala
Re: Absence of the subsumption rule

> It's not denied anything.  You can specify what you want:
>   def g(z: A1) = f(z: A)
> is perfectly valid, and gives you access to f(x: A).
This shows that you didn't understand the whole issue.
The paper and I are talking from the perspective of type
inference.
Of course, if a programmer hints the type inferencer with an explicit
type annotation, then the type inferencer chooses the first method.
But the setup of the example is such that the type inferencer has to
decide and has two choices.
> It's valid; whether it is _equal_ isn't specified by LSP.
They are equal choices. There is no priority of one method over the
other.

> Which is exactly what you have if you just don't write f(x: A1).  So you're
> really just arguing that overloading methods of direct descendants should be
> invalid.  For example,
>
>   def print(a: Any) { ... }
>   def print(i: Int) { ... }
>
> is not an okay thing to do.
>
I didn't say that. I only say that if there is no subsumption rule,
calling one of the two methods is arbitrary.

> > I think the type inferencer has 4 ways to resolve f(z)
> > 1. calling f(x : A)
> > 2. calling f(y : A1)
> > 3. calling f(x : A) and f(y : A1) (both)
> > 4. calling nothing
>
> > and they are all equal choices in a sense
>
> (3) might make sense in Prolog, but otherwise I can't see how a single
> method call should ever be turned into two method calls.  (4) doesn't make
> sense in any context I can envision (unless you mean it's a compiler error
> (ambiguity)).  So I think we practically just have to decide between (1) and
> (2).
The difference is that you are thinking about how a programming language
concretely handles these choices, and I am thinking about what the abstract
choices are; whether they make sense or not is irrelevant.
> You're kind of assuming your conclusion here: you assume that you want to
> subsume, and then when you see an opportunity to do so, you take it.  Then,
> noticing that you have taken it, you argue that you should.  This doesn't
> follow to me at all; as long as one _could_ call f(A) if one notated it
> appropriately (and one can), then LSP is preserved.  Whether subsumption is
> obeyed depends on whether you view it as a strong rule or a weak one (i.e.
> if you have types T and S, must the compiler be unable to distinguish you
> from something of type S?).
I don't assume anything here. I show that the process of the subsumption
rule has a natural direction.
I carry the subsumption rule out and show the result.
> You are again assuming your conclusion, but at least here it's clearest what
> the argument is: there should be one unique possible method to call given
> overloaded methods.
Just another variant. I am just carrying out the subsumption rule.

>
> Of course, that's not a very useful stance to take with object oriented
> programming.  It basically says that you shouldn't ever override methods:
>
>   class A {
>     def parse(s: String): String = ...
>   }
>
> is equivalent to
>
>   def parse(a: A, s: String): String = ...
>
> (and in fact that's what happens in the bytecode).
>
> Now if we
>
>   class B extends A {
>     override def parse(s: String): String = ...
>   }
>
> then we have
>
>   def parse(b: B, s: String): String = ...
>
> which violates subsumption.  So you can't override any method.  So much for
> .toString!
Well, I don't see that, and maybe it is not right to compare the bytecode
level with the source-code level to make your point.
You aren't proving anything here.

>
> The difference isn't unnecessary; it's entirely sensible given that the set
> of children might be huge or unknown, yet it's useful to catch generic
> cases.  For example,
>
>   try { blah } catch {
>     case ioe: IOException => closeSomething()
>     case iae: IllegalArgumentException => tellUserSomething()
>     case e: Exception => closeGracefullyBecause(e)
>   }
>
> This obviously won't work if order is not important.  Instead we would need
> to do something like
>
>   catch {
>     case e: Exception =>
>       e partialmatch {
>         case ioe ...
>         case iae ...
>       } orElse {
>         closeGracefullyBecause(e)
>       }
>   }
>
> which I can't really interpret as an improvement; it's just more boilerplate
> for exactly the same functionality.
Nice! From a subtyping perspective it's not the same. The exceptions
in the partialmatch have no
subtyping relation, so this is good.

>
> Order dependence is hardly new or dumbfounding.  Take this for example:
>
>     5 - 3
>
> It's not the same as 3 - 5!

Here you are proving your point with something that is obvious.
I was talking about order independence according to the associative law
a + (b + c) = (a + b) + c, thus order independence in the sense of the order of
operations, not of operands, and you are proving your point with a non-commutative
operator.
It's like saying 1 + 1 = 2, so I'm right.

> I guess it really boils down to whether you want strong subsumption or
> overriding of methods.  I think the latter has some use-cases that are
> sufficiently powerful that the simplicity offered by the former is not a
> good enough trade-off.
There is no weak subsumption rule and no strong subsumption rule at
the moment in Scala.
And the subsumption rule has nothing to do with overriding or not
overriding.
I see only an implementation of an "orphanization" rule in which parent
types and child types are separated.
The class inheritance and pattern matching solution is not checking
type relations.

But now I have given some counter-proofs and a reference paper, while I
haven't seen any form of validation that the current solution has a
subsumption rule. How do you know that the implementation is correct,
besides intuition and assumption,
and that it justifies the choice of the second method? Do you have a
reference specification or some source-code example in a language that
is known to have a subsumption rule, so that I can check that?

ichoran
Re: Re: Absence of the subsumption rule
On Sun, Aug 14, 2011 at 4:13 PM, Dave <dave.mahabiersing@hotmail.com> wrote:
> It's not denied anything.  You can specify what you want:
>   def g(z: A1) = f(z: A)
> is perfectly valid, and gives you access to f(x: A).
> This shows that you didn't understand the whole issue.
> The paper and I are talking from the perspective of the type
> inference.

The type system knows that A1 is a subclass of A, and therefore f(x: A) is okay to call; you can prove that by not giving it the option of f(x: A1), or by explicitly typing the f(x: A) version.  The type inferencer does not treat A1 identically to A, true, but A1 _can_ substitute anywhere that A can.
 
> It's valid; whether it is _equal_ isn't specified by LSP.
> They are equal choices. There is no priority of one method over the
> other.

According to LSP, no.  So we could choose either the more specific one, in which case there are two methods, or the less specific one, in which case there is one method and a compile-time error.  Either way is as compatible with LSP as the other.  You couldn't pass A in to g anyway, so there aren't any provable properties that have been broken.
 

> Which is exactly what you have if you just don't write f(x: A1).  So you're
> really just arguing that overloading methods of direct descendants should be
> invalid.  For example,
>
>   def print(a: Any) { ... }
>   def print(i: Int) { ... }
>
> is not an okay thing to do.
>
> I didn't say that. I only say that if there is no subsubtion rule
> calling one of the 2 methods is arbitrary.

If there is a substitution rule, calling the second is impossible, and you must call the first.  Since the second is impossible to call, it ought not be okay to write that code.
 

> > I think the type inferencer has 4 ways to resolve f(z)
> > 1. calling f(x : A)
> > 2. calling f(y : A1)
> > 3. calling f(x : A) and f(y : A1) (both)
> > 4. calling nothing
>
> > and they are all equal choices in a sense
> I am think what the abstract
> choices are whether they make sense or not is irrelevant.

As long as you don't assume f(z) means method invocation, I agree.  Since f(x: A) and f(x: A1) are different methods, and nothing is not a method, (3) and (4) are out in that case (unless you view them as preludes to error conditions, since they are not a method call).  If you view it as some sort of predicate hierarchy, then (3) is cool, and possibly (4) in a stretch.
 
>
> Of course, that's not a very useful stance to take with object oriented
> programming.  It basically says that you shouldn't ever override methods:
>
>   class A {
>     def parse(s: String): String = ...
>   }
>
> is equivalent to
>
>   def parse(a: A, s: String): String = ...
>
> (and in fact that's what happens in the bytecode).
>
> Now if we
>
>   class B extends A {
>     override def parse(s: String): String = ...
>   }
>
> then we have
>
>   def parse(b: B, s: String): String = ...
>
> which violates subsumption.  So you can't override any method.  So much for
> .toString!

> Well, I don't see that and maybe it is not right to compare bytecode
> level with sourcecode level to make your point.
> You aren't proving anything here.

Well, it depends on how you view object-oriented programming.  In general,

  x.f(y) === call(x, "f", y)

which in practice looks at x's method table, and is equivalent to your pattern match counterexample.  In statically-typed languages, this is the method table for the class, and thus when all classes are in view of the compiler (e.g. marked final or sealed), we have

  x.f(y) === call_f(x,y)

which is equivalent to my overloaded method example above.
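
A small sketch of that equivalence (callParse is a hypothetical stand-in for call_f):

class A            { def parse(s: String): String = "A:" + s }
class B extends A  { override def parse(s: String): String = "B:" + s }

// x.parse(s) spelled out as a match on the runtime class of x
def callParse(x: A, s: String): String = x match {
  case _: B => "B:" + s   // what B's override returns
  case _: A => "A:" + s   // what A's implementation returns
}

// for every x: A, x.parse(s) and callParse(x, s) agree
assert(new B().parse("hi") == callParse(new B, "hi"))
assert(new A().parse("hi") == callParse(new A, "hi"))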

>
> The difference isn't unnecessary; it's entirely sensible given that the set
> of children might be huge or unknown, yet it's useful to catch generic
> cases.  For example,
>
>   try { blah } catch {
>     case ioe: IOException => closeSomething()
>     case iae: IllegalArgumentException => tellUserSomething()
>     case e: Exception => closeGracefullyBecause(e)
>   }
>
> This obviously won't work if order is not important.  Instead we would need
> to do something like
>
>   catch {
>     case e: Exception =>
>       e partialmatch {
>         case ioe ...
>         case iae ...
>       } orElse {
>         closeGracefullyBecause(e)
>       }
>   }
>
> which I can't really interpret as an improvement; it's just more boilerplate
> for exactly the same functionality.
> Nice! From subtyping perspective it's not the same. The the exceptions
> in the partialmatch have no
> subtyping relation so this is good.

They are functionally identical (i.e. for all exceptions e, the first code and the second do the same thing), and the latter is four lines longer.  I fail to see why this is "good", except that you dislike order-dependence.  It doesn't seem to have anything to do with typing, because any problematic case you can write for the first one will afflict the second one also.
 

>
> Order dependence is hardly new or dumbfounding.  Take this for example:
>
>     5 - 3
>
> It's not the same as 3 - 5!

> Here you are proving your point with something that is obvious.
> I was talking about order independence according the associative law
> a + (b+c) = (a+b) +c thus order independence in the sense of order of
> operations
But a - (b - c) != (a - b) - c, and matches are associative in the sense that you want, in that
  x match {
    case a: A => f(a)
    case _ => x match {
      case b: B => g(b)
      case c: C => h(c)
    }
  }
is the same as
  x match {
    case A | B =>
      x match {
        case a: A => f(a)
        case b: B => g(b)
      }
    case c: C => h(c)
  }
so I really don't know what your point is.
 
> not of operands and you are proving your point with a non-commutative
> operator

Is "case" supposed to be commutative (or associative)?  Why?
 
> I guess it really boils down to whether you want strong subsumption or
> overriding of methods.  I think the latter has some use-cases that are
> sufficiently powerful that the simplicity offered by the former is not a
> good enough trade-off.

> There is no weak subsumption rule and no strong subsumption rule at
> the moment in Scala.

The weak rule, as I defined it, exists:

  class A
  class A1 extends A
  class B {
    def f(a: A) = 5
    def g(a1: A1) = f(a1)  // This compiles
  }

The compiler clearly knows about the subtyping relationship (via type-widening).

The strong rule, as I defined it, doesn't exist, since the most general form is not always chosen, as shown by your examples.

Run-time identification or pattern matching _at all_ can break subsumption.  Look at my partial match from before:

  x partialmatch {
    case a1: A1 => stuff()
  } orElse {
    case a: A => otherStuff()
  }

The syntax is irrelevant; the point is that if you pass an A1 in here, you run different code than A.  In particular, if it's an A1, it should match case a1: A1 because it is of type A1, but it should equally well _not_ match, because it's also of type A (and Object).  The only thing you could possibly allow is pattern matching on sealed class hierarchies that exhaustively list every leaf node, and where only leaves are not abstract:

  sealed abstract class A
  sealed class A1 extends A
  sealed class A2 extends A
  sealed abstract class A3 extends A
  sealed class A31 extends A3
  sealed class A32 extends A3
  def aMatch(a: A) = a match {
    case a1: A1 => ...
    case a2: A2 => ...
    case a31: A31 => ...
    case a32: A32 => ...
  }

Anything else falls afoul of not treating X <: A the same way as you treat A.

(This is giving me some insight into why certain people say OO is broken and one should just use FP.  Also, since it seems to just be the two of us disagreeing, we should probably take it off-list or simply stop the discussion.)

  --Rex

Alex Repain
Re: Re: Absence of the subsumption rule


2011/8/14 Rex Kerr <ichoran@gmail.com>


> (This is giving me some insight into why certain people say OO is broken and one should just use FP.  Also, since it seems to just be the two of us disagreeing, we should probably take it off-list or simply stop the discussion.)

Please don't, I'm following it. This is getting interesting and is good food for thought and knowledge.




--
Alex REPAIN
ENSEIRB-MATMECA - student
TECHNICOLOR R&D - intern
BORDEAUX I      - master's student
SCALA           - enthusiast


Artem Khodyush
Re: Re: Absence of the subsumption rule
On Sun, Aug 14, 2011 at 1:13 PM, Dave <dave.mahabiersing@hotmail.com> wrote:
> > which violates subsumption.  So you can't override any method.  So much for
> .toString!
> Well, I don't see that and maybe it is not right to compare bytecode
> level with sourcecode level to make your point.
> You aren't proving anything here.

So, are you allowing dynamic dispatch to violate the subsumption rule? Because if I see

def f(a: A) { a.method }

calling anything other than the method defined in A, I'd say it violates the subsumption rule as you stated it.
If you allow it for dynamic dispatch, what is the benefit of different rules for static dispatch? If you don't allow it, I don't see how virtual methods and overriding are possible at all.
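
For reference, the kind of dispatch I mean (a minimal sketch):

class A            { def method: String = "from A"  }
class A1 extends A { override def method: String = "from A1" }

def f(a: A): String = a.method   // statically, a has type A...

f(new A1)   // ...yet dynamic dispatch runs A1's override and returns "from A1"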
Thanks,
Artem
 
Adam Jorgensen
Re: Re: Absence of the subsumption rule
For those of us less in the know, can someone provide some extra info on where the subsumption rule originates from and which, if any, OO languages comply with it?

To my untrained, non-language-designer eye, the subsumption rule basically seems to say:
OO over-riding doesn't work if this rule is in place.
From that it seems to follow that the rule has no place in a useful OO language.
dcsobral
Re: Re: Absence of the subsumption rule

On Mon, Aug 15, 2011 at 01:18, Adam Jorgensen wrote:
> For those of use less in the know, can someone provide some extra info on
> where the subsumption rule originates from and which, if any, OO languages
> do comply with it?

I presume Newspeak complies with it, given Gilad's strong stance on it.
As for where it comes from, it's all in the links that were provided.

>
> To my untrained, non-language designer eye the subsumption rule basically
> seems to say:
> OO over-riding doesn't work if this rule is in place.
> From that it seems to follow that the rule has no place in a useful OO
> language.

DaveScala
Re: Absence of the subsumption rule

> The type system knows that A1 is a subclass of A, and therefore f(x: A) is
> okay to call; you can prove that by not giving it the option of f(x: A1), or
> by explicitly typing the f(x: A) version. The type inferencer does not
> treat A1 identically to A, true, but A1 _can_ substitute anywhere that A
> can.
Still,
even if you subsume to the subtype and not the base type, and thus
choose the second method, then there should be an error for
unreachable code. If it did that, then your implementation would be
respecting A1 <: A too, but now it runs without hesitation as if A1 and
A have no relationship, thus violating LSP argument contravariance.

> According to LSP, no. So we could choose either the more specific one, in
> which case there are two methods, or the less specific one, in which case
> there is one method and a compile-time error. Either way is as compatible
> with LSP as the other. You couldn't pass A in to g anyway, so there aren't
> any provable properties that have been broken.
See previous remark

>
> If there is a substitution rule, calling the second is impossible, and you
> must call the first. Since the second is impossible to call, it ought not
> be okay to write that code.
A compile-time error must be raised and the programmer must resolve the
ambiguity in the calls.

>
> As long as you don't assume f(z) means method invocation, I agree. Since
> f(x: A) and f(x: A1) are different methods, and nothing is not a method, (3)
> and (4) are out in that case (unless you view them as preludes to error
> conditions, since they are not a method call). If you view it as some sort
> of predicate hierarchy, then (3) is cool, and possibly (4) in a stretch.
with "nothing" I mean calling nothing not the method nothing (it also
can mean raising an error)

> > > Of course, that's not a very useful stance to take with object oriented
> > > programming. It basically says that you shouldn't ever override methods:
>
> > > class A {
> > > def parse(s: String): String = ...
> > > }
>
> > > is equivalent to
>
> > > def parse(a: A, s: String): String = ...
>
> > > (and in fact that's what happens in the bytecode).
>
> > > Now if we
>
> > > class B extends A {
> > > override def parse(s: String): String = ...
> > > }
>
> > > then we have
>
> > > def parse(b: B, s: String): String = ...
>
> > > which violates subsumption. So you can't override any method. So much
> > for
> > > .toString!
>
> > Well, I don't see that and maybe it is not right to compare bytecode
> > level with sourcecode level to make your point.
> > You aren't proving anything here.
>
> Well, it depends on how you view object-oriented programming. In general,
>
> x.f(y) === call(x, "f", y)
>
> which in practice looks at x's method table, and is equivalent to your
> pattern match counterexample. In statically-typed languages, this is the
> method table for the class, and thus when all classes are in view of the
> compiler (e.g. marked final or sealed), we have
>
> x.f(y) === call_f(x,y)
>
> which is equivalent to my overloaded method example above.
>
>
With the subsumption rule you can still override methods.
The type inferencer works at the source-code level.
But if there were a type inferencer at the bytecode level, it would see
the methods as different methods if it applied the subsumption
rule again.
The subsumption rule is based on types. It is independent of whether there
is an OO paradigm or not.

>
> > > The difference isn't unnecessary; it's entirely sensible given that the
> > set
> > > of children might be huge or unknown, yet it's useful to catch generic
> > > cases. For example,
>
> > > try { blah } catch {
> > > case ioe: IOException => closeSomething()
> > > case iae: IllegalArgumentException => tellUserSomething()
> > > case e: Exception => closeGracefullyBecause(e)
> > > }
>
> > > This obviously won't work if order is not important. Instead we would
> > need
> > > to do something like
>
> > > catch {
> > > case e: Exception =>
> > > e partialmatch {
> > > case ioe ...
> > > case iae ...
> > > } orElse {
> > > closeGracefullyBecause(e)
> > > }
> > > }
>
> > > which I can't really interpret as an improvement; it's just more
> > boilerplate
> > > for exactly the same functionality.
> > Nice! From subtyping perspective it's not the same. The the exceptions
> > in the partialmatch have no
> > subtyping relation so this is good.
>
> They are functionally identical (i.e. for all exceptions e, the first code
> and the second do the same thing), and the latter is four lines longer. I
> fail to see why this is "good", except that you dislike order-dependence.
> It doesn't seem to have anything to do with typing, because any problematic
> case you can write for the first one will afflict the second one also.
Probably I rushed through this and didn't notice the e. It is
still the same problem.

> a-(b-c) != (a-b)-c, and matches are associative in the sense that you want
> in that
> x match {
> case a: A => f(a)
> case _ => x match {
> case b: B => g(b)
> case c: C => h(c)
> }
> }
> is the same as
> x match {
> case A | B =>
> x match {
> case a: A => f(a)
> case b: B => g(b)
> }
> case c: C => h(c)
> }
> so I really don't know what your point is.

I think you don't want a programmer to have to use case
statements in a pattern match in a certain order because of a
subtyping relation while the type inferencer is not noticing an
ambiguity.

> The weak rule, as I defined it, exists:
>
> class A
> class A1 extends A
> class B {
> def f(a: A) = 5
> def g(a1: A1) = f(a1) // This compiles
> }
>
> The compiler clearly knows about the subtyping relationship (via
> type-widening).
Yes, but this has nothing to do with the subsumption rule. There is simply
no ambiguity, so it compiles, and the type widening is because of LSP
argument contravariance.

> The strong rule, as I defined it, doesn't exist, since the most general form
> is not always chosen, as shown by your examples.
>
> Run-time identification or pattern matching _at all_ can break subsumption.
> Look at my partial match from before:
>
> x partialmatch {
> case a1: A1 => stuff()
> } orElse {
> case a: A => otherStuff()
> }
>
> The syntax is irrelevant; the point is that if you pass an A1 in here, you
> run different code than A. In particular, if it's an A1, it should match
> case a1: A1 because it is of type A1, but it should equally well _not_
> match, because it's also of type A (and Object). The only thing you could
> possibly allow is pattern matching on sealed class hierarchies that
> exhaustively list every leaf node, and where only leaves are not abstract:
I think this should raise an error because of A1 <: A and the subsumption
rule.

DaveScala
Re: Absence of the subsumption rule

Thanks, you answered my question.
So the conclusion is: there is a new insight for OO languages and the
absence of the subsumption rule in Scala is not really an issue.

