Re: actors and the "inheritance anomaly"
Sun, 2011-06-05, 20:24
To be honest, I don't see how Actors fit into this picture.
Could someone clarify?
On Sun, Jun 5, 2011 at 12:22 PM, Russ Paielli <russ.paielli@gmail.com> wrote:
--
Viktor Klang
Akka Tech Lead
Typesafe - Enterprise-Grade Scala from the Experts
Twitter: @viktorklang
Thanks for that explanation, Greg. Just to be sure I am "seeing the big picture," let me ask a basic question. If I understand you correctly, you are saying that inheritance and concurrency can be successfully combined if done carefully. Is that correct, or are you saying that inheritance and concurrency should never be used together? Or are you saying something else altogether?
--Russ P.
On Sat, Jun 4, 2011 at 8:46 PM, Meredith Gregory <lgreg.meredith@gmail.com> wrote:
Dear Russ,
The list of places to go wrong with inheritance is pretty well explored, from the fragile base class (a variant of which arguably just reared its ugly head in combination with variance issues in Set vs List in Scala) to the synchronization inheritance anomaly. At the core of this issue is the fact that is-a relationships are almost always relative to a context. Inheritance sublimates the context. This is exactly the opposite of what is needed for maximal reuse -- and actually at odds with O-O principles. Instead, reify the context, give it an explicit computational representation, and you will get more reuse.
Thus, in the Set vs List case, it's only in some context that we agree that a Set[A] may be interpreted as a function A => Boolean. The context has very specific assumptions about what a function is and what a Set is.[1] If we reify that context, we arrive at something like
trait SetsAreFunctions[A, B] { /* you could make this implicit */ def asFunction(s: Set[A]): A => B }
Notice that one "specialization" of this trait, SetsAreFunctions[A,Float], is a candidate for representing fuzzy sets. Another "specialization", SetsAreFunctions[A,Option[Boolean]], is a candidate for various representations of partial functions. In this design our assumptions about the meaning of Set and function have been reified. Further, the act of specialization is much clearer, much more explicit, and more tightly governed by compiler constraints. If you compare it to what has been frozen into the Scala collections, it has a wider range, is mathematically and computationally more accurate, and doesn't introduce an anomalous variance difference between collections that mathematical consistency demands be the same (List comes from the monad of the free monoid; Set is an algebra of this monad guaranteeing idempotency and commutativity of the operation -- the variance should not differ!).
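[Editorial note: a minimal sketch of those specializations might look like the following; ClassicalSets, FuzzySets and PartialChars are illustrative names, not anything from the standard library or Greg's own code.]

```scala
// Reified context: what it means to read a Set as a function.
trait SetsAreFunctions[A, B] {
  def asFunction(s: Set[A]): A => B
}

// Ordinary sets: membership as a Boolean-valued characteristic function.
object ClassicalSets extends SetsAreFunctions[Int, Boolean] {
  def asFunction(s: Set[Int]): Int => Boolean = s.contains
}

// Fuzzy sets: membership as a degree; a crisp set yields only 0.0 or 1.0.
object FuzzySets extends SetsAreFunctions[Int, Float] {
  def asFunction(s: Set[Int]): Int => Float =
    a => if (s.contains(a)) 1.0f else 0.0f
}

// Partial characteristic functions: undefined (None) for negative inputs.
object PartialChars extends SetsAreFunctions[Int, Option[Boolean]] {
  def asFunction(s: Set[Int]): Int => Option[Boolean] =
    a => if (a >= 0) Some(s.contains(a)) else None
}
```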
As for an example of the inheritance synchronization anomaly, let's see if i can construct off the top of my head the standard example from the literature (which can be had by googling "inheritance synchronization anomaly"). We might imagine a family of Buffer classes with (atomic) put and get methods, together with an accept predicate satisfying the following conditions:
- b : Buffer & accept( b, get ) => size( b ) > 0
- b : Buffer & accept( b, put ) => size( b ) < Buffer.maxSize
These represent synchronization constraints that allow for harmonious engagement between a Buffer and many clients. If a specialization of Buffer provides a consumer behavior that gets more than one element at a time on a get, the synchronization constraints will be violated. Successful specializations have to narrow the constraints. While this example might seem contrived (and doubly so since i'm just pulling it off the top of my head), versions of this very example occur more naturally in the wild if you consider the relationship between the alternating bit protocol and the sliding window protocol.
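[Editorial note: a rough Scala rendering of that anomaly; Buffer, PairBuffer and the member names are illustrative, not from any library.]

```scala
import scala.collection.mutable.Queue

// Base class: accept(b, get) <=> size > 0, accept(b, put) <=> size < maxSize.
class Buffer[A](val maxSize: Int) {
  protected val items = Queue.empty[A]
  def size: Int = synchronized { items.size }
  def acceptGet: Boolean = synchronized { items.nonEmpty }
  def acceptPut: Boolean = synchronized { items.size < maxSize }
  def put(a: A): Unit = synchronized {
    require(items.size < maxSize, "accept(b, put) violated")
    items.enqueue(a)
  }
  def get(): A = synchronized {
    require(items.nonEmpty, "accept(b, get) violated")
    items.dequeue()
  }
}

// A consumer behavior that takes two elements per "get". Clients written
// against the inherited constraint (acceptGet, i.e. size > 0) can now fail:
// with exactly one element buffered, acceptGet is true but getTwo blows up.
class PairBuffer[A](maxSize: Int) extends Buffer[A](maxSize) {
  def getTwo(): (A, A) = synchronized {
    val x = get()
    val y = get() // the second get may violate accept(b, get)
    (x, y)
  }
}
```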
Best wishes,
--greg
[1] This sentence actually summarizes a major theme of the interaction between computing and mathematics over the last 70 years. A real understanding of the various proposals for what a function is has been at the core of the dialogue between mathematics and computing, with Category Theory arguably giving the most flexible and pragmatic account to date -- though seriously lacking (imho) in a decent account of concurrent computing.
On Sat, Jun 4, 2011 at 5:04 PM, Russ Paielli <russ.paielli@gmail.com> wrote:
On Sat, Jun 4, 2011 at 3:17 PM, Meredith Gregory <lgreg.meredith@gmail.com> wrote:
Dear All,
i've been mumbling about this on-list for years, now. i was one of the designers of one of the first high-performance actor-based languages, and we ran into this issue early. It's usually called the inheritance-synchronization anomaly in the literature. If you specialize and relax synchronization constraints relied upon in inherited code, you will likely break the inherited code. The remedy is to ensure that you only ever narrow synchronization constraints in specializations. Language-based inheritance constructs in popular languages don't provide any support for this, so programmers have to be aware of it and guarantee it on their own.
Thanks for that explanation, but it's a bit over my head. If you have time to elaborate and perhaps provide a small example, that would be helpful to me.
It's yet another of many reasons to abandon inheritance as a language-based mechanism for reuse.
Interesting, but that statement invites a couple of questions. What are some of the other reasons to abandon inheritance? Also, are you implying that inheritance provides benefits other than re-use, or are you implying that it should just be avoided completely?
--Russ P
Actors, themselves, are also not a remedy for what ails you in concurrency. Of course, it's much worse when 'actors' doesn't actually mean actors, but is some loosey-goosey catch-all phrase for a programming convention that has a variety of semantics. So, 'actors' plus inheritance is guaranteed to provide a rollicking good time when it comes to programming correctness. It is definitely a combination squarely in sync with the programmer full employment act.
Best wishes,
--greg
--
L.G. Meredith
Managing Partner
Biosimilarity LLC
7329 39th Ave SW, Seattle, WA 98136
+1 206.650.3740
http://biosimilarity.blogspot.com
--
http://RussP.us
Sun, 2011-06-05, 20:47
#2
Re: actors and the "inheritance anomaly"
It's easy to avoid sharing mutable state with actors: don't do it.
But that doesn't save you.
The problem is that the compiler doesn't help you with the _logic_ of concurrency. If you're used to weakly typed languages, that's not a big deal. It's faintly terrifying if you rely upon the compiler to keep you from doing stupid things, however.
Avoiding mutable state helps remove _some_ aspects of the logic from your concern; now you never have to worry about state changing out from under you or being inconsistent, but you do have to worry about state being outdated and all your work being for naught. And you have to worry about whether you will have the ability to simultaneously have two pieces of non-outdated data at the same time, if you ever need more than one.
James Iry has blogged insightfully upon this issue in the context of Erlang:
http://james-iry.blogspot.com/search/label/erlang
What we really need from languages and compilers of the future is a mechanism for (and theory of) resource management, where if you need such-and-so to happen, you declare it, and the compiler yells at you if you make mutually contradictory declarations.
For example, I created a system that avoided global deadlocks by insisting that all resources be locked in a particular order; if you walk down your lock chain and hit something that's already locked, you have to free up your previous locks and try again. This worked (zero deadlocks), but it wasn't particularly fast, and being written in Java it was incredibly awkward. Anyway, these sorts of problems are the ones that need to be solved, and actors only let you punt on them a little bit by freeing you of the temptation to use all resources everywhere willy-nilly. But messages are a type of resource, so this only goes so far.
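[Editorial note: a sketch of that discipline in Scala (a reconstruction, not Rex's actual code): each resource carries a global rank, locks are only taken in ascending rank, and hitting a busy lock forces a full back-off and retry.]

```scala
import java.util.concurrent.locks.ReentrantLock

// A lockable resource with a globally agreed rank.
final class Resource(val rank: Int) {
  val lock = new ReentrantLock()
}

// Acquire every resource in rank order; if any lock is already held,
// release everything taken so far and start over. Deadlock-free by
// construction, at the price of wasted work under contention.
def withAll[T](resources: Seq[Resource])(body: => T): T = {
  val ordered = resources.sortBy(_.rank)
  while (true) {
    val taken = ordered.takeWhile(_.lock.tryLock())
    if (taken.size == ordered.size) {
      try { return body }
      finally taken.reverse.foreach(_.lock.unlock())
    } else {
      taken.reverse.foreach(_.lock.unlock()) // back off completely
      Thread.`yield`()                       // then retry from scratch
    }
  }
  sys.error("unreachable")
}
```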
--Rex
2011/6/5 Naftoli Gugenheim <naftoligug@gmail.com>
As Daniel said Greg meant, "it's not a solution because avoiding shared mutable state when using actors in Scala relies on convention, as opposed to being enforced by the compiler."
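[Editorial note: to make the point concrete, nothing in the type system stops a mutable object from crossing an actor boundary. A sketch, with a plain blocking queue standing in for an actor mailbox.]

```scala
import java.util.concurrent.LinkedBlockingQueue
import scala.collection.mutable.ListBuffer

// The "mailbox": messages are passed by reference, and the compiler has
// no objection whatsoever to a mutable message type.
val mailbox = new LinkedBlockingQueue[ListBuffer[Int]]()

val msg = ListBuffer(1, 2, 3)
mailbox.put(msg) // "send" -- only a reference crosses the boundary
msg += 4         // sender keeps mutating after the send

val received = mailbox.take()
// received and msg are the very same object: shared mutable state.
// Convention says the sender shouldn't touch msg after sending, but
// nothing enforced it.
```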
Sun, 2011-06-05, 20:57
#3
Re: actors and the "inheritance anomaly"
2011/6/5 Naftoli Gugenheim <naftoligug@gmail.com>
As Daniel said Greg meant, "it's not a solution because avoiding shared mutable state when using actors in Scala relies on convention, as opposed to being enforced by the compiler."
Doesn't really matter either, since any state can be changed through reflection: just call setAccessible(true).
So it's more of a runtime problem than a language problem.
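[Editorial note: a sketch of what Viktor means; Config and limit are made-up names. On the JVM, even a val's backing field can be rewritten at runtime.]

```scala
// A supposedly immutable value.
class Config { val limit: Int = 10 }

val c = new Config
val field = classOf[Config].getDeclaredField("limit")
field.setAccessible(true) // defeat access control at runtime
field.setInt(c, 99)       // mutate the "immutable" state anyway
// The backing field now holds 99, whatever the type system promised.
```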
--
Viktor Klang
Akka Tech LeadTypesafe - Enterprise-Grade Scala from the Experts
Twitter: @viktorklang
Sun, 2011-06-05, 20:57
#4
Re: actors and the "inheritance anomaly"
I think the good part about actor is that the only place where you need to worry about the mutability problem is in messages.
Message-passing by reference is essentially just an optimization, and if you want to make sure you don't shoot yourself in the foot, you could serialize/binary-copy all messages on send, à la Erlang.
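[Editorial note: a sketch of that copy-on-send idea, round-tripping the message through plain Java serialization so the receiver gets a structurally equal but distinct copy. Real actor libraries would use pluggable serializers; this is only an illustration.]

```scala
import java.io.{ByteArrayInputStream, ByteArrayOutputStream, ObjectInputStream, ObjectOutputStream, Serializable}

// Deep-copy a message via serialization, so the receiver's copy shares
// no mutable structure with the sender's original.
def copyOnSend[T <: Serializable](msg: T): T = {
  val buf = new ByteArrayOutputStream()
  val out = new ObjectOutputStream(buf)
  try out.writeObject(msg) finally out.close()
  val in = new ObjectInputStream(new ByteArrayInputStream(buf.toByteArray))
  try in.readObject().asInstanceOf[T] finally in.close()
}

val original = new java.util.ArrayList[Integer]()
original.add(1)
val delivered = copyOnSend(original) // what the receiver would see
original.add(2)                      // later sender mutations don't leak
```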
2011/6/5 Rex Kerr <ichoran@gmail.com>
--
Viktor Klang
Akka Tech Lead
Typesafe - Enterprise-Grade Scala from the Experts
Twitter: @viktorklang
Message-passing by reference is essentially just an optimization, and if you want to ensure not to shoot yourself in the foot, you could serialize/binarycopy all messages when doing the send, á la Erlang.
2011/6/5 Rex Kerr <ichoran@gmail.com>
It's easy to avoid sharing mutable state with actors: don't do it.
But that doesn't save you.
The problem is that the compiler doesn't help you with the _logic_ of concurrency. If you're used to weakly typed languages, that's not a big deal. It's faintly terrifying if you rely upon the compiler to keep you from doing stupid things, however.
Avoiding mutable state helps remove _some_ aspects of the logic from your concern; now you never have to worry about state changing out from under you or being inconsistent, but you do have to worry about state being outdated and all your work being for naught. And you have to worry about whether you will have the ability to simultaneously have two pieces of non-outdated data at the same time, if you ever need more than one.
James Iry has blogged insightfully upon this issue in the context of Erlang:
http://james-iry.blogspot.com/search/label/erlang
What we really need from languages and compilers of the future is a mechanism for (and theory of) resource management, where if you need such-and-so to happen, you declare it, and the compiler yells at you if you make mutually contradictory declarations.
For example, I created a system that avoided global deadlocks by insisting that all resources be locked in a particular order; if you walk down your lock chain and hit something that's already locked, you have to free up your previous locks and try again. This worked (zero deadlocks), but wasn't particularly fast, and was written in Java and was incredibly awkward. Anyway, these sorts of problems are the ones that need to be solved, and actors only let you punt on them a little bit by freeing you of the temptation use all resources everywhere willy-nilly. But messages are a type of resource, so this only goes so far.
--Rex
2011/6/5 Naftoli Gugenheim <naftoligug@gmail.com>
As Daniel said Greg meant, "it's not a solution because avoiding shared mutable state when using actors in Scala relies on convention, as opposed to being enforced by the compiler."
2011/6/5 √iktor Ҡlang <viktor.klang@gmail.com>To be honest, I don't see how Actors fit into this picture.
Could someone clarify?
On Sun, Jun 5, 2011 at 12:22 PM, Russ Paielli <russ.paielli@gmail.com> wrote:
Thanks for that explanation, Greg. Just to be sure I am "seeing the big picture," let me ask a basic question. If I understand you correctly, you are saying that inheritance and concurrency can be successfully combined if done carefully. Is that correct, or are you saying that inheritance and concurrency should never be used together? Or are you saying something else altogether?
--Russ P.
On Sat, Jun 4, 2011 at 8:46 PM, Meredith Gregory <lgreg.meredith@gmail.com> wrote:
Dear Russ,
The list of places to go wrong with inheritance is pretty well explored from the fragile base class (arguably a variant of which just reared it's ugly head in combination with variance issues in Set vs List in Scala) to the synchronization inheritance anomaly. At the core of this issue is that is-a relationships are almost always in context. Inheritance sublimates the context. This is exactly the opposite of what is needed for maximal reuse -- and actually at odds with O-O principles. Instead, reify the context, give it an explicit computational representation and you will get more reuse.
Thus, in the example the Set vs List case, it's only in some context that we agree that a Set[A] may be interpreted as a function A => Boolean. The context has very specific assumptions about what a function is and what a Set is.[1] If we reify that context, we arrive at something like
trait SetsAreFunctions[A,B] { /* you could make this implicit */ def asFunction( s : Set[A] ) : A => B }
Notice that one "specialization" of this SetsAreFunctions[A,Float] is a candidate for representing fuzzy sets. Another "specialization" SetsAreFunctions[A,Option[Boolean]] is a candidate for various representations of partial functions. In this design our assumptions about the meaning of Set and function have been reified. Further, the act of specialization is much more clear, much more explicit and more highly governed by compiler constraints. If you compare it to what has been frozen into the Scala collections, it has a wider range, is mathematically and computationally more accurate and doesn't introduce an anomalous variance difference between collections that mathematical consistency demands be the same (List comes from the monad of the free monoid, Set is an algebra of this monad guaranteeing idempotency and commutativity of the operation -- variance should not differ!).
As for an example of inheritance synchronization anomaly, let's see if i can construct off the top of my head the standard example from the literature (which can be had by googling inheritance synchronization anomaly). We might imagine a family of Buffer classes with (atomic) put and get methods together with an accept predicate satisfying the following conditions:These represent synchronization constraints that allow for harmonious engagement between a Buffer and many clients. If a specialization of Buffer provides a consumer behavior getting more than 1 element at a time on a get, the synchronization constraints will be violated. Successful specializations have to narrow the constraints.
- b : Buffer & accept( b, get ) => size( b ) > 0
- b : Buffer & accept( b, put ) => size( b ) < Buffer.maxSize
While this example might seem contrived (and doubly so since i'm just pulling it off the top of my head) versions of this very example occurs more naturally in the wild if you consider the relationship between alternating bit protocol and sliding window protocol.
Best wishes,
--greg
[1] This sentence actually summarizes a major content of the interaction of computing and mathematics over the last 70 years. A real understanding of the various proposals of what a function is has been at the core of the dialogue between mathematics and computing. Category Theory, arguably, giving the most flexible and pragmatic account to date -- but seriously lacking (imho) in a decent account of concurrent computing.
On Sat, Jun 4, 2011 at 5:04 PM, Russ Paielli <russ.paielli@gmail.com> wrote:On Sat, Jun 4, 2011 at 3:17 PM, Meredith Gregory <lgreg.meredith@gmail.com> wrote:
Dear All,
i've been mumbling about this on-list for years, now. Having been one of the designers of one of the first high performance actor-based language, we ran into this issue early. It's usually called the inheritance-synchronization anomaly in the literature. If you specialize and relax synchronization constraints relied upon in inherited code, you will likely break the inherited code. The remedy is to ensure that you always only narrow synchronization constraints in specializations. Language-based inheritance constructs in popular languages don't provide any support for this. So, programmers have to be aware and guarantee this on their own.
Thanks for that explanation, but it's a bit over my head. If you have time to elaborate and perhaps provide a small example, that would be helpful to me.
It's yet another of many reasons to abandon inheritance as a language-based mechanism for reuse.
Interesting, but that statement invites a couple of questions. What are some of the other reasons to abandon inheritance? Also, are you implying that inheritance provides benefits other than re-use, or are you implying that it should just be avoided completely?
--Russ P
Actors, themselves, are also not a remedy for what ails you in concurrency. Of course, it's much worse when 'actors' doesn't actually mean actors, but is some loosey-goosey catch-all phrase for a programming convention that has a variety of semantics. So, 'actors' plus inheritance is guaranteed to be provide a rollicking good time when it comes to programming correctness. It is definitely a combination squarely in synch with the programmer full employment act.
Best wishes,
--greg
--
L.G. Meredith
Managing Partner
Biosimilarity LLC
7329 39th Ave SWSeattle, WA 98136
+1 206.650.3740
http://biosimilarity.blogspot.com
--
http://RussP.us
--
Viktor Klang
Akka Tech Lead
Typesafe - Enterprise-Grade Scala from the Experts
Twitter: @viktorklang
Sun, 2011-06-05, 21:07
#5
Re: actors and the "inheritance anamoly"
Another time the so-called "problems with inheritance" came up for discussion (something to do with a bag collection...), the best explanation I got for why the functional alternatives are better was that, since functions have a strong mathematical basis, compilers may in the future be able to verify that such programs are safe, while with inheritance there will always be some need to use your brain to avoid doing dangerous things.
From the glimpse I took at the paper mentioned here, it doesn't seem much more logically sound than the reference cited in that other discussion.
2011/6/5 √iktor Ҡlang <viktor.klang@gmail.com>
2011/6/5 Naftoli Gugenheim <naftoligug@gmail.com>
As Daniel said Greg meant, "it's not a solution because avoiding shared mutable state when using actors in Scala relies on convention, as opposed to being enforced by the compiler."
Doesn't really matter either since any state can be changed through reflection, just setAccessible(true).
So it's more of a runtime problem than a language problem.
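Viktor's setAccessible point can be demonstrated in a few lines of Scala. This is a minimal sketch (the Point class and forceX helper are hypothetical, not from the thread); it relies only on java.lang.reflect:

```scala
import java.lang.reflect.Field

// Looks immutable from the outside: `val x` compiles to a private final field.
final class Point(val x: Int)

object ReflectionHole {
  // Overwrites the supposedly immutable field in place.
  def forceX(p: Point, v: Int): Unit = {
    val f: Field = classOf[Point].getDeclaredField("x")
    f.setAccessible(true) // bypasses both `private` and `final`
    f.setInt(p, v)
  }
}
```

After `ReflectionHole.forceX(p, 42)`, `p.x` reads 42: no compiler guarantee survives a reflective write unless the runtime (e.g. a security manager) forbids setAccessible.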
On Sat, Jun 4, 2011 at 8:46 PM, Meredith Gregory <lgreg.meredith@gmail.com> wrote:
Dear Russ,
As for an example of the inheritance synchronization anomaly, let's see if i can construct off the top of my head the standard example from the literature (which can be had by googling inheritance synchronization anomaly). We might imagine a family of Buffer classes with (atomic) put and get methods together with an accept predicate satisfying the following conditions:
- b : Buffer & accept( b, get ) => size( b ) > 0
- b : Buffer & accept( b, put ) => size( b ) < Buffer.maxSize
These represent synchronization constraints that allow for harmonious engagement between a Buffer and many clients. If a specialization of Buffer provides a consumer behavior getting more than 1 element at a time on a get, the synchronization constraints will be violated. Successful specializations have to narrow the constraints.
While this example might seem contrived (and doubly so since i'm just pulling it off the top of my head), versions of this very example occur more naturally in the wild if you consider the relationship between the alternating bit protocol and the sliding window protocol.
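Greg's conditions can be sketched in Scala to make the anomaly concrete. The names (Buffer, PairBuffer, get2) are illustrative assumptions, not code from the thread: each inherited method honors the accept constraints on its own, but a specialization that consumes two elements per request cannot inherit the atomicity it needs.

```scala
import scala.collection.mutable.Queue

// accept(b, get) => size(b) > 0 ; accept(b, put) => size(b) < maxSize
class Buffer[A](val maxSize: Int) {
  protected val elems = Queue.empty[A]
  def size: Int = synchronized { elems.size }
  def put(a: A): Unit = synchronized {
    require(elems.size < maxSize, "accept(b, put) violated")
    elems.enqueue(a)
  }
  def get(): A = synchronized {
    require(elems.nonEmpty, "accept(b, get) violated")
    elems.dequeue()
  }
}

// A specialization whose accept condition is really size(b) > 1, but which
// reuses the inherited get(). The two dequeues are individually synchronized
// yet not jointly atomic: another consumer can drain the buffer between them,
// so the inherited synchronization no longer guarantees the subclass invariant.
class PairBuffer[A](maxSize: Int) extends Buffer[A](maxSize) {
  def get2(): (A, A) = (get(), get())
}
```

The remedy Greg describes (narrowing) amounts to overriding get2 to hold the lock across both dequeues, i.e. re-stating the synchronization by hand in the subclass, which is exactly the anomaly.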
Best wishes,
--greg
Sun, 2011-06-05, 21:37
#6
Re: actors and the "inheritance anamoly"
Hi Rex,
I completely agree, but my interpretation of Russ' question was much narrower: Do actors help to alleviate the so-called inheritance anomaly? Obviously actor-based programming is not the silver bullet for all concurrency related problems (although some people actually seem to think that).
Yes, but the issue is that it's not as easy to enforce (think many developers over many years) if the compiler doesn't help you, because this isn't easy to check using unit tests either.
Best regards
Daniel
2011/6/5 Rex Kerr <ichoran@gmail.com>
It's easy to avoid sharing mutable state with actors: don't do it.
But that doesn't save you.
The problem is that the compiler doesn't help you with the _logic_ of concurrency. If you're used to weakly typed languages, that's not a big deal. It's faintly terrifying if you rely upon the compiler to keep you from doing stupid things, however.
Avoiding mutable state helps remove _some_ aspects of the logic from your concern; now you never have to worry about state changing out from under you or being inconsistent, but you do have to worry about state being outdated and all your work being for naught. And you have to worry about whether you will have the ability to simultaneously have two pieces of non-outdated data at the same time, if you ever need more than one.
James Iry has blogged insightfully upon this issue in the context of Erlang:
http://james-iry.blogspot.com/search/label/erlang
What we really need from languages and compilers of the future is a mechanism for (and theory of) resource management, where if you need such-and-so to happen, you declare it, and the compiler yells at you if you make mutually contradictory declarations.
For example, I created a system that avoided global deadlocks by insisting that all resources be locked in a particular order; if you walk down your lock chain and hit something that's already locked, you have to free up your previous locks and try again. This worked (zero deadlocks), but it wasn't particularly fast, and it was written in Java and was incredibly awkward. Anyway, these sorts of problems are the ones that need to be solved, and actors only let you punt on them a little bit by freeing you of the temptation to use all resources everywhere willy-nilly. But messages are a type of resource, so this only goes so far.
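Rex's ordered-locking discipline can be sketched in Scala (hypothetical names; a simplified version that sorts up front rather than backing off mid-chain): give every resource a unique rank and always acquire locks in ascending rank order, which removes the circular-wait condition for deadlock.

```scala
import java.util.concurrent.locks.ReentrantLock

// Every shared resource carries a globally unique rank.
final class Resource(val rank: Int) {
  val lock = new ReentrantLock()
}

object OrderedLocking {
  // Acquire all locks in ascending rank order, run f, release in reverse.
  // Because every thread locks in the same global order, no cycle of
  // threads can each hold a lock that the next one is waiting for.
  def withResources[T](rs: Resource*)(f: => T): T = {
    val ordered = rs.sortBy(_.rank) // ranks must be unique for this to be safe
    ordered.foreach(_.lock.lock())
    try f
    finally ordered.reverse.foreach(_.lock.unlock())
  }
}
```

Two threads calling withResources(a, b) and withResources(b, a) acquire in the same underlying order and so cannot deadlock; Rex's version additionally releases and retries when it meets an already-locked resource out of order.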
--Rex
2011/6/5 Naftoli Gugenheim <naftoligug@gmail.com>
As Daniel said Greg meant, "it's not a solution because avoiding shared mutable state when using actors in Scala relies on convention, as opposed to being enforced by the compiler."
Sun, 2011-06-05, 21:47
#7
Re: actors and the "inheritance anamoly"
On Sun, Jun 5, 2011 at 1:36 PM, Daniel Kristensen <daniel.kristensen@gmail.com> wrote:
Hi Rex,
I completely agree, but my interpretation of Russ' question was much narrower: Do actors help to alleviate the so-called inheritance anomaly? Obviously actor-based programming is not the silver bullet for all concurrency related problems (although some people actually seem to think that).
Yeah? I have personally never met or talked to anyone who thinks that.
It's easy to avoid sharing mutable state with actors: don't do it.
Yes, but the issue is that it's not as easy to enforce (think many developers over many years) if the compiler doesn't help you, because this isn't easy to check using unit tests either.
As I said, you can turn on serialization of messages, so then you don't accidentally share mutable state.
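For reference, Viktor's suggestion corresponds to Akka's message-serialization verification switch. The exact key and file format depend on the Akka version (the HOCON form below is from later Akka releases and is given here as an assumption; it is typically enabled only in tests because of its cost):

```hocon
akka {
  actor {
    # Verify that every message sent can be serialized and deserialized,
    # so actors cannot quietly share references to mutable objects.
    serialize-messages = on
  }
}
```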
Best regards
Daniel
2011/6/5 Rex Kerr <ichoran@gmail.com>
It's easy to avoid sharing mutable state with actors: don't do it.
But that doesn't save you.
The problem is that the compiler doesn't help you with the _logic_ of concurrency. If you're used to weakly typed languages, that's not a big deal. It's faintly terrifying if you rely upon the compiler to keep you from doing stupid things, however.
Avoiding mutable state helps remove _some_ aspects of the logic from your concern; now you never have to worry about state changing out from under you or being inconsistent, but you do have to worry about state being outdated and all your work being for naught. And you have to worry about whether you will have the ability to simultaneously have two pieces of non-outdated data at the same time, if you ever need more than one.
James Iry has blogged insightfully upon this issue in the context of Erlang:
http://james-iry.blogspot.com/search/label/erlang
What we really need from languages and compilers of the future is a mechanism for (and theory of) resource management, where if you need such-and-so to happen, you declare it, and the compiler yells at you if you make mutually contradictory declarations.
For example, I created a system that avoided global deadlocks by insisting that all resources be locked in a particular order; if you walk down your lock chain and hit something that's already locked, you have to free up your previous locks and try again. This worked (zero deadlocks), but wasn't particularly fast, and was written in Java and was incredibly awkward. Anyway, these sorts of problems are the ones that need to be solved, and actors only let you punt on them a little bit by freeing you of the temptation use all resources everywhere willy-nilly. But messages are a type of resource, so this only goes so far.
--Rex
2011/6/5 Naftoli Gugenheim <naftoligug@gmail.com>
As Daniel said Greg meant, "it's not a solution because avoiding shared mutable state when using actors in Scala relies on convention, as opposed to being enforced by the compiler."
2011/6/5 √iktor Ҡlang <viktor.klang@gmail.com>To be honest, I don't see how Actors fit into this picture.
Could someone clarify?
On Sun, Jun 5, 2011 at 12:22 PM, Russ Paielli <russ.paielli@gmail.com> wrote:
Thanks for that explanation, Greg. Just to be sure I am "seeing the big picture," let me ask a basic question. If I understand you correctly, you are saying that inheritance and concurrency can be successfully combined if done carefully. Is that correct, or are you saying that inheritance and concurrency should never be used together? Or are you saying something else altogether?
--Russ P.
On Sat, Jun 4, 2011 at 8:46 PM, Meredith Gregory <lgreg.meredith@gmail.com> wrote:
Dear Russ,
The list of places to go wrong with inheritance is pretty well explored from the fragile base class (arguably a variant of which just reared it's ugly head in combination with variance issues in Set vs List in Scala) to the synchronization inheritance anomaly. At the core of this issue is that is-a relationships are almost always in context. Inheritance sublimates the context. This is exactly the opposite of what is needed for maximal reuse -- and actually at odds with O-O principles. Instead, reify the context, give it an explicit computational representation and you will get more reuse.
Thus, in the example the Set vs List case, it's only in some context that we agree that a Set[A] may be interpreted as a function A => Boolean. The context has very specific assumptions about what a function is and what a Set is.[1] If we reify that context, we arrive at something like
trait SetsAreFunctions[A,B] { /* you could make this implicit */ def asFunction( s : Set[A] ) : A => B }
Notice that one "specialization" of this SetsAreFunctions[A,Float] is a candidate for representing fuzzy sets. Another "specialization" SetsAreFunctions[A,Option[Boolean]] is a candidate for various representations of partial functions. In this design our assumptions about the meaning of Set and function have been reified. Further, the act of specialization is much more clear, much more explicit and more highly governed by compiler constraints. If you compare it to what has been frozen into the Scala collections, it has a wider range, is mathematically and computationally more accurate and doesn't introduce an anomalous variance difference between collections that mathematical consistency demands be the same (List comes from the monad of the free monoid, Set is an algebra of this monad guaranteeing idempotency and commutativity of the operation -- variance should not differ!).
As for an example of inheritance synchronization anomaly, let's see if i can construct off the top of my head the standard example from the literature (which can be had by googling inheritance synchronization anomaly). We might imagine a family of Buffer classes with (atomic) put and get methods together with an accept predicate satisfying the following conditions:These represent synchronization constraints that allow for harmonious engagement between a Buffer and many clients. If a specialization of Buffer provides a consumer behavior getting more than 1 element at a time on a get, the synchronization constraints will be violated. Successful specializations have to narrow the constraints.
- b : Buffer & accept( b, get ) => size( b ) > 0
- b : Buffer & accept( b, put ) => size( b ) < Buffer.maxSize
While this example might seem contrived (and doubly so since i'm just pulling it off the top of my head) versions of this very example occurs more naturally in the wild if you consider the relationship between alternating bit protocol and sliding window protocol.
Best wishes,
--greg
[1] This sentence actually summarizes a major content of the interaction of computing and mathematics over the last 70 years. A real understanding of the various proposals of what a function is has been at the core of the dialogue between mathematics and computing. Category Theory, arguably, giving the most flexible and pragmatic account to date -- but seriously lacking (imho) in a decent account of concurrent computing.
On Sat, Jun 4, 2011 at 5:04 PM, Russ Paielli <russ.paielli@gmail.com> wrote:On Sat, Jun 4, 2011 at 3:17 PM, Meredith Gregory <lgreg.meredith@gmail.com> wrote:
Dear All,
i've been mumbling about this on-list for years, now. Having been one of the designers of one of the first high performance actor-based language, we ran into this issue early. It's usually called the inheritance-synchronization anomaly in the literature. If you specialize and relax synchronization constraints relied upon in inherited code, you will likely break the inherited code. The remedy is to ensure that you always only narrow synchronization constraints in specializations. Language-based inheritance constructs in popular languages don't provide any support for this. So, programmers have to be aware and guarantee this on their own.
Thanks for that explanation, but it's a bit over my head. If you have time to elaborate and perhaps provide a small example, that would be helpful to me.
It's yet another of many reasons to abandon inheritance as a language-based mechanism for reuse.
Interesting, but that statement invites a couple of questions. What are some of the other reasons to abandon inheritance? Also, are you implying that inheritance provides benefits other than re-use, or are you implying that it should just be avoided completely?
--Russ P
Actors, themselves, are also not a remedy for what ails you in concurrency. Of course, it's much worse when 'actors' doesn't actually mean actors, but is some loosey-goosey catch-all phrase for a programming convention that has a variety of semantics. So, 'actors' plus inheritance is guaranteed to be provide a rollicking good time when it comes to programming correctness. It is definitely a combination squarely in synch with the programmer full employment act.
Best wishes,
--greg
--
L.G. Meredith
Managing Partner
Biosimilarity LLC
7329 39th Ave SWSeattle, WA 98136
+1 206.650.3740
http://biosimilarity.blogspot.com
--
http://RussP.us
--
Viktor Klang
Akka Tech LeadTypesafe - Enterprise-Grade Scala from the Experts
Twitter: @viktorklang
--
Viktor Klang
Akka Tech Lead
Typesafe - Enterprise-Grade Scala from the Experts
Twitter: @viktorklang
Sun, 2011-06-05, 22:27
#8
Re: actors and the "inheritance anamoly"
Getting concurrent access to a shared resource right is hard. Getting concurrent access by many threads to multiple shared resources, some of which may be passed between threads dynamically, right is a Hard Thing indeed. Coming up with a formalism (type system?) that helps you avoid common errors while still allowing you to do interesting things, and which can be understood by People In General rather than by 1 1/2 PhD students and their yucca plant, is Very Hard Indeed. Have any of you seen a full Petri net of an implementation of the TCP/IP stack? The last one I saw covered a complete sheet of A0 at a font size where you needed a telescope to read the labels.
Actors help a bit if you stick to immutable messages, as they serialize access to whatever actor-specific resources there are. They don't prevent inter-actor deadlock. Inheritance is a PITA with concurrency because a) the sub-class has at least as great a restriction on concurrent access as the super-class, and b) the compiler doesn't tell you this when you bork it up. Functional composition runs out of steam at some point, because you need to cleanly model joins as well as forks, and at that point you either burn cycles re-computing referentially transparent values (à la Haskell) or are back in resource-locking hell. There are no off-the-shelf easy solutions because concurrency is genuinely a Very Hard Problem.
That's my Sunday rant out of the way.
Matthew
--
Matthew Pocock
mailto: turingatemyhamster@gmail.com
gchat: turingatemyhamster@gmail.com
msn: matthew_pocock@yahoo.co.uk
irc.freenode.net: drdozer
(0191) 2566550
Sun, 2011-06-05, 22:37
#9
Re: actors and the "inheritance anamoly"
+1 to what Matthew said
2011/6/5 Matthew Pocock <turingatemyhamster@gmail.com>
--
L.G. Meredith
Managing Partner
Biosimilarity LLC
7329 39th Ave SW
Seattle, WA 98136
+1 206.650.3740
http://biosimilarity.blogspot.com
Sun, 2011-06-05, 22:37
#10
Re: actors and the "inheritance anamoly"
2011/6/5 Matthew Pocock <turingatemyhamster@gmail.com>
Getting concurrent access to a shared resource right is hard.
Clarification: shared _mutable_ resource.
Getting concurrent access by many threads to multiple shared resources, some of which may be passed between threads dynamically, right is a Hard Thing indeed. Coming up with a formalism (type system?) that helps you avoid common errors while allowing you to still do interesting things and which can be understood by People In General rather than 1 1/2 PhD students and their yucca plant is Very Hard Indeed. Have any of you seen a full petri-net of an implementation of the TCP/IP stack? The last one I saw covered a complete sheet of A0 at a font size where you needed a telescope to read the labels.
A type system won't help you in the case of a network failure or an actor crashing, and I don't think it even makes sense to try to encode concurrency into a type-system when it comes to potentially distributed computing.
Actors are essentially a model of distributed computing; running them in one VM is just an optimization.
Actors help a bit if you stick to immutable messages, as it serializes access to whatever actor-specific resources there are. It doesn't prevent inter-actor deadlock.
See my email to Greg.
Inheritance is a PITA with concurrency because a) the subclass has at least as great a restriction on concurrent access as the superclass, and b) the compiler doesn't tell you this when you bork it up. Functional composition runs out of steam at some point, because you need to cleanly model joins as well as forks, and at that point you either burn cycles re-computing referentially transparent values (à la Haskell) or are back in resource-locking hell. There are no off-the-shelf easy solutions because concurrency is genuinely a Very Hard Problem.
That's my Sunday rant out of the way.
:-)
--
Viktor Klang
Akka Tech Lead
Typesafe - Enterprise-Grade Scala from the Experts
Twitter: @viktorklang
Sun, 2011-06-05, 22:37
#11
Re: actors and the "inheritance anomaly"
Hi √iktor
2011/6/5 √iktor Ҡlang <viktor.klang@gmail.com>
2011/6/5 Naftoli Gugenheim <naftoligug@gmail.com>
As Daniel said Greg meant, "it's not a solution because avoiding shared mutable state when using actors in Scala relies on convention, as opposed to being enforced by the compiler."
Doesn't really matter either since any state can be changed through reflection, just setAccessible(true).
So it's more of a runtime problem than a language problem.
It depends - if we want to guard against malicious code running with full privileges it is a platform problem. But if we're content guarding against mistakes compiler enforcement goes a long way.
Hi Rex,
I completely agree, but my interpretation of Russ' question was much narrower: Do actors help to alleviate the so-called inheritance anomaly? Obviously actor-based programming is not the silver bullet for all concurrency related problems (although some people actually seem to think that).
Yeah? I have personally never met or talked to anyone who thinks that.
That's reassuring :-) I have, though. To avoid any risk of misunderstanding, I should clarify that I wasn't referring to you, anyone on the Akka team, or even anyone on this mailing list. This is of course more of an issue for people who don't think that much about concurrency. But I think part of the reason for this misunderstanding is that most introductions to actors tend to focus on the benefits compared to Java-style locking, and are rather silent about the drawbacks.
It's easy to avoid sharing mutable state with actors: don't do it.
Yes, but the issue is that it's not as easy to enforce (think many developers over many years) if the compiler doesn't help you, because this isn't easy to check using unit tests either.
As I said, you can turn on serialization of messages, so then you don't accidentally share mutable state.
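A sketch of why serializing messages severs sharing: the receiver gets a deep copy, so mutating the original no longer leaks across the boundary. This is a generic illustration of the mechanism using plain Java serialization, not Akka's actual configuration; `CopyOnSend` and `deepCopy` are invented names.

```scala
import java.io.{ByteArrayInputStream, ByteArrayOutputStream, ObjectInputStream, ObjectOutputStream}

object CopyOnSend {
  // Round-trip a message through serialization, producing an independent copy.
  def deepCopy[T <: java.io.Serializable](msg: T): T = {
    val buf = new ByteArrayOutputStream
    val out = new ObjectOutputStream(buf)
    out.writeObject(msg)
    out.close()
    val in = new ObjectInputStream(new ByteArrayInputStream(buf.toByteArray))
    in.readObject().asInstanceOf[T]
  }
}
```

If the sender later mutates the original object, the receiver's copy is unaffected, which is the "don't accidentally share mutable state" property being claimed.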
Is there any tool to prevent a clueless/reckless programmer from accessing global mutable state (singletons/statics)? I believe this is a more likely mistake than accidentally modifying a final field using reflection (sort of like accidentally using mutable fields in a servlet).
Best regards,
Daniel
Sun, 2011-06-05, 22:47
#12
Re: actors and the "inheritance anomaly"
Dear Viktor,
The point of the actor model is that there is message serialization. It's not even actors if there isn't message serialization at the mailbox. i don't know what you call a model that doesn't have a primitive serializer guarding the mailbox. Once you introduce that, then you absolutely must address fairness.
Further, serialization will not guard against deadlock. Both this and livelock become easier to fall into once you recognize that you have to narrow constraints as you specialize. The tighter the constraints, the easier it is to conclude you shouldn't process a message. Once a message isn't processed in a given window, availability codes kick in to restart or renew processes. You restart the same ill-conceived code which winds up in the same bad states. So, deadlock and livelock can show up as two sides of the same coin.
Shared mutable state is just the very beginning of the challenges of concurrent and distributed systems.
Best wishes,
--greg
--
L.G. Meredith
Managing Partner
Biosimilarity LLC
7329 39th Ave SW
Seattle, WA 98136
+1 206.650.3740
http://biosimilarity.blogspot.com
2011/6/5 √iktor Ҡlang <viktor.klang@gmail.com>
On Sun, Jun 5, 2011 at 1:36 PM, Daniel Kristensen <daniel.kristensen@gmail.com> wrote:
Hi Rex,
I completely agree, but my interpretation of Russ' question was much narrower: Do actors help to alleviate the so-called inheritance anomaly? Obviously actor-based programming is not the silver bullet for all concurrency related problems (although some people actually seem to think that).
Yeah? I have personally never met or talked to anyone who thinks that.
It's easy to avoid sharing mutable state with actors: don't do it.
Yes, but the issue is that it's not as easy to enforce (think many developers over many years) if the compiler doesn't help you, because this isn't easy to check using unit tests either.
As I said, you can turn on serialization of messages, so then you don't accidentally share mutable state.
Best regards
Daniel
2011/6/5 Rex Kerr <ichoran@gmail.com>
It's easy to avoid sharing mutable state with actors: don't do it.
But that doesn't save you.
The problem is that the compiler doesn't help you with the _logic_ of concurrency. If you're used to weakly typed languages, that's not a big deal. It's faintly terrifying if you rely upon the compiler to keep you from doing stupid things, however.
Avoiding mutable state helps remove _some_ aspects of the logic from your concern; now you never have to worry about state changing out from under you or being inconsistent, but you do have to worry about state being outdated and all your work being for naught. And you have to worry about whether you will have the ability to simultaneously have two pieces of non-outdated data at the same time, if you ever need more than one.
James Iry has blogged insightfully upon this issue in the context of Erlang:
http://james-iry.blogspot.com/search/label/erlang
What we really need from languages and compilers of the future is a mechanism for (and theory of) resource management, where if you need such-and-so to happen, you declare it, and the compiler yells at you if you make mutually contradictory declarations.
For example, I created a system that avoided global deadlocks by insisting that all resources be locked in a particular order; if you walk down your lock chain and hit something that's already locked, you have to free up your previous locks and try again. This worked (zero deadlocks), but it wasn't particularly fast, and the Java implementation was incredibly awkward. Anyway, these sorts of problems are the ones that need to be solved, and actors only let you punt on them a little bit by freeing you of the temptation to use all resources everywhere willy-nilly. But messages are a type of resource, so this only goes so far.
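The ordering discipline above can be sketched in a few lines of Scala: every resource carries a rank, and locks are only ever taken in ascending rank order, so no cycle of waiters (and hence no deadlock) can form. `Resource`, `rank`, and `withBoth` are illustrative names, not from Rex's actual system.

```scala
import java.util.concurrent.locks.ReentrantLock

object OrderedLocking {
  final class Resource(val rank: Int) {
    val lock = new ReentrantLock
  }

  // Acquire both locks in rank order, run the body, release in reverse order.
  def withBoth[T](a: Resource, b: Resource)(body: => T): T = {
    val (first, second) = if (a.rank <= b.rank) (a, b) else (b, a)
    first.lock.lock()
    try {
      second.lock.lock()
      try body
      finally second.lock.unlock()
    } finally first.lock.unlock()
  }
}
```

Callers can name the two resources in either order; the ranking, not the call site, decides acquisition order.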
--Rex
2011/6/5 Naftoli Gugenheim <naftoligug@gmail.com>
As Daniel said Greg meant, "it's not a solution because avoiding shared mutable state when using actors in Scala relies on convention, as opposed to being enforced by the compiler."
2011/6/5 √iktor Ҡlang <viktor.klang@gmail.com>
To be honest, I don't see how Actors fit into this picture.
Could someone clarify?
On Sun, Jun 5, 2011 at 12:22 PM, Russ Paielli <russ.paielli@gmail.com> wrote:
Thanks for that explanation, Greg. Just to be sure I am "seeing the big picture," let me ask a basic question. If I understand you correctly, you are saying that inheritance and concurrency can be successfully combined if done carefully. Is that correct, or are you saying that inheritance and concurrency should never be used together? Or are you saying something else altogether?
--Russ P.
On Sat, Jun 4, 2011 at 8:46 PM, Meredith Gregory <lgreg.meredith@gmail.com> wrote:
Dear Russ,
The list of places to go wrong with inheritance is pretty well explored, from the fragile base class (arguably a variant of which just reared its ugly head in combination with variance issues in Set vs List in Scala) to the synchronization inheritance anomaly. At the core of this issue is that is-a relationships are almost always in context. Inheritance sublimates the context. This is exactly the opposite of what is needed for maximal reuse -- and actually at odds with O-O principles. Instead, reify the context, give it an explicit computational representation, and you will get more reuse.
Thus, in the Set vs List case, it's only in some context that we agree that a Set[A] may be interpreted as a function A => Boolean. The context has very specific assumptions about what a function is and what a Set is.[1] If we reify that context, we arrive at something like
trait SetsAreFunctions[A, B] { // you could make this implicit
  def asFunction(s: Set[A]): A => B
}
Notice that one "specialization" of this SetsAreFunctions[A,Float] is a candidate for representing fuzzy sets. Another "specialization" SetsAreFunctions[A,Option[Boolean]] is a candidate for various representations of partial functions. In this design our assumptions about the meaning of Set and function have been reified. Further, the act of specialization is much more clear, much more explicit and more highly governed by compiler constraints. If you compare it to what has been frozen into the Scala collections, it has a wider range, is mathematically and computationally more accurate and doesn't introduce an anomalous variance difference between collections that mathematical consistency demands be the same (List comes from the monad of the free monoid, Set is an algebra of this monad guaranteeing idempotency and commutativity of the operation -- variance should not differ!).
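To make the specializations concrete, here is a hedged sketch exercising Greg's trait. The three instances (classical, fuzzy, partial) and the Int element type are invented for illustration, not taken from the post.

```scala
trait SetsAreFunctions[A, B] {
  def asFunction(s: Set[A]): A => B
}

// Classical reading: a Set is its characteristic function A => Boolean.
object Classical extends SetsAreFunctions[Int, Boolean] {
  def asFunction(s: Set[Int]): Int => Boolean = a => s.contains(a)
}

// Fuzzy reading: membership is a degree in [0, 1] rather than a Boolean.
object Fuzzy extends SetsAreFunctions[Int, Float] {
  def asFunction(s: Set[Int]): Int => Float =
    a => if (s.contains(a)) 1.0f else 0.0f
}

// Partial reading: None when the membership question doesn't apply
// (here, arbitrarily, for negative arguments).
object Partial extends SetsAreFunctions[Int, Option[Boolean]] {
  def asFunction(s: Set[Int]): Int => Option[Boolean] =
    a => if (a >= 0) Some(s.contains(a)) else None
}
```

Each "specialization" is an ordinary instance of the reified context, so the compiler checks exactly which interpretation of Set-as-function is in play at each use site.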
As for an example of the inheritance synchronization anomaly, let's see if i can construct off the top of my head the standard example from the literature (which can be had by googling "inheritance synchronization anomaly"). We might imagine a family of Buffer classes with (atomic) put and get methods together with an accept predicate satisfying the following conditions:
- b : Buffer & accept( b, get ) => size( b ) > 0
- b : Buffer & accept( b, put ) => size( b ) < Buffer.maxSize
These represent synchronization constraints that allow for harmonious engagement between a Buffer and many clients. If a specialization of Buffer provides a consumer behavior getting more than 1 element at a time on a get, the synchronization constraints will be violated. Successful specializations have to narrow the constraints.
While this example might seem contrived (and doubly so since i'm just pulling it off the top of my head), versions of this very example occur more naturally in the wild if you consider the relationship between the alternating bit protocol and the sliding window protocol.
Best wishes,
--greg
[1] This sentence actually summarizes a major thread of the interaction of computing and mathematics over the last 70 years. A real understanding of the various proposals of what a function is has been at the core of the dialogue between mathematics and computing, with Category Theory arguably giving the most flexible and pragmatic account to date -- though one seriously lacking (imho) in a decent account of concurrent computing.
On Sat, Jun 4, 2011 at 5:04 PM, Russ Paielli <russ.paielli@gmail.com> wrote:
On Sat, Jun 4, 2011 at 3:17 PM, Meredith Gregory <lgreg.meredith@gmail.com> wrote:
Dear All,
i've been mumbling about this on-list for years, now. Having been one of the designers of one of the first high performance actor-based languages, we ran into this issue early. It's usually called the inheritance-synchronization anomaly in the literature. If you specialize and relax synchronization constraints relied upon in inherited code, you will likely break the inherited code. The remedy is to ensure that you always only narrow synchronization constraints in specializations. Language-based inheritance constructs in popular languages don't provide any support for this. So, programmers have to be aware and guarantee this on their own.
Thanks for that explanation, but it's a bit over my head. If you have time to elaborate and perhaps provide a small example, that would be helpful to me.
It's yet another of many reasons to abandon inheritance as a language-based mechanism for reuse.
Interesting, but that statement invites a couple of questions. What are some of the other reasons to abandon inheritance? Also, are you implying that inheritance provides benefits other than re-use, or are you implying that it should just be avoided completely?
--Russ P
Actors, themselves, are also not a remedy for what ails you in concurrency. Of course, it's much worse when 'actors' doesn't actually mean actors, but is some loosey-goosey catch-all phrase for a programming convention that has a variety of semantics. So, 'actors' plus inheritance is guaranteed to provide a rollicking good time when it comes to programming correctness. It is definitely a combination squarely in synch with the programmer full employment act.
Best wishes,
--greg
--
L.G. Meredith
Managing Partner
Biosimilarity LLC
7329 39th Ave SW
Seattle, WA 98136
+1 206.650.3740
http://biosimilarity.blogspot.com
--
http://RussP.us
--
Viktor Klang
Akka Tech Lead
Typesafe - Enterprise-Grade Scala from the Experts
Twitter: @viktorklang
--
L.G. Meredith
Managing Partner
Biosimilarity LLC
7329 39th Ave SW
Seattle, WA 98136
+1 206.650.3740
http://biosimilarity.blogspot.com
Sun, 2011-06-05, 22:47
#13
Re: actors and the "inheritance anomaly"
Dear Viktor,
Actor A is waiting for actor B to send a message before doing its part in some mission critical computation. Actor B is waiting for actor A to send a message before doing its part in some mission critical computation. The mission critical computation never gets done because A and B are in a deadly embrace, aka deadlock.
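Greg's deadly embrace can be modelled with two mailbox queues and plain threads; the timeout below is only there so the sketch terminates (a real deadlock would wait forever). All names are invented for illustration, and this uses blocking queues rather than any actor library.

```scala
import java.util.concurrent.{LinkedBlockingQueue, TimeUnit}

object DeadlockSketch {
  // Returns true when neither party ever managed to send: the deadlock.
  def run(): Boolean = {
    val mailboxA = new LinkedBlockingQueue[String]
    val mailboxB = new LinkedBlockingQueue[String]
    // A speaks only after hearing from B; B speaks only after hearing from A.
    val a = new Thread(() => {
      if (mailboxA.poll(200, TimeUnit.MILLISECONDS) != null)
        mailboxB.put("reply from A")
    })
    val b = new Thread(() => {
      if (mailboxB.poll(200, TimeUnit.MILLISECONDS) != null)
        mailboxA.put("reply from B")
    })
    a.start(); b.start(); a.join(); b.join()
    mailboxA.isEmpty && mailboxB.isEmpty
  }
}
```

No shared mutable state, no locks, immutable messages throughout; the mission-critical computation still never happens.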
Best wishes,
--greg
--
L.G. Meredith
Managing Partner
Biosimilarity LLC
7329 39th Ave SWSeattle, WA 98136
+1 206.650.3740
http://biosimilarity.blogspot.com
2011/6/5 √iktor Ҡlang <viktor.klang@gmail.com>
Dear Greg,
2011/6/5 Meredith Gregory <lgreg.meredith@gmail.com>
Dear Viktor,
The point of the actor model is that there is message serialization. It's not even actors if there isn't message serialization at the mailbox. i don't know what you call a model that doesn't have a primitive serializer guarding the mailbox. Once you introduce that, then you absolutely must address fairness.
I was talking about data-serialization, as in marshalling/unmarshalling of data. Not as in queueing of messages. As a way to ensure that there cannot be any shared mutable state.
Further, serialization will not guard against deadlock. Both this and livelock become easier to fall into once you recognize that you have to narrow constraints as you specialize. The tighter the constraints, the easier it is to conclude you shouldn't process a message. Once a message isn't processed in a given window, availability codes kick in to restart or renew processes. You restart the same ill-conceived code which winds up in the same bad states. So, deadlock and livelock can show up as two sides of the same coin.
Define "deadlock" in the context of unbounded mailbox and fire-forget-semantics only for message sends.
Shared mutable state is just the very beginning of the challenges of concurrent and distributed systems.
Definitely.
Best wishes,
--greg
Sun, 2011-06-05, 22:57
#14
Re: actors and the "inheritance anomaly"
Dear Greg,
2011/6/5 Meredith Gregory <lgreg.meredith@gmail.com>
I was talking about data-serialization, as in marshalling/unmarshalling of data. Not as in queueing of messages. As a way to ensure that there cannot be any shared mutable state.
Define "deadlock" in the context of unbounded mailbox and fire-forget-semantics only for message sends.
Definitely.
--
Viktor Klang
Akka Tech Lead
Typesafe - Enterprise-Grade Scala from the Experts
Twitter: @viktorklang
2011/6/5 Meredith Gregory <lgreg.meredith@gmail.com>
Dear Viktor,
The point of the actor model is that there is message serialization. It's not even actors if there isn't message serialization at the mailbox. i don't know what you call a model that doesn't have a primitive serializer guarding the mailbox. Once you introduce that, then you absolutely must address fairness.
I was talking about data-serialization, as in marshalling/unmarshalling of data. Not as in queueing of messages. As a way to ensure that there cannot be any shared mutable state.
Further, serialization will not guard against deadlock. Both this and livelock become easier to fall into once you recognize that you have to narrow constraints as you specialize. The tighter the constraints, the easier it is to conclude you shouldn't process a message. Once a message isn't processed in a given window, availability codes kick in to restart or renew processes. You restart the same ill-conceived code which winds up in the same bad states. So, deadlock and livelock can show up as two sides of the same coin.
Define "deadlock" in the context of unbounded mailbox and fire-forget-semantics only for message sends.
Shared mutable state is just the very beginning of the challenges of concurrent and distributed systems.
Definitely.
Best wishes,
--greg
2011/6/5 √iktor Ҡlang <viktor.klang@gmail.com>
On Sun, Jun 5, 2011 at 1:36 PM, Daniel Kristensen <daniel.kristensen@gmail.com> wrote:
Hi Rex,
I completely agree, but my interpretation of Russ' question was much narrower: Do actors help to alleviate the so-called inheritance anomaly? Obviously actor-based programming is not the silver bullet for all concurrency related problems (although some people actually seem to think that).
Yeah? I have personally never met or talked to anyone who thinks that.
It's easy to avoid sharing mutable state with actors: don't do it.
Yes, but the issue is that it's not as easy to enforce (think many developers over many years) if the compiler doesn't help you, because this isn't easy to check using unit tests either.
As I said, you can turn on serialization of messages, so then you don't accidentally share mutable state.
Best regards
Daniel
2011/6/5 Rex Kerr <ichoran@gmail.com>
It's easy to avoid sharing mutable state with actors: don't do it.
But that doesn't save you.
The problem is that the compiler doesn't help you with the _logic_ of concurrency. If you're used to weakly typed languages, that's not a big deal. It's faintly terrifying if you rely upon the compiler to keep you from doing stupid things, however.
Avoiding mutable state helps remove _some_ aspects of the logic from your concern; now you never have to worry about state changing out from under you or being inconsistent, but you do have to worry about state being outdated and all your work being for naught. And you have to worry about whether you will have the ability to simultaneously have two pieces of non-outdated data at the same time, if you ever need more than one.
James Iry has blogged insightfully upon this issue in the context of Erlang:
http://james-iry.blogspot.com/search/label/erlang
What we really need from languages and compilers of the future is a mechanism for (and theory of) resource management, where if you need such-and-so to happen, you declare it, and the compiler yells at you if you make mutually contradictory declarations.
For example, I created a system that avoided global deadlocks by insisting that all resources be locked in a particular order; if you walk down your lock chain and hit something that's already locked, you have to free up your previous locks and try again. This worked (zero deadlocks), but wasn't particularly fast, and was written in Java and was incredibly awkward. Anyway, these sorts of problems are the ones that need to be solved, and actors only let you punt on them a little bit by freeing you of the temptation use all resources everywhere willy-nilly. But messages are a type of resource, so this only goes so far.
--Rex
2011/6/5 Naftoli Gugenheim <naftoligug@gmail.com>
As Daniel said Greg meant, "it's not a solution because avoiding shared mutable state when using actors in Scala relies on convention, as opposed to being enforced by the compiler."
2011/6/5 √iktor Ҡlang <viktor.klang@gmail.com>
To be honest, I don't see how Actors fit into this picture.
Could someone clarify?
On Sun, Jun 5, 2011 at 12:22 PM, Russ Paielli <russ.paielli@gmail.com> wrote:
Thanks for that explanation, Greg. Just to be sure I am "seeing the big picture," let me ask a basic question. If I understand you correctly, you are saying that inheritance and concurrency can be successfully combined if done carefully. Is that correct, or are you saying that inheritance and concurrency should never be used together? Or are you saying something else altogether?
--Russ P.
On Sat, Jun 4, 2011 at 8:46 PM, Meredith Gregory <lgreg.meredith@gmail.com> wrote:
Dear Russ,
The list of places to go wrong with inheritance is pretty well explored, from the fragile base class (a variant of which arguably just reared its ugly head in combination with variance issues in Set vs List in Scala) to the synchronization inheritance anomaly. At the core of this issue is that is-a relationships are almost always in context. Inheritance sublimates the context. This is exactly the opposite of what is needed for maximal reuse -- and actually at odds with O-O principles. Instead, reify the context, give it an explicit computational representation, and you will get more reuse.
Thus, in the Set vs List case, it's only in some context that we agree that a Set[A] may be interpreted as a function A => Boolean. The context has very specific assumptions about what a function is and what a Set is.[1] If we reify that context, we arrive at something like
trait SetsAreFunctions[A, B] { /* you could make this implicit */ def asFunction(s: Set[A]): A => B }
Notice that one "specialization" of this, SetsAreFunctions[A,Float], is a candidate for representing fuzzy sets. Another "specialization", SetsAreFunctions[A,Option[Boolean]], is a candidate for various representations of partial functions. In this design our assumptions about the meaning of Set and function have been reified. Further, the act of specialization is much clearer, much more explicit, and more highly governed by compiler constraints. If you compare it to what has been frozen into the Scala collections, it has a wider range, is mathematically and computationally more accurate, and doesn't introduce an anomalous variance difference between collections that mathematical consistency demands be the same (List comes from the monad of the free monoid; Set is an algebra of this monad guaranteeing idempotency and commutativity of the operation -- variance should not differ!).
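The specializations Greg names can be made runnable in a few lines (the trait is his; the helper names and the particular membership functions are my own illustration):

```scala
object SetReadings {
  trait SetsAreFunctions[A, B] {
    def asFunction(s: Set[A]): A => B
  }

  // The ordinary characteristic-function reading: membership as a Boolean.
  def crisp[A]: SetsAreFunctions[A, Boolean] = new SetsAreFunctions[A, Boolean] {
    def asFunction(s: Set[A]): A => Boolean = s.contains
  }

  // A fuzzy-set flavored reading: membership as a (here trivial) degree in [0, 1].
  def fuzzy[A]: SetsAreFunctions[A, Float] = new SetsAreFunctions[A, Float] {
    def asFunction(s: Set[A]): A => Float = a => if (s.contains(a)) 1.0f else 0.0f
  }

  // A partial-function flavored reading: None where the set has no opinion.
  def partial[A]: SetsAreFunctions[A, Option[Boolean]] = new SetsAreFunctions[A, Option[Boolean]] {
    def asFunction(s: Set[A]): A => Option[Boolean] = a => if (s.contains(a)) Some(true) else None
  }

  def main(args: Array[String]): Unit = {
    val evens = Set(0, 2, 4)
    println(crisp[Int].asFunction(evens)(2))    // true
    println(fuzzy[Int].asFunction(evens)(3))    // 0.0
    println(partial[Int].asFunction(evens)(3))  // None
  }
}
```

Each reading is a separate, explicit value; swapping interpretations is a matter of passing a different instance, not of fighting an inheritance hierarchy.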
As for an example of the inheritance synchronization anomaly, let's see if i can construct off the top of my head the standard example from the literature (which can be had by googling "inheritance synchronization anomaly"). We might imagine a family of Buffer classes with (atomic) put and get methods together with an accept predicate satisfying the following conditions:
- b : Buffer & accept( b, get ) => size( b ) > 0
- b : Buffer & accept( b, put ) => size( b ) < Buffer.maxSize
These represent synchronization constraints that allow for harmonious engagement between a Buffer and many clients. If a specialization of Buffer provides a consumer behavior that gets more than one element at a time on a get, the synchronization constraints will be violated. Successful specializations have to narrow the constraints.
While this example might seem contrived (and doubly so since i'm just pulling it off the top of my head), versions of this very example occur more naturally in the wild if you consider the relationship between the alternating bit protocol and the sliding window protocol.
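Greg's Buffer example can be sketched concretely (class and method names beyond Buffer/put/get/accept are illustrative): the accept predicate guards each operation, and a specialization that consumes two elements per get steps outside what accept licensed.

```scala
import scala.collection.mutable.Queue

object BufferDemo {
  val maxSize = 4

  class Buffer {
    protected val q = Queue[Int]()
    def size: Int = q.size
    // The synchronization constraints as an accept predicate:
    //   accept(get) => size > 0,  accept(put) => size < maxSize
    def accept(op: String): Boolean = op match {
      case "get" => size > 0
      case "put" => size < maxSize
    }
    def put(x: Int): Unit = { require(accept("put")); q.enqueue(x) }
    def get(): Int = { require(accept("get")); q.dequeue() }
  }

  // A specialization that takes two elements per get *relaxes* the inherited
  // constraint: accept("get") only guaranteed size > 0, not size > 1.
  class TwoAtATime extends Buffer {
    def get2(): (Int, Int) = { require(accept("get")); (q.dequeue(), q.dequeue()) }
  }

  def main(args: Array[String]): Unit = {
    val b = new TwoAtATime
    b.put(1)                       // size is 1, so accept("get") holds...
    val ok = try { b.get2(); true } catch { case _: Exception => false }
    println(ok)                    // false: the inherited guard let an unsafe get2 through
  }
}
```

The fix is what Greg says: the specialization must narrow the constraint (require size > 1 for get2), but no mainstream inheritance construct forces the subclass author to do so.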
Best wishes,
--greg
[1] This sentence actually summarizes a major strand of the interaction of computing and mathematics over the last 70 years. A real understanding of the various proposals of what a function is has been at the core of the dialogue between mathematics and computing. Category Theory arguably gives the most flexible and pragmatic account to date -- but it is seriously lacking (imho) in a decent account of concurrent computing.
On Sat, Jun 4, 2011 at 5:04 PM, Russ Paielli <russ.paielli@gmail.com> wrote:
On Sat, Jun 4, 2011 at 3:17 PM, Meredith Gregory <lgreg.meredith@gmail.com> wrote:
Dear All,
i've been mumbling about this on-list for years, now. Having been one of the designers of one of the first high performance actor-based languages, we ran into this issue early. It's usually called the inheritance-synchronization anomaly in the literature. If you specialize and relax synchronization constraints relied upon in inherited code, you will likely break the inherited code. The remedy is to ensure that you only ever narrow synchronization constraints in specializations. Language-based inheritance constructs in popular languages don't provide any support for this. So, programmers have to be aware and guarantee this on their own.
Thanks for that explanation, but it's a bit over my head. If you have time to elaborate and perhaps provide a small example, that would be helpful to me.
It's yet another of many reasons to abandon inheritance as a language-based mechanism for reuse.
Interesting, but that statement invites a couple of questions. What are some of the other reasons to abandon inheritance? Also, are you implying that inheritance provides benefits other than re-use, or are you implying that it should just be avoided completely?
--Russ P
Actors, themselves, are also not a remedy for what ails you in concurrency. Of course, it's much worse when 'actors' doesn't actually mean actors, but is some loosey-goosey catch-all phrase for a programming convention that has a variety of semantics. So, 'actors' plus inheritance is guaranteed to provide a rollicking good time when it comes to programming correctness. It is definitely a combination squarely in synch with the programmer full employment act.
Best wishes,
--greg
--
L.G. Meredith
Managing Partner
Biosimilarity LLC
7329 39th Ave SW, Seattle, WA 98136
+1 206.650.3740
http://biosimilarity.blogspot.com
--
http://RussP.us
--
Viktor Klang
Akka Tech Lead, Typesafe - Enterprise-Grade Scala from the Experts
Twitter: @viktorklang
Sun, 2011-06-05, 22:57
#15
Re: actors and the "inheritance anamoly"
Dear Viktor,
i apologize but i can't parse your message. Actor A has some code that it will run as part of its response to message M1. Actor B has some code that it will run as part of its response to message M2. Unfortunately, the code block in A's response to M1 is to send M2; and, likewise, the code that B will run in response to M2 is to send M1.
The two actors just sit there, deadlocked. You can always defeat these situations with a timeout. Unfortunately, the code for the timeout is likely to be error-path code, such as code to restart the system. So, A and B will be restarted. And just sit there. Then the timeout will be exercised. With a clever timeout back-off procedure you might avoid running out of resources... ;-)
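Greg's A/B scenario can be sketched with two mailboxes (plain JVM blocking queues standing in for actor mailboxes; all names are illustrative). Each side waits for the other's message before sending its own, so neither mailbox ever receives anything and only the timeout escape hatch runs:

```scala
import java.util.concurrent.{LinkedBlockingQueue, TimeUnit}

object MutualWait {
  type Mailbox = LinkedBlockingQueue[String]

  // Wait (with a timeout escape hatch) for a message; only on receipt send the reply.
  def side(my: Mailbox, other: Mailbox, reply: String): Boolean = {
    val msg = my.poll(200, TimeUnit.MILLISECONDS)
    if (msg != null) { other.put(reply); true } else false
  }

  // A sends M2 only after receiving M1; B sends M1 only after receiving M2.
  // Neither precondition is ever met, so both fall through to the timeout.
  def run(): (Boolean, Boolean) = {
    val inboxA = new Mailbox(); val inboxB = new Mailbox()
    var aGot = false; var bGot = false
    val a = new Thread(() => aGot = side(inboxA, inboxB, "M2"))
    val b = new Thread(() => bGot = side(inboxB, inboxA, "M1"))
    a.start(); b.start(); a.join(); b.join()
    (aGot, bGot)
  }

  def main(args: Array[String]): Unit = println(run())  // (false,false)
}
```

Note no JVM monitor is ever held, which is Viktor's point below: nothing is deadlocked in the technical sense, yet the mission-critical exchange still never happens, and a naive restart would reproduce the same mutual wait.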
Best wishes,
--greg
2011/6/5 √iktor Ҡlang <viktor.klang@gmail.com>
2011/6/5 Meredith Gregory <lgreg.meredith@gmail.com>
Dear Viktor,
Actor A is waiting for actor B to send a message before doing its part in some mission critical computation. Actor B is waiting for actor A to send a message before doing its part in some mission critical computation. The mission critical computation never gets done because A and B are in a deadly embrace, aka deadlock.
Waiting as in A) "blocking" or waiting as in B) "the next message ought to be an X message"?
For A, I don't think that the ability to do blocking is even desirable or required for an Actor Model implementation.
For B, it's a passive wait and if it is a problem to not get a certain message within a frame of time, a receive timeout can be used to signal that.
The passive wait is not a non-recoverable problem, which a deadlock essentially is (at least on the JVM).
Best wishes,
--greg
2011/6/5 √iktor Ҡlang <viktor.klang@gmail.com>
Dear Greg,
2011/6/5 Meredith Gregory <lgreg.meredith@gmail.com>
Dear Viktor,
The point of the actor model is that there is message serialization. It's not even actors if there isn't message serialization at the mailbox. i don't know what you call a model that doesn't have a primitive serializer guarding the mailbox. Once you introduce that, then you absolutely must address fairness.
I was talking about data-serialization, as in marshalling/unmarshalling of data. Not as in queueing of messages. As a way to ensure that there cannot be any shared mutable state.
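The data-serialization Viktor means can be sketched by round-tripping every message through a byte stream before delivery, a deliberate deep copy, so the receiver can never alias the sender's mutable object (a minimal illustration using plain Java serialization, not any particular framework's mechanism):

```scala
import java.io.{ByteArrayInputStream, ByteArrayOutputStream, ObjectInputStream, ObjectOutputStream}

object CopyOnSend {
  // Deep-copy a message via serialization so sender and receiver
  // never share the same mutable object.
  def sendCopy[A <: java.io.Serializable](msg: A): A = {
    val bytes = new ByteArrayOutputStream()
    val out = new ObjectOutputStream(bytes)
    out.writeObject(msg)
    out.close()
    val in = new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray))
    in.readObject().asInstanceOf[A]
  }

  def main(args: Array[String]): Unit = {
    val original = new java.util.ArrayList[String]()
    original.add("hello")
    val delivered = sendCopy(original)
    original.add("mutated-after-send")  // the receiver's copy is unaffected
    println(delivered.size)             // 1
  }
}
```

The cost is a copy per message, which is why this is typically an opt-in safety net rather than the default on a single JVM.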
Further, serialization will not guard against deadlock. Both this and livelock become easier to fall into once you recognize that you have to narrow constraints as you specialize. The tighter the constraints, the easier it is to conclude you shouldn't process a message. Once a message isn't processed in a given window, availability code kicks in to restart or renew processes. You restart the same ill-conceived code, which winds up in the same bad states. So, deadlock and livelock can show up as two sides of the same coin.
Define "deadlock" in the context of unbounded mailbox and fire-forget-semantics only for message sends.
Shared mutable state is just the very beginning of the challenges of concurrent and distributed systems.
Definitely.
Best wishes,
--greg
2011/6/5 √iktor Ҡlang <viktor.klang@gmail.com>
On Sun, Jun 5, 2011 at 1:36 PM, Daniel Kristensen <daniel.kristensen@gmail.com> wrote:
Hi Rex,
I completely agree, but my interpretation of Russ' question was much narrower: Do actors help to alleviate the so-called inheritance anomaly? Obviously actor-based programming is not the silver bullet for all concurrency related problems (although some people actually seem to think that).
Yeah? I have personally never met or talked to anyone who thinks that.
Sun, 2011-06-05, 22:57
#16
Re: actors and the "inheritance anamoly"
Dear Greg,
2011/6/5 Meredith Gregory <lgreg.meredith@gmail.com>
But you never have any guarantee that any message will be sent to an actor at all, so you have zero guarantees that an actor's behavior will be triggered at all.
Fortunately this is a scenario that doesn't deadlock in the technical JVM sense, i.e. a monitor held while a thread blocks waiting for a monitorexit.
Yes, this is exactly what you would do: you would have it supervised with a maximum number of restarts.
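The supervision shape Viktor points to can be sketched as a loop that reruns a failing task up to a bound (a hypothetical minimal form; Akka's actual supervisor strategies are richer, with backoff and escalation):

```scala
object Supervisor {
  // Run `task`, restarting it on failure at most maxRestarts times.
  // Returns the number of attempts actually made.
  def supervise(maxRestarts: Int)(task: () => Unit): Int = {
    var attempts = 0
    var succeeded = false
    while (!succeeded && attempts <= maxRestarts) {
      attempts += 1
      try { task(); succeeded = true }
      catch { case _: RuntimeException => () }  // restart on the next loop turn
    }
    attempts
  }

  def main(args: Array[String]): Unit = {
    var calls = 0
    val flaky = () => { calls += 1; if (calls < 3) throw new RuntimeException("boom") }
    println(supervise(maxRestarts = 5)(flaky))  // 3: failed twice, then succeeded
  }
}
```

The bound is what keeps Greg's "restart, sit there, restart" loop from consuming resources forever; after maxRestarts the failure is surfaced instead of retried.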
Cheers,
√
--
Viktor Klang
Akka Tech Lead, Typesafe - Enterprise-Grade Scala from the Experts
Twitter: @viktorklang
2011/6/5 Meredith Gregory <lgreg.meredith@gmail.com>
Dear Viktor,
i apologize but i can't parse your message. Actor A has some code that it will run as part of it's response to message M1. Actor B has some code that it will run as part of it's response to message M2. Unfortunately, the code block in A's response to M1 is to send M2; and, likewise, the code that B will run in response to M2 is to send M1.
But you have never any guarantee that any message will be sent to an actor at all, so you have 0 guarantees that an actors behavior will be triggered at all.
Fortunately this is a scenario that doesn't deadlock in the JVM technical way, i.e. a monitor held and a Thread is blocking waiting for a monitorexit.
The two actors just sit there, deadlocked. You can always defeat these situations with a timeout. Unfortunately, the code for the timeout is likely to be an error path code, such as to restart the system. So, A and B will be restarted. And just sit there. Then the timeout will be exercised. With a clever timeout back-off procedure you might avoid running out of resources... ;-)
Yes, this is exactly what you would do, you would have it supervised with a maximum number of restarts.
Cheers,
√
Best wishes,
--greg
2011/6/5 √iktor Ҡlang <viktor.klang@gmail.com>
2011/6/5 Meredith Gregory <lgreg.meredith@gmail.com>Dear Viktor,
Actor A is waiting for actor B to send a message before doing its part in some mission critical computation. Actor B is waiting for actor A to send a message before doing its part in some mission critical computation. The mission critical computation never gets done because A and B are in a deadly embrace, aka deadlock.
Waiting as in A) "blocking" or waiting as in B) "the next message ought to be an X message"?
For A, I don't think that the ability to do blocking is even desirable or required for an Actor Model implementation.
For B, it's a passive wait and if it is a problem to not get a certain message within a frame of time, a receive timeout can be used to signal that.
The passive wait is not a non-recoverable problem, which a deadlock essentially is (atleast on the JVM).
Best wishes,
--greg
2011/6/5 √iktor Ҡlang <viktor.klang@gmail.com>
Dear Greg,
2011/6/5 Meredith Gregory <lgreg.meredith@gmail.com>Dear Viktor,
The point of the actor model is that there is message serialization. It's not even actors if there isn't message serialization at the mailbox. i don't know what you call a model that doesn't have a primitive serializer guarding the mailbox. Once you introduce that, then you absolutely must address fairness.
I was talking about data-serialization, as in marshalling/unmarshalling of data. Not as in queueing of messages. As a way to ensure that there cannot be any shared mutable state.
Further, serialization will not guard against deadlock. Both this and livelock become easier to fall into once you recognize that you have to narrow constraints as you specialize. The tighter the constraints, the easier it is to conclude you shouldn't process a message. Once a message isn't processed in a given window, availability codes kick in to restart or renew processes. You restart the same ill-conceived code which winds up in the same bad states. So, deadlock and livelock can show up as two sides of the same coin.
Define "deadlock" in the context of unbounded mailbox and fire-forget-semantics only for message sends.
Shared mutable state is just the very beginning of the challenges of concurrent and distributed systems.
Definitely.
Best wishes,
--greg
2011/6/5 √iktor Ҡlang <viktor.klang@gmail.com>
On Sun, Jun 5, 2011 at 1:36 PM, Daniel Kristensen <daniel.kristensen@gmail.com> wrote:
Hi Rex,
I completely agree, but my interpretation of Russ' question was much narrower: Do actors help to alleviate the so-called inheritance anomaly? Obviously actor-based programming is not the silver bullet for all concurrency related problems (although some people actually seem to think that).
Yeah? I have personally never met or talked to anyone who thinks that.
It's easy to avoid sharing mutable state with actors: don't do it.
Yes, but the issue is that it's not as easy to enforce (think many developers over many years) if the compiler doesn't help you, because this isn't easy to check using unit tests either.
As I said, you can turn on serialization of messages, so then you don't accidentally share mutable state.
Best regards
Daniel
2011/6/5 Rex Kerr <ichoran@gmail.com>
It's easy to avoid sharing mutable state with actors: don't do it.
But that doesn't save you.
The problem is that the compiler doesn't help you with the _logic_ of concurrency. If you're used to weakly typed languages, that's not a big deal. It's faintly terrifying if you rely upon the compiler to keep you from doing stupid things, however.
Avoiding mutable state helps remove _some_ aspects of the logic from your concern; now you never have to worry about state changing out from under you or being inconsistent, but you do have to worry about state being outdated and all your work being for naught. And you have to worry about whether you will have the ability to simultaneously have two pieces of non-outdated data at the same time, if you ever need more than one.
James Iry has blogged insightfully upon this issue in the context of Erlang:
http://james-iry.blogspot.com/search/label/erlang
What we really need from languages and compilers of the future is a mechanism for (and theory of) resource management, where if you need such-and-so to happen, you declare it, and the compiler yells at you if you make mutually contradictory declarations.
For example, I created a system that avoided global deadlocks by insisting that all resources be locked in a particular order; if you walk down your lock chain and hit something that's already locked, you have to free up your previous locks and try again. This worked (zero deadlocks), but wasn't particularly fast, and was written in Java and was incredibly awkward. Anyway, these sorts of problems are the ones that need to be solved, and actors only let you punt on them a little bit by freeing you of the temptation use all resources everywhere willy-nilly. But messages are a type of resource, so this only goes so far.
--Rex
2011/6/5 Naftoli Gugenheim <naftoligug@gmail.com>
As Daniel said Greg meant, "it's not a solution because avoiding shared mutable state when using actors in Scala relies on convention, as opposed to being enforced by the compiler."
2011/6/5 √iktor Ҡlang <viktor.klang@gmail.com>To be honest, I don't see how Actors fit into this picture.
Could someone clarify?
On Sun, Jun 5, 2011 at 12:22 PM, Russ Paielli <russ.paielli@gmail.com> wrote:
Thanks for that explanation, Greg. Just to be sure I am "seeing the big picture," let me ask a basic question. If I understand you correctly, you are saying that inheritance and concurrency can be successfully combined if done carefully. Is that correct, or are you saying that inheritance and concurrency should never be used together? Or are you saying something else altogether?
--Russ P.
On Sat, Jun 4, 2011 at 8:46 PM, Meredith Gregory <lgreg.meredith@gmail.com> wrote:
Dear Russ,
The list of places to go wrong with inheritance is pretty well explored from the fragile base class (arguably a variant of which just reared it's ugly head in combination with variance issues in Set vs List in Scala) to the synchronization inheritance anomaly. At the core of this issue is that is-a relationships are almost always in context. Inheritance sublimates the context. This is exactly the opposite of what is needed for maximal reuse -- and actually at odds with O-O principles. Instead, reify the context, give it an explicit computational representation and you will get more reuse.
Thus, in the example the Set vs List case, it's only in some context that we agree that a Set[A] may be interpreted as a function A => Boolean. The context has very specific assumptions about what a function is and what a Set is.[1] If we reify that context, we arrive at something like
trait SetsAreFunctions[A,B] { /* you could make this implicit */ def asFunction( s : Set[A] ) : A => B }
Notice that one "specialization" of this SetsAreFunctions[A,Float] is a candidate for representing fuzzy sets. Another "specialization" SetsAreFunctions[A,Option[Boolean]] is a candidate for various representations of partial functions. In this design our assumptions about the meaning of Set and function have been reified. Further, the act of specialization is much more clear, much more explicit and more highly governed by compiler constraints. If you compare it to what has been frozen into the Scala collections, it has a wider range, is mathematically and computationally more accurate and doesn't introduce an anomalous variance difference between collections that mathematical consistency demands be the same (List comes from the monad of the free monoid, Set is an algebra of this monad guaranteeing idempotency and commutativity of the operation -- variance should not differ!).
As for an example of inheritance synchronization anomaly, let's see if i can construct off the top of my head the standard example from the literature (which can be had by googling inheritance synchronization anomaly). We might imagine a family of Buffer classes with (atomic) put and get methods together with an accept predicate satisfying the following conditions:These represent synchronization constraints that allow for harmonious engagement between a Buffer and many clients. If a specialization of Buffer provides a consumer behavior getting more than 1 element at a time on a get, the synchronization constraints will be violated. Successful specializations have to narrow the constraints.
- b : Buffer & accept( b, get ) => size( b ) > 0
- b : Buffer & accept( b, put ) => size( b ) < Buffer.maxSize
While this example might seem contrived (and doubly so since i'm just pulling it off the top of my head) versions of this very example occurs more naturally in the wild if you consider the relationship between alternating bit protocol and sliding window protocol.
Best wishes,
--greg
[1] This sentence actually summarizes a major content of the interaction of computing and mathematics over the last 70 years. A real understanding of the various proposals of what a function is has been at the core of the dialogue between mathematics and computing. Category Theory, arguably, giving the most flexible and pragmatic account to date -- but seriously lacking (imho) in a decent account of concurrent computing.
On Sat, Jun 4, 2011 at 5:04 PM, Russ Paielli <russ.paielli@gmail.com> wrote:On Sat, Jun 4, 2011 at 3:17 PM, Meredith Gregory <lgreg.meredith@gmail.com> wrote:
Dear All,
i've been mumbling about this on-list for years, now. Having been one of the designers of one of the first high-performance actor-based languages, we ran into this issue early. It's usually called the inheritance-synchronization anomaly in the literature. If you specialize and relax synchronization constraints relied upon in inherited code, you will likely break the inherited code. The remedy is to ensure that you always only narrow synchronization constraints in specializations. Language-based inheritance constructs in popular languages don't provide any support for this. So, programmers have to be aware and guarantee this on their own.
Thanks for that explanation, but it's a bit over my head. If you have time to elaborate and perhaps provide a small example, that would be helpful to me.
It's yet another of many reasons to abandon inheritance as a language-based mechanism for reuse.
Interesting, but that statement invites a couple of questions. What are some of the other reasons to abandon inheritance? Also, are you implying that inheritance provides benefits other than re-use, or are you implying that it should just be avoided completely?
--Russ P
Actors, themselves, are also not a remedy for what ails you in concurrency. Of course, it's much worse when 'actors' doesn't actually mean actors, but is some loosey-goosey catch-all phrase for a programming convention that has a variety of semantics. So, 'actors' plus inheritance is guaranteed to provide a rollicking good time when it comes to programming correctness. It is definitely a combination squarely in sync with the programmer full employment act.
Best wishes,
--greg
--
L.G. Meredith
Managing Partner
Biosimilarity LLC
7329 39th Ave SW, Seattle, WA 98136
+1 206.650.3740
http://biosimilarity.blogspot.com
--
http://RussP.us
--
Viktor Klang
Akka Tech Lead, Typesafe - Enterprise-Grade Scala from the Experts
Twitter: @viktorklang
Sun, 2011-06-05, 23:07
#17
Re: actors and the "inheritance anomaly"
2011/6/5 Meredith Gregory <lgreg.meredith@gmail.com>
Dear Viktor,
Actor A is waiting for actor B to send a message before doing its part in some mission critical computation. Actor B is waiting for actor A to send a message before doing its part in some mission critical computation. The mission critical computation never gets done because A and B are in a deadly embrace, aka deadlock.
Waiting as in A) "blocking" or waiting as in B) "the next message ought to be an X message"?
For A, I don't think that the ability to do blocking is even desirable or required for an Actor Model implementation.
For B, it's a passive wait and if it is a problem to not get a certain message within a frame of time, a receive timeout can be used to signal that.
The passive wait is not a non-recoverable problem, which a deadlock essentially is (at least on the JVM).
Best wishes,
--greg
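Greg's A/B scenario can be simulated without any JVM thread ever blocking, which is exactly the distinction being drawn here: no thread is stuck, yet no progress is possible. A toy sketch, with plain queues standing in for mailboxes (this is an illustration, not any real actor-library API):

```scala
import scala.collection.mutable.Queue

// Each "actor" will only send its message AFTER it has received one.
class Waiter(val name: String) {
  val mailbox: Queue[String] = Queue.empty[String]
  var peer: Waiter = null // wired up after construction

  // One scheduling step; returns true iff any progress was made.
  def step(): Boolean =
    if (mailbox.nonEmpty) {
      mailbox.dequeue()
      peer.mailbox.enqueue(s"from $name")
      true
    } else false // passively waiting: nothing to do, no thread blocked
}

object DeadlockDemo {
  // Schedule A and B repeatedly; since each waits for the other to send
  // first, neither ever makes progress: a deadlock at the protocol level.
  def run(): Boolean = {
    val a = new Waiter("A"); val b = new Waiter("B")
    a.peer = b; b.peer = a
    (1 to 10).exists(_ => a.step() || b.step())
  }
}
```

Both readings in the thread are consistent with this sketch: no JVM thread is blocked indefinitely, and yet the mission-critical computation never happens.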
2011/6/5 √iktor Ҡlang <viktor.klang@gmail.com>
Dear Greg,
2011/6/5 Meredith Gregory <lgreg.meredith@gmail.com>
Dear Viktor,
The point of the actor model is that there is message serialization. It's not even actors if there isn't message serialization at the mailbox. i don't know what you call a model that doesn't have a primitive serializer guarding the mailbox. Once you introduce that, then you absolutely must address fairness.
I was talking about data-serialization, as in marshalling/unmarshalling of data. Not as in queueing of messages. As a way to ensure that there cannot be any shared mutable state.
Further, serialization will not guard against deadlock. Both this and livelock become easier to fall into once you recognize that you have to narrow constraints as you specialize. The tighter the constraints, the easier it is to conclude you shouldn't process a message. Once a message isn't processed in a given window, availability codes kick in to restart or renew processes. You restart the same ill-conceived code which winds up in the same bad states. So, deadlock and livelock can show up as two sides of the same coin.
Define "deadlock" in the context of unbounded mailbox and fire-forget-semantics only for message sends.
Shared mutable state is just the very beginning of the challenges of concurrent and distributed systems.
Definitely.
Best wishes,
--greg
2011/6/5 √iktor Ҡlang <viktor.klang@gmail.com>
On Sun, Jun 5, 2011 at 1:36 PM, Daniel Kristensen <daniel.kristensen@gmail.com> wrote:
Hi Rex,
I completely agree, but my interpretation of Russ' question was much narrower: Do actors help to alleviate the so-called inheritance anomaly? Obviously actor-based programming is not the silver bullet for all concurrency-related problems (although some people actually seem to think that).
Yeah? I have personally never met or talked to anyone who thinks that.
It's easy to avoid sharing mutable state with actors: don't do it.
Yes, but the issue is that it's not as easy to enforce (think many developers over many years) if the compiler doesn't help you, because this isn't easy to check using unit tests either.
As I said, you can turn on serialization of messages, so then you don't accidentally share mutable state.
Best regards
Daniel
2011/6/5 Rex Kerr <ichoran@gmail.com>
It's easy to avoid sharing mutable state with actors: don't do it.
But that doesn't save you.
The problem is that the compiler doesn't help you with the _logic_ of concurrency. If you're used to weakly typed languages, that's not a big deal. It's faintly terrifying if you rely upon the compiler to keep you from doing stupid things, however.
Avoiding mutable state helps remove _some_ aspects of the logic from your concern; now you never have to worry about state changing out from under you or being inconsistent, but you do have to worry about state being outdated and all your work being for naught. And you have to worry about whether you will have the ability to simultaneously have two pieces of non-outdated data at the same time, if you ever need more than one.
James Iry has blogged insightfully upon this issue in the context of Erlang:
http://james-iry.blogspot.com/search/label/erlang
What we really need from languages and compilers of the future is a mechanism for (and theory of) resource management, where if you need such-and-so to happen, you declare it, and the compiler yells at you if you make mutually contradictory declarations.
For example, I created a system that avoided global deadlocks by insisting that all resources be locked in a particular order; if you walk down your lock chain and hit something that's already locked, you have to free up your previous locks and try again. This worked (zero deadlocks), but wasn't particularly fast, and was written in Java and was incredibly awkward. Anyway, these sorts of problems are the ones that need to be solved, and actors only let you punt on them a little bit by freeing you of the temptation to use all resources everywhere willy-nilly. But messages are a type of resource, so this only goes so far.
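A sketch of the ordered-locking-with-backoff scheme Rex describes, in Scala rather than his original Java (all names are illustrative, not his code): resources are always attempted in ascending id order, and if any tryLock fails we release everything acquired so far so the caller can retry.

```scala
import java.util.concurrent.locks.ReentrantLock
import scala.collection.mutable.ListBuffer

final case class Resource(id: Int) {
  val lock = new ReentrantLock()
}

object OrderedLocking {
  /** Try to lock all resources in id order; on any conflict, free the
    * previously acquired locks and report failure (caller retries). */
  def tryLockAll(rs: Seq[Resource]): Boolean = {
    val ordered  = rs.sortBy(_.id) // the global lock order
    val acquired = ListBuffer.empty[Resource]
    val ok = ordered.forall { r =>  // forall short-circuits on first failure
      val got = r.lock.tryLock()
      if (got) acquired += r
      got
    }
    if (!ok) acquired.foreach(_.lock.unlock()) // back off completely
    ok
  }

  def unlockAll(rs: Seq[Resource]): Unit = rs.foreach(_.lock.unlock())
}
```

The global order rules out cyclic waiting (the classic deadlock condition), and the back-off rules out holding partial lock sets indefinitely, at the cost of retries and, as Rex notes, speed.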
--Rex
2011/6/5 Naftoli Gugenheim <naftoligug@gmail.com>
As Daniel said Greg meant, "it's not a solution because avoiding shared mutable state when using actors in Scala relies on convention, as opposed to being enforced by the compiler."
Sun, 2011-06-05, 23:07
#18
Re: actors and the "inheritance anomaly"
2011/6/5 Rex Kerr <ichoran@gmail.com>
2011/6/5 √iktor Ҡlang <viktor.klang@gmail.com>
2011/6/5 Meredith Gregory <lgreg.meredith@gmail.com>
Dear Viktor,
Actor A is waiting for actor B to send a message before doing its part in some mission critical computation. Actor B is waiting for actor A to send a message before doing its part in some mission critical computation. The mission critical computation never gets done because A and B are in a deadly embrace, aka deadlock.
Waiting as in A) "blocking" or waiting as in B) "the next message ought to be an X message"?
For A, I don't think that the ability to do blocking is even desirable or required for an Actor Model implementation.
For B, it's a passive wait and if it is a problem to not get a certain message within a frame of time, a receive timeout can be used to signal that.
The passive wait is not a non-recoverable problem, which a deadlock essentially is (at least on the JVM).
It can be nonrecoverable in that there is no possible way for the operation to succeed because of a logic error on the part of the programmer, as in Greg's example.
Even if the operation succeeds it might not be what the person who ordered the feature expects.
My main point is that there needn't be any deadlock in the sense that two or more JVM Threads are blocked indefinitely.
Saying that actors solve this problem is akin to saying that because of exception handling, you don't benefit from having a typed language. Yes, with exception handling, you don't segfault when you try to insert a set into your integer, but that doesn't mean that you can ever correctly run your code. Type checking can tell you, "That is an integer, you can't put a set in there". Likewise, one can envision compiler/language/library support that tells you, "A and B each want a message from the other before proceeding with this task", and Actors isn't it.
No, neither Actors nor any implementation thereof guarantees that the feature does what the person who ordered it expects it to.
--Rex
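The compiler/tool support Rex imagines ("A and B each want a message from the other before proceeding") amounts, at its simplest, to cycle detection in a wait-for graph. A hypothetical sketch; no such checker ships with any actor library discussed here:

```scala
object WaitForGraph {
  // An edge X -> Y means "X needs a message from Y before it can proceed".
  // A cycle in this relation is exactly Greg's deadly embrace.
  def hasDeadlock(waitsFor: Map[String, Set[String]]): Boolean = {
    def visit(n: String, path: Set[String]): Boolean =
      path.contains(n) || // revisited a node on the current path: cycle
        waitsFor.getOrElse(n, Set.empty).exists(m => visit(m, path + n))
    waitsFor.keys.exists(n => visit(n, Set.empty))
  }
}
```

A real tool would have to extract the wait-for relation from code, which is the hard part; the point is only that the property itself is mechanically checkable once the relation is known.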
Sun, 2011-06-05, 23:17
#19
Re: actors and the "inheritance anomaly"
2011/6/5 √iktor Ҡlang <viktor.klang@gmail.com>
It can be nonrecoverable in that there is no possible way for the operation to succeed because of a logic error on the part of the programmer, as in Greg's example. Saying that actors solve this problem is akin to saying that because of exception handling, you don't benefit from having a typed language. Yes, with exception handling, you don't segfault when you try to insert a set into your integer, but that doesn't mean that you can ever correctly run your code. Type checking can tell you, "That is an integer, you can't put a set in there". Likewise, one can envision compiler/language/library support that tells you, "A and B each want a message from the other before proceeding with this task", and Actors isn't it.
--Rex
Sun, 2011-06-05, 23:17
#20
Re: actors and the "inheritance anomaly"
Dear Viktor,
Thanks! i think i see the problem. For me the definition of deadlock is not limited to its expression in the JVM. We've known for years what it means for a DBMS to become deadlocked (this was the impetus for a lot of work on transactions). We've known for years what it means for participants in a communication protocol to become deadlocked -- this was part of the impetus behind a lot of the early work on CCS and CSP. All of this happened before Java was a gleam in Gosling's eye. For a programmer who cut their teeth on Java, deadlock might have a meaning in terms of Java threads. For an architect who is designing large-scale systems -- only some of the components of which run in JVMs of any stripe -- deadlock means something else.
Best wishes,
--greg
Sun, 2011-06-05, 23:17
#21
Re: actors and the "inheritance anomaly"
2011/6/5 Meredith Gregory <lgreg.meredith@gmail.com>
Dear Viktor,
Thanks! i think a see the problem. For me the definition of deadlock is not limited to expression in the JVM.
We are obviously talking at different layers of abstraction of these concepts. I fully trust your expertise in the high abstractions of things.
We've known for years what it means for a DBMS to become deadlocked (this was the impetus of a lot of work on transactions). We've known for years what it means for participants in a communication protocol to become deadlocked -- this was part of the impetus behind a lot of the early work on CCS and CSP. All of this happened before Java was a gleam in Gosling's eye. For a programmer who cut their teeth in Java deadlock might have a meaning in terms of Java threads. For an architect who is designing large scale systems -- only some of the components of which run in JVMs of any stripe -- deadlock means something else.
Best wishes,
--greg
Sun, 2011-06-05, 23:27
#22
Re: actors and the "inheritance anomaly"
Dear Viktor,
"A type system won't help you in the case of a network failure or an actor crashing, and I don't think it even makes sense to try to encode concurrency into a type-system when it comes to potentially distributed computing."
That's a great claim! Can you back that up with argumentation? i can certainly design systems that are robust against a variety of physical failures and whose responses to failures are represented in types for concurrency. So, they can actually be type checked against the guarantees of providers.
i would also love to understand why it doesn't make sense to use type-level capabilities in distributed computing. If the researchers in this area (including myself and a long list of much, much smarter people) could be clued in, we could avoid spending Sunday afternoons on senseless work! ;-) Seriously, what intuitions do you have that suggest this is not a reasonable avenue of investigation?
My guess is that people don't really have a good sense of what is possible with types -- largely because their experience with types is all about structural guarantees. That is, they all essentially encode promises that data has such and such a shape. The intriguing thing about programs is that the line between structure and function becomes blurred. For example, we all know that the constructs of Scala get represented as data structures, and a REPL essentially walks those structures, turning them into actions. When there's a really good fit of a structural representation of behavior (as in the lambda calculus or the π-calculus) you can turn a lot of structural checking into behavioral checking. Of course, this is just a cartoon of the situation, but i draw it to suggest thinking again about what is possible with types.
Best wishes,
--greg
2011/6/5 √iktor Ҡlang <viktor.klang@gmail.com>
2011/6/5 Matthew Pocock <turingatemyhamster@gmail.com>
Getting concurrent access to a shared resource right is hard.
Clarification: shared _mutable_ resource.
Getting concurrent access right, by many threads to multiple shared resources some of which may be passed between threads dynamically, is a Hard Thing indeed. Coming up with a formalism (type system?) that helps you avoid common errors while still allowing you to do interesting things, and which can be understood by People In General rather than by 1 1/2 PhD students and their yucca plant, is Very Hard Indeed. Have any of you seen a full petri-net of an implementation of the TCP/IP stack? The last one I saw covered a complete sheet of A0 at a font size where you needed a telescope to read the labels.
A type system won't help you in the case of a network failure or an actor crashing, and I don't think it even makes sense to try to encode concurrency into a type-system when it comes to potentially distributed computing.
Actors are essentially a model of distributed computing; running them in one VM is just an optimization.
Actors help a bit if you stick to immutable messages, as it serializes access to whatever actor-specific resources there are. It doesn't prevent inter-actor deadlock.
See my email to Greg.
Inheritance is a PITA with concurrency because a) the sub-class has at least as great a restriction on concurrent access as the super-class, and b) the compiler doesn't tell you this when you bork it up. Functional composition runs out of steam at some point, because you need to cleanly model joins as well as forks, and at that point you either burn cycles re-computing referentially transparent values (a-la haskell) or are back in resource-locking hell. There are no off-the-shelf easy solutions because concurrency is genuinely a Very Hard Problem.
That's my Sunday rant out of the way.
:-)
Matthew
--
Matthew Pocock
mailto: turingatemyhamster@gmail.com
gchat: turingatemyhamster@gmail.com
msn: matthew_pocock@yahoo.co.uk
irc.freenode.net: drdozer
(0191) 2566550
--
Viktor Klang
Akka Tech Lead
Typesafe - Enterprise-Grade Scala from the Experts
Twitter: @viktorklang
--
L.G. Meredith
Managing Partner
Biosimilarity LLC
7329 39th Ave SW, Seattle, WA 98136
+1 206.650.3740
http://biosimilarity.blogspot.com
Sun, 2011-06-05, 23:27
#23
Re: actors and the "inheritance anamoly"
Dear Viktor,
Thanks for your engagement! If you look at Caires' paper here, you will see that an essentially actor-style model was one of the first to successfully yield a treatment of concurrency via types. (Actually, to my mind, Caires' approach in that paper is absolutely brilliant. By limiting the problem to a single channel along which to receive messages -- can you say mailbox? ;-) -- he side-stepped problems that were bogging down the progress on types for concurrency.) The model is specifically aimed at distributed systems.
In connection with actors, even with become you can provide a behavioral type system. Here's how we first thought of it back in the Rosette days. You can build a state machine over the enabled-sets. This governs the transitions between accepted messages. You can -- in principle -- check code against this state machine.
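[Editor's note: the enabled-set idea can be sketched in a few lines of Scala -- a hypothetical encoding in the spirit of what's described, not actual Rosette code. A behavior is the set of messages currently accepted, each mapped to the behavior to become next, which is exactly a state machine over enabled-sets.]

```scala
// A behavior is the set of currently enabled messages, each paired with
// the behavior the actor will `become` after processing it.
sealed trait Msg
case object Put extends Msg
case object Get extends Msg

final case class Behavior(enabled: Map[Msg, () => Behavior])

object OneSlotBuffer {
  // An empty one-slot buffer accepts only Put; a full one accepts only Get.
  lazy val empty: Behavior = Behavior(Map(Put -> (() => full)))
  lazy val full: Behavior = Behavior(Map(Get -> (() => empty)))

  // Delivering a message either transitions the state machine or
  // reports a protocol violation -- the thing a checker would flag.
  def deliver(b: Behavior, m: Msg): Either[String, Behavior] =
    b.enabled.get(m) match {
      case Some(next) => Right(next())
      case None       => Left(s"$m not enabled in this state")
    }
}
```

Here the violation surfaces at run time; the point of a behavioral type system is to perform the same transition check at compile time, against code rather than against a running actor.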
Behavioral types are structural in the sense meant by the dichotomy between nominal vs structural you mention. Terminologically, we have to be careful, however. A lot of work has been done to treat so-called nominal features of languages (like references in ML and names in the π-calculus). This work does so in a way that certain kinds of machinery can be factored out and exposed via types. See, for example, FreshML. Behavioral type systems can take advantage of this work and so might talk about 'nominal types' -- but mean something totally different than the nominal typing you mean when you oppose it to structural types.
Best wishes,
--greg
Sun, 2011-06-05, 23:37
#24
Re: actors and the "inheritance anamoly"
2011/6/5 Meredith Gregory <lgreg.meredith@gmail.com>
Dear Viktor,
"A type system won't help you in the case of a network failure or an actor crashing, and I don't think it even makes sense to try to encode concurrency into a type-system when it comes to potentially distributed computing."
That's a great claim! Can you back it up with an argument? i can certainly design systems that are robust against a variety of physical failures and whose responses to failures are represented in types for concurrency. So, they can actually be type checked against the guarantees of providers.
The claim is in the context of actors. The "become" operation rather makes it a dynamic construct.
i would also love to understand why it doesn't make sense to use type-level capabilities in distributed computing. If the researchers in this area (including myself and a long list of much, much smarter people) could be clued in, we could avoid spending Sunday afternoons on senseless work! ;-) Seriously, what intuitions do you have that suggest this is not a reasonable avenue of investigation?
See above; for non-actor distributed computing it is definitely viable.
My guess is that people don't really have a good sense of what is possible with types -- largely because their experience with types is all about structural guarantees. That is, they all essentially encode promises that data has such and such a shape. The intriguing thing about programs is that the line between structure and function becomes blurred. For example, we all know that the constructs of Scala get represented as data structures, and a REPL essentially walks those structures, turning them into actions. When there's a really good fit of a structural representation of behavior (as in the lambda calculus or the π-calculus) you can turn a lot of structural checking into behavioral checking. Of course, this is just a cartoon of the situation, but i draw it to suggest thinking again about what is possible with types.
Absolutely, "types" is a very broad thing, and it might not even be sensible to talk about it at that level. I mean, are we talking nominal types, structural types, or what?
Sun, 2011-06-05, 23:47
#25
Re: actors and the "inheritance anamoly"
Dear Greg,
2011/6/5 Meredith Gregory <lgreg.meredith@gmail.com>
Dear Viktor,
Thanks for your engagement! If you look at Caires' paper here, you will see that an essentially actor model was one of the first to successfully yield a treatment via types for concurrency. (Actually, to my mind, Caires' approach in that paper is absolutely brilliant. By limiting the problem to a single channel along which to receive messages -- can you say mailbox? ;-) -- he side-stepped problems that were bogging down the progress on types for concurrency.) The model is specifically aimed at distributed systems.
Interesting! Thanks for the paper.
Just before I have a chance to read and digest the paper, how does it handle communicating with resident actors on other machines (since they can be exposed to multiple parties at the same time)?
In connection with actors, even with become you can provide a behavioral type system. Here's how we first thought of it back in the Rosette days. You can build a state machine over the enabled-sets. This governs the transitions between accepted messages. You can -- in principle -- check code against this state machine.
Sounds cool, how do you encode this in a distributed setting where multiple parties may have a reference to the same actor on another node?
Behavioral types are structural in the sense meant by the dichotomy between nominal vs structural you mention. Terminologically, we have to be careful, however. A lot of work has been done to treat so-called nominal features of languages (like references in ML and names in the π-calculus). This work does so in a way that certain kinds of machinery can be factored out and exposed via types. See, for example, FreshML. Behavioral type systems can take advantage of this work and so might talk about 'nominal types' -- but mean something totally different than the nominal typing you mean when you oppose it to structural types.
It's a jungle out there!
Mon, 2011-06-06, 00:07
#26
Re: actors and the "inheritance anamoly"
Dear Viktor,
IIRC, the model in Caires' paper makes no assumptions about the locations of any of the actors, and no assumptions about residence in a particular VM. The actor (called a service in Caires' paper) receives messages from any client that has its mailbox (called a channel in Caires' paper).
In Rosette we looked at various approaches to distribution. You can attempt to hide the communication infrastructure, but then you can't distinguish the failure of an actor from the failure of the communication infrastructure. Alternatively, you can reify the communication infrastructure -- we did this with an actor-based TupleSpace -- and then you can separate out some of the concerns.
Interestingly, the enabled-set approach is completely immune to these choices. You still code an actor in terms of the messages it's willing to process, and you can specify up front the various transitions between the message sets and check that the written code matches this up-front specification.
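[Editor's note: the "up-front specification" step can be pictured as follows -- a sketch with invented names, independent of any real Rosette tooling. The allowed transitions between enabled-sets are written down once, and an actor's observed message trace is replayed against them.]

```scala
// An up-front transition spec over enabled-set states, checked by
// replaying an observed message trace.
object EnabledSetSpec {
  type State = String
  // spec: in state S, message M is enabled and leads to state S'.
  val spec: Map[(State, String), State] = Map(
    ("empty", "Put") -> "full",
    ("full", "Get") -> "empty"
  )

  // Replay a trace from a start state; None means the trace hit a
  // message that was not enabled -- a spec violation.
  def check(start: State, trace: List[String]): Option[State] =
    trace.foldLeft(Option(start)) { (st, msg) =>
      st.flatMap(s => spec.get((s, msg)))
    }
}
// EnabledSetSpec.check("empty", List("Put", "Get", "Put"))  // Some(full)
// EnabledSetSpec.check("empty", List("Get"))                // None
```

The same table works whether messages arrive locally or over a reified communication layer, which is why the approach is immune to the distribution choices above.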
Best wishes,
--greg
Mon, 2011-06-06, 06:37
#27
Re: actors and the "inheritance anamoly"
Dear All,
i just want to acknowledge the quality of communication on this list. Thanks to everyone who contributed to this thread, and for the patient and diligent engagement! The technical ideas in this area are very, very challenging to say the least. Trying to talk about them over the string-between-tin-cans that is email is an even greater challenge. And, there's my own thickheadedness always to be gotten around. So, i really want to say thanks for the opportunity to engage and discuss this stuff.
Best wishes,
--greg
2011/6/5 Meredith Gregory <lgreg.meredith@gmail.com>
--
L.G. Meredith
Managing Partner
Biosimilarity LLC
7329 39th Ave SWSeattle, WA 98136
+1 206.650.3740
http://biosimilarity.blogspot.com
i just want to acknowledge the quality of communication on this list. Thanks to everyone who contributed to this thread, and for the patient and diligent engagement! The technical ideas in this area are very, very challenging to say the least. Trying to talk about them over the string-between-tin-cans that is email is an even greater challenge. And, there's my own thickheadedness always to be gotten around. So, i really want to say thanks for the opportunity to engage and discuss this stuff.
Best wishes,
--greg
2011/6/5 Meredith Gregory <lgreg.meredith@gmail.com>
Dear Viktor,
Iirc, the model in Caires' paper makes no assumptions about the locations of any of the actors, no assumptions about residence in a particular VM. The actor (called a service in Caires' paper) receives messages from any client that has its mailbox (called channel in Caires' paper).
In Rosette we looked at various approaches to distribution. You can attempt to hide the communication infrastructure. Then you can't distinguish the failure of an actor from the failure of the communication infrastructure. You can reify the communication infrastructure. We did this with an actor-based TupleSpace. Then you can separate out some of the concerns.
Interestingly, the enabled-set approach is completely immune to these choices. You still code an actor in terms of the messages its willing to process and you can specify up front the various transitions between the message sets and check that the written code matches this up front specification.
Best wishes,
--greg
2011/6/5 √iktor Ҡlang <viktor.klang@gmail.com>
Dear Greg,
2011/6/5 Meredith Gregory <lgreg.meredith@gmail.com>
Dear Viktor,
Thanks for your engagement! If you look at Caires' paper here, you will see that an essentially actor-like model was one of the first to successfully yield a treatment of concurrency via types. (Actually, to my mind, Caires' approach in that paper is absolutely brilliant. By limiting the problem to a single channel along which to receive messages -- can you say mailbox? ;-) -- he side-stepped problems that were bogging down progress on types for concurrency.) The model is specifically aimed at distributed systems.
Interesting! Thanks for the paper.
Before I have a chance to read and digest the paper: how does it handle communicating with actors resident on other machines (since they can be exposed to multiple parties at the same time)?
In connection with actors, even with become you can provide a behavioral type system. Here's how we first thought of it back in the Rosette days. You can build a state machine over the enabled-sets. This governs the transitions between accepted messages. You can -- in principle -- check code against this state machine.
Sounds cool, how do you encode this in a distributed setting where multiple parties may have a reference to the same actor on another node?
Behavioral types are structural in the sense meant by the dichotomy between nominal vs structural you mention. Terminologically, we have to be careful, however. A lot of work has been done to treat so-called nominal features of languages (like references in ML and names in the π-calculus). This work does so in a way that certain kinds of machinery can be factored out and exposed via types. See, for example, FreshML. Behavioral type systems can take advantage of this work and so might talk about 'nominal types' -- but mean something totally different than the nominal typing you mean when you oppose it to structural types.
It's a jungle out there!
Best wishes,
--greg
2011/6/5 √iktor Ҡlang <viktor.klang@gmail.com>
2011/6/5 Meredith Gregory <lgreg.meredith@gmail.com>
Dear Viktor,
"A type system won't help you in the case of a network failure or an actor crashing, and I don't think it even makes sense to try to encode concurrency into a type-system when it comes to potentially distributed computing."
That's a great claim! Can you back that up with argumentation? i can certainly design systems that are robust against a variety of physical failures and whose responses to failures are represented in types for concurrency. So, they can actually be type checked against the guarantees of providers.
The claim is in the context of actors. The "become" operation rather makes it a dynamic construct.
i would also love to understand why it doesn't make sense to use type-level capabilities in distributed computing. If the researchers in this area (including myself and a long list of much, much smarter people) could be clued in, we could avoid spending Sunday afternoons on senseless work! ;-) Seriously, what intuitions do you have that suggest this is not a reasonable avenue of investigation?
See above, for non-actor distributed computing it is definitely viable.
My guess is that people don't really have a good sense of what is possible with types -- largely because their experience with types is all about structural guarantees. That is, they all essentially encode promises that data has such and such a shape. The intriguing thing about programs is that the line between structure and function becomes blurred. For example, we all know that the constructs of Scala get represented as data structures, and a REPL essentially walks those structures, turning them into actions. When there's a really good fit of a structural representation of behavior (as in the lambda calculus or the π-calculus) you can turn a lot of structural checking into behavioral checking. Of course, this is just a cartoon of the situation, but i draw it to suggest thinking again about what is possible with types.
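The blurred line between structure and function can be made concrete with a toy example (all names invented): the same data structure supports both a purely structural check and a behavioral reading.

```scala
// Behaviour represented as data, then walked by an evaluator.
sealed trait Expr
case class Lit(n: Int) extends Expr
case class Add(l: Expr, r: Expr) extends Expr
case class Mul(l: Expr, r: Expr) extends Expr

// A structural check: no evaluation, just the shape of the term.
def size(e: Expr): Int = e match {
  case Lit(_)    => 1
  case Add(l, r) => 1 + size(l) + size(r)
  case Mul(l, r) => 1 + size(l) + size(r)
}

// A behavioural reading of the very same structure: evaluation.
def eval(e: Expr): Int = e match {
  case Lit(n)    => n
  case Add(l, r) => eval(l) + eval(r)
  case Mul(l, r) => eval(l) * eval(r)
}
```

When the structural representation fits the behavior this well, a check on the structure (here, `size`) can stand in for a check on the behavior.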
Absolutely, "types" is a very broad thing. And it might not even be sensible to talk about it at that level, I mean, are we talking nominal types, structural types or what?
Best wishes,
--greg
2011/6/5 √iktor Ҡlang <viktor.klang@gmail.com>
2011/6/5 Matthew Pocock <turingatemyhamster@gmail.com>
Getting concurrent access to a shared resource right is hard.
Clarification: shared _mutable_ resource.
Getting concurrent access by many threads to multiple shared resources, some of which may be passed between threads dynamically, right is a Hard Thing indeed. Coming up with a formalism (type system?) that helps you avoid common errors while still allowing you to do interesting things, and which can be understood by People In General rather than 1 1/2 PhD students and their yucca plant, is Very Hard Indeed. Have any of you seen a full Petri net of an implementation of the TCP/IP stack? The last one I saw covered a complete sheet of A0 at a font size where you needed a telescope to read the labels.
A type system won't help you in the case of a network failure or an actor crashing, and I don't think it even makes sense to try to encode concurrency into a type-system when it comes to potentially distributed computing.
Actors are essentially a model of distributed computing; running them in one VM is just an optimization.
Actors help a bit if you stick to immutable messages, as they serialize access to whatever actor-specific resources there are. They don't prevent inter-actor deadlock.
See my email to Greg.
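To make the deadlock point concrete: even with immutable messages, actors that block awaiting each other's replies form a cycle in the wait-for graph. A minimal sketch of detecting such a cycle follows (an invented encoding, not part of any actor library):

```scala
// Immutable messages serialize access to each actor's own state, but two
// actors that each block awaiting the other's reply still deadlock.
// waitsFor(a) = the set of actors whose replies a is currently blocked on.
def hasCycle(waitsFor: Map[String, Set[String]]): Boolean = {
  // Depth-first search for a path from `from` back into `seen`.
  def reach(from: String, seen: Set[String]): Boolean =
    waitsFor.getOrElse(from, Set.empty[String]).exists { next =>
      seen.contains(next) || reach(next, seen + next)
    }
  waitsFor.keys.exists(a => reach(a, Set(a)))
}
```

Two actors synchronously asking each other (`"a" -> Set("b"), "b" -> Set("a")`) produce exactly such a cycle, which no amount of message immutability rules out.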
Inheritance is a PITA with concurrency because a) the sub-class has at least as great a restriction on concurrent access as the super-class, and b) the compiler doesn't tell you this when you bork it up. Functional composition runs out of steam at some point, because you need to cleanly model joins as well as forks, and at that point you either burn cycles re-computing referentially transparent values (à la Haskell) or are back in resource-locking hell. There are no off-the-shelf easy solutions because concurrency is genuinely a Very Hard Problem.
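The anomaly alluded to has a classic shape: a subclass adds a history-sensitive synchronization constraint and is forced to override the inherited methods just to record that history. A sketch, with all class names invented:

```scala
// Parent guards put/get with state-based conditions on the monitor.
class Buffer(capacity: Int) {
  protected val items = scala.collection.mutable.Queue[Int]()
  def put(x: Int): Unit = synchronized {
    while (items.size >= capacity) wait()
    items.enqueue(x)
    notifyAll()
  }
  def get(): Int = synchronized {
    while (items.isEmpty) wait()
    val x = items.dequeue()
    notifyAll()
    x
  }
}

// The subclass adds gget, which must not run straight after a put: a
// history-sensitive constraint. Expressing it forces overriding the
// inherited methods just to record history -- exactly the anomaly.
class HistoryBuffer(capacity: Int) extends Buffer(capacity) {
  private var afterPut = false
  override def put(x: Int): Unit = synchronized { super.put(x); afterPut = true }
  override def get(): Int = synchronized { val x = super.get(); afterPut = false; x }
  def gget(): Int = synchronized {
    while (afterPut) wait() // blocked until some get() clears the flag
    get()
  }
}
```

Note that nothing in the compiler flags the overrides as synchronization-motivated: leave one out and the constraint is silently broken, which is point b) above.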
That's my Sunday rant out of the way.
:-)
Matthew
--
Matthew Pocock
mailto: turingatemyhamster@gmail.com
gchat: turingatemyhamster@gmail.com
msn: matthew_pocock@yahoo.co.uk
irc.freenode.net: drdozer
(0191) 2566550
--
Viktor Klang
Akka Tech Lead, Typesafe - Enterprise-Grade Scala from the Experts
Twitter: @viktorklang
Mon, 2011-06-06, 08:37
#28
Re: actors and the "inheritance anamoly"
On Jun 6, 2011, at 01:04 , Meredith Gregory wrote:
Dear Viktor,

This sounds very interesting, indeed. Besides matching the declared and implemented transitions, the specification could then also be used to verify client code? I can see that getting pretty messy when each client needs to track the transitions stimulated by other clients. What remains in any case is that a specific input induces a set of possible transitions according to the possible initial states, and the corresponding set of final states and replies might be much smaller than the whole “phase space” (how do you label that?), meaning that some verification would be possible even without tracking the other actor’s state.
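The observation about partial verification can be sketched directly: with the actor's current state unknown, a message maps the set of possible initial states to a possibly much smaller set of final states. The transition table below is hypothetical, invented for illustration.

```scala
// Hypothetical transition table: (state, message) -> next state.
val next: Map[(String, String), String] = Map(
  ("empty", "put")   -> "partial",
  ("partial", "put") -> "full",
  ("partial", "get") -> "empty",
  ("full", "get")    -> "partial"
)

// A client that does not track the actor's state can still bound the set
// of states the actor may be in after sending a message.
def possibleAfter(initial: Set[String], msg: String): Set[String] =
  initial.flatMap(s => next.get((s, msg)))
```

Even over the whole "phase space" `{empty, partial, full}`, a `get` can only lead to `{empty, partial}`, so some client-side checking is possible without knowing which state the actor started in.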
Iirc, the model in Caires' paper makes no assumptions about the locations of any of the actors, no assumptions about residence in a particular VM. The actor (called a service in Caires' paper) receives messages from any client that has its mailbox (called channel in Caires' paper).
In Rosette we looked at various approaches to distribution. You can attempt to hide the communication infrastructure. Then you can't distinguish the failure of an actor from the failure of the communication infrastructure. You can reify the communication infrastructure. We did this with an actor-based TupleSpace. Then you can separate out some of the concerns.
Interestingly, the enabled-set approach is completely immune to these choices. You still code an actor in terms of the messages it is willing to process, and you can specify up front the various transitions between the message sets and check that the written code matches this up-front specification.
Regards,
Roland