Akka 2.x roadmap
Sun, 2011-09-18, 20:45
Hi there.
I thought it could be of general interest to post this to scala-user.
Here is a doc describing what we are working on right now. It will be
released in three steps: 2.0, 2.1, and 2.2.
https://docs.google.com/a/typesafe.com/document/pub?id=18W9-fKs55wiFNjXL...
Sun, 2011-09-18, 21:17
#2
Re: Akka 2.x roadmap
On 18 September 2011 21:44, Jonas Bonér <lists@jonasboner.com> wrote:
I get the following error:
We're sorry. This document is not published.
--
Grzegorz Kossakowski
Sun, 2011-09-18, 21:27
#3
Re: Re: Akka 2.x roadmap
I assume that the configuration-based deployment is either lightning-fast or optional; it strikes me as the kind of step that otherwise could lead to a large performance penalty for creating new actors. The rest all looks great in principle. Hopefully you won't run into any major practical barriers!
--Rex
On Sun, Sep 18, 2011 at 4:02 PM, Jonas Bonér <lists@jonasboner.com> wrote:
Sorry, I can't seem to create a public page from within Typesafe's domain on Google Docs, so I copied it into my own.
This should work now: https://docs.google.com/document/pub?id=1CMz_MEQA8oPcGw9oaFdq_KYYFB_5qZjsDYYwuXfZhBU
On Sun, Sep 18, 2011 at 9:44 PM, Jonas Bonér <lists@jonasboner.com> wrote:
Hi there.
I thought it could be of general interest to post this to scala-user.
Here is a doc describing what we are working on right now. Will be
released in three steps: 2.0, 2.1 and 2.2.
https://docs.google.com/a/typesafe.com/document/pub?id=18W9-fKs55wiFNjXL9q50PYOnR7-nnsImzJqHOPPbM4E
--
Jonas Bonér
CTO
Typesafe <http://www.typesafe.com/> - Enterprise-Grade Scala from the
Experts
Phone: +46 733 777 123
Twitter: @jboner <http://twitter.com/jboner>
Sun, 2011-09-18, 21:47
#4
Re: Re: Akka 2.x roadmap
On Sun, Sep 18, 2011 at 10:10 PM, Rex Kerr <ichoran@gmail.com> wrote:
I assume that the configuration-based deployment is either lightning-fast or optional; it strikes me as
It is a two-step process. First, we make sure that the deployment config is in sync at all nodes; this is done once, when the node joins the cluster. Second, the actual actors are instantiated on node X; this is done lazily, on demand.
the kind of step that otherwise could lead to a large performance penalty for creating new actors. The rest all looks great in principle. Hopefully you won't run into any major practical barriers!
Thank you.
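In code, the two-step scheme Jonas describes might look roughly like this (hypothetical names throughout, not Akka's actual API): the deployment config is synced once at join time, and each actor is built at most once, on first lookup, so there is no per-creation deployment cost afterwards.

```scala
import scala.collection.mutable

// Hypothetical sketch of the two-step scheme; these are not Akka's real types.
final case class Deploy(path: String, factory: () => AnyRef)

class Node {
  // Step 1: sync the deployment config once, when the node joins the cluster.
  private var deployments = Map.empty[String, Deploy]
  def join(clusterConfig: Map[String, Deploy]): Unit =
    deployments = clusterConfig

  // Step 2: instantiate actors lazily, on demand; later lookups are cache
  // hits, so actor creation pays no repeated deployment penalty.
  private val instances = mutable.Map.empty[String, AnyRef]
  def actorFor(path: String): AnyRef =
    instances.getOrElseUpdate(path, deployments(path).factory())
}
```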
--
Jonas Bonér
CTO
Typesafe <http://www.typesafe.com/> - Enterprise-Grade Scala from the
Experts
Phone: +46 733 777 123
Twitter: @jboner <http://twitter.com/jboner>
Google+: http://gplus.to/jboner
Mon, 2011-09-19, 03:27
#5
Re: Re: Akka 2.x roadmap
Jonas, I don't understand what is meant by Location Transparency.
Doesn't ActorRef give that already?
Mon, 2011-09-19, 15:07
#6
Re: Akka 2.x roadmap
Hi Daniel,
ActorRef exists for multiple reasons, and this decoupling is one of the most important. However, in Akka 1.2 you still need to do something special in the code whenever you want to get a remote ActorRef back. The new thing is changing this, i.e. making the normal factory methods respect the deployment configuration given in the config file.
One difficulty which we are solving right now is that for true transparency, calling the methods on ActorRef must always have the same semantics, irrespective of the actor’s physical location. This—in a nutshell—is the reason why the feature is not released, yet.
Regards,
Roland
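The transparency Roland describes can be sketched with hypothetical types (not the real Akka classes): one ActorRef contract with identical, fire-and-forget send semantics, backed by either a local or a remote transport.

```scala
// Hypothetical illustration, not Akka's actual classes: the caller sees one
// interface whose contract is the same regardless of physical location.
trait ActorRef {
  def !(msg: Any): Unit // async send, no return value, same semantics everywhere
}

class LocalActorRef(mailbox: java.util.Queue[Any]) extends ActorRef {
  def !(msg: Any): Unit = mailbox.offer(msg) // enqueue in-process
}

class RemoteActorRef(transport: Any => Unit) extends ActorRef {
  def !(msg: Any): Unit = transport(msg) // serialize and ship over the wire
}

// Caller code cannot tell the difference; under the scheme described above,
// the deployment configuration decides which implementation the factory
// hands back.
def greet(ref: ActorRef): Unit = ref ! "hello"
```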
Mon, 2011-09-19, 21:37
#7
Re: Akka 2.x roadmap
On Mon, Sep 19, 2011 at 6:59 AM, Roland Kuhn <google@rkuhn.info> wrote:
One difficulty which we are solving right now is that for true transparency, calling the methods on ActorRef must always have the same semantics, irrespective of the actor’s physical location. This—in a nutshell—is the reason why the feature is not released, yet.
I recently read this paper, and it keeps knocking on the Akka part of my brain and demanding to be introduced:
http://labs.oracle.com/techrep/1994/abstract-29.html
Here's a fun quote:
Lessons from NFS

We do not need to look far to see the consequences of ignoring the distinction between local and distributed computing at the interface level. NFS®, Sun's distributed computing file system [14], [15], is an example of a nondistributed application programmer interface (API) (open, read, write, close, etc.) re-implemented in a distributed way.

Before NFS and other network file systems, an error status returned from one of these calls indicated something rare: a full disk, or a catastrophe such as a disk crash. Most failures simply crashed the application along with the file system. Further, these errors generally reflected a situation that was either catastrophic for the program receiving the error or one that the user running the program could do something about.

NFS opened the door to partial failure within a file system. It has essentially two modes for dealing with an inaccessible file server: soft mounting and hard mounting. But since the designers of NFS were unwilling (for easily understandable reasons) to change the interface to the file system to reflect the new, distributed nature of file access, neither option is particularly robust.

Soft mounts expose network or server failure to the client program. Read and write operations return a failure status much more often than in the single-system case, and programs written with no allowance for these failures can easily corrupt the files used by the program. In the early days of NFS, system administrators tried to tune various parameters (time-out length, number of retries) to avoid these problems. These efforts failed. Today, soft mounts are seldom used, and when they are used, their use is generally restricted to read-only file systems or special applications.

Hard mounts mean that the application hangs until the server comes back up. This generally prevents a client program from seeing partial failure, but it leads to a malady familiar to users of workstation networks: one server crashes, and many workstations—even those apparently having nothing to do with that server—freeze. Figuring out the chain of causality is very difficult, and even when the cause of the failure can be determined, the individual user can rarely do anything about it but wait. This kind of brittleness can be reduced only with strong policies and network administration aimed at reducing interdependencies. Nonetheless, hard mounts are now almost universal.

Note that because the NFS protocol is stateless, it assumes clients contain no state of interest with respect to the protocol; in other words, the server doesn't care what happens to the client. NFS is also a "pure" client-server protocol, which means that failure can be limited to three parties: the client, the server, or the network. This combination of features means that failure modes are simpler than in the more general case of peer-to-peer distributed object-oriented applications, where no such limitation on shared state can be made and where servers are themselves clients of other servers. Such peer-to-peer distributed applications can and will fail in far more intricate ways than are currently possible with NFS.

The limitations on the reliability and robustness of NFS have nothing to do with the implementation of the parts of that system. There is no "quality of service" that can be improved to eliminate the need for hard mounting NFS volumes. The problem can be traced to the interface upon which NFS is built, an interface that was designed for nondistributed computing where partial failure was not possible. The reliability of NFS cannot be changed without a change to that interface, a change that will reflect the distributed nature of the application.
-0xe1a
Mon, 2011-09-19, 22:17
#8
Re: Akka 2.x roadmap
Hi Alex,
that paper is a classic, and a great inspiration to do it right. When I read that paper I have the urge to tilt the head of the author ever so slightly, so that he would see right through the cracks between the irreconcilable paradigms and discover actors. The encapsulation offered by the simple mailbox-and-state approach lends itself nicely to writing software which is transport-agnostic in the sense of local vs. remote message sends. The problem of shared mutable state vanishes into nothingness if you follow actor best practices (though the whole Java/Scala platform leaves ample opportunity to aim dangerously close to your foot if you really want to).
In a certain sense, the cited paper is an anti-manifesto for Akka ;-)
Regards,
Roland
On Sep 19, 2011, at 22:35 , Alex Cruise wrote:
Roland Kuhn
Typesafe – Enterprise-Grade Scala from the Experts
twitter: @rolandkuhn
Tue, 2011-09-20, 07:07
#9
Re: Akka 2.x roadmap
Another good article on this issue is Steve Vinoski's Convenience over Correctness (http://steve.vinoski.net/pdf/IEEE-Convenience_Over_Correctness.pdf). He also warns against traditional RPC mechanisms that hide local and remote abstractions behind a uniform API. This is convenient for the user, but leaks heavily in non-trivial systems. Here's what he says:
"The illusion of RPC — the idea that a distributed call can be treated the same as a local call — ignores not only latency and partial failure but also the concerns that spell the difference between a scalable networked system with good performance capabilities and a nonscalable one whose performance characteristics are dictated entirely by the RPC infrastructure."
When I saw the phrase Location Transparency in Jonas' roadmap document for Akka 2.0, I was immediately reminded of both the Jim Waldo paper and the one by Steve Vinoski.
Steve of course goes on to say that the way to mitigate this is asynchronous messaging and the actor model of Erlang. And Akka is NOT about synchronous blocking RPC calls; it encourages the non-blocking philosophy all the way. Supervisors ensure that exceptions and crashes are handled properly, which I guess addresses Steve's concerns about partial failure as well.
Thanks.
--
Debasish Ghosh
http://manning.com/ghosh
Twttr: @debasishg
Blog: http://debasishg.blogspot.com
Code: http://github.com/debasishg
Tue, 2011-09-20, 08:47
#10
Re: Akka 2.x roadmap
On Tue, Sep 20, 2011 at 8:01 AM, Debasish Ghosh wrote:
> Another good article on this similar issue is Steve Vinoski's Convenience
> over Correctness
> (http://steve.vinoski.net/pdf/IEEE-Convenience_Over_Correctness.pdf). [snip]
You could say that Akka hides local calls behind a uniform interface
for remote calls.
-jason
Wed, 2011-09-21, 00:17
#11
Re: Akka 2.x roadmap
On Tue, Sep 20, 2011 at 6:35 AM, Alex Cruise wrote:
> On Mon, Sep 19, 2011 at 6:59 AM, Roland Kuhn wrote:
>>
>> One difficulty which we are solving right now is that for true
>> transparency, calling the methods on ActorRef must always have the same
>> semantics, irrespective of the actor’s physical location. This—in a
>> nutshell—is the reason why the feature is not released, yet.
>
> I recently read this paper, and it keeps knocking on the Akka part of my
> brain and demanding to be introduced:
> http://labs.oracle.com/techrep/1994/abstract-29.html
I always get a little stirred up when that paper is cited.
It talks a lot about the problems of "location transparency". And yet,
in my day-to-day work, I still find myself seeking that very thing,
over and over again, year after year. And I think the reason is that,
whatever the perils of transparency, "location opaqueness" is worse.
Having to treat some service as special just because it runs somewhere
else, and having to rewrite code to move things around, is a painful
waste.
On Tue, Sep 20, 2011 at 4:01 PM, Debasish Ghosh wrote:
> Another good article on this similar issue is Steve Vinoski's Convenience
> over Correctness
> (http://steve.vinoski.net/pdf/IEEE-Convenience_Over_Correctness.pdf). [snip]
> Steve of course goes on to say that the way to mitigate this is asynchronous
> messaging and the actor model of Erlang. And Akka is NOT about synchronous
> blocking RPC calls, it encourages the non-blocking philosophy all the way.
> Supervisors ensure that exceptions / crashes are handled properly, I guess
> that addresses Steve's concerns of partial failures as well.
We should not demonize synchronous interactions. They are common and
legitimate use cases. Asynchronous is not inherently better, and it's
not the solution to everything. When you need data that will come from
elsewhere, /you are going to have to wait for it/. Internally, use
whatever clever mechanism you like to /implement/ the call: async
messages, futures, continuations. But there's no escaping the
dependency upon the data being available to proceed (*).
* Other than moving the computation to the location containing the
data via some kind of mobile code.
-Ben
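Ben's point can be shown with plain scala.concurrent (deliberately no Akka here): the call can be implemented with an asynchronous message and a future, but a caller that needs the answer still ends up waiting for it.

```scala
import scala.concurrent.{Await, Future, Promise}
import scala.concurrent.duration._

// Stands in for a real remote round-trip whose reply arrives later;
// askRemote and its reply format are invented for this sketch.
def askRemote(msg: String): Future[String] = {
  val p = Promise[String]()
  new Thread(() => p.success(s"reply to $msg")).start()
  p.future
}

// The send is async under the hood, but the dependency on the data remains:
val reply = Await.result(askRemote("ping"), 5.seconds)
```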
Wed, 2011-09-21, 00:57
#12
Re: Akka 2.x roadmap
Akka embraces the unreliability of the network rather than pretending that it is not there.
Akka does **not** try to emulate shared state in the cluster and create a leaky illusion of being in-process like many other tools/products, which I think is fundamentally broken and is what the paper is referring to.
In Akka everything is designed for distribution from the beginning; if it is running in a local context, then that is only an optimization. The essence of distributed computing is asynchronous message passing, and Akka makes it first class. Add to this a fault-tolerance model that is designed for distribution and you have a very solid platform to build systems on.
/Jonas
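A stripped-down sketch of the mailbox-and-state idea described above, in plain Scala (this is NOT Akka's actual API; `ToyActor` is invented for illustration): the only way in is an asynchronous send, and the actor's state is touched by one thread, one message at a time.

```scala
import java.util.concurrent.{CountDownLatch, LinkedBlockingQueue}

// Toy sketch (not Akka): an actor is a mailbox plus a single worker
// thread that processes messages one at a time.
final class ToyActor[A](handler: A => Unit) {
  private val mailbox = new LinkedBlockingQueue[A]()
  private val worker = new Thread(new Runnable {
    def run(): Unit = while (true) handler(mailbox.take())
  })
  worker.setDaemon(true)
  worker.start()

  // Fire-and-forget send: enqueues and returns immediately.
  def !(msg: A): Unit = mailbox.put(msg)
}

object ToyActorDemo {
  def main(args: Array[String]): Unit = {
    val done = new CountDownLatch(3)
    var sum = 0 // safe: mutated only by the actor's worker thread
    val adder = new ToyActor[Int](n => { sum += n; done.countDown() })
    adder ! 1; adder ! 2; adder ! 3
    done.await() // latch gives happens-before, so sum is visible here
    println(sum) // prints 6
  }
}
```

Because all interaction goes through the mailbox, nothing about the `!` call says whether the handler runs in-process or on another machine, which is the location-transparency point above.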
On Mon, Sep 19, 2011 at 11:12 PM, Roland Kuhn <google@rkuhn.info> wrote:
Hi Alex,
that paper is a classic, and a great inspiration to do it right. When I read that paper I have the urge to tilt the head of the author ever so slightly, so that he would see right through the cracks between the irreconcilable paradigms and discover actors. The encapsulation offered by the simple mailbox-and-state approach lends itself nicely to writing software which is transport-agnostic in the sense of local vs. remote message sends. The problem of shared mutable state vanishes into nothingness if you follow actor best practices (though the whole Java/Scala platform leaves ample opportunity to aim dangerously close to your foot if you really want to).
In a certain sense the cited paper is an anti-manifesto for Akka ;-)
Regards,
Roland
On Sep 19, 2011, at 22:35, Alex Cruise wrote:
On Mon, Sep 19, 2011 at 6:59 AM, Roland Kuhn <google@rkuhn.info> wrote:
One difficulty which we are solving right now is that for true transparency, calling the methods on ActorRef must always have the same semantics, irrespective of the actor’s physical location. This—in a nutshell—is the reason why the feature is not released, yet.
I recently read this paper, and it keeps knocking on the Akka part of my brain and demanding to be introduced:
http://labs.oracle.com/techrep/1994/abstract-29.html
Here's a fun quote:

Lessons from NFS

We do not need to look far to see the consequences of ignoring the distinction between local and distributed computing at the interface level. NFS®, Sun's distributed computing file system [14], [15], is an example of a non-distributed application programmer interface (API) (open, read, write, close, etc.) re-implemented in a distributed way.

Before NFS and other network file systems, an error status returned from one of these calls indicated something rare: a full disk, or a catastrophe such as a disk crash. Most failures simply crashed the application along with the file system. Further, these errors generally reflected a situation that was either catastrophic for the program receiving the error or one that the user running the program could do something about.

NFS opened the door to partial failure within a file system. It has essentially two modes for dealing with an inaccessible file server: soft mounting and hard mounting. But since the designers of NFS were unwilling (for easily understandable reasons) to change the interface to the file system to reflect the new, distributed nature of file access, neither option is particularly robust.

Soft mounts expose network or server failure to the client program. Read and write operations return a failure status much more often than in the single-system case, and programs written with no allowance for these failures can easily corrupt the files used by the program. In the early days of NFS, system administrators tried to tune various parameters (time-out length, number of retries) to avoid these problems. These efforts failed. Today, soft mounts are seldom used, and when they are used, their use is generally restricted to read-only file systems or special applications.

Hard mounts mean that the application hangs until the server comes back up. This generally prevents a client program from seeing partial failure, but it leads to a malady familiar to users of workstation networks: one server crashes, and many workstations—even those apparently having nothing to do with that server—freeze. Figuring out the chain of causality is very difficult, and even when the cause of the failure can be determined, the individual user can rarely do anything about it but wait. This kind of brittleness can be reduced only with strong policies and network administration aimed at reducing interdependencies. Nonetheless, hard mounts are now almost universal.

Note that because the NFS protocol is stateless, it assumes clients contain no state of interest with respect to the protocol; in other words, the server doesn't care what happens to the client. NFS is also a "pure" client-server protocol, which means that failure can be limited to three parties: the client, the server, or the network. This combination of features means that failure modes are simpler than in the more general case of peer-to-peer distributed object-oriented applications where no such limitation on shared state can be made and where servers are themselves clients of other servers. Such peer-to-peer distributed applications can and will fail in far more intricate ways than are currently possible with NFS.

The limitations on the reliability and robustness of NFS have nothing to do with the implementation of the parts of that system. There is no "quality of service" that can be improved to eliminate the need for hard mounting NFS volumes. The problem can be traced to the interface upon which NFS is built, an interface that was designed for non-distributed computing where partial failure was not possible. The reliability of NFS cannot be changed without a change to that interface, a change that will reflect the distributed nature of the application.
-0xe1a
Roland Kuhn
Typesafe – Enterprise-Grade Scala from the Experts
twitter: @rolandkuhn
--
Jonas Bonér
CTO
Typesafe <http://www.typesafe.com/> - Enterprise-Grade Scala from the
Experts
Phone: +46 733 777 123
Twitter: @jboner <http://twitter.com/jboner>
Google+: http://gplus.to/jboner
Wed, 2011-09-21, 02:57
#13
Re: Akka 2.x roadmap
On Tue, Sep 20, 2011 at 20:15, Ben Hutchison wrote:
>
> We should not demonize synchronous interactions. They are common and
> legitimate use cases. Asynchronous is not inherently better, and its
> not the solution to everything. When you need data that will come from
> elsewhere, /you are going to have to wait for it/. Internally, use
> whatever clever mechanism you like to /implement/ the call, asynch
> messages, futures, continuations. But there's no escaping the
> dependency upon the data being available to proceed (*).
The thing is, there are algorithms possible with synchronous
interactions that are not possible _at all_ asynchronously. That's why
asynchronous communication needs timeouts -- it re-introduces
synchronicity by creating failures.
And that's a key aspect of actors and location transparency: you
_have_ to introduce timeouts and failures, which means the code will
have to deal with them. And that's exactly what Akka does: replies
may time out, and supervisors will deal with crashes that might arise
out of that -- even if your application runs on a
single-core CPU where none of that will ever happen, unless program
logic itself introduces it.
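The point above can be shown as a minimal sketch with standard-library Futures (names and values are invented for illustration): a reply that never arrives becomes a `TimeoutException`, so the caller is forced to write the failure-handling path explicitly.

```scala
import scala.concurrent.{Await, Future, Promise}
import scala.concurrent.duration._

object TimeoutDemo {
  def main(args: Array[String]): Unit = {
    // A reply that never arrives, e.g. the peer crashed or the
    // network swallowed the message.
    val reply: Future[String] = Promise[String]().future

    // The timeout re-introduces synchronicity as a failure the
    // caller must handle.
    val outcome =
      try Await.result(reply, 100.millis)
      catch { case _: java.util.concurrent.TimeoutException => "recovered: timed out" }

    println(outcome) // prints recovered: timed out
  }
}
```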
Sat, 2011-09-24, 22:47
#14
Re: Akka 2.x roadmap
On Monday 19 September 2011, Alex Cruise wrote:
> On Mon, Sep 19, 2011 at 6:59 AM, Roland Kuhn wrote:
> > One difficulty which we are solving right now is that for true
> > transparency, calling the methods on ActorRef must always have the
> > same semantics, irrespective of the actor’s physical location.
> > This—in a nutshell—is the reason why the feature is not released,
> > yet.
>
> I recently read this paper, and it keeps knocking on the Akka part of
> my brain and demanding to be introduced:
>
> http://labs.oracle.com/techrep/1994/abstract-29.html
>
> Here's a fun quote:
>
> Lessons from NFS
Here's a very real-world example of how this plays out in the life of a
professional programmer.
At a company I worked for about 10 years ago or so, we used Suns as our
back-end servers. One of our applications accessed Berkeley DB files
over NFS. One day a formerly reliable application began to exhibit odd
failures in which the BDB index files became corrupted. After much
painstaking source code analysis, we discovered that an NFS upgrade had
introduced asynchronous NFS operations. With this performance
improvement, NFS calls would be serviced asynchronously with the client
sometimes receiving error indications on a call later than the one that
actually encountered the error (no later than the close call).
Well, it turned out that if you filled up the file system (wrote beyond
the end of the file, which ordinarily grows the file as needed), you'd
get one of these deferred error responses. But the BDB code was very
carefully crafted to stage writes to the data area before writing an
index that referred to that data, only writing the index entry if the
data write succeeded. But if the error pertaining to the data write was
not delivered until a later call, the BDB code would write the index
entry that referred to a portion of the file that did not in fact
exist. Presto! Corrupt BDB file!
Randall Schulz
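Randall's war story suggests a general rule: with buffered or asynchronous I/O, flush() and close() must be inside the error check, because that is where deferred errors can surface. A hypothetical sketch in Scala (`writeAllOrFail` is invented for illustration):

```scala
import java.io.{FileWriter, IOException}

object SafeWrite {
  // With buffered or asynchronous I/O, a write error may only surface
  // at flush() or close(), so those calls must be part of the
  // "did my data make it?" check rather than ignored.
  def writeAllOrFail(path: String, data: String): Either[IOException, Unit] =
    try {
      val w = new FileWriter(path)
      try {
        w.write(data)
        w.flush() // deferred errors can surface here...
      } finally {
        w.close() // ...or here; an unchecked close() hides them
      }
      Right(())
    } catch {
      case e: IOException => Left(e)
    }
}
```

An index-then-data protocol like BDB's only stays safe if every call in this chain, including close, is confirmed before the dependent write is issued.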
Tue, 2011-09-27, 11:37
#15
Re: Akka 2.x roadmap
hi
thanks for the roadmap, it triggered quite some discussion here. However,
some indicative dates for the roadmap would be very nice to have, to
give a rough time schedule.
best
joseph
On Sat, Sep 24, 2011 at 11:41 PM, Randall R Schulz wrote:
> On Monday 19 September 2011, Alex Cruise wrote:
>> On Mon, Sep 19, 2011 at 6:59 AM, Roland Kuhn wrote:
>> > One difficulty which we are solving right now is that for true
>> > transparency, calling the methods on ActorRef must always have the
>> > same semantics, irrespective of the actor’s physical location.
>> > This—in a nutshell—is the reason why the feature is not released,
>> > yet.
>>
>> I recently read this paper, and it keeps knocking on the Akka part of
>> my brain and demanding to be introduced:
>>
>> http://labs.oracle.com/techrep/1994/abstract-29.html
>>
>> Here's a fun quote:
>>
>> Lessons from NFS
>
> Here's a very real-world example of how this plays out in the life of a
> professional programmer.
>
> At a company I worked for about 10 years ago or so, we used Suns as our
> back-end servers. One of our applications accessed Berkeley DB files
> over NFS. One day a formerly reliable application began to exhibit odd
> failures in which the BDB index files became corrupted. After much
> painstaking source code analysis, we discovered that an NFS upgrade had
> introduced asynchronous NFS operations. With this performance
> improvement, NFS calls would be serviced asynchronously with the client
> sometimes receiving error indications on a call later than the one that
> actually encountered the error (no later than the close call).
> Well, it turned out that if you filled up the file system (wrote beyond
> the end of the file, which ordinarily grows the file as needed), you'd
> get one of these deferred error responses. But the BDB code was very
> carefully crafted to stage writes to the data area before writing an
> index that referred to that data, only writing the index entry if the
> data write succeeded. But if the error pertaining to the data write was
> not delivered until a later call, the BDB code would write the index
> entry that referred to a portion of the file that did not in fact
> exist. Presto! Corrupt BDB file!
>
>
> Randall Schulz
>
Tue, 2011-10-04, 13:37
#16
Re: Akka 2.x roadmap
Added dates to the roadmap now.
--
Jonas Bonér
CTO
Typesafe <http://www.typesafe.com/> - Enterprise-Grade Scala from the
Experts
Phone: +46 733 777 123
Twitter: @jboner <http://twitter.com/jboner>
Google+: http://gplus.to/jboner