Closures and Concurrency
Mon, 2009-09-14, 20:15
Hi All,
I think there's a bug here, but there may be some way to do this that
I'm unaware of. Basically, I got to wondering: if I have a local var
that's accessed by two closures, each of which is run by a different
thread, can I get the Scala compiler to synchronize access in some way
(such as making it volatile in the object to which it gets promoted,
so it is visible to both closures)? Because if not, the Java memory
model won't guarantee that one thread will see the changes made by the
other thread.
Looking at the bytecodes generated, it didn't look good. A shared Int
for example will get stuck in a scala.runtime.IntRef, which looks like
this:
package scala.runtime;

public class IntRef implements java.io.Serializable {
    private static final long serialVersionUID = 1488197132022872888L;

    public int elem;
    public IntRef(int elem) { this.elem = elem; }
    public String toString() { return Integer.toString(elem); }
}
Public data is hard to synchronize. If I stick an @volatile annotation
on the local var, it has no effect. It just uses IntRef again.
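Spelled out by hand, that capture looks roughly like this (a sketch of the rewrite, not the exact code scalac generates):

// Sketch: the local var is lifted into a heap-allocated scala.runtime.IntRef,
// and every closure that touches it shares that same ref.
def byHand(): Int = {
  val buf = new scala.runtime.IntRef(0)   // was: var buf = 0
  val writer = () => { buf.elem = 42 }    // closure A mutates the shared ref
  val reader = () => buf.elem             // closure B reads the same ref
  writer()
  reader()                                // elem is not volatile, so no cross-thread visibility guarantee
}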
I wanted to try and catch the problem, but there was a Heisenberg
uncertainty problem: for me to know that thread A has written to the
variable before thread B tries to read it, I have to synchronize the
two threads. Which means they are synchronized, and so I can't catch
the problem. Running this code, for example, I was never able to
witness a failure (which doesn't mean I couldn't in theory, but my
guess is that the waitForBeat synchronization is sufficient to ensure
the ping and pong threads always see each other's writes to buf):
import org.scalatest.FunSuite
import org.scalatest.concurrent.ConductorMethods
import org.scalatest.matchers.ShouldMatchers._
import java.util.concurrent.ArrayBlockingQueue

class HeisenbergSuite extends FunSuite with ConductorMethods {

  test("can't see closure concurrency problem by looking directly at it") {

    var buf = 0

    thread("ping") {
      for (i <- 2 to 100000 by 2) {
        buf = i
        waitForBeat(i)
        buf should be (i + 1)
      }
    }

    thread("pong") {
      for (i <- 1 to 100000 by 2) {
        waitForBeat(i)
        buf should be (i + 1)
        buf = i + 2
      }
    }
  }
}
However, just as I was scratching my head about this, Heinz Kabutz
pinged me on Skype and provided four Java classes that show this kind
of problem in Java (these were based on some code sent to Heinz by
George Georgovassilis). I translated them into Scala, and voila, you
can see it:
import org.scalatest.FunSuite
import org.scalatest.concurrent.ConductorMethods
import org.scalatest.matchers.ShouldMatchers._
import java.util.concurrent.ArrayBlockingQueue

class ClosureProblemSuite extends FunSuite with ConductorMethods {

  test("closure access to mutable shared state isn't synchronized") {

    var buf = 0

    thread("ping") {
      var ok = 0
      var mistake = 0
      while (true) {
        buf = 42
        if (buf != 42)
          mistake += 1
        else {
          ok += 1
          println("ok: " + ok + " mistake: " + mistake)
        }
      }
    }

    thread("pong") {
      while (true) {
        buf = 0
      }
    }
  }
}
If you run this with:
java -server -cp scalatest-1.0-SNAPSHOT.jar:scala-library.jar
org.scalatest.tools.Runner -p . -o -s ClosureProblemSuite
What you'll see is that eventually the number of mistakes stays
constant. The reason is that because the elem field in IntRef isn't
private and volatile, HotSpot will after a while perform an
optimization that prevents the ping thread from seeing pong's updates.
What probably needs to be done, if there is no other way to do this,
is to define volatile versions of the Ref classes, VolatileIntRef for
IntRef, VolatileObjectRef for ObjectRef, etc., which get used when
there's a @volatile annotation on a local variable that gets promoted
to the heap for closures to access. Let me know if there's some
way to accomplish this currently (other than putting buf in an
object myself and making it private and volatile there). If not, I'll
file an enhancement request.
Thanks.
Bill
----
Bill Venners
Artima, Inc.
http://www.artima.com
Mon, 2009-09-14, 20:57
#2
Re: Closures and Concurrency
Try using 64-bit values for higher failure rate, even on 64-bit hardware.
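For example, here is a sketch of the kind of failure a non-volatile 64-bit value makes more likely (plain threads rather than the Conductor DSL; whether a torn read actually shows up depends on the JVM and hardware):

object LongTearing {
  var shared = 0L  // a plain (non-volatile) 64-bit field shared by two threads

  def main(args: Array[String]): Unit = {
    val writer = new Thread(new Runnable {
      def run(): Unit = while (true) { shared = 0L; shared = -1L }
    })
    writer.setDaemon(true)
    writer.start()
    while (true) {
      val v = shared
      if (v != 0L && v != -1L) {             // neither written value: a torn read
        println("torn read: 0x" + java.lang.Long.toHexString(v))
        return
      }
    }
  }
}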
Mon, 2009-09-14, 21:07
#3
Re: Closures and Concurrency
Hi David,
I don't think so. synchronized is defined on AnyRef, not Any, so this
technique couldn't be used for value types. Oh, maybe it is getting
converted to RichInt to get the synchronized method?
Bill
On Mon, Sep 14, 2009 at 12:23 PM, David Pollak wrote:
> Bill,
> It should be legal to do:
> buf.synchronized {
> ...
> }
>
> That would deal with the JVM memory model issue. In 2.7.5, it's not looking
> like the compiler understands that the references to buf are being turned
> into IntRefs, but that seems to be a bug in the compiler rather than
> anything else.
> My 2 cents.
> Thanks,
> David
Mon, 2009-09-14, 21:17
#4
Re: Closures and Concurrency
Even if it was translated to RichInt, this would still be
synchronizing on different locks, so there would be no visibility effect.
I assume you want the closures to run in parallel, right? Otherwise
you could add synchronization around them (see the sketch below).
Tough issue. I'm very interested in seeing what the Scala approach to
this will be.
Dimitris
PS: isn't this issue (the potential to render a local variable
thread-unsafe through closures) one of the main reasons that closures
were dropped from Java? I vaguely recall so.
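As a concrete illustration of the "add synchronization around them" option, here is a sketch using plain threads (not the Conductor DSL): if every access to the captured var goes through one shared lock, the memory model guarantees visibility, and here atomicity of the updates as well.

object SharedLockSketch {
  def main(args: Array[String]): Unit = {
    var buf = 0                                 // captured into an IntRef as usual
    val lock = new AnyRef                       // one monitor shared by both closures

    def inc() = lock.synchronized { buf += 1 }  // every access goes through the same lock
    def dec() = lock.synchronized { buf -= 1 }

    val ping = new Thread(new Runnable { def run(): Unit = for (_ <- 1 to 100000) inc() })
    val pong = new Thread(new Runnable { def run(): Unit = for (_ <- 1 to 100000) dec() })
    ping.start(); pong.start()
    ping.join(); pong.join()
    println(lock.synchronized(buf))             // reliably prints 0
  }
}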
Mon, 2009-09-14, 21:27
#5
Re: Closures and Concurrency
On Mon, Sep 14, 2009 at 3:10 PM, Jim Andreou <jim.andreou@gmail.com> wrote:
> Even if it was translated to RichInt, this would still be
> synchronizing on different locks, so no visibility effect.
Is that true? I'm fairly convinced that while locking wouldn't work, visibility would.
Not that I would recommend that anyway, just saying...
Mon, 2009-09-14, 21:47
#6
Re: Closures and Concurrency
Yes, visibility is guaranteed (at least by the memory model) only if
synchronization uses the same lock (I know I agree with myself here,
this is just argument by repetition). The same goes for volatile. In
both cases you may actually get visibility in practice; it's just not
formally guaranteed, and an uber-smart JVM can freely break such code.
Tue, 2009-09-15, 00:07
#7
Re: Closures and Concurrency
Hi All,
You're right about visibility requiring the same lock. It's moot
though. I just tried this and:
scala> 7.synchronized { println("hi") }
<console>:5: error: type mismatch;
 found   : Int
 required: ?{val synchronized: ?}
Note that implicit conversions are not applicable because they are ambiguous:
 both method int2Integer in object Predef of type (Int)java.lang.Integer
 and method intWrapper in object Predef of type (Int)scala.runtime.RichInt
 are possible conversion functions from Int to ?{val synchronized: ?}
       7.synchronized { println("hi") }
       ^
It doesn't work anyway because of ambiguous implicits. Good thing,
because it might confuse people into thinking they are synchronizing
on a var when they are really locking a one-shot RichInt wrapper that
no one else will ever be able to lock. If both threads grabbed a
common lock (it wouldn't matter which one) before accessing the buf
var, then all would be well. That's why I couldn't detect the problem
in the first example I showed, which called waitForBeat on the
Conductor.
Volatile has some pretty big restrictions, and honestly I think it
would be pretty rare that people would want to do this. But it would
probably show up occasionally when people are using Conductor to test
(intended-to-be) thread-safe classes and APIs. One thing volatile
doesn't do is make compound read-modify-write operations atomic. So if I had a:
buf += 1
in my test, then volatile wouldn't work anyway, and I'd have to make a
class. The way to get it to work now is to make a class that holds buf,
and it is small and quick to do that in Scala:
class VolatileIntRef(@volatile private var i: Int) {
def buf = i
def buf_=(newValue: Int) = i = newValue
}
And I'd use it like this:
import org.scalatest.FunSuite
import org.scalatest.concurrent.ConductorMethods
import org.scalatest.matchers.ShouldMatchers._
import java.util.concurrent.ArrayBlockingQueue
class ClosureProblemSuite extends FunSuite with ConductorMethods {
test("closure access to mutable shared state *is* synchronized if it
is volatile") {
val ref = new VolatileIntRef(0)
import ref._
thread("ping") {
var ok = 0
var mistake = 0
while (true) {
buf = 42
if (buf != 42)
mistake += 1
else {
ok += 1
println("ok: " + ok + " mistake: " + mistake)
}
}
}
thread("pong") {
while (true) {
buf = 0
}
}
}
}
That isn't too painful, and it could be that's the way Scala should
stay, because it keeps the language simpler: i.e., there are no
visibility guarantees for reassignable vars captured by closures in
Scala. But it seems reasonable to consider letting users just mark
a local var as volatile and have the compiler use a VolatileIntRef-like
class for them:
@volatile var buf = 0
Bill
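For concreteness, such a compiler-supplied ref class could look something like the following sketch, mirroring IntRef but with a volatile field (this reuses the VolatileIntRef name from the proposal; it is not an existing runtime class here, and it is distinct from the test wrapper above):

// Sketch only (hypothetical runtime class): a volatile counterpart the compiler
// could substitute for IntRef when the captured local is annotated @volatile.
class VolatileIntRef(initial: Int) extends java.io.Serializable {
  @volatile var elem: Int = initial
  override def toString: String = elem.toString
}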
Tue, 2009-09-15, 07:17
#8
Re: Closures and Concurrency
All,
One of the claims of Scala
is that it treats types (and their values) in a uniform way
[ example: all values are objects ].
So, with this uniformity claim in mind, I'd vote
for "@volatile var buf = 0" to have the semantics
we'd expect (so that there is no need to define a wrapper)
Luc
--
__~O
-\ <,
(*)/ (*)
reality goes far beyond imagination
Tue, 2009-09-15, 15:57
#9
Re: Closures and Concurrency
On thinking about my suggestion that synchronized should work... I was wrong.
@volatile seems like the right answer... and it can be implemented with existing JVM classes. If a var that's going to be promoted by Scala to a shared ref is marked as @volatile, the shared Ref object can be one in the java.util.concurrent.atomic package... e.g. http://java.sun.com/javase/6/docs/api/java/util/concurrent/atomic/AtomicInteger.html
That way, there are no additional classes in the Scala distribution.
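Done by hand today, that suggestion amounts to something like the following sketch (plain threads rather than the Conductor DSL, with an AtomicInteger standing in for the captured var):

import java.util.concurrent.atomic.AtomicInteger

object AtomicCaptureSketch {
  def main(args: Array[String]): Unit = {
    val buf = new AtomicInteger(0)          // stands in for "@volatile var buf = 0"
    val ping = new Thread(new Runnable {
      def run(): Unit = {
        var ok = 0
        var mistake = 0
        for (_ <- 1 to 1000000) {
          buf.set(42)
          if (buf.get != 42) mistake += 1 else ok += 1
        }
        // buf.get is a volatile read, so the loop can never be optimized into
        // reading a stale cached value the way a plain IntRef.elem read can be.
        println("ok: " + ok + " mistake: " + mistake)
      }
    })
    val pong = new Thread(new Runnable {
      def run(): Unit = for (_ <- 1 to 1000000) buf.set(0)
    })
    ping.start(); pong.start()
    ping.join(); pong.join()
  }
}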
--
Lift, the simply functional web framework http://liftweb.net
Beginning Scala http://www.apress.com/book/view/1430219890
Follow me: http://twitter.com/dpp
Git some: http://github.com/dpp
Tue, 2009-09-15, 18:17
#10
Re: Closures and Concurrency
Hi Luc,
On Mon, Sep 14, 2009 at 11:10 PM, Luc Duponcheel wrote:
> All,
>
> One of the claims of Scala
> is that it treats types (and their values) in a uniform way
> [ example: all values are objects ].
>
> So, with this uniformity claim in mind, I'd vote
> for "@volatile var buf = 0" to have the semantics
> we'd expect (so that it is not needed to define a wrapper)
>
It can be argued that saying local variables can never be volatile is
treating them uniformly. That's what Java does, but there are two
differences. First, volatile in Java is a keyword, so if you try to
put volatile on a local variable in Java it won't compile. In Scala,
volatile is an annotation, and it does compile on local variables; it
just doesn't have any effect. Second, because Java doesn't have
full-on closures (any free variables referenced from inside an
anonymous inner class must be final), Java doesn't have this issue at
all, whereas Scala, with real closures, does. I do think it's rather
rare that people would need it, but when someone sees this:
@volatile var buf = 0
it sure looks like buf is volatile. This compiles today, but buf
*isn't* volatile. So it violates the principle of least surprise to
some extent.
Bill
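A small sketch of the asymmetry described above: the same annotation is honored on a member var but silently ignored on a captured local (as of the compilers discussed in this thread):

// Honored here: the generated field is volatile.
class Holder {
  @volatile var field = 0
}

// Compiles here too, but the local is still captured in a plain IntRef,
// so the two closures get no visibility guarantee.
object LocalCapture {
  def closures(): (() => Unit, () => Int) = {
    @volatile var buf = 0
    (() => { buf = 42 }, () => buf)
  }
}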
Tue, 2009-09-15, 19:57
#11
Re: Closures and Concurrency
Hi David,
Reusing the atomic classes is a good idea.
Bill
Thu, 2009-10-01, 06:07
#12
Re: Closures and Concurrency
It's arguable that if you intend to use a local variable on multiple
threads, you should be required to choose an appropriate synchronization
mechanism. When doing so, you need to be aware of the additional methods
required to use the value appropriately:
val changes = new AtomicInteger()
instead of
@volatile var changes = 0
and then
changes set newValue
instead of
changes = newValue
A variety of methods on AtomicInteger exist to properly guarantee thread safety. With @volatile, how would you guarantee:
changes = changes + x
You can't unless you carry along the implementation of AtomicInteger. At the compiler level, translating @volatile to operations on AtomicInteger would have to recognize common expressions and invoke the "right" methods on the Atomic classes. You can't translate the code above in a simple manner, like:
changes.set(changes.get() + x)
It needs to be translated as:
changes.addAndGet(x)
RJ
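To make the get/set-versus-addAndGet distinction concrete, here is a sketch (not from the thread) showing that volatile-style get/set can lose concurrent updates while the atomic read-modify-write cannot:

import java.util.concurrent.atomic.AtomicInteger

object LostUpdateSketch {
  def main(args: Array[String]): Unit = {
    val viaGetSet    = new AtomicInteger(0)  // used only through get/set, i.e. volatile semantics
    val viaAddAndGet = new AtomicInteger(0)

    def worker = new Runnable {
      def run(): Unit = for (_ <- 1 to 100000) {
        viaGetSet.set(viaGetSet.get + 1)     // read and write are two separate steps
        viaAddAndGet.addAndGet(1)            // one atomic read-modify-write
      }
    }
    val threads = List(new Thread(worker), new Thread(worker))
    threads.foreach(_.start())
    threads.foreach(_.join())
    println("get/set:   " + viaGetSet.get)     // usually less than 200000: updates were lost
    println("addAndGet: " + viaAddAndGet.get)  // always 200000
  }
}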
Thu, 2009-10-01, 08:37
#13
Re: Closures and Concurrency
Hi Ross,
On Wed, Sep 30, 2009 at 10:05 PM, Ross Judson wrote:
> It's arguable that if you intend to use a local variable on multiple
> threads, you should be required to choose an appropriate synchronization
> mechanism. When doing so, you need to be aware of additional methods
> required to use the value appropriately.
>
> val changes = new AtomicInteger()
>
> instead of
>
> @volatile var changes = 0
>
> and then
> changes set newValue
>
> instead of
> changes = newValue
>
Yes. The compiler already does that kind of thing on IntRef, though
IntRef doesn't have getters and setters, just a public field.
> A variety of methods on AtomicInteger exist to properly guarantee thread
> safety. With @volatile, how would you guarantee:
>
> changes = changes + x
>
> You can't unless you carry along the implementation of AtomicInteger. At the
> compiler level, translating @volatile to operations on AtomicInteger would
> have to recognize common expressions and invoke the "right" methods on the
> Atomic classes. You can't translate the code above in a simple manner, like:
>
> changes.set(changes.get() + x)
>
> It needs to be translated as:
>
> changes.addAndGet(x)
>
I think you mean incrementAndGet, but regardless, that's not what
volatile means. Volatile doesn't offer any atomic operations, just a
guarantee that threads will read what other threads write. But if you
want to increment a variable as one atomic operation, then you can't
use volatile. So I think all the compiler would need to do is call set
and get, because if it were to do this kind of thing, it should stick
100% to what volatile means elsewhere.
But your initial point is correct: it can be argued that the behavior
we have now is OK and people just need to take care of making things
work themselves. That's what I do right now in my multi-threaded
tests: I make my own atomic integers. The problem is that because
volatile in Scala is an annotation, you can put it on a local
variable and it will compile. So it *looks* like it is volatile. In
Java, because volatile is a keyword, it won't compile if you try to
put it on a local variable (and of course it wouldn't make sense,
because Java doesn't have true closures anyway).
Bill
Thu, 2009-10-01, 18:47
#14
Re: Closures and Concurrency
Yes, I think the bug is that variables marked volatile might not actually be volatile.
Scala doesn't do any concurrency hand-holding anywhere else. I think it would be inappropriate to make users' variables atomic without their consent. If they need atomic variables, they should use them explicitly. If they need volatile variables, they should be able to have them. If they specify neither, then they should be allowed to shoot themselves in the foot.
--j
Thu, 2009-10-01, 18:57
#15
Re: Closures and Concurrency
Is there a trac ticket for this?
Thu, 2009-10-01, 19:27
#16
Re: Closures and Concurrency
Hi Jorge,
I didn't make one yet; I was collecting feedback. I'll file an
enhancement request ticket for this later today.
Bill
Thu, 2009-10-01, 19:47
#17
Re: Closures and Concurrency
I think a ticket is in order. Lazy values, for instance, are thread-safe, so either @volatile locals are indeed volatile, or an error should be reported.
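For instance, this quick sketch (made-up names) prints "initializing once" a
single time no matter how many threads race to force the lazy val, because
the compiler guards the initialization with synchronization:

object LazyDemo {
  lazy val expensive: Int = {             // initialization is guarded by the compiler
    println("initializing once")
    42
  }

  def main(args: Array[String]) {
    val threads = (1 to 4).map(_ => new Thread(new Runnable {
      def run() { println(expensive) }
    }))
    threads.foreach(_.start())
    threads.foreach(_.join())
  }
}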
iulian
--
« I hate the mountains: they hide the scenery »
Alphonse Allais
Thu, 2009-10-01, 20:37
#18
Re: Closures and Concurrency
Hi All,
Ticket submitted:
https://lampsvn.epfl.ch/trac/scala/ticket/2424
Thanks.
Bill
On Thu, Oct 1, 2009 at 11:41 AM, Iulian Dragos wrote:
> I think a ticket is in order. Lazy values, for instance, are thread-safe, so
> either @volatile locals are indeed volatile, or an error should be reported.
> iulian
It should be legal to do:
buf.synchronized { ... }
That would deal with the JVM memory model issue. In 2.7.5, it doesn't look like the compiler understands that the references to buf are being turned into IntRefs, but that seems to be a bug in the compiler rather than anything else.
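Until something like that works on the lifted reference, one portable
workaround is to capture an explicit lock object next to the variable and
put every read and write under it (made-up names below); synchronizing on
the same monitor gives you the happens-before edges the memory model needs:

object PingPongWithLock {
  def main(args: Array[String]) {
    val lock = new AnyRef                     // one monitor shared by every access to buf
    var buf = 0

    val ping = new Runnable {
      def run() {
        lock.synchronized { buf = 42 }        // write under the lock
        val seen = lock.synchronized { buf }  // read under the same lock
        println("ping saw " + seen)
      }
    }
    val pong = new Runnable {
      def run() { lock.synchronized { buf = 0 } }
    }
    new Thread(ping).start()
    new Thread(pong).start()
  }
}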
My 2 cents.
Thanks,
David
On Mon, Sep 14, 2009 at 12:14 PM, Bill Venners <bill@artima.com> wrote:
--
Lift, the simply functional web framework http://liftweb.net
Beginning Scala http://www.apress.com/book/view/1430219890
Follow me: http://twitter.com/dpp
Git some: http://github.com/dpp