Scala 3
This is the documentation for the Scala standard library.
Package structure
The scala package contains core types like Int, Float, Array or Option, which are accessible in all Scala compilation units without explicit qualification or imports.
Notable packages include:
- scala.collection and its sub-packages contain Scala's collections framework
  - scala.collection.immutable - Immutable, sequential data-structures such as Vector, List, Range, HashMap or HashSet
  - scala.collection.mutable - Mutable, sequential data-structures such as ArrayBuffer, StringBuilder, HashMap or HashSet
  - scala.collection.concurrent - Mutable, concurrent data-structures such as TrieMap
- scala.concurrent - Primitives for concurrent programming such as Futures and Promises
- scala.io - Input and output operations
- scala.math - Basic math functions and additional numeric types like BigInt and BigDecimal
- scala.sys - Interaction with other processes and the operating system
Other packages exist. See the complete list on the right.
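For a quick, informal illustration (not part of the package listing above), a few of these types in use:
import scala.collection.mutable

// Immutable collections such as List and Vector are available without an import.
val primes: List[Int] = List(2, 3, 5, 7)
val squares: Vector[Int] = Vector(1, 4, 9, 16)

// Mutable variants live under scala.collection.mutable.
val buffer = mutable.ArrayBuffer("a", "b")
buffer += "c" // buffer is now ArrayBuffer(a, b, c)

// Additional numeric types from scala.math.
val big: BigInt = BigInt(2).pow(64)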
Additional parts of the standard library are shipped as separate libraries. These include:
- scala.reflect - Scala's reflection API (scala-reflect.jar)
- scala.xml - XML parsing, manipulation, and serialization (scala-xml.jar)
- scala.collection.parallel - Parallel collections (scala-parallel-collections.jar)
- scala.util.parsing - Parser combinators (scala-parser-combinators.jar)
- scala.swing - A convenient wrapper around Java's GUI framework called Swing (scala-swing.jar)
Automatic imports
Identifiers in the scala package and the scala.Predef object are always in scope by default.
Some of these identifiers are type aliases provided as shortcuts to commonly used classes. For example, List is an alias for scala.collection.immutable.List.
Other aliases refer to classes provided by the underlying platform. For example, on the JVM, String is an alias for java.lang.String.
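As a small illustration of these aliases in use (assuming the JVM platform for String):
// List in the default scope is scala.collection.immutable.List
val xs: scala.collection.immutable.List[Int] = List(1, 2, 3)

// On the JVM, String is java.lang.String, so Java's String API is available directly
val s: java.lang.String = "hello"
val upper = s.toUpperCase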
Packages
Core Scala types. They are always available without an explicit import.
When defining a field, the Scala compiler creates up to four accessors for it: a getter, a setter, and, if the field is annotated with @BeanProperty, a bean getter and a bean setter.
For instance in the following class definition
class C(@myAnnot @BeanProperty var c: Int)
there are six entities which can carry the annotation @myAnnot: the constructor parameter, the generated field and the four accessors.
By default, annotations on (val-, var- or plain) constructor parameters end up on the parameter, not on any other entity. Annotations on fields by default only end up on the field.
The meta-annotations in package scala.annotation.meta are used to control where annotations on fields and class parameters are copied. This is done by annotating either the annotation type or the annotation class with one or several of the meta-annotations in this package.
Annotating the annotation type
The target meta-annotations can be put on the annotation type when instantiating the annotation. In the following example, the annotation @Id will be added only to the bean getter getX.
import javax.persistence.Id
import scala.annotation.meta.beanGetter
import scala.beans.BeanProperty

class A {
  @(Id @beanGetter) @BeanProperty val x = 0
}
In order to annotate the field as well, the meta-annotation @field would need to be added.
The syntax can be improved using a type alias:
object ScalaJPA {
  type Id = javax.persistence.Id @beanGetter
}

import ScalaJPA.Id

class A {
  @Id @BeanProperty val x = 0
}
Annotating the annotation class
For annotations defined in Scala, a default target can be specified in the annotation class itself, for example
@getter
class myAnnotation extends Annotation
This only changes the default target for the annotation myAnnotation. When instantiating the annotation, the target can still be specified as described in the last section.
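As a minimal sketch of this (the annotation name traced below is made up for illustration), a Scala-defined annotation whose default target is the getter:
import scala.annotation.Annotation
import scala.annotation.meta.getter

// The @getter meta-annotation makes the generated getter the default target.
@getter
class traced extends Annotation

class Account {
  @traced val balance: Int = 0 // @traced ends up on the generated getter, not the field
}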
This package object contains primitives for concurrent and parallel programming.
Guide
A more detailed guide to Futures and Promises, including discussion and examples, can be found at https://docs.scala-lang.org/overviews/core/futures.html.
Common Imports
When working with Futures, you will often find that importing the whole concurrent package is convenient:
import scala.concurrent._
When using things like Futures, it is often required to have an implicit ExecutionContext in scope. The general advice for these implicits is as follows.
If the code in question is a class or method definition, and no ExecutionContext is available, request one from the caller by adding an implicit parameter list:
def myMethod(myParam: MyType)(implicit ec: ExecutionContext) = …
// Or
class MyClass(myParam: MyType)(implicit ec: ExecutionContext) { … }
This allows the caller of the method, or creator of the instance of the class, to decide which ExecutionContext should be used.
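For example, a caller might supply the context explicitly (the method name fetchLength below is only illustrative):
import scala.concurrent.{ExecutionContext, Future}

def fetchLength(s: String)(implicit ec: ExecutionContext): Future[Int] =
  Future(s.length)

// The caller chooses which ExecutionContext runs the computation,
// here by passing one explicitly:
val result: Future[Int] = fetchLength("hello")(ExecutionContext.global)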
For typical REPL usage and experimentation, importing the global ExecutionContext is often desired.
import scala.concurrent.ExecutionContext.Implicits.global
Specifying Durations
Operations often require a duration to be specified. A duration DSL is available to make defining these easier:
import scala.concurrent.duration._
val d: Duration = 10.seconds
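A few more forms the DSL accepts, for illustration:
import scala.concurrent.duration._

val short: FiniteDuration = 500.millis
val long: FiniteDuration = 2.minutes
val unbounded: Duration = Duration.Inf // an infinite Duration, e.g. for Await in tests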
Using Futures For Non-blocking Computation
Basic use of futures is easy with the factory method on Future, which executes a provided function asynchronously, handing you back a future result of that function without blocking the current thread. In order to create the Future you will need either an implicit or explicit ExecutionContext to be provided:
import java.nio.file.{Files, Paths}
import scala.jdk.CollectionConverters._
import scala.concurrent._
import ExecutionContext.Implicits.global // implicit execution context

val firstZebra: Future[Int] = Future {
  val words = Files.readAllLines(Paths.get("/etc/dictionaries-common/words")).asScala
  words.indexOf("zebra")
}
Avoid Blocking
Although blocking is possible in order to await results (with a mandatory timeout duration):
import scala.concurrent.duration._
Await.result(firstZebra, 10.seconds)
and although this is sometimes necessary to do, in particular for testing purposes, blocking in general is discouraged when working with Futures and concurrency in order to avoid potential deadlocks and improve performance. Instead, use callbacks or combinators to remain in the future domain:
// assuming firstAardvark has been defined analogously to firstZebra
val animalRange: Future[Int] = for {
  aardvark <- firstAardvark
  zebra <- firstZebra
} yield zebra - aardvark

animalRange.foreach { range =>
  if (range > 500000) println("It's a long way from Aardvark to Zebra")
}
The jdk package contains utilities to interact with JDK classes.
This package offers a number of converters that can wrap or copy types from the Scala library to equivalent types in the JDK class library and vice versa:
- CollectionConverters, converting collections like scala.collection.Seq, scala.collection.Map, scala.collection.Set, scala.collection.mutable.Buffer, scala.collection.Iterator and scala.collection.Iterable to their JDK counterparts
- OptionConverters, converting between Option and java.util.Optional and primitive variations
- StreamConverters, to create JDK Streams from Scala collections
- DurationConverters, for conversions between Scala's scala.concurrent.duration.FiniteDuration and java.time.Duration
- FunctionConverters, from Scala functions to java.util.function.Function, java.util.function.UnaryOperator, java.util.function.Consumer and java.util.function.Predicate, as well as primitive variations and bi-variations
By convention, converters that wrap an object to provide a different interface to the same underlying data structure use .asScala and .asJava extension methods, whereas converters that copy the underlying data structure use .toScala and .toJava.
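A small sketch of this convention in use:
import scala.jdk.CollectionConverters._
import scala.jdk.OptionConverters._

// Wrapping converters: .asJava / .asScala expose the same underlying data
val javaList: java.util.List[Int] = List(1, 2, 3).asJava
val scalaBuffer = javaList.asScala // a Buffer view over the same list

// Copying converters: .toJava / .toScala create a new value
val optional: java.util.Optional[String] = Some("x").toJava
val option: Option[String] = optional.toScala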
In the javaapi package, the same converters can be found with a Java-friendly interface that doesn't rely on implicit enrichments.
Additionally, this package offers Accumulators, capable of efficiently traversing JDK Streams.
The package object scala.math contains methods for performing basic numeric operations such as elementary exponential, logarithmic, root and trigonometric functions.
All methods forward to java.lang.Math unless otherwise noted.
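For instance, a few of the forwarded methods and the additional numeric types in use:
import scala.math._

val hypotenuse = sqrt(pow(3.0, 2) + pow(4.0, 2)) // 5.0
val angle = atan2(1.0, 1.0)                      // Pi / 4

// Arbitrary-precision numeric types
val big = BigInt("123456789012345678901234567890")
val precise = BigDecimal("3.14159265358979323846")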
The package object scala.sys contains methods for reading and altering core aspects of the virtual machine as well as the world outside of it.
This package handles the execution of external processes. The contents of this package can be divided into three groups, according to their responsibilities:
- Indicating what to run and how to run it.
- Handling a process's input and output.
- Running the process.
For simple uses, the only group that matters is the first one. Running an external command can be as simple as "ls".!, or as complex as building a pipeline of commands such as this:
import scala.sys.process._
"ls" #| "grep .scala" #&& Seq("sh", "-c", "scalac *.scala") #|| "echo nothing found" lazyLines
We describe below the general concepts and architecture of the package, and then take a closer look at each of the categories mentioned above.
Concepts and Architecture
The underlying basis for the whole package is Java's Process and ProcessBuilder classes. While there's no need to use these Java classes, they impose boundaries on what is possible. One cannot, for instance, retrieve a process id for whatever is executing.
When executing an external process, one can provide a command's name, arguments to it, the directory in which it will be executed and what environment variables will be set. For each executing process, one can feed its standard input through a java.io.OutputStream, and read from its standard output and standard error through a pair of java.io.InputStream. One can wait until a process finishes execution and then retrieve its return value, or one can kill an executing process. Everything else must be built on those features.
This package provides a DSL for running and chaining such processes, mimicking Unix shells' ability to pipe output from one process to the input of another, or control the execution of further processes based on the return status of the previous one.
In addition to this DSL, this package also provides a few ways of controlling input and output of these processes, going from simple and easy to use to complex and flexible.
When processes are composed, a new ProcessBuilder is created which, when run, will execute the ProcessBuilder instances it is composed of according to the manner of the composition. If piping one process to another, they'll be executed simultaneously, and each will be passed a ProcessIO that will copy the output of one to the input of the other.
What to Run and How
The central component of the process execution DSL is the scala.sys.process.ProcessBuilder trait. It is ProcessBuilder that implements the process execution DSL, creates the scala.sys.process.Process that will handle the execution, and returns the results of such execution to the caller. We can see that DSL in the introductory example: #|, #&& and #|| are methods on ProcessBuilder used to create a new ProcessBuilder through composition.
One creates a ProcessBuilder either through factories on the scala.sys.process.Process's companion object, or through implicit conversions available in this package object itself. Implicitly, each process is created either out of a String, with arguments separated by spaces -- no escaping of spaces is possible -- or out of a scala.collection.Seq, where the first element represents the command name, and the remaining elements are arguments to it. In this latter case, arguments may contain spaces.
To further control how the process will be run, such as specifying the directory in which it will be run, see the factories on scala.sys.process.Process's companion object.
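For instance, a minimal sketch using one of those factories to set the working directory and an extra environment variable (the command and values below are only illustrative):
import java.io.File
import scala.sys.process._

// Run "ls -l" in /tmp with an extra environment variable set
val exitCode = Process(Seq("ls", "-l"), new File("/tmp"), "MY_FLAG" -> "1").!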
Once the desired ProcessBuilder is available, it can be executed in different ways, depending on how one desires to control its I/O, and what kind of result one wishes for:
- Return status of the process (! methods)
- Output of the process as a String (!! methods)
- Continuous output of the process as a LazyList[String] (lazyLines methods)
- The Process representing it (run methods)
Some simple examples of these methods:
import scala.sys.process._
// This uses ! to get the exit code
def fileExists(name: String) = Seq("test", "-f", name).! == 0
// This uses !! to get the whole result as a string
val dirContents = "ls".!!
// This "fire-and-forgets" the method, which can be lazily read through
// a LazyList[String]
def sourceFilesAt(baseDir: String): LazyList[String] = {
  val cmd = Seq("find", baseDir, "-name", "*.scala", "-type", "f")
  cmd.lazyLines
}
We'll see more details about controlling I/O of the process in the next section.
Handling Input and Output
In the underlying Java model, once a Process has been started, one can get java.io.InputStream and java.io.OutputStream representing its output and input respectively. That is, what one writes to an OutputStream is turned into input to the process, and the output of a process can be read from an InputStream -- of which there are two, one representing normal output, and the other representing error output.
This model creates a difficulty, which is that the code responsible for actually running the external processes is the one that has to take decisions about how to handle its I/O.
This package presents an alternative model: the I/O of a running process is controlled by a scala.sys.process.ProcessIO object, which can be passed to the code that runs the external process. A ProcessIO will have direct access to the java streams associated with the process I/O. It must, however, close these streams afterwards.
Simpler abstractions are available, however. The components of this package that handle I/O are:
- scala.sys.process.ProcessIO: provides the low level abstraction.
- scala.sys.process.ProcessLogger: provides a higher level abstraction for output, and can be created through its companion object.
- scala.sys.process.BasicIO: a library of helper methods for the creation of ProcessIO.
- This package object itself, with a few implicit conversions.
Some examples of I/O handling:
import scala.sys.process._
// An overly complex way of computing size of a compressed file
def gzFileSize(name: String) = {
  val cat = Seq("zcat", name)
  var count = 0
  def byteCounter(input: java.io.InputStream) = {
    while (input.read() != -1) count += 1
    input.close()
  }
  val p = cat run new ProcessIO(_.close(), byteCounter, _.close())
  p.exitValue()
  count
}
// This "fire-and-forgets" the method, which can be lazily read through
// a LazyList[String], and accumulates all errors on a StringBuffer
def sourceFilesAt(baseDir: String): (LazyList[String], StringBuffer) = {
  val buffer = new StringBuffer()
  val cmd = Seq("find", baseDir, "-name", "*.scala", "-type", "f")
  val lazyLines = cmd lazyLines_! ProcessLogger(buffer append _)
  (lazyLines, buffer)
}
Instances of the java classes java.io.File and java.net.URL can both be used directly as input to other processes, and java.io.File can be used as output as well. One can even pipe one to the other directly without any intervening process, though that's not a design goal or recommended usage. For example, the following code will copy a web page to a file:
import java.io.File
import java.net.URL
import scala.sys.process._
new URL("https://www.scala-lang.org/") #> new File("scala-lang.html") !
More information about the other ways of controlling I/O can be found in the Scaladoc for the associated objects, traits and classes.
Running the Process
Paradoxically, this is the simplest component of all, and the one least likely to be interacted with. It consists solely of scala.sys.process.Process, and it provides only two methods:
- exitValue(): blocks until the process exits, and then returns the exit value. This is what happens when one uses the ! method of ProcessBuilder.
- destroy(): this will kill the external process and close the streams associated with it.
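A brief sketch combining run with these two methods (the sleep command below is only for illustration):
import scala.sys.process._

// run() starts the process and returns a Process handle without blocking
val proc: Process = Seq("sleep", "60").run()

proc.destroy()            // kill the external process and close its streams
println(proc.exitValue()) // blocks until the process has exited, then returns its exit code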