regaining the fun of compiler-plugin writing
Fri, 2009-11-06, 16:44
Hi,
My previous post on real-world stories of compiler-plugins sparked a few
off-list exchanges on lessons learnt. Given that the workday is over for me,
I'll share my reflections on them :-) Summing up, a common theme has
been the practical difficulty of realizing ideas which, at their core, can
be cleanly formalized (language-integrated query is my favorite example, but
the same applies to other efforts listed in that post).
In all cases, the practical difficulties resulted from the driving force
behind scalac: being useful for real-world development. For research
projects, by contrast, a by-the-spec compiler would be better suited. After all,
an innovative program transformation can be shown to work in the small, and
if large scale code analyses are necessary then one is better served by code
query techniques anyway, as in http://doop.program-analysis.org/
If I may have five minutes of your time, please consider why a "research
compiler" would be useful for the Scala community in general.
(a) Easier maintenance due to best-of-breed reuse:
Most, if not all, of the chores of language processing have been factored
out into existing tooling, tooling that can be used (mostly seamlessly) for
a language under study. Say all we have is a context-free grammar? Use
http://strategoxt.org/Sdf/SGLR to get all (possibly ambiguous) trees for
some given input. What about the resulting (undecidable) cfg analyses? Rely
on a prototype dealing with bounded versions of them (
http://www.tcs.hut.fi/~kepa/publications/AxelssonHeljankoLange-ICALP08.pdf ).
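To make the "all (possibly ambiguous) trees" point concrete, here is a small sketch in plain Scala (not SGLR itself, and using a toy grammar of my own choosing): an exhaustive parser for the classically ambiguous grammar E ::= E "+" E | "a" that returns every derivation as a parse forest, rather than rejecting the ambiguity or silently picking one tree.

```scala
// Hypothetical sketch, not SGLR: enumerate *all* parse trees for the
// ambiguous grammar  E ::= E "+" E | "a",  the way a scannerless GLR
// parser would hand back a parse forest instead of a single tree.
sealed trait Tree
case object Leaf extends Tree                    // matches "a"
case class Plus(l: Tree, r: Tree) extends Tree   // matches E "+" E

object AllParses {
  // Parse tokens(from until to) as E, returning every derivation.
  def parse(tokens: Vector[String], from: Int, to: Int): List[Tree] = {
    val leaf =
      if (to - from == 1 && tokens(from) == "a") List(Leaf) else Nil
    val sums =
      for {
        mid <- (from + 1 until to).toList   // candidate position of "+"
        if tokens(mid) == "+"
        l <- parse(tokens, from, mid)       // every left derivation
        r <- parse(tokens, mid + 1, to)     // every right derivation
      } yield Plus(l, r)
    leaf ++ sums
  }

  def main(args: Array[String]): Unit = {
    val input = Vector("a", "+", "a", "+", "a")
    val trees = parse(input, 0, input.length)
    // "a + a + a" is ambiguous: left- and right-associated trees.
    println(trees.length) // 2
  }
}
```

The point of the sketch is the shape of the result, a List of trees, so a later phase (or a disambiguation filter) can be specified cleanly over the whole forest.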
The above may already sound naive, but what about this: the whole
transformation pipeline, one phase at a time, could be specified without
regard for speed but for clarity. Would a straightforward implementation be
slow? Maybe, although one could argue that automatic parallelism and
automatic incrementalization (
http://www.tti-c.org/technical_reports/ttic-tr-2009-2.pdf ) could do their
part. Wait, does that mean one could get an incremental compiler just by
using some research prototype? Nuts, isn't it?
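To illustrate what automatic incrementalization could buy, here is a deliberately crude sketch (hypothetical, and far simpler than the self-adjusting-computation work cited above): if each phase is a pure function of its input, memoizing it per compilation unit already means a rebuild skips every unchanged unit.

```scala
// Hypothetical sketch of the incrementalization idea: a pure "phase"
// memoized on its input, so re-running the pipeline does no work for
// compilation units whose source is unchanged.
object IncrementalPhase {
  var typecheckRuns = 0 // counts how often real work happens

  // Stand-in phase: a pure function from source text to a result.
  def typecheck(source: String): String = {
    typecheckRuns += 1
    s"checked(${source.length} chars)"
  }

  // Memoize keyed by the unit's content.
  private val cache = scala.collection.mutable.Map.empty[String, String]
  def typecheckIncremental(source: String): String =
    cache.getOrElseUpdate(source, typecheck(source))

  def main(args: Array[String]): Unit = {
    val units = List("object A", "object B")
    units.foreach(typecheckIncremental)  // first build: 2 real runs
    units.foreach(typecheckIncremental)  // rebuild, no edits: 0 runs
    typecheckIncremental("object A { val x = 1 }") // one edit: 1 run
    println(typecheckRuns) // 3
  }
}
```

Real incrementalization tracks fine-grained dependencies instead of whole-unit content, but the division of labor is the same: the phase is written for clarity, and the caching machinery is layered on independently.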
And so on. Again, the whole purpose of the resulting "research compiler"
would be end-to-end transparency, leaving the (independent) work of
optimizing its operation for others (division of labor). If you're not
convinced that this approach could benefit practitioners, consider this
analogy: the JVM spec and the breakneck competition among
implementers, a competition made possible by the completeness of the JVM
spec (remember those research JVMs? They served a good purpose :-)
(b) Shorter language engineering cycle:
Because of arguments similar to the above. Again we can learn from history
here. Take Java, whose extensions have been slow to gain traction due to too
much nitty-gritty in implementing them. For some reason, research tools built for
just this purpose (like http://jastadd.org/) are not used to their fullest
(notable exceptions like http://www.cs.cmu.edu/~donna/public/oopsla09.pdf
notwithstanding). Project Coin is all the rage instead.
(c) Divergence:
A cursory reading of the above may leave the impression that a Scala
research compiler could threaten standardization. Standardization of what?
Of a definitive language spec, empowered by a reference implementation (the
research compiler)? Here again, I see no real downside to the suggested
approach.
I hope you enjoyed reading this post as much as I did writing it :-)
I fully agree that a research compiler for mini-but-to-the-spec-scala is extremely desirable. All we need is the resources (time/people/...) to implement it...
have a nice weekend
adriaan
On Fri, Nov 6, 2009 at 4:44 PM, Miguel Garcia <miguel.garcia@tuhh.de> wrote: