Slang: Review of 2012

Already the first quarter of 2013 is behind us, but I wanted to take a look at whether I accomplished the goals I set for Slang development in 2012. I’m not going to reiterate the goals here, but almost none of them were completed.

I wanted to bring the language into a usable form, but so far there is still a lot of work to do. I wanted to make the compiler open source, but I’m still not sure at which point I would like to do that. I think I will postpone it quite a bit, because I don’t have the time to run an open source project at the moment (and parts of the compiler are still full of horrible hacks). In fact, lack of development time was one of the things that prevented me from fulfilling the goals. I worked on Slang mostly during vacations, and when I found the drive to start at other times, I would continue for a couple of weeks and then take a break for a few months.

I suspect that development will continue at a similar pace. I have made some progress, though: sporadically during 2012 and more continuously during the last few weeks.

New Parser

I think I spent most of my Slang development time last year on the new parser, using the mixfix parser combinators I blogged about (sorry, I kind of forgot to post the third part of that). I have improved the parser framework a lot since then, but it has the downside of being quite slow for large programs, even with packrat parsers. So I have some further work to do here, perhaps even replacing the whole parser again, but I will probably leave that to some unknown future date.

Custom Operator Precedence Rules

The new mixfix parser theoretically enables a modular grammar: the default grammar can be relatively small, but programs can define their own prefix, postfix, infix and closed operators and their precedence rules. Recently, I made that actually work. You still can’t quite define a grammar per Slang source file, but you can add operator graph (.ops) files to the list of compiled files, and together with the standard rules they will define the grammar for that compiler run. Some examples.

New Compiler Infrastructure

I just finished the initial work on this, a step towards separate compilation: instead of including library code into the source file being compiled, the new Slang compiler driver can now compile multiple source files at the same time, though it still requires the library sources to be available. The end result is linked together with the LLVM IR linker.

Slang IDE

I started work on an Eclipse-based Slang IDE. Currently it includes syntax highlighting, error reporting, a few quick fixes, and a quick-assist for rewriting expressions to use different aliases of methods. E.g. if you write x.times(y), you can press Ctrl+1 and select “Use alias: *”, and the expression will be rewritten to x * y. This is much more useful if you have Unicode operators that can’t be easily typed. Another example: you write array(1), Ctrl+1 suggests “Use alias: subscript”, and selecting it turns the expression into array₁.

The IDE also shows warnings and errors with squiggly lines, and I added a few more useful warnings to the compiler and improved the messages.

Debugging

This is still in progress, but for some programs the Slang compiler can now emit DWARF debug info, and the programs can be debugged using the GNU Debugger. I was quite pleased to get that working; together with the IDE and the new compiler driver, it has made Slang much more usable for myself.

Next Steps

This time, I don’t want to give any deadlines, but the next steps are to make the tooling more usable, improve error messages, add some of the language features I didn’t get to add in 2012, etc. I think I might even release a build in the form of the Slang IDE with the compiler embedded, even if it will not be open source.

I have various language features in mind that I want to explore, such as proper generic types, some form of dependent types, Strings, IO, better interoperability with C, and a JVM back-end. Strings and IO are pretty much a given, and generics as well. I’m not sure how soon I’ll get to the other things.

I think I will pretty much be doing pain-driven development: whatever is currently giving me the most pain will get fixed first. Generics might actually be the first thing, because adding collection methods for only a specific type of arrays is no fun.

Syntax Matters in Slang

This post was inspired by Ola Bini’s notes on syntax. I agree with most of Ola’s thoughts. I’m not convinced that repetition aids reading or improves syntax, and I think that optimising syntax for writing is mostly irrelevant. I might agree less about the familiarity of syntax: I would love it if people, including me, were more open to trying languages with unfamiliar syntax, but familiarity is important to many people on some level.

Some of these thoughts form the basis for Slang’s syntax. I’m not quite ready to define The Zen of Slang, but I can take a shot at stating some of the underlying principles behind its syntax:

  • Reading code is an order of magnitude more important than writing code
  • You should still be able to type a lot of code quickly in any text editor
  • Concise is good, repetition is bad
  • There can be multiple ways to write the same thing, but one way should be canonical
  • Familiarity is good, in this case with:
    • Scala
    • to a lesser extent, other curly brace languages
    • mathematical notation

How these goals actually affect the syntax deserves more explanation.

Reading > Writing

We read code more than we write it, and reading is more important. I hope this is accepted by everyone nowadays, so I’m not going into why. But how to optimise for readability is another question, and probably a highly subjective one. In Slang, I am experimenting with syntax that I think is readable, but that might not always be easy to write.

Slang places very few restrictions on what Unicode characters are allowed in identifiers — I believe that in code that deals with maths, well-known mathematical operators are more readable than English words. The ability to write close to mathematical notation in many cases is enhanced by limited mixfix operator support, but this is really less about syntax and more about libraries: Slang’s standard libraries will prefer well-known mathematical operator symbols to word-based method names.

A thing I really love about Scala’s syntax is the ability to define one-line methods without using curly braces.

def +(v: Vector2) = Vector2(x + v.x, y + v.y)

I’d find it incredibly distracting for readability if there were curly braces around this or if the method body was on a separate line due to code conventions. For me, such one-liners and even multi-line single-expression methods are important.

def sum(xs: [Int]) =
  loop(i := 0, acc := 0) while i < #xs do
    (i + 1, acc + xsᵢ)
  yield acc

Many methods that are based on loops can conveniently be implemented as a single loop expression, because Slang doesn’t have mutable local variables (at least not yet) that could be defined in a code block; instead, it prefers “anonymous recursion” for computing varying values. while is syntax sugar for making that construct more concise. There is just one loop construct where other languages would have for loops, tail recursion, while loops, or do-while loops. It’s likely that I’ll add a ‘for each’ construct, though.
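To make the semantics concrete, here is a sketch in Scala of what the sum loop above amounts to. This is my illustration of the anonymous-recursion idea, not the compiler’s actual output:

// The loop(i := 0, acc := 0) while ... do ... yield ... expression behaves
// like an anonymous tail-recursive function over the loop variables.
def sum(xs: Array[Int]): Int = {
  @annotation.tailrec
  def loop(i: Int, acc: Int): Int =
    if (i < xs.length) loop(i + 1, acc + xs(i)) // the `do` part: the next (i, acc)
    else acc                                    // the `yield` part
  loop(0, 0)
}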

There are other things to consider when thinking about code readability, but that could be a whole post or series of posts on its own.

Writing cannot suck!

Typing Unicode symbols is not easy. The solution I use regularly is to have often-used symbols in a text file, and copy them from there when needed, or to search in a Character Map application (there’s a good one in Ubuntu). This slows down writing, which often doesn’t matter if you spend more time thinking than writing anyway.

But sometimes you still want to type fast, and for that, Slang’s library will have ASCII aliases for every non-ASCII identifier. You can write ASCII names instead of Unicode symbols and replace them later when a code snippet is complete and tested. In the future, an IDE should know what the canonical forms are and do the replacement automatically.

This means there will effectively be two ways of writing many programs. Some language designers would consider multiple ways of doing things bad, but in Slang I allow it with no regrets — this comes from the desire to achieve a balance between the goals “Reading > Writing” and “Writing cannot suck!”.

Repetition is bad and multiple ways of writing things

OK, I mentioned that non-ASCII symbols always have ASCII aliases (in fact, it is mostly the other way around: the ASCII names are the canonical ones and the symbols are the aliases). Consider the following:

class Boolean {
  def xor(that: Boolean) = // intrinsic operation
  def ⊻(that: Boolean) = xor(that)

  // Methods ending with … are prefix operators
  def not…() = true ⊻ this
  def ¬…() = true ⊻ this
}

In the above code, we create aliases either by duplicating definitions (not, ¬) or by defining similar methods where the alias calls the “canonical” method (xor, ⊻). There is repetition in both cases, and potential performance issues in the latter case if the methods are not inlined. This is how most widely used languages handle aliasing, because they are not too concerned with providing multiple ways of doing things. But Slang is concerned with that, because it helps deal with the above-mentioned tension between reading and writing. So Slang makes aliases explicit:

class Boolean {
  def xor(that: Boolean) = // intrinsic operation
  def not…() = true ⊻ this

  alias ⊻ = xor
  alias ¬… = not…
}

The repetition is gone — to define an alias, we only define its name and the name of the aliasee. Right now only simple identifiers may be aliased, but I might allow more complex cases later.

If you don’t like the aliases Slang provides for the standard types, you could define your own:

extending Boolean {
  alias `no way!` = not… // I hope nobody will actually do this :)
}

Now you can write: if collection.isEmpty.`no way!` then ...

As you can see, even spaces are allowed in identifiers, as long as the identifier is surrounded by back-ticks (same as in Scala). We can’t use the `no way!` alias as a prefix operator, though, because Slang currently allows only certain operator symbols to be prefix instead of infix. I might add user-defined precedence/fixity rules later, but not totally sure about that.

Now, I hear alarm bells sounding in some readers’ heads already. How come I’m willing to give potential Slang programmers so many tools to shoot themselves in the foot? Because I want to believe that most programmers are not stupid and the usefulness of this in the hands of smart people outweighs the negative sides.

Similarly to term aliases, there are type aliases:

type SDL_Rect = (x: Short, y: Short, w: Short, h: Short)

Familiar is good

I don’t have a whole lot to say about this, as things are not yet set in stone. I really like most of Scala’s syntax and I want to stick to something similar for the most part. Significant whitespace would be an interesting experiment, but I’m not sure if it’s a clear improvement over curly braces, which are familiar to the vast majority of programmers. Mathematical notation is complex, and varies between different fields of mathematics, blackboard and paper, and so on. But there is a part of it that is commonly understood and I would like to make some subset of that usable in Slang, although slight differences in syntax might be necessary.

Some examples:

∃! xs , x -> x == 3 // there exists only one x in xs where x == 3
2 ∈ xs // 2 is an element of xs
5 + ∏ xs // 5 + the product of xs
∛(27) + |-3| // cube root of 27 + absolute value of -3

// definition of for all on an array of integers
extending [Int] {
  def ∀…,…(ƒ: (Int) → Boolean) =
    loop(i ≔ 0, z ≔ true) while z ∧ i < #this do
      (i + 1, z ∧ ƒ(thisᵢ))
    yield z
}

You may have noticed:

  • quantifiers (for all, exists etc.). For this purpose I allowed comma to be used in a few special identifiers: ∀…,… ∃…,… ∄…,… ∃!…,…
  • subscripts can be used to index into arrays and other indexed collections
  • Unicode aliases like ∧, ≔ and →
  • ∏… as a prefix operator
  • |…| as a closed operator

These all actually translate to method calls, such as:

xs.∃!…,…((x: Int) -> x == 3)
2.∈(xs)
5.+(xs.∏…())
∛(27).+(3.-…().|…|())
this.#…()
this.subscript(i)
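
The subscript translation in particular is just a lexical rewrite. Here is a sketch of the idea in Scala (illustrative only, not the actual compiler code):

// Map a trailing run of Unicode subscript digits (e.g. xs₁) to a
// .subscript(n) call on the base identifier.
val subscriptDigits = "₀₁₂₃₄₅₆₇₈₉"

def desugarSubscript(ident: String): String = {
  val (base, sub) = ident.span(c => !subscriptDigits.contains(c))
  if (base.nonEmpty && sub.nonEmpty && sub.forall(subscriptDigits.contains)) {
    val index = sub.map(subscriptDigits.indexOf(_)).mkString.toInt
    s"$base.subscript($index)"
  } else ident
}

// desugarSubscript("xs₁") == "xs.subscript(1)"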

Conclusion

Syntax is quite important in Slang. I’m not designing it syntax-first, but syntax weighs into most considerations. I think the syntax as a whole will be somewhat familiar to many, being similar to Scala and a couple of other recent JVM languages.

The language is optimised for reading, mostly by allowing concise method definitions and a large range of Unicode characters in identifiers, but also by having some support for mixfix operators. Aliases for Unicode symbols allow fast writing as well, and the alias definitions are made explicit in code. The many aliases let one write the same program in multiple ways, but in the future there might be IDE support for “normalising” the syntax.

There are some new things here as well, or at least things that the vast majority of languages don’t have. One is the translation of subscript and superscript characters to their normal equivalents. Another is the loop construct, which is basically anonymous recursion similar to some functional language constructs. It turned out more flexible than I thought, and I’d like to write another post about that soon.

Mixfix Operators & Parser Combinators, Bonus Part 2a

This is a short bonus post in the Mixfix Operator series. Part 1 was an introduction to mixfix operators and in part 2 we looked at them more closely in the context of a grammar for a boolean algebra and arithmetic language. The implementation of a parser for the language is coming next, but before that I thought it would be interesting to see what the grammar would look like if we removed the mixfix abstraction and mechanically converted the precedence graph to BNF notation.

It turns out this is not that hard if we turn each operator group (graph node) into a separate production and leave out the irrelevant productions for the types of operators that don’t occur in that group. This is especially easy in our case since each group contains operators of only one type. I’ll use shorthand names here for brevity.


expr ::= or | and | not | eq | cmp
       | add | mul | exp | neg | tightest

or   ::= (or | or↑) "|" or↑
or↑  ::= and | not | eq | cmp | tightest

and  ::= (and | and↑) "&" and↑
and↑ ::= not | eq | cmp | tightest

not  ::= "!" (not | not↑)
not↑ ::= eq | cmp | tightest

eq   ::= (eq | eq↑) ("=" | "≠") eq↑
eq↑  ::= cmp | add | mul | exp | neg | tightest

cmp  ::= (cmp | cmp↑) ("<" | ">") cmp↑
cmp↑ ::= add | mul | exp | neg | tightest

add  ::= (add | add↑) ("+" | "-") add↑
add↑ ::= mul | exp | neg | tightest

mul  ::= (mul | mul↑) ("*" | "/" | "mod") mul↑
mul↑ ::= exp | neg | tightest

exp  ::= (exp | exp↑) "^" exp↑
exp↑ ::= neg | tightest

neg  ::= "-" (neg | neg↑)
neg↑ ::= tightest

tightest ::= ("(" expr ")") | value

To be honest, encoding the whole graph as BNF is a lot simpler than I initially thought, and so is translating it into a combinator parser. It makes me wonder whether the mixfix grammar abstraction could be overkill. Of course, this is so easy only because we have relatively few kinds of operators: only left-associative infix, prefix and closed. If we had more operators, with more holes in them, and different types of operators in one group (which is probably not usual, though), perhaps we wouldn’t find the conversion that simple any more.

Plainly, this simplified scheme won’t help much with user-defined operators and precedence, so I think the mixfix parser abstraction is still useful. However, in cases where there are only a few operators/operator groups, the straightforward translation of this BNF form into parser combinators may be preferable, as the sketch below shows. If there’s room left in the next post, I’ll include an alternative implementation based on this scheme.
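
To show what I mean by straightforward, here is a fragment of the BNF above rendered as Scala parser combinators. This is a sketch with names of my own choosing, not part of my mixfix library; the productions are left-recursive, hence the packrat parsers:

import scala.util.parsing.combinator.{PackratParsers, RegexParsers}

object MiniExpr extends RegexParsers with PackratParsers {
  // add ::= (add | add↑) ("+" | "-") add↑
  lazy val add: PackratParser[Any] = (add | addUp) ~ ("+" | "-") ~ addUp
  lazy val addUp: PackratParser[Any] = mul | tightest

  // mul ::= (mul | mul↑) ("*" | "/" | "mod") mul↑
  lazy val mul: PackratParser[Any] = (mul | mulUp) ~ ("*" | "/" | "mod") ~ mulUp
  lazy val mulUp: PackratParser[Any] = tightest

  lazy val expr: PackratParser[Any] = add | mul | tightest
  lazy val tightest: PackratParser[Any] = "(" ~> expr <~ ")" | """\d+""".r
}

A call like MiniExpr.parseAll(MiniExpr.expr, "1 + 2 * (3 - 4)") should then succeed: PackratParsers overrides phrase, so parseAll wraps the input in a PackratReader and the left recursion is handled by memoization.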

Mixfix Operators & Parser Combinators, Part 2

In the previous post I introduced the notion of mixfix operators. In this post we will look at them more closely, in the context of an actual grammar. In the next part we will implement the parser for this grammar, look at performance issues and try to fix them with packrat parsers.

We will implement a simple language that consists of boolean algebra and integer arithmetic expressions. The grammar for the language looks like the following (we’re only considering tokens here and assume that a lexical parser has already identified literals, identifiers and delimiters in the text):

statement   ::= expression | declaration
declaration ::= variable ":=" expression
expression  ::= ??? | value
value       ::= literal | variable
literal     ::= booleanLiteral | integerLiteral
variable    ::= identifier

What should the expression productions look like, though? In examples of parsers and grammars we can commonly find an arithmetic expression language described with concepts of ‘factor’ and ‘term’ to create a precedence relation between addition and multiplication:

expression ::= (term "+")* term
term       ::= (factor "*")* factor
factor     ::= constant | variable
               | "(" expression ")"

This seems simple, but when we add more precedence rules it can get quite complex, especially if we are writing a parser for a general-purpose programming language instead of a simple expression language and also performing semantic actions (creating AST nodes) in the parser. It also makes the set of operators rather fixed: you might have to change several grammar productions to add a new operator with a new precedence level. I didn’t even try building Slang’s precedence rules into the grammar in this fashion.
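
For later comparison, here is roughly how the term/factor grammar above translates into Scala parser combinators (the names are illustrative, this is not Slang’s actual parser):

import scala.util.parsing.combinator.RegexParsers

object ClassicArith extends RegexParsers {
  // expression ::= (term "+")* term
  def expression: Parser[Any] = rep(term <~ "+") ~ term
  // term ::= (factor "*")* factor
  def term: Parser[Any] = rep(factor <~ "*") ~ factor
  // factor ::= constant | variable | "(" expression ")"
  def factor: Parser[Any] =
    """\d+""".r | """[a-z]+""".r | "(" ~> expression <~ ")"
}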

Mixfix parsers still make the precedence part of the grammar, but there is a layer of abstraction there: we describe operators and their precedence rules as a directed graph, where (groups of) operators are the nodes and precedences are the edges. Then we instantiate the grammar with that particular precedence graph.

Before getting to the precedence rules in the language we are about to create, let’s look at the operators it will have. In the list below, _ means a hole in the expression that can contain any other expression that “binds tighter” than the operator in question. In the case where the hole is closed on both left and right, it can contain any expression at all. Only a pair of parentheses forms a closed operator in this language.

( _ ) – parentheses
_ + _ – addition
_ - _ – subtraction
  - _ – negation
_ * _ – multiplication
_ / _ – division
_ ^ _ – exponent
_ mod _ – modulo/remainder
_ = _ – equality test
_ ≠ _ – inequality test
_ < _ – less than
_ > _ – greater than
_ & _ – conjunction
_ | _ – disjunction
  ! _ – logical not

This doesn’t include many common operators in real programming languages, but it is enough to demonstrate some interesting aspects of mixfix operators and using a DAG to describe their precedence relations. I used mod instead of % to show that operators don’t have to be symbols.

Before defining the precedence rules, let’s look at some sample expressions and how we want them to be interpreted, mostly sticking with existing well-known precedence rules, such as those in C, Java or Scala, but occasionally deviating from them:

a + b * c      = a + (b * c)
a < b & b < c  = (a < b) & (b < c)
-5 ^ 6         = (-5) ^ 6
a & !b | c     = (a & (!b)) | c
5 < 2 ≠ 6 > 3  = (5 < 2) ≠ (6 > 3)
1 < x & !x > 5 = (1 < x) & !(x > 5)

I think that’s enough examples for now. Let’s try to describe the rules behind these somewhat intuitive expectations as a precedence graph. First, we’ll put the operators into groups where all operators in one group bind just as tightly as the others in the same group. For example, 1 + 2 - 3 will be (1 + 2) - 3 and 1 - 2 + 3 will be (1 - 2) + 3.

parentheses   : ()
negation      : - (prefix)
exponent      : ^
multiplication: *, /, mod
addition      : +, -
comparison    : <, >
equality      : =, ≠
not           : ! (prefix)
and           : &
or            : |

Negation (prefix -) is in its own group so that we can write -2 + 1. If it were in the same group with infix - and +, it couldn’t appear next to them without parentheses, because prefix operators are treated as right-associative, but most infix operators, such as - and +, are left-associative. And we can’t mix left-associative and right-associative operators of the same precedence level! Why? Take the expression

1 + 2 - 3

If + and - are left-associative, it means (1 + 2) - 3.

If - is right-associative instead, then both (1 + 2) - 3 and 1 + (2 - 3) would be valid parses, and the expression would be ambiguous!

We could read the list of operator groups above as an order of precedence, where the first group (parentheses) binds tightest and the last group (or) binds least tightly. This would be mostly compatible with many programming languages, and we would have a good enough set of precedence rules right there.

However, as mentioned earlier, Danielsson’s mixfix grammar scheme describes precedence relations as a directed graph. Each of the groups above is a node in the graph, and a directed edge from one node to another, a -> b, means: b binds tighter than a. So let’s describe these relations as a graph instead — it will be in reverse order compared to the list above, where we started from the most tightly binding:

or             -> and, not, equality, comparison, parentheses
and            -> not, equality, comparison, parentheses
not            -> equality, comparison, parentheses

equality       -> comparison, addition, multiplication, exponent, negation, parentheses
comparison     -> addition, multiplication, exponent, negation, parentheses

addition       -> multiplication, exponent, negation, parentheses
multiplication -> exponent, negation, parentheses
exponent       -> negation, parentheses
negation       -> parentheses

Notice that from each group we draw edges not into a single group, but into all of the groups that bind tighter. This is because of the non-transitivity of precedence in this scheme: each pair of operator groups that is to have a precedence relation must have an edge between them in the graph. The advantage is that we don’t need to describe the precedence between operators that aren’t related at all. This is one of the motivations for using a directed graph to represent operator precedence.
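
Such a graph is also cheap to represent in code. Here is a minimal Scala sketch (the names are mine, not my mixfix library’s API):

// An operator group is a node; the edges map each group to the set of
// groups that bind tighter than it.
case class OpGroup(name: String, ops: Set[String])

case class PrecedenceGraph(edges: Map[OpGroup, Set[OpGroup]]) {
  def groupsTighterThan(g: OpGroup): Set[OpGroup] =
    edges.getOrElse(g, Set.empty)
}

object ExampleGraph {
  val parens   = OpGroup("parentheses", Set("()"))
  val negation = OpGroup("negation", Set("-"))
  val exponent = OpGroup("exponent", Set("^"))

  // exponent -> negation, parentheses; negation -> parentheses
  val graph = PrecedenceGraph(Map(
    exponent -> Set(negation, parens),
    negation -> Set(parens)
  ))
}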

I hope that from the names of the operators it was clear that some of them will apply only to booleans and some only to integers. For example, the & operator isn’t defined as bitwise &, only as logical conjunction. Thus, assuming that our language is strongly typed, some of the operators can’t appear in the holes of some other operators in a correct program.

A parser doesn’t do type checking of course, but with this mixfix grammar scheme, it does implicitly do precedence correctness checking. For example 4 + 5 & 6 + 4 is not precedence correct, as we didn’t define a precedence relation between addition and and. And due to the parser’s precedence checking, this expression will not even parse.

If we had used a total precedence order instead, we would have + binding tighter than &. The expression would be interpreted as (4 + 5) & (6 + 4) but would probably yield a type error as & works on booleans, but + works on integers. We could write (4 + 5) & (6 + 4) ourselves and that would also parse, because we made the precedence explicit. Well, actually parentheses follow the same rules: remember that in our graph, () bind tighter than everything.

The fact that the parser only produces precedence correct expressions can be both a blessing and a curse.

On one hand, this allows us to view some unrelated groups of operators almost as sublanguages. In our case, boolean algebra and integer arithmetic. This might be good for implementing internal DSLs in the presence of extremely flexible user-defined mixfix operators. We could allow users to extend our precedence graph or even replace it completely with their own. If a DSL has boolean logic in it, but no arithmetic, it might have precedence relations to logical operators, but not to arithmetic operators. This would preclude arithmetic operators from appearing in the DSL without being surrounded by parentheses. Or the DSL could even disallow parentheses. Implementing this much flexibility in a host language is complicated, though. For example, the parser would have to know about any custom mixfix grammars defined in imported modules.

On the other hand, this puts some correctness checks at the wrong level. Arguably, a parser should only validate the syntax of a program and nothing else. If a simple mistake, such as using the wrong operator (equivalent to calling a non-existent method in some languages), prevented the whole program from being parsed, it would also prevent the compiler from doing other interesting and useful things, or from reporting better error messages.

So maybe this grammar scheme isn’t ideal for a general-purpose programming language. I am sticking with it in Slang for now, because the scheme is relatively simple and works for me, at least as long as I’m the only user of Slang :) And perhaps there are workarounds that would allow a precedence-incorrect expression to still be accepted by the parser. But I don’t have immediate plans to allow a wide variety of user-defined mixfix operators or operator precedence.

Anyway, for our simple language, I think this scheme works well enough as long as we don’t care whether it is the parser or the type checker that reports the errors in incorrect programs. There aren’t any useful direct precedence relations between boolean algebra operators and arithmetic operators here. Only by having equality, comparison or parentheses between them, can we put them in the same expression.

Let’s look at one of the consequences of our rules more closely. Many languages, including Java and C, put most prefix (unary) operators such as ! and - at the same level of precedence, binding tighter than all infix (binary) operators. In Java, !6 == 5 is a type error because the operator ! binds to 6, not to 6 == 5, and ! isn’t defined on integers. In our language, it isn’t necessary to have ! at the same level as -. Since there is no (precedence) relation between logical and arithmetic operators, !6 + 5 will not parse. But ! does have a relation to comparison and equality tests (they bind tighter), so you can write !6 = 5 and it will mean !(6 = 5).

The precedence rule that has = binding tighter than the boolean operators is based on the assumption that booleans are rarely compared to each other, while comparisons of other types of values are often combined in disjunctions, conjunctions and complements.

To get back to the question at the beginning of the post: what should the expression production in the grammar look like instead of expression ::= ??? | value? The short answer is that we replace ??? with the mixfix grammar scheme instantiated with our particular precedence graph. The long answer would probably take an entire blog post by itself. You can read more about this scheme in the Agda paper, or look at the source code of my mixfix library. The scheme looks somewhat like the parser combinators in the following pseudo-code (~ means sequential composition):

value = variable | literal
expression = mixfixGrammar(precedenceGraph) | value

mixfixGrammar(graph) = {
  // graph - the precedence graph
  // g - an operator group, node in the graph
  // op - an operator in a group

  ⋁(parsers) = // returns the result of the first parser in the list to succeed

  opsLeft(g)   = // all left-associative infix operators in g
  opsRight(g)  = // all right-associative infix operators in g
  opsNon(g)    = // all non-associative infix operators in g
  opsClosed(g) = // all closed operators in g
  opsPre(g)    = // all prefix operators in g
  opsPost(g)   = // all postfix operators in g

  operator(op) =
    if (op.internalArity == 0)
      op.namePart1
    if (op.internalArity == 1)
      op.namePart1 ~ expression ~ op.namePart2
      // expression is a recursive reference back to the "outer" production
      // these are the internal "holes" that can take any expression

  group(g)  = closed(g) | non(g) | left(g) | right(g)    // any ops in this group

  closed(g) = ⋁{ opsClosed(g) map operator }             // closed ops

  non(g)    = ↑(g) ~ ⋁{ opsNon(g) map operator } ~ ↑(g)  // non-associative ops

  left(g)   = (left(g) | ↑(g))                           // left-associative ops
              ~ ( ⋁{ opsPost(g) map operator }
                | ⋁{ opsLeft(g) map operator } ~ ↑(g) )

  right(g)  = ( ⋁{ opsPre(g) map operator }              // right-associative ops
              | ↑(g) ~ ⋁{ opsRight(g) map operator } )
              ~ (right(g) | ↑(g))

  ↑(g) = ⋁{ graph.groupsTighterThan(g) map group } // every group that binds tighter than g
         | value                                   // or the tightest "group" of values

  return ⋁{ graph.nodes map group }
}

If you don’t understand this right now, no big deal — it’s late enough that I couldn’t come up with a better representation of the actual code that would fit in this post. And if you are not familiar with parser combinators I would recommend reading Daniel Spiewak’s post on the subject, at least before continuing to the next part of this series.

You may notice that the value and expression productions are referenced inside mixfixGrammar. This is no good if the mixfix library is to be a separate module, so I actually implemented this by introducing a pseudo operator group that has a custom parser. This pseudo-group is then added to the precedence graph, along with edges from every other group into that “really tight” group.
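
In terms of the PrecedenceGraph sketch from earlier, the trick amounts to something like the following (again with assumed names, not the real library code):

// A pseudo-group for values, with a custom parser attached in the real
// implementation; every actual group gets an edge to it, so it binds
// tightest of all.
val valueGroup = OpGroup("value", Set.empty)

def withValueGroup(graph: PrecedenceGraph): PrecedenceGraph =
  PrecedenceGraph(
    graph.edges.map { case (g, tighter) => g -> (tighter + valueGroup) }
      + (valueGroup -> Set.empty[OpGroup])
  )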

This concludes part 2. In the next part we will forget this pseudo-code and use Scala’s parser combinators and my mixfix library to implement an actual parser for the language, and maybe an AST and an interpreter as well.

Thanks to Miles Sabin and Daniel Spiewak for reviewing drafts for this series of posts.

Mixfix Operators & Parser Combinators, Part 1

Until recently, Slang’s parser really sucked. It was a quick hack implemented with Scala’s parser combinator library. Nothing really wrong with that in particular, but there was a gaping hole in the grammar: no operator precedence. So to get an expression like a + b * c to mean a + (b * c), I had to add the parentheses myself. In fact, there were even more problems — some things that should have been left-associative were right-associative. This resulted in very hairy test code, with lots of parentheses everywhere.

Although I think parsers are cool, I am actually not very good at writing one for a complex grammar. I feel that I just know too little about the theory behind them or how to put it to practical use. I’ve used parser combinators before and think they are probably the easiest way for newbies like me to implement parsers, so that’s what I used. The use of symbolic names in the library might be scary the first time, but I actually think parsing is one of the few contexts where lots of symbols and extremely concise code are desirable. It allows one to put a lot of code on a few lines, and when you are looking at or writing a parser, you want to see many productions of the grammar at the same time to understand what is going on. At least I do.

For Slang, I implemented something minimal that could parse the language. I had no idea how to solve operator precedence well with parser combinators, and I didn’t want to spend a lot of time studying parsers, because the next compiler phases seemed more interesting at first. But getting the parser right is important for actually using the language because it’s the first thing that processes the code and reports errors. A parser that only kind of works can be very annoying.

Thankfully Miles Sabin suggested that I should look into mixfix operator parsers, and I did. I don’t know exactly where the word mixfix comes from, so I’m assuming it means mixed fixity — operators can be prefix, infix, postfix or closed. Here are some samples:

  • prefix : -a
  • infix : a + b
  • postfix: n!
  • closed : (a)

Of course, most languages have operators with all of these fixities. The term mixfix actually refers to something more flexible than that — a mixfix operator can be seen as a sequence of alternating name parts and “holes in the expression”. A hole is where the operator’s arguments go.

_ + _ has two holes and one name part + (and is infix)
if _ then _ else _ has three name parts if, then, else and three holes (and is prefix)

In the mixfix view, many syntactic constructs can be seen as operators that have precedence relations to others, and this concept of many name parts can make it easier to let users define their own operators in a more flexible way than just a single prefix or infix word (as is allowed by Scala). I think this would be a really nice way of creating internal DSLs. In Slang, like in Scala, most operators are really methods. Slang doesn’t allow user-defined fixity or precedence for methods yet (or even multiple argument lists), but I may add this feature one day.

There are existing languages that support mixfix operators, such as Agda, Maude and BitC. To my knowledge, all these languages assign numeric precedence values to operators, and no language currently uses the exact scheme we will look at, although it was proposed for Agda.

Mixfix operators can be implemented in many ways, but one of the first things I found was the paper Parsing Mixfix Operators by Nils Anders Danielsson and Ulf Norell, which was a great help to me. I was able to implement the grammar scheme described in that paper on top of Scala’s parser combinators and patch it into Slang’s existing parser with minimal changes to existing productions. The characteristics of the grammar scheme described in Danielsson’s paper seemed like a good enough fit for what I wanted for Slang:

  • operator name parts and holes alternate — there can’t be two subsequent name parts or two subsequent holes

if _ then _ else _ is ok, if _ _ else _ is not

  • operator precedence is described as a directed acyclic graph (DAG), not as a total or partial ordering. You only have to describe the precedence relations where they make sense (more about this in the next post)

a directed edge '+' -> '*' means “* binds tighter than +”

  • operator precedence is not transitive

'=' -> '+' and '&' -> '=' does not mean “+ binds tighter than &

  • prefix operators are treated as right-associative

!!a = !(!(a))

  • postfix operators are treated as left-associative

n!! = ((n)!)!

  • left-associative and right-associative operators of the same precedence can’t appear next to each other

assuming +: is a right-associative +, a + b +: c would not be allowed

  • parses are precedence correct
  • implementation using left-recursion is possible, for example when using Scala’s Packrat parsers

There weren’t any restrictions I couldn’t live with (in fact, we could relax some of the above requirements and the scheme would still work for some grammars), so I decided to implement this grammar scheme for Slang, pretty much as described in the paper. Although I didn’t really grok all of the Agda code samples, the principles were easy to understand. I implemented it as a separate library (available on GitHub) that builds on top of the existing Scala parser combinator library. It might even be somewhat usable in its current state, but needs improvement.

In the next post we’ll look at how to define a grammar for an arithmetic and boolean algebra language using mixfix operators. In the third part, we will actually implement the parser for the grammar, look at performance issues and whether we can solve them with packrat parsers.

Thanks to Miles Sabin and Daniel Spiewak for reviewing drafts for this series of posts.

Slang language goals for 2012

I’m setting myself some goals for the Slang language (I’m renaming Klang to Slang) for the coming year. By the end of 2012 I should have:

  • a usable and useful language
    • programs must not crash, unless you purposefully make them crash
    • compiler must always provide nice error messages (and not crash)
    • must be able to do IO
    • must be able to do OpenGL
    • must have a small standard library
      • IO, Unicode strings, basic collections
      • parser combinators (maybe)
  • programs that are mostly pure by default, except when they’re not (doing IO etc.)
  • allocation in pools or a garbage collected heap
  • performance mostly on par with C
  • open source compiler written in Scala
  • a comprehensive test suite
    • all language features must have tests
    • all library functions must have tests
    • all compiler errors and warnings must have tests
    • some use case tests, maybe a Project Euler based test suite
  • all the things mentioned in the post where I outlined the first language (most of those exist already)

There are a few additional things that would be nice to have, but are not my explicit goals for the year. The first is a self-hosting compiler. The current compiler spews out .ll files, which then get parsed and compiled by LLVM tools such as llc, but it would be nice to work with LLVM directly. Another thing is separate compilation, which I’m probably not going to implement in the current compiler. Same goes for JIT. And lastly: support for debugging; syntax highlighters; maybe a bare-bones Eclipse IDE.

I should be able to pull off the things listed here, some sooner and some later. I still have a lot to learn about language design, type systems and compilers and am not sure at which point I’m going to open source it, but I’m aiming for June 2012 or so.

Klang: How To Solve Overloading?

I haven’t had much time to work on Klang since my vacation ended, but I’ve been thinking about how to solve overloading. Note that this post contains pseudo-code that bears resemblance to, but is not actual Klang code — these are just thoughts so far.

Given types Double and Vector2(x: Double, y: Double), there are mathematical operations that take both types as operands in different combinations, but have the same name. For example, multiplication:

a: Double * b: Double = // intrinsic operation a * b
a: Double * v: Vector2 = Vector2(a * v.x, a * v.y)
v: Vector2 * a: Double = Vector2(a * v.x, a * v.y)

Solution 1: Allow overloading of class methods

package klang
class Double {
  def *(that: Double) = // intrinsic this * that
}

package vecmath
extending Double {
  def *(v: Vector2) = Vector2(this * v.x, this * v.y)
}

Solution 1a: Allow overloading of package level functions

package klang
def *(a: Double, b: Double) = // intrinsic a * b

package klang.vecmath
def *(a: Double, v: Vector2) = Vector2(a * v.x, a * v.y)

Solution 2: Allow for Haskell-style type classes

package klang
typeclass Multiplication[A, B, C] {
  def *(a: A, b: B): C
}
instance Multiplication[Double, Double, Double] {
  def *(a: Double, b: Double) = // intrinsic a * b
}

package klang.vecmath
instance Multiplication[Double, Vector2, Vector2] {
  def *(a: Double, v: Vector2) = Vector2(a * v.x, a * v.y)
}

Pros and Cons

The first solution, allowing method/function overloading as in Java or Scala, will certainly complicate things, especially when I introduce some form of inheritance (currently there is none). However, I’m not sure the second approach, which is more like the Haskell way of handling overloading, is any better. Especially the third type class parameter (the return type) would be confusing: am I then allowed to also provide an

instance Multiplication[Double, Vector2, String]

which differs only in the last type argument? Obviously that shouldn’t be allowed: if selecting a type class instance based on the method’s return type were possible, it wouldn’t work with Klang’s Scala-style type inference. Maybe the type class could have an abstract type member instead:

typeclass Multiplication[A, B] {
  type Result
  def *(a: A, b: B): Result
}
instance Multiplication[Double, Vector2] {
  type Result = Vector2
  def *(a: Double, v: Vector2) = Vector2(a * v.x, a * v.y)
}

But I haven’t really decided whether to even have type classes or type members. There seems to be a lot more ceremony required with type classes compared to simply allowing overloading methods or module-level functions. And I’m not sure if that provides any real advantage…

I could say that defining a function *(a: Double, b: Vector2) creates an implicit type class *[A, B] and an implicit instance of it, *[Double, Vector2]. It would be equivalent to the manually defined type classes, wouldn’t it? Except that in the manual case you could put division into the same type class as multiplication, though that would still require more ceremony than simple overloading.
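
For comparison, the abstract-type-member variant is expressible in Scala today. Here is a sketch of how it looks there (my code, not Klang):

object MultiplicationDemo {
  case class Vector2(x: Double, y: Double)

  // The type class from above, with the result as an abstract type member.
  trait Multiplication[A, B] {
    type Result
    def times(a: A, b: B): Result
  }

  implicit object DoubleTimesVector2 extends Multiplication[Double, Vector2] {
    type Result = Vector2
    def times(a: Double, v: Vector2) = Vector2(a * v.x, a * v.y)
  }

  // The result type comes from the chosen instance (a dependent method type),
  // so type inference still flows from the arguments to the result.
  def times[A, B](a: A, b: B)(implicit m: Multiplication[A, B]): m.Result =
    m.times(a, b)

  val v = times(2.0, Vector2(1, 2)) // Vector2(2.0, 4.0)
}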

And what of functions like max(a: Double, b: Double) and max(a: Collection[Double])? It would be nice to be able to have the 2-argument version for performance reasons. Type classes wouldn’t necessarily solve that unless one could abstract over method argument arity.

I should probably read more literature to make better informed decisions about these things, just haven’t gotten around to those parts yet :)

What do you think? Do you have an opinion about which way to go or any pointers to reading material?

Klang: Classes and Math Notation

Just got started with adding classes to Klang. At the moment, only immutable classes can be created and there is no inheritance.

The fields and methods are more separated than usual — the data is defined at the beginning of the class body as a single tuple type. On the other hand, there is a uniform access principle, and both field accesses and method calls are represented the same way, internally and in syntax. Actually, field access is implemented similarly to an intrinsic method call. Intrinsics in Klang usually translate to a single LLVM instruction, and I’m going to expose some of them to users as well.

Here’s a sample program with a class:

class Vector2 {
  data (x: Double, y: Double)

  // other methods skipped

  def +(v: Vector2) = Vector2(x + v.x, y + v.y)
  def length() = √(x² + y²)

  alias |…| = length
}

def computeVectorLength(): Double = {
  val v1 := Vector2(1, 1)
  val v2 := Vector2(2, 3)
  | v1 + v2 |
}

def main() = computeVectorLength().toInt

Vector2 is a class whose data is represented as (x: Double, y: Double). Reusing tuples again :) The compiler also generates a no-op function named Vector2 that takes the same tuple type and returns an instance of the class. It’s a no-op because the instances of classes are still held in LLVM registers (LLVM has an infinite number of registers) simply as the data they represent.

Mathematical syntax

There is also some nice syntax resembling mathematical notation in the example. I want to allow as much mathematical syntax as can be represented in plain Unicode text, but also provide aliases for those who prefer plain English words. I took some ideas from Fortress, but I don’t want to go into that double-syntax stuff where the written syntax is weird, but can be rendered into readable mathematical notation.

I defined some specific characters as left/right braces that when surrounding an expression get turned into a specially named method call on that expression:
| v1 + v2 | is turned into (v1 + v2).|…| (the ellipsis is a single character, not three dots).

Unfortunately, I think using | as a left/right brace makes it hard to use | in the “or” meaning, or to allow it as a regular identifier. Maybe it’s possible, but I’m not yet good enough with grammars to know for sure. Anyway, I think the actual words and and or might be better than & and |. Certainly ^ as xor doesn’t make much sense. Of course, since I want to enable mathematical syntax, ∧, ∨ and ⊻ are aliases for and, or and xor as well.

The above code example doesn’t actually parse yet — until I improve the parser and the lexer, some additional spacing and parentheses are needed. For example, √((x ²) + y ²) would compile.

If you know any programming languages which are good at mathematical notation in plain text, let me know in the comments.

My Language Experiment

Last time I posted about how it came to be that I started designing a new programming language and writing a compiler for it. I think I should continue by explaining a bit about how I started the project, how I’m implementing the compiler, and describe the project’s goals and the language I want to create.

Around the same time I started having ideas of a more high-level object-functional language that allows a data-oriented programming style and has a runtime closer to the machine than the JVM, Geoff Reedy‘s Scala on LLVM appeared. I think that’s a really interesting project and I hope it turns into something awesome. But it is not quite what I was looking for, because it still needs the Java libraries and must carry some of the baggage that comes with Scala/Java interoperability (such as null).

But LLVM caught my attention as I had been hearing more and more about it. I went through the Kaleidoscope tutorial and shortly had my mind set that LLVM would be the library/toolset I would use as the back-end of my compiler to output machine code. By the way, I suggest that anyone who, like me, is new to implementing compilers should try that tutorial — it takes about a day to get through, and in the end you have a simple toy language with a basic REPL and JIT.

I tried to continue adding stuff to Kaleidoscope, but realized yet again that I really don’t like C++ (which is what the tutorial used). I found no Java bindings for LLVM, so I thought to learn a language that has bindings: Haskell. I Learned me a Haskell, or part of it. It is another awesome language, but the syntax and purity scare me a bit, and I just couldn’t imagine being as comfortable writing Haskell as I am writing Scala. So, back to Scala. I initially thought to output LLVM bitcode from Scala, but after reading some docs, it seemed easier to generate LLVM assembly text files (.ll), similarly to the Scala LLVM project (I even reused some bits of code from there). And indeed, it was much easier getting something running that way, and that’s how I’ll implement the first compiler.

But let’s talk about the language and the project goals as well. Actually, I want to split this project into separate phases.

  • The first phase is to create a relatively simple language that has a nice Scala-like syntax.
  • The second phase is trying out various designs and implementations based on that small base language.
  • The third phase, if I get that far, is at the moment an unknown: trying to take what I’ve learned in the previous phases and apply it to actually implement the language I want.

At some point, I will try to bootstrap a compiler in the language itself. The project might end up creating yet another esoteric language, but I hope it will turn out to have some usefulness.

The separation into phases is necessary because I still have a lot to learn and don’t want to rush into creating a full-blown general programming language. And also because I don’t want to think too far ahead at the moment. I started reading some books on programming languages and compilers: Programming Language Pragmatics for the introduction, next will be the Dragon Book and Types and Programming Languages (thanks to Daniel Spiewak for the suggestions). I have a lot of theory to go through and progress may be slow.

The language from the first phase is codenamed

Klang

The name comes from Kaleidoscope + language (no relation to Viktor Klang :)), because Kaleidoscope was what I started with, although I altered the syntax to be more like Scala. What features will make it into Klang is not yet set in stone, but I’ll list some of them here and describe the language in more detail in the next post, because this one is already pretty long. Anyway, some of the intended features:

  • A Scala-like (but not always identical) syntax
  • Type safety
    • Type inference (somewhat similar to Scala’s)
  • Functions
    • Named and default arguments
    • Passing functions as arguments to other functions
    • Implicit conversions (but they can’t be used for pimping)
    • Anonymous functions
    • Extern function declarations (for linking to native C libraries)
  • Primitive types: Byte, Short, Int, Long, Float, Double, Boolean
    • Considering having Byte be unsigned, but not sure
  • Tuple types with optionally named elements
    • function argument lists are tuples
      • can call a function with any expression that returns a tuple (if the argument list matches)
    • multiple named return values from functions
  • Arrays and UTF-8 Strings
  • Blocks and local values (no mutable local variables)
  • Control structures
    • If expressions
    • Some kind of for-loop like structure, specifics not decided yet
    • && and ||
  • Packages or namespaces
  • A tiny standard library
  • … maybe a few more things

A feature that will probably not make it is polymorphism. There will likely not be an Any or Object type that all other objects inherit from. And you will not be able to ask for the type of a value at runtime.

At least the above is roughly what I think will be in the first version of Klang. Once that version is done, I might make it available on Github under some liberal license. More in the next post.

Holy Bitcode Batman, You’re Writing a Compiler!

Recently I’ve taken an interest in programming language design and have started working on a compiler for a new language. The reasons for doing that are perhaps less practical than I’d like, because I’m a practical man, or sometimes pretend to be. At least I’m usually more interested in the application of mathematics than the beauty of mathematics itself. The same used to go for programming languages.

As the few readers of this blog may know, I’ve been experimenting with game programming now and then, but haven’t been driven enough to actually complete and publish a game. In my early years I tried Pascal and C++. I did complete some stupid little games in Pascal, but C++ was more of a hindrance to me than an enabler. At some point I learned UnrealScript while developing a Deus Ex mod and kind of liked it. Actually, I think that was part of what led me to Java and getting jobs in Enterprise Java programming. I never really thought to write games in Java, though.

When I discovered Scala I fell in love and thought it was the be-all and the end-all of programming languages and very applicable to games. Tried to write a couple of games in it, wrote some libraries for my own use and even ported a physics engine from Java and C++ to Scala. I loved doing it, loved the language features, the type system, the syntax, the preference of immutability over mutability. Perhaps most of all I loved Scala’s elegant mixture of object-oriented and functional programming.

Eventually something about making games in it started nagging me, though. The performance was good enough for the type of game I was making, but the game was also using a lot of resources for the type of game it was. It started to feel wasteful. The idea that games are a big part of what pushes hardware forward and have to get the most out of it was somehow stuck in my head.

Scala runs on the JVM, which has the nice abstraction of a big heap, and the memory management is done for you. My boss Jevgeni has given a really awesome talk on that topic titled “Do you really get memory?”. But whatever tricks the JVM does to make that abstraction work well enough for most applications, it does produce some amount of waste — extra CPU cycles and garbage objects which need to be collected to free the memory for new objects. And that is a big part of what the JVM engineers are continuously improving on. They are probably the most efficient at collecting (virtual) garbage in the world! But there are cases where that kind of heap abstraction doesn’t seem to hold well, and high-performance games are one of those cases.

My game engine used lots of immutable, short-lived objects. Things soon to become garbage, in other words. Garbage, dying a slow death in the heap. Every small Vector2(x, y) tracked by the collector, maybe living in a separate heap region from its closest friends. And looking up bits from here and there in the heap is really expensive from the CPU’s perspective. Even when the Sun Java VM started optimizing away heap allocation of very short-lived objects (enabled by escape analysis), that only gave me a small performance boost. The situation has improved, but back then I decided to try to avoid producing so much garbage. I optimized some functions manually, doing scalar replacement of Vector and Matrix objects. That made the code really ugly and unreadable, because it hid the mathematical formulas.
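
To illustrate what manual scalar replacement means here (a reconstruction in Scala, not the original engine code):

case class Vector2(x: Double, y: Double) {
  def +(v: Vector2) = Vector2(x + v.x, y + v.y)
  def *(s: Double)  = Vector2(x * s, y * s)
}

// Readable, but allocates a temporary Vector2 for v * dt and another for
// the sum: garbage on every call.
def integrate(p: Vector2, v: Vector2, dt: Double): Vector2 =
  p + v * dt

// Scalar-replaced by hand: no allocations, but the vector formula
// p + v * dt has dissolved into per-component arithmetic.
def integrateX(px: Double, vx: Double, dt: Double): Double = px + vx * dt
def integrateY(py: Double, vy: Double, dt: Double): Double = py + vy * dt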

I couldn’t stand it. Neither could I stand all those cycles wasted on GC. So I wrote an optimizer that plugged into the Scala compiler and did the scalar replacement automatically. It worked, and gave a more significant improvement than the JVM’s escape analysis optimizations did at that time; garbage production was being reduced, I was going green! But it was hard for me to maintain the optimizer, as I knew almost nothing about compilers and was just going by my nose. There were some corner cases that were hard to handle correctly, and it only worked on code written against a very specific library, optimizing away well-known constructor and method calls.

Writing that optimizer got me somewhat interested in compilers, though. I remember saying to someone during a job interview a few years ago that I like complex problems, but am not interested in the really complex stuff like compilers. Sometimes things work out as the reverse of what you think.

Anyway, while working on my game engine, I wanted to create a really powerful entity system. I wanted to use mix-ins and other nice Scala features. Reading some blog posts about game entity systems, I realized that most people seemed to be moving away from inheritance-based systems to component-based systems. Reading more about component-based systems led me to the topic of data-oriented design (PDF), which is all about thinking about data first, and about how the program processes it. A couple of presentations on the subject left an impression on me and made me realize just how expensive it actually is to make the CPU churn through megabytes of random memory.

But I didn’t want to switch to C++ or C to be able to take advantage of the kinds of optimizations that data-oriented programming can give. I had the idea that maybe there should be a language that was object-functional like Scala, but compiled down to very CPU- and cache-friendly data structures and functions, the kind of structures one would use when doing data-oriented programming manually. And I have huge respect for people who do the latter. But I noticed that some of them seemed to want a better language than C++ as well.

So, a language as expressive and type safe as Scala, similarly object-functional, but with more efficient memory access, CPU cache and parallelization friendly, enabling a data-oriented programming style with less hassle than C++. Perhaps one that could even run some functions on the GPU. What could that look like? I started thinking that maybe I should find out first-hand. [Update to clarify my goals] Well, at least I want to find out what it would be like to try to get there. I’m sure combining the type safety and power of Scala with the raw performance of C is way too ambitious for me. So I’m setting the goal way lower, but more about that in the next post.[/Update]

I’ve never really been a language geek. I’ve programmed in several different languages and at the moment am good friends with only two: Java and Scala. There are scores of other programming languages out there and I’m sure there is some language that is at least 2/3 of what I’m looking for.

But I think some bug bit me, because I couldn’t let go of the idea of creating one myself. And this blog post was a lengthy, boring preface to a series of hopefully less boring posts documenting my experiment. I have no illusions — I don’t think I’ll create the “next big language” or anything close — there are people much smarter than me who have been working on programming languages for years and decades. But this will be a fun learning experiment and maybe something useful will come out of it. Next time I will talk about the kind of language(s) I want to create. The (s) is because I want to create a really simple language first, to learn more about compilers during the process, and later expand from that base.