Mixfix Operators & Parser Combinators, Bonus Part 2a

This is a short bonus post in the Mixfix Operator series. Part 1 was an introduction to mixfix operators and in part 2 we looked at them more closely in the context of a grammar for a boolean algebra and arithmetic language. The implementation of a parser for the language is coming next, but before that I thought it would be interesting to see what the grammar would look like if we removed the mixfix abstraction and mechanically converted the precedence graph to BNF notation.

It turns out this is not that hard if we turn each operator group (graph node) into a separate production and leave out the productions for the types of operators that don’t occur in that group. This is especially easy in our case, since each group contains only one type of operator. I’ll use shorthand names here for brevity.


expr ::= or | and | not | eq | cmp
       | add | mul | exp | neg | tightest

or   ::= (or | or↑) "|" or↑
or↑  ::= and | not | eq | cmp | tightest

and  ::= (and | and↑) "&" and↑
and↑ ::= not | eq | cmp | tightest

not  ::= "!" (not | not↑)
not↑ ::= eq | cmp | tightest

eq   ::= (eq | eq↑) ("=" | "≠") eq↑
eq↑  ::= cmp | add | mul | exp | neg | tightest

cmp  ::= (cmp | cmp↑) ("<" | ">") cmp↑
cmp↑ ::= add | mul | exp | neg | tightest

add  ::= (add | add↑) ("+" | "-") add↑
add↑ ::= mul | exp | neg | tightest

mul  ::= (mul | mul↑) ("*" | "/" | "mod") mul↑
mul↑ ::= exp | neg | tightest

exp  ::= (exp | exp↑) "^" exp↑
exp↑ ::= neg | tightest

neg  ::= "-" (neg | neg↑)
neg↑ ::= tightest

tightest ::= ("(" expr ")") | value

To be honest, encoding the whole graph as BNF is a lot simpler than I initially thought, and so is translating it into a combinator parser. It makes me wonder whether the mixfix grammar abstraction might be overkill. Of course, this is so easy only because we have relatively few kinds of operators: only left-associative infix, prefix and closed. If we had more operators, with more holes in them, and different types of operators in one group (which is probably unusual, though), perhaps we wouldn’t find the conversion that simple any more.
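
To make that concrete, here’s a sketch of a fragment of the BNF above (just add, mul and tightest, with ↑ spelled Up) translated into Scala parser combinators. The packrat parsers take care of the left recursion; the AST node names are my own, and a full version would cover the remaining groups in the same way:

import scala.util.parsing.combinator.{PackratParsers, RegexParsers}

object BnfSketch extends RegexParsers with PackratParsers {
  sealed trait Expr
  case class Num(n: Int) extends Expr
  case class Bin(op: String, l: Expr, r: Expr) extends Expr

  // packrat parsers support the direct left recursion in add and mul
  lazy val expr: PackratParser[Expr] = add | mul | tightest

  lazy val add: PackratParser[Expr] =
    (add | addUp) ~ ("+" | "-") ~ addUp ^^ { case l ~ op ~ r => Bin(op, l, r) }
  lazy val addUp: PackratParser[Expr] = mul | tightest

  lazy val mul: PackratParser[Expr] =
    (mul | mulUp) ~ ("*" | "/" | "mod") ~ mulUp ^^ { case l ~ op ~ r => Bin(op, l, r) }
  lazy val mulUp: PackratParser[Expr] = tightest

  // value is just an integer literal in this sketch
  lazy val tightest: PackratParser[Expr] =
    "(" ~> expr <~ ")" | """\d+""".r ^^ (s => Num(s.toInt))
}

BnfSketch.parseAll(BnfSketch.expr, "1 + 2 * 3") then produces Bin("+", Num(1), Bin("*", Num(2), Num(3))), i.e. 1 + (2 * 3).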

Plainly this simplified scheme won’t help much with user-defined operators and precedence, so I think the mixfix parser abstraction is still useful. However, in cases where there are only a few operators/operator groups, maybe the straightforward translation of this BNF form into parser combinators is preferable. If there’s room left in the next post, I’ll include an alternative implementation based on this scheme.

Mixfix Operators & Parser Combinators, Part 2

In the previous post I introduced the notion of mixfix operators. In this post we will look at them more closely, in the context of an actual grammar. In the next part we will implement the parser for this grammar, look at performance issues and try to fix them with packrat parsers.

We will implement a simple language that consists of boolean algebra and integer arithmetic expressions. The grammar for the language looks like the following (we’re only considering tokens here and assume that a lexical parser has already identified literals, identifiers and delimiters in the text):

statement   ::= expression | declaration
declaration ::= variable ":=" expression
expression  ::= ??? | value
value       ::= literal | variable
literal     ::= booleanLiteral | integerLiteral
variable    ::= identifier

What should the expression productions look like, though? In examples of parsers and grammars we can commonly find an arithmetic expression language described with concepts of ‘factor’ and ‘term’ to create a precedence relation between addition and multiplication:

expression ::= (term "+")* term
term       ::= (factor "*")* factor
factor     ::= constant | variable
               | "(" expression ")"

This seems simple, but when we add more precedence rules, it can get quite complex, especially if we are writing a parser for a general purpose programming language instead of a simple expression language and also performing semantic actions (creating AST nodes) in the parser. This also makes the set of operators rather fixed: you might have to change several grammar productions to add a new operator with a new precedence level. I didn’t even try building Slang’s precedence rules into the grammar in this fashion.
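
For reference, the term/factor grammar above translates into Scala parser combinators with semantic actions roughly like this (a sketch; the AST node names are mine):

import scala.util.parsing.combinator.RegexParsers

object TermFactor extends RegexParsers {
  sealed trait Expr
  case class Const(n: Int) extends Expr
  case class Var(name: String) extends Expr
  case class Add(l: Expr, r: Expr) extends Expr
  case class Mul(l: Expr, r: Expr) extends Expr

  // (term "+")* term, folded to the left so that a + b + c becomes Add(Add(a, b), c)
  def expression: Parser[Expr] =
    rep(term <~ "+") ~ term ^^ { case ts ~ t => (ts :+ t).reduceLeft(Add(_, _)) }

  def term: Parser[Expr] =
    rep(factor <~ "*") ~ factor ^^ { case fs ~ f => (fs :+ f).reduceLeft(Mul(_, _)) }

  def factor: Parser[Expr] =
    """\d+""".r ^^ (s => Const(s.toInt)) |
    """[a-zA-Z]\w*""".r ^^ Var |
    "(" ~> expression <~ ")"
}

Every new precedence level means another production like term, and each production drags its semantic action along; that’s where the complexity creeps in.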

Mixfix parsers still make the precedence part of the grammar, but there is a layer of abstraction there: we describe operators and their precedence rules as a directed graph, where (groups of) operators are the nodes and precedences are the edges. Then we instantiate the grammar with that particular precedence graph.

Before getting to the precedence rules in the language we are about to create, let’s look at the operators it will have. In the list below, _ means a hole in the expression that can contain any other expression that “binds tighter” than the operator in question. When a hole is closed on both the left and the right, it can contain any expression at all. Only a pair of parentheses forms a closed operator in this language.

( _ ) – parentheses
_ + _ – addition
_ - _ – subtraction
  - _ – negation
_ * _ – multiplication
_ / _ – division
_ ^ _ – exponent
_ mod _ – modulo/remainder
_ = _ – equality test
_ ≠ _ – inequality test
_ < _ – less than
_ > _ – greater than
_ & _ – conjunction
_ | _ – disjunction
  ! _ – logical not

This doesn’t include many common operators in real programming languages, but it is enough to demonstrate some interesting aspects of mixfix operators and using a DAG to describe their precedence relations. I used mod instead of % to show that operators don’t have to be symbols.

Before defining the precedence rules, let’s look at some sample expressions and how we want them to be interpreted, mostly sticking with well-known existing precedence rules, such as those in C, Java or Scala, but occasionally deviating from them:

a + b * c      = a + (b * c)
a < b & b < c  = (a < b) & (b < c)
-5 ^ 6         = (-5) ^ 6
a & !b | c     = (a & (!b)) | c
5 < 2 ≠ 6 > 3  = (5 < 2) ≠ (6 > 3)
1 < x & !x > 5 = (1 < x) & !(x > 5)

I think that’s enough examples for now. Let’s try to describe the rules behind these somewhat intuitive expectations as a precedence graph. First, we’ll put the operators into groups, where all operators in one group bind just as tightly as the others in the same group. For example, 1 + 2 - 3 will be (1 + 2) - 3 and 1 - 2 + 3 will be (1 - 2) + 3.

parentheses   : ()
negation      : - (prefix)
exponent      : ^
multiplication: *, /, mod
addition      : +, -
comparison    : <, >
equality      : =, ≠
not           : ! (prefix)
and           : &
or            : |

Negation (prefix -) is in its own group so that we can write -2 + 1. If it were in the same group as infix - and +, it couldn’t appear next to them without parentheses, because prefix operators are treated as right-associative, but most infix operators, such as - and +, are left-associative. And we can’t mix left-associative and right-associative operators of the same precedence level! Why? Take the expression

1 + 2 - 3

If + and - are left-associative, it means (1 + 2) - 3.

If - is right-associative instead, then both (1 + 2) - 3 and 1 + (2 - 3) would be valid parses!

We could read the list of operator groups above as an order of precedence, where the first group (parentheses) binds tightest and the last group (or) binds least tight. This would be mostly compatible with many programming languages and we would have a good enough set of precedence rules right there.

However, as mentioned earlier, Danielsson’s mixfix grammar scheme describes precedence relations as a directed graph. Each of the groups above is a node in the graph, and a directed edge a -> b from one node to another means: b binds tighter than a. So let’s describe these relations as a graph instead — it will be in reverse order compared to the list above, which started from the most tightly binding:

or             -> and, not, equality, comparison, parentheses
and            -> not, equality, comparison, parentheses
not            -> equality, comparison, parentheses

equality       -> comparison, addition, multiplication, exponent, negation, parentheses
comparison     -> addition, multiplication, exponent, negation, parentheses

addition       -> multiplication, exponent, negation, parentheses
multiplication -> exponent, negation, parentheses
exponent       -> negation, parentheses
negation       -> parentheses

Notice that from each group we draw the edge not into a single group, but into all of the groups that bind tighter. This is because of the non-transitivity of precedence in this scheme: each pair of operator groups that is to have a precedence relation must have an edge between them in the graph. The advantage of this is that we don’t need to describe the precedence between operators that aren’t related at all. This is one of the motivations for using a directed graph to represent operator precedence.
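
To make the representation concrete, here’s one way the groups and edges could be written down as plain Scala data (the names and types are my own, not the mixfix library’s):

case class Group(name: String, ops: Set[String])

val parentheses    = Group("parentheses", Set("()"))
val negation       = Group("negation", Set("-"))   // prefix
val exponent       = Group("exponent", Set("^"))
val multiplication = Group("multiplication", Set("*", "/", "mod"))
val addition       = Group("addition", Set("+", "-"))
val comparison     = Group("comparison", Set("<", ">"))
val equality       = Group("equality", Set("=", "≠"))
val not            = Group("not", Set("!"))        // prefix
val and            = Group("and", Set("&"))
val or             = Group("or", Set("|"))

// a -> b means "b binds tighter than a"; the relation is not transitive,
// so each group lists every group that binds tighter than it
val tighterThan: Map[Group, Set[Group]] = Map(
  or             -> Set(and, not, equality, comparison, parentheses),
  and            -> Set(not, equality, comparison, parentheses),
  not            -> Set(equality, comparison, parentheses),
  equality       -> Set(comparison, addition, multiplication, exponent, negation, parentheses),
  comparison     -> Set(addition, multiplication, exponent, negation, parentheses),
  addition       -> Set(multiplication, exponent, negation, parentheses),
  multiplication -> Set(exponent, negation, parentheses),
  exponent       -> Set(negation, parentheses),
  negation       -> Set(parentheses),
  parentheses    -> Set.empty
)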

I hope that from the names of the operators it was clear that some of them will apply only to booleans and some only to integers. For example, the & operator isn’t defined as bitwise &, only as logical conjunction. Thus, assuming that our language is strongly typed, some of the operators can’t appear in the holes of some other operators in a correct program.

A parser doesn’t do type checking of course, but with this mixfix grammar scheme it does implicitly do precedence correctness checking. For example, 4 + 5 & 6 + 4 is not precedence correct, as we didn’t define a precedence relation between the addition and and groups. And due to the parser’s precedence checking, this expression will not even parse.

If we had used a total precedence order instead, we would have + binding tighter than &. The expression would be interpreted as (4 + 5) & (6 + 4) but would probably yield a type error as & works on booleans, but + works on integers. We could write (4 + 5) & (6 + 4) ourselves and that would also parse, because we made the precedence explicit. Well, actually parentheses follow the same rules: remember that in our graph, () bind tighter than everything.

The fact that the parser only produces precedence correct expressions can be both a blessing and a curse.

On one hand, this allows us to view some unrelated groups of operators almost as sublanguages. In our case, boolean algebra and integer arithmetic. This might be good for implementing internal DSLs in the presence of extremely flexible user-defined mixfix operators. We could allow users to extend our precedence graph or even replace it completely with their own. If a DSL has boolean logic in it, but no arithmetic, it might have precedence relations to logical operators, but not to arithmetic operators. This would preclude arithmetic operators from appearing in the DSL without being surrounded by parentheses. Or the DSL could even disallow parentheses. Implementing this much flexibility in a host language is complicated, though. For example, the parser would have to know about any custom mixfix grammars defined in imported modules.

On the other hand, this puts some correctness checks at the wrong level. Arguably, a parser should only validate the syntax of a program and nothing else. If a simple mistake, such as using the wrong operator (comparable to calling a non-existent method in some languages), prevents the whole program from being parsed, it also prevents the compiler from doing other interesting and useful things, or from reporting better error messages.

So maybe this grammar scheme isn’t ideal for a general purpose programming language. I am sticking with it in Slang for now, because the scheme is relatively simple and works for me, at least as long as I’m the only user of Slang :) And perhaps there are workarounds that would still allow a precedence-incorrect expression to be accepted by the parser. But I don’t have immediate plans to allow a wide variety of user-defined mixfix operators or operator precedence.

Anyway, for our simple language, I think this scheme works well enough, as long as we don’t care whether it is the parser or the type checker that reports the errors in incorrect programs. There aren’t any useful direct precedence relations between boolean algebra operators and arithmetic operators here. Only by having equality, comparison or parentheses between them can we put them in the same expression.

Let’s look at one of the consequences of our rules more closely. Many languages, including Java and C, would put most prefix (unary) operators such as ! and - at the same level of precedence, binding tighter than all infix (binary) operators. In Java, !6 == 5 is a type error because the operator ! is bound to 6, not to 6 == 5, and ! isn’t defined on integers. In our language, it isn’t necessary to have ! at the same level as -, though. Since there is no (precedence) relation between logical and arithmetic operators, !6 + 5 will not parse. But ! does have a relation to comparison and equality tests (they bind tighter), so you can write !6 = 5 and it will mean !(6 = 5).

The precedence rule that has = binding tighter than the boolean operators is based on the assumption that booleans are rarely compared to each other, while comparisons of other types of values are often used in disjunctions, conjunctions and complements.

To get back to the question at the beginning of the post: what would the expression production in the grammar look like instead of expression ::= ??? | value? The short answer is that we replace ??? with the mixfix grammar scheme instantiated with our particular precedence graph. The long answer would probably take an entire blog post by itself. You can read more about this scheme in the Agda paper, or look at the source code of my mixfix library. The scheme looks somewhat like the parser combinators in the following pseudo-code (~ means sequential composition):

value = variable | literal
expression = mixfixGrammar(precedenceGraph) | value

mixfixGrammar(graph) = {
  // graph - the precedence graph
  // g - an operator group, node in the graph
  // op - an operator in a group

  ⋁(parsers) = // returns the result of the first parser in the list to succeed

  opsLeft(g)   = // all left-associative infix operators in g
  opsRight(g)  = // all right-associative infix operators in g
  opsNon(g)    = // all non-associative infix operators in g
  opsClosed(g) = // all closed operators in g
  opsPre(g)    = // all prefix operators in g
  opsPost(g)   = // all postfix operators in g

  operator(op) =
    if (op.internalArity == 0)
      op.namePart1
    if (op.internalArity == 1)
      op.namePart1 ~ expression ~ op.namePart2
      // expression is a recursive reference back to the "outer" production
      // these are the internal "holes" that can take any expression

  group(g)  = closed(g) | non(g) | left(g) | right(g)    // any ops in this group

  closed(g) = ⋁{ opsClosed(g) map operator }             // closed ops

  non(g)    = ↑(g) ~ ⋁{ opsNon(g) map operator } ~ ↑(g)  // non-associative ops

  left(g)   = (left(g) | ↑(g))                           // left-associative ops
              ~ ( ⋁{ opsPost(g) map operator }
                | ⋁{ opsLeft(g) map operator } ~ ↑(g) )

  right(g)  = ( ⋁{ opsPre(g) map operator }              // right-associative ops
              | ↑(g) ~ ⋁{ opsRight(g) map operator } )
              ~ (right(g) | ↑(g))

  ↑(g) = ⋁{ graph.groupsTighterThan(g) map group } // every group that binds tighter than g
         | value                                   // or the tightest "group" of values

  return ⋁{ graph.nodes map group }
}

If you don’t understand this right now, no big deal — it’s late enough that I couldn’t come up with a better representation of the actual code that would fit in this post. And if you are not familiar with parser combinators I would recommend reading Daniel Spiewak’s post on the subject, at least before continuing to the next part of this series.

As you may have noticed, the value and expression productions are referenced inside mixfixGrammar. This is no good if the mixfix library is to be a separate module, so I actually implemented this by introducing a pseudo operator group that has a custom parser. This pseudo-group is then added to the precedence graph, along with edges from every other group into that “really tight” group.

This concludes part 2. In the next part we will forget this pseudo-code and use Scala’s parser combinators and my mixfix library to implement an actual parser for the language, and maybe an AST and an interpreter as well.

Thanks to Miles Sabin and Daniel Spiewak for reviewing drafts for this series of posts.

Mixfix Operators & Parser Combinators, Part 1

Until recently, Slang’s parser really sucked. It was a quick hack implemented with Scala’s parser combinator library. Nothing wrong with that in particular, but there was a gaping hole in the grammar: no operator precedence. So to get an expression like a + b * c to mean a + (b * c), I had to add the parentheses myself. In fact, there were even more problems — some things that should have been left-associative were right-associative. This resulted in very hairy test code, with lots of parentheses everywhere.

Although I think parsers are cool, I am actually not very good at writing one for a complex grammar. I feel that I just know too little about the theory behind them or how to put it to practical use. I’ve used parser combinators before and think they are probably the easiest way for newbies like me to implement parsers, so that’s what I used. The use of symbolic names in the library might be scary at first, but I actually think parsing is one of the few contexts where lots of symbols and extremely concise code are desirable. It allows one to put a lot of code on a few lines, and when you are looking at or writing a parser, you want to see many productions of the grammar at the same time to understand what is going on. At least I do.

For Slang, I implemented something minimal that could parse the language. I had no idea how to solve operator precedence well with parser combinators, and I didn’t want to spend a lot of time studying parsers, because the next compiler phases seemed more interesting at first. But getting the parser right is important for actually using the language because it’s the first thing that processes the code and reports errors. A parser that only kind of works can be very annoying.

Thankfully Miles Sabin suggested that I should look into mixfix operator parsers, and I did. I don’t know exactly where the word mixfix comes from, so I’m assuming it means mixed fixity — operators can be prefix, infix, postfix or closed. Here are some samples:

  • prefix : -a
  • infix : a + b
  • postfix: n!
  • closed : (a)

Of course, most languages have operators with all of these fixities. The term mixfix actually refers to something more flexible than that — a mixfix operator can be seen as a sequence of alternating name parts and “holes in the expression”. A hole is where the operator’s arguments go.

_ + _ has two holes and one name part + (and is infix)
if _ then _ else _ has three name parts if, then, else and three holes (and is prefix)

In the mixfix viewpoint, many syntactic constructs might be seen as operators that can have precedence in relation to others, and this concept of many name parts can make it easier to let users define their own operators in a more flexible way than just a single prefix or infix word (as is allowed by Scala). I think this would be a really nice way of creating internal DSLs. In Slang, like in Scala, most operators are really methods. Slang doesn’t allow user-defined fixity or precedence for methods yet (or even multiple argument lists), but I may add this feature one day.
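
One way to picture this is as data: an operator is its fixity plus its list of name parts, and the holes are implied between (and, depending on the fixity, around) the name parts. A quick Scala sketch, with names of my own choosing:

sealed trait Assoc
case object LeftAssoc extends Assoc
case object RightAssoc extends Assoc
case object NonAssoc extends Assoc

sealed trait Fixity
case object Closed extends Fixity              // ( _ )              no outer holes
case object Prefix extends Fixity              // if _ then _ else _ outer hole on the right
case object Postfix extends Fixity             // _ !                outer hole on the left
case class Infix(assoc: Assoc) extends Fixity  // _ + _              outer holes on both sides

// name parts and holes alternate, so the name parts alone pin down the shape
case class Operator(fixity: Fixity, nameParts: List[String]) {
  def internalHoles: Int = nameParts.size - 1  // holes between consecutive name parts
}

val plus       = Operator(Infix(LeftAssoc), List("+"))         // _ + _
val ifThenElse = Operator(Prefix, List("if", "then", "else"))  // if _ then _ else _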

There are existing languages that support mixfix operators, such as Agda, Maude and BitC. To my knowledge, all these languages assign numeric precedence values to operators, and no language currently uses the exact scheme we will look at, although it was proposed for Agda.

Mixfix operators can be implemented in many ways, but one of the first things I found was the paper Parsing Mixfix Operators by Nils Anders Danielsson and Ulf Norell, which was a great help to me. I was able to implement the grammar scheme described in that paper on top of Scala’s parser combinators and patch it into Slang’s existing parser with minimal changes to existing productions. The characteristics of the grammar scheme described in Danielsson’s paper seemed like a good enough fit for what I wanted for Slang:

  • operator name parts and holes alternate — there can’t be two subsequent name parts or two subsequent holes

if _ then _ else _ is ok, if _ _ else _ is not

  • operator precedence is described as a directed acyclic graph (DAG), not as a total or partial ordering. You only have to describe the precedence relations where they make sense (more about this in the next post)

a directed edge '+' -> '*' means “* binds tighter than +”

  • operator precedence is not transitive

'=' -> '+' and '&' -> '=' does not mean “+ binds tighter than &

  • prefix operators are treated as right-associative

!!a = !(!(a))

  • postfix operators are treated as left-associative

n!! = ((n)!)!

  • left-associative and right-associative operators of the same precedence can’t appear next to each other

assuming +: is a right-associative +, a + b +: c would not be allowed

  • parses are precedence correct
  • implementation using left-recursion is possible, for example when using Scala’s Packrat parsers

There weren’t any restrictions I couldn’t live with (in fact, we could relax some of the above requirements and the scheme would still work for some grammars), so I decided to implement this grammar scheme for Slang, pretty much as described in the paper. Although I didn’t really grok all of the Agda code samples, the principles were easily understandable. I implemented it as a separate library (available on GitHub) that builds on top of the existing Scala parser combinator library. It might even be somewhat usable in its current state, but it needs improvement.

In the next post we’ll look at how to define a grammar for an arithmetic and boolean algebra language using mixfix operators. In the third part, we will actually implement the parser for the grammar, look at performance issues and whether we can solve them with packrat parsers.

Thanks to Miles Sabin and Daniel Spiewak for reviewing drafts for this series of posts.

Klang: Program Structure

I thought I’d keep my posts about Klang, the language I’m designing, shorter (at least I’ll try). So in this post I’ll describe the overall program structure and just a few constructs, focusing more on current status rather than goals.

Version 1 of the Klang compiler is going to be pretty basic. I’m not even sure if it’ll have separate compilation (well, it kind of does have that via externs). As I started moving some stuff from the compiler into a library, I just made it prepend the library source code to the source being compiled. So at least right now, the compiler takes just one source file (or string) and turns it into a single LLVM assembly module (.ll file), which can in turn be compiled to machine code using the LLVM toolchain.

Functions

A Klang module is currently made of function definitions and function declarations (externs). Classes will come later. A function named main is the entry point, as usual. A Hello World program in Klang might look like this:

// puts is defined in the C library
extern puts(string: Byte *): Int
def main() = puts("Hello, World!")

Byte * is a pointer to a Byte, but I added pointer types only as a quick hack to interface with native libraries. I might keep them for that reason, but limit their use to extern declarations — extern declares a function defined in another module.

You probably noticed that functions are defined similarly to Scala:

'def', name, argument list, '=', body expression

Expressions

In Klang, like in Scala, most things are expressions, i.e. they evaluate to a value. Here’s what the mandatory naive Fibonacci example looks like:

def fib(n: Int): Int =
  if n == 0 then
    0
  else if n == 1 then
    1
  else
    fib(n - 2) + fib(n - 1)

Specifying the return type is necessary here because the function is recursive, and the type inferencer can’t handle that. if is an expression, so both the then and else branches must be of the same type. An if without an else would evaluate to (), the empty tuple value, also known as Unit. It is similar to void, except void is the lack of a value.

def addOneThenSquare(n: Int) = {
  val nPlus1 := n + 1;
  nPlus1 * nPlus1
}

A block is a list of statements surrounded by { and }. Statements are usually separated by semicolons because the current parser is a quick hack implemented with parser combinators and can’t do the semicolon inference well. The statements can be definitions as well as expressions, but the last statement is what is returned and needs to be an expression.

val starts a local value definition. Local values cannot be reassigned after they are defined. I want to avoid mutable local variables for as long as I can, though in the end I will probably have to add them. := is perhaps an unusual choice for assignment and I might revert to using = for that instead.

Loops

I haven’t even implemented Arrays yet, but loops are still useful, especially as there are no tail calls yet and the Fibonacci function above would probably crash with a segmentation fault for a large argument, when the stack grows too big. But how do you do loops without mutable variables? I’m talking about the primitive loops that must be the basis for other constructs such as comprehensions. I thought about it for a couple of days and came up with something like this (there are probably similar constructs in other languages):

def helloHello(n: Int) =
  // loop takes a tuple of parameters similar to a function argument list
  // but it must always specify default (starting) values
  loop(i := 0)
    // and also an expression that ends with continue(...) or break()
    if (i < n) then {
      puts("Hello?");
      // continue "calls" the loop again with the new values
      continue(i + 1)
    } else {
      // break returns from the loop
      break()
    }
  // yield can see the *last* values of the loop parameters
  yield i

def main() = helloHello(3)

This program will print “Hello?” three times and return 3. But this kind of loop seems very verbose, and what it does is actually similar to tail recursion (it is implemented using LLVM’s Phi nodes). I might rethink what kinds of loops to have in the future, but for now I’ll stick with this. It can be simplified a bit:

def helloHello(n: Int) =
  loop(i := 0) while (i < n) {
    puts("Hello?");
    i + 1
  } yield i
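
Since the loop really is tail recursion in disguise, here’s the same function written as a tail-recursive Scala function for comparison (an analogy in Scala, not Klang code):

import scala.annotation.tailrec

def helloHello(n: Int): Int = {
  @tailrec
  def go(i: Int): Int =
    if (i < n) { println("Hello?"); go(i + 1) }  // continue(i + 1)
    else i                                       // break(), then yield i
  go(0)
}

The difference is that Klang has no tail calls yet, so the loop form is the one that runs in constant stack space.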

There are some potential programming errors that can easily arise from this kind of loop, but I’ll talk about that later (or maybe you can guess).

while can only come directly after loop(...) and is defined in terms of if, break and continue:

while (condition) expression = if (condition) continue(expression) else break()

This translation is done immediately after parsing, to simplify the compiler phases.
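
Here’s a sketch of that rewrite over a hypothetical AST (the node names are mine, not necessarily the Klang compiler’s; the compiler itself is written in Scala):

sealed trait Expr
case class If(cond: Expr, thenE: Expr, elseE: Expr) extends Expr
case class Continue(args: List[Expr]) extends Expr
case object Break extends Expr
case class While(cond: Expr, body: Expr) extends Expr

// replace every While node with the equivalent If/Continue/Break form,
// recursing into subexpressions (only the cases needed here are shown)
def desugarWhile(e: Expr): Expr = e match {
  case While(c, body) => If(desugarWhile(c), Continue(List(desugarWhile(body))), Break)
  case If(c, t, f)    => If(desugarWhile(c), desugarWhile(t), desugarWhile(f))
  case other          => other
}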

With these constructs, and passing functions around, one could already write a program that does something a little bit useful. I think my next focus is on adding Strings and Arrays into the mix, but my next post will probably go more into functions and tuples.

My Language Experiment

Last time I posted about how it came to be that I started designing a new programming language and writing a compiler for it. I think I should continue by explaining a bit about how I started the project and how I’m implementing the compiler, and by describing the project’s goals and the language I want to create.

About the same time I started having ideas of a more high-level object-functional language allowing a data-oriented programming style, with a runtime closer to the machine than the JVM, Geoff Reedy‘s Scala on LLVM appeared. I think that’s a really interesting project and I hope it turns into something awesome. But it is not quite what I was looking for, because it still needs the Java libraries and must carry some of the baggage that comes with Scala/Java interoperability (such as null).

But LLVM caught my attention as I had been hearing more and more about it. I went through the Kaleidoscope tutorial and shortly had my mind set that LLVM would be the library/toolset I would use as the back-end of my compiler to output machine code. By the way, I suggest that anyone who is new to implementing compilers like me should try that tutorial — it takes about a day to get through and in the end you have a simple toy language with a basic REPL and JIT.

I tried to continue adding stuff to Kaleidoscope, but realized yet again that I really don’t like C++ (which is what the tutorial used). I found no Java bindings for LLVM, so I thought I’d learn a language that has bindings: Haskell. I Learned me a Haskell, or part of it. It is another awesome language, but the syntax and purity scare me a bit, and I just couldn’t imagine being as comfortable writing Haskell as I am writing Scala. So, back to Scala. I initially thought to output LLVM bitcode from Scala, but after reading some docs, it seemed easier to generate LLVM assembly text files (.ll), similarly to the Scala LLVM project (I even reused some bits of code from there). And indeed, it was much easier getting something running that way, and that’s how I’ll implement the first compiler.

But let’s talk about the language and the project goals as well. Actually, I want to split this project into separate phases.

  • The first phase is to create a relatively simple language that has a nice Scala-like syntax.
  • The second phase is trying out various designs and implementations based on that small base language.
  • The third phase, if I get that far, is at the moment an unknown: taking what I’ve learned in the previous phases and applying it to actually implement the language I want.

At some point, I will try to bootstrap a compiler in the language itself. The project might end up creating yet another esoteric language, but I hope it will turn out to have some usefulness.

The separation into phases is necessary because I still have a lot to learn and don’t want to rush into creating a full-blown general programming language. And also because I don’t want to think too far ahead at the moment. I started reading some books on programming languages and compilers: Programming Language Pragmatics for the introduction, next will be the Dragon Book and Types and Programming Languages (thanks to Daniel Spiewak for the suggestions). I have a lot of theory to go through and progress may be slow.

The language from the first phase is codenamed

Klang

The name comes from Kaleidoscope + language (no relation to Viktor Klang :)), because Kaleidoscope was what I started with, although I altered the syntax to be more like Scala. Which features will make it into Klang is not yet set in stone, but I’ll list some of them here and describe the language in more detail in the next post, because this one is already pretty long. Anyway, some of the intended features:

  • A Scala-like (but not always identical) syntax
  • Type safety
    • Type inference (somewhat similar to Scala’s)
  • Functions
    • Named and default arguments
    • Passing functions as arguments to other functions
    • Implicit conversions (but they can’t be used for pimping)
    • Anonymous functions
    • Extern function declarations (for linking to native C libraries)
  • Primitive types: Byte, Short, Int, Long, Float, Double, Boolean
    • Considering having Byte be unsigned, but not sure
  • Tuple types with optionally named elements
    • function argument lists are tuples
      • can call a function with any expression that returns a tuple (if the argument list matches)
    • multiple named return values from functions
  • Arrays and UTF-8 Strings
  • Blocks and local values (no mutable local variables)
  • Control structures
    • If expressions
    • Some kind of for-loop like structure, specifics not decided yet
    • && and ||
  • Packages or namespaces
  • A tiny standard library
  • … maybe a few more things

A feature that will probably not make it is polymorphism. There will likely not be an Any or Object type that all other objects inherit from. And you will not be able to ask for the type of a value at runtime.

At least the above is roughly what I think will be in the first version of Klang. Once that version is done, I might make it available on GitHub under some liberal license. More in the next post.

Taking Advantage of Scala 2.8: Replacing the Builder

In Scala 2.8, using the builder pattern is no longer necessary (or optimal) in many cases, as the language now supports named and default parameters.

I’ll give an example of this based on ScalaBox2D. In the Box2D object model, physics Bodies are defined mostly by a set of Fixtures, which in turn are defined by a Shape, density, friction, restitution and some other parameters.

In the Scala 2.7 version of ScalaBox2D, there were mutable fixture definitions, which were used by the engine to create the actual fixtures. The user code only worked with definitions, through an internal DSL that looked something like this (note: all code examples are simplified for clarity):

body {
  box(halfWidth, halfHeight) density 1 friction 0.3f restitution 0
  computeMassFromShapes
}

For the DSL implementation, I used simple builders, which left values at their defaults if not specified:

def box(halfW: Float, halfH: Float) = new FixtureBuilder(FixtureDef(PolygonDef.box(halfW, halfH)))

class FixtureBuilder(defn: FixtureDef) {
  def userData(userData: AnyRef) = { defn.userData = userData; this }
  def material(material: Material) = { defn.apply(material); this }
  def friction(friction: Float) = { defn.friction = friction; this }
  def restitution(restitution: Float) = { defn.restitution = restitution; this }
  def density(density: Float) = { defn.density = density; this }
  def filter(filter: FilterData) = { defn.filter = filter; this }
  def sensor(isSensor: Boolean) = { defn.isSensor = isSensor; this }
  def define = defn
}

The FixtureDefs are mutable mostly to simplify the builder. In Scala 2.8, I can use named and default parameters, and drop some more lines of code by not having builders at all. As a bonus, I can easily make the definitions immutable.

def fixtures(fd: FixtureDef*) {...}
val fixture = FixtureDef // a shorthand to the companion object of FixtureDef
val box = BoxDef // a shorthand to a BoxDef object that creates PolygonDefs

case class FixtureDef(
  shapeDef: ShapeDef,
  /** The friction coefficient, usually in the range [0,1]. */
  friction: Float = 0.2f,
  /** The restitution (elasticity) usually in the range [0,1]. */
  restitution: Float = 0f,
  /** The density, usually in kg/m^2. */
  density: Float = 0f,
  /** A sensor collects contact information but never generates a collision response. */
  isSensor: Boolean = false,
  /** Contact filtering data. */
  filter: FilterData = FilterData.Default,
  /** Use this to store application specific fixture data. */
  userData: AnyRef = null
)

As you can see, FixtureDef is now a case class with immutable parameters that have default values. Previously it looked very similar, but had only one parameter (ShapeDef) and all fields were mutable. The usage of the “DSL” now becomes a little more verbose (maybe I shouldn’t even call it a DSL any more), but I think it also becomes easier to understand for someone who knows the language but not the library: there are fewer moving parts, and it uses built-in features instead of extra code:

body {
  fixtures(
    fixture(box(halfWidth, halfHeight), density = 1, friction = 0.3f, restitution = 0)
  )
  computeMassFromShapes
}

Deleting code while maintaining functionality always makes me glad and this is one of those cases. I guess there may be some more complex cases where builders may still work better, but for simple things like the above example, I really like the named and default arguments feature.

Note: moving ScalaBox2D to Scala 2.8 is still a work in progress for me and there may be some further changes to this “DSL” as well.

A Simple Permission Based Security Library in Scala

I started writing a simple security library in Scala that supports permission checks similar to those in the java.security package, but is a lot simpler to extend and use. It is meant for securing application code and model objects based on user roles and permissions, not for preventing malicious code from executing. I personally like the basic concept of how java.security.Permissions work, but that implementation is not suitable for handling user permissions, for several reasons. The programming model with PAW (the current code name for the library; it may change) is simple:

Extend the Permission trait in objects or case classes to define your own permissions. The Permission trait defines only one method:

def implies(that: Permission): Boolean

This is very similar to how java.security works: permissions can imply other permissions. Testing if a permission is implied is the same as testing if a user has a permission. For your own permission classes you only have to implement the implies method. But if you have simple permissions that don’t imply each other, but only themselves, you can just do this:

object MyPermission extends SimplePermission // A SimplePermission only implies itself and nothing else
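
For permissions that do imply others, the logic goes into implies. A sketch with hypothetical permission classes (ModulePermission and ModuleAdmin are mine, not part of the library):

// a plain permission scoped to a module; implies only itself
case class ModulePermission(module: String) extends Permission {
  def implies(that: Permission): Boolean = that == this
}

// an admin permission that implies every permission of the same module
case class ModuleAdmin(module: String) extends Permission {
  def implies(that: Permission): Boolean = that match {
    case ModuleAdmin(m)      => m == module
    case ModulePermission(m) => m == module
    case _                   => false
  }
}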

Coming versions of the library may provide convenience classes or traits for hierarchically named permissions, CRUD permissions etc.

Permissions are grouped into PermissionSets, and they can be nested because PermissionSets are Permissions as well. There are two special Permission sets: AllPermissions and NoPermissions. The former can be used for “superadmin” users or during development; the latter can be used for guest users for example.

When you have defined your permissions, you need to set up a SecurityContext. JAAS integration, which gets the user Principal from JAAS, but stores permissions in a ThreadLocal, is provided with the library, but other implementations (such as Spring Security) can be plugged in. To set up the JAAS Context, you need to make sure that JAASSecurityContext is loaded, by making your code refer to it somewhere during initial setup:

init {
  // You could also skip this and just use JAASSecurityContext everywhere instead of SecurityContext
  JAASSecurityContext
}

You can ask SecurityContext for the current user Principal (this is still java.security.Principal) and its permissions. But most of the time you should need just two methods:

SecurityContext has MyPermission // SecurityContext.has(MyPermission) if you prefer

will tell you if the user has the permission or not. However, this still requires you to write an if-else statement if you want to throw an exception in case the user doesn’t have the required permissions. You can also do this:

SecurityContext mustHave MyPermission // will throw AccessControlException() if the permission is not implied

SecurityContext mustHave List(MyPermission, YourPermission) // must have both of them

Mostly, that’s it. You can write one-liners to check for user permissions, and IMHO this is in most cases superior to using Java EE annotations like @RolesAllowed("storeManager", "truckDriver", "cleaner"). You can go into as much detail as required, and you can put the permission checks anywhere in your code, including unmanaged objects.

I personally strongly prefer this security programming model. When I’ve had to use the Java EE role-based stuff I’ve usually had to “hack in” permissions by representing them as roles, which seems conceptually wrong. I haven’t taken a look at Spring Security for a while, but it seemed to follow a model similar to Java EE when I last tried it (years ago).

Of course, my library currently doesn’t cover roles or authentication. These can be handled by Spring Security, JAAS or something else, and may require custom integration work. But some of these features may also make it into the PAW library in the future.

Feedback about the library would be greatly appreciated. Go ahead and take a look at the sources on GitHub; it’s only about 250 lines of code currently, half of which are ScalaDoc comments. The sources and binaries are also available in my Maven repo.

PS. One thing to consider: the implication of Permissions is not transitive by default, i.e. A implies B and B implies C does not mean A implies C. When you test SecurityContext has C but the user has only A, an instance of B may not even exist. A.implies(that) is coded so that it returns true if the argument is B, but there is no way to ask A for a list of all the permissions it implies, and if it does not directly imply C, then C is simply not implied. Can you think of a good solution for this?