Slang: Review of 2012

Already the first quarter of 2013 is behind us, but I wanted to take a look at whether I accomplished the goals I set for Slang development in 2012. I won't reiterate the goals here; suffice it to say that almost none of them were completed.

I wanted to bring the language into a usable form, but there is still a lot of work to do. I wanted to make the compiler open source, but I'm still not sure at which point I would like to do that. I think I will postpone it quite a bit, because I don't have the time to run an open source project at the moment (and parts of the compiler are still full of horrible hacks). Lack of development time was, in fact, one of the things that prevented me from fulfilling the goals: I worked on Slang mostly during vacations, and when I found the drive to start at other times, I would continue for a couple of weeks and then take a break for a few months.

I suspect development will continue at a similar pace. Still, I have made some progress: sporadically during 2012, and more continuously during the last few weeks.

New Parser

I think I spent most of my Slang development time last year on the new parser, using the mixfix parser combinators I blogged about (sorry, I kind of forgot to post the third part of that series). I have improved the parser framework a lot since then, but it has the downside of being quite slow for large programs, even with packrat parsing. So I have some further work to do here, perhaps even replacing the whole parser again, but I will probably leave that to some unknown future date.

Custom Operator Precedence Rules

The new mixfix parser theoretically enabled a modular grammar: the default grammar can be relatively small, while programs define their own prefix, postfix, infix and closed operators and their precedence rules. Recently, I made that actually work. You still can't quite define a grammar per Slang source file, but you can add operator graph (.ops) files to the list of compiled files, and together with the standard rules, they define the grammar for that compiler run.

New Compiler Infrastructure

I just finished the initial work on this, a step towards separate compilation: instead of including library code in the source file being compiled, the new Slang compiler driver can now compile multiple source files at the same time, though it still requires that the library sources are available. The end result is linked together with the LLVM IR linker.

Slang IDE

I started work on an Eclipse-based Slang IDE. Currently it includes syntax highlighting, error reporting, a few quick fixes, and a quick assist for rewriting expressions to use different aliases of methods. E.g. if you write x.times(y), you can press Ctrl+1 and select "Use alias: *", and the expression will be rewritten to x * y. This is most useful with Unicode operators that can't be easily typed. Another example: you write array(1), Ctrl+1 suggests "Use alias: subscript", and selecting it turns the expression into array₁.

The IDE also shows warnings and errors with squiggly lines, and I added a few more useful warnings to the compiler and improved the messages.

Debugging

This is still in progress, but for some programs the Slang compiler can now emit DWARF debug info, and the resulting binaries can be debugged with the GNU Debugger. I was quite pleased to get that working; together with the IDE and the new compiler driver, it has made Slang much more usable for myself.

Next Steps

This time, I don’t want to give any deadlines, but the next steps are to make the tooling more usable, improve error messages, add some of the language features I didn’t get to add in 2012, etc. I think I might even release a build in the form of the Slang IDE with the compiler embedded, even if it will not be open source.

I have various language features in mind that I want to explore: proper generic types, some form of dependent types, Strings, IO, better interoperability with C, and a JVM back-end. Strings and IO are pretty much a given, and generics as well. Not sure how soon I'll get to the other things.

I think I will pretty much be doing pain-driven development: whatever currently causes me the most pain gets fixed first. Generics might actually be first in line, because adding collection methods for only one specific type of array is no fun.

Slang language goals for 2012

I’m setting myself some goals for the Slang language (I’m renaming Klang to Slang) for the coming year. By the end of 2012 I should have

  • a usable and useful language
    • programs must not crash, unless you purposefully make them crash
    • compiler must always provide nice error messages (and not crash)
    • must be able to do IO
    • must be able to do OpenGL
    • must have a small standard library
      • IO, Unicode strings, basic collections
      • parser combinators (maybe)
  • programs that are mostly pure by default, except when they’re not (doing IO etc.)
  • allocation in pools or a garbage collected heap
  • performance mostly on par with C
  • open source compiler written in Scala
  • a comprehensive test suite
    • all language features must have tests
    • all library functions must have tests
    • all compiler errors and warnings must have tests
    • some use case tests, maybe a Project Euler based test suite
  • all the things mentioned in the post where I outlined the first language (most of those exist already)

There are a few additional things that would be nice to have, but are not my explicit goals for the year. The first is a self-hosting compiler. The current compiler spews out .ll files, which then get parsed and compiled by LLVM tools such as llc, but it would be nice to work with LLVM directly. Another thing is separate compilation, which I’m probably not going to implement in the current compiler. Same goes for JIT. And lastly: support for debugging; syntax highlighters; maybe a bare-bones Eclipse IDE.

I should be able to pull off the things listed here, some sooner and some later. I still have a lot to learn about language design, type systems and compilers and am not sure at which point I’m going to open source it, but I’m aiming for June 2012 or so.

Klang: How To Solve Overloading?

I haven’t had much time to work on Klang since my vacation ended, but I’ve been thinking about how to solve overloading. Note that this post contains pseudo-code that bears resemblance to, but is not actual Klang code — these are just thoughts so far.

Given types Double and Vector2(x: Double, y: Double), there are mathematical operations that take both types as operands in different combinations, but have the same name. For example, multiplication:

a: Double * b: Double = // intrinsic operation a * b
a: Double * v: Vector2 = Vector2(a * v.x, a * v.y)
v: Vector2 * a: Double = Vector2(a * v.x, a * v.y)

Solution 1: Allow overloading of class methods

package klang
class Double {
  def *(that: Double) = // intrinsic this * that
}

package vecmath
extending Double {
  def *(v: Vector2) = Vector2(this * v.x, this * v.y)
}

Solution 1a: Allow overloading of package level functions

package klang
def *(a: Double, b: Double) = // intrinsic a * b

package klang.vecmath
def *(a: Double, v: Vector2) = Vector2(a * v.x, a * v.y)
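For comparison, here is a hypothetical Scala sketch of what Solution 1a amounts to: plain overloaded module-level functions, one per operand combination. Since a bare top-level * is awkward in Scala, the sketch uses the alias name times.

```scala
// Solution 1a, sketched in Scala rather than Klang: overloaded
// functions, resolved statically by the argument types.
case class Vector2(x: Double, y: Double)

def times(a: Double, b: Double): Double   = a * b
def times(a: Double, v: Vector2): Vector2 = Vector2(a * v.x, a * v.y)
def times(v: Vector2, a: Double): Vector2 = Vector2(a * v.x, a * v.y)
```

Each call site picks an overload at compile time; there is no dispatch at runtime.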

Solution 2: Allow for Haskell-style type classes

package klang
typeclass Multiplication[A, B, C] {
  def *(a: A, b: B): C
}
instance Multiplication[Double, Double, Double] {
  def *(a: Double, b: Double) = // intrinsic a * b
}

package klang.vecmath
instance Multiplication[Double, Vector2, Vector2] {
  def *(a: Double, v: Vector2) = Vector2(a * v.x, a * v.y)
}
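Scala's implicits can model Solution 2 directly, which makes for a convenient sandbox. A hypothetical Scala 3 sketch (not Klang; mul stands in for the * of the type class):

```scala
// Haskell-style type class encoded as a trait with given instances.
case class Vector2(x: Double, y: Double)

trait Multiplication[A, B, C] {
  def times(a: A, b: B): C
}

given Multiplication[Double, Double, Double] with {
  def times(a: Double, b: Double): Double = a * b
}

given Multiplication[Double, Vector2, Vector2] with {
  def times(a: Double, v: Vector2): Vector2 = Vector2(a * v.x, a * v.y)
}

// A single generic entry point that dispatches through the type class.
def mul[A, B, C](a: A, b: B)(using m: Multiplication[A, B, C]): C =
  m.times(a, b)
```

Note that the result type C is fixed by whichever instance the compiler selects for A and B.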

Pros and Cons

The first solution of allowing method/function overloading as in Java or Scala will certainly complicate things, especially when I introduce some form of inheritance (currently there is none). However, I'm not sure the second solution, which is more like the Haskell way of handling overloading, is any better. Especially the third type class argument (the return type) will be confusing: am I then allowed to also provide an

instance Multiplication[Double, Vector2, String]

which differs only in the last argument? Obviously it shouldn't be allowed: if selecting a type class instance based on the method's return type were possible, it wouldn't work with Klang's Scala-style type inference. Maybe the type class could have an abstract type member instead:

typeclass Multiplication[A, B] {
  type Result
  def *(a: A, b: B): Result
}
instance Multiplication[Double, Vector2] {
  type Result = Vector2
  def *(a: Double, v: Vector2) = Vector2(a * v.x, a * v.y)
}
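This type-member variant can likewise be modeled in Scala, and it exhibits the property I'm after: instances can no longer differ only in their result type. A hypothetical sketch:

```scala
// Type class with an abstract type member instead of a third parameter.
case class Vector2(x: Double, y: Double)

trait Multiplication[A, B] {
  type Result
  def times(a: A, b: B): Result
}

given Multiplication[Double, Vector2] with {
  type Result = Vector2
  def times(a: Double, v: Vector2): Vector2 = Vector2(a * v.x, a * v.y)
}

// The result type is path-dependent on the chosen instance.
def mul[A, B](a: A, b: B)(using m: Multiplication[A, B]): m.Result =
  m.times(a, b)
```

Since instance selection now depends only on A and B, two instances differing solely in Result would simply be ambiguous.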

But I haven’t really decided whether to even have type classes or type members. There seems to be a lot more ceremony required with type classes compared to simply allowing overloading methods or module-level functions. And I’m not sure if that provides any real advantage…

I could say that defining a function *(a: Double, b: Vector2) creates an implicit type class *[A, B] and an implicit instance of it *[Double, Vector2]. It would be equivalent to the manually defined type classes, wouldn’t it? Except in the manual case you could put division into the same type class with multiplication, but that would still require more ceremony than simple overloading.

And what about functions like max(a: Double, b: Double) and max(a: Collection[Double])? It would be nice to have the 2-argument version for performance reasons. Type classes wouldn't necessarily solve that unless one could abstract over argument arity.

I should probably read more literature to make better informed decisions about these things, just haven’t gotten around to those parts yet :)

What do you think? Do you have an opinion about which way to go or any pointers to reading material?

Klang: Classes and Math Notation

Just got started with adding classes to Klang. At the moment, only immutable classes can be created and there is no inheritance.

The fields and methods are more separated than usual: the data is defined at the beginning of the class body as a single tuple type. On the other hand, there is a uniform access principle, and field accesses and method calls are represented the same way, both internally and in syntax. In fact, field access is implemented much like an intrinsic method call. Intrinsics in Klang usually translate to a single LLVM instruction, and I'm going to expose some of them to users as well.

Here’s a sample program with a class:

class Vector2 {
  data (x: Double, y: Double)

  // other methods skipped

  def +(v: Vector2) = Vector2(x + v.x, y + v.y)
  def length() = √(x² + y²)

  alias |…| = length
}

def computeVectorLength(): Double = {
  val v1 := Vector2(1, 1)
  val v2 := Vector2(2, 3)
  | v1 + v2 |
}

def main() = computeVectorLength().toInt

Vector2 is a class whose data is represented as (x: Double, y: Double). Reusing tuples again :) The compiler also generates a no-op function named Vector2 that takes the same tuple type and returns an instance of the class. It’s a no-op because the instances of classes are still held in LLVM registers (LLVM has an infinite number of registers) simply as the data they represent.

Mathematical syntax

There is also some nice syntax resembling mathematical notation in the example. I want to allow as much mathematical syntax as can be represented in plain Unicode text, but also provide aliases for those who prefer plain English words. I took some ideas from Fortress, but I don’t want to go into that double-syntax stuff where the written syntax is weird, but can be rendered into readable mathematical notation.

I defined some specific characters as left/right braces that when surrounding an expression get turned into a specially named method call on that expression:
| v1 + v2 | is turned into (v1 + v2).|…| (the ellipsis is a single character, not three dots).

Unfortunately, I think using | as a left/right brace makes it hard to also use | to mean or, or to allow it as a regular identifier. Maybe it's possible, but I'm not yet good enough with grammars to know for sure. Anyway, I think the actual words and and or might be better than & and |. Certainly ^ as xor doesn't make much sense. Of course, since I want to enable mathematical syntax, ∧, ∨ and ⊻ are aliases for and, or and xor as well.

The above code example doesn’t actually parse yet — until I improve the parser and the lexer, there is some additional spacing and parentheses needed. For example √((x ²) + y ²) would compile.
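Incidentally, Scala already permits a slice of this notation: √ is a legal operator identifier there, though superscripts and the |…| brace have no Scala equivalent. A hypothetical Scala sketch of the same class:

```scala
// √ is a valid operator identifier in Scala, so the length formula
// stays close to the math. Superscripts and |…| can't be reproduced.
def √(x: Double): Double = math.sqrt(x)

case class Vector2(x: Double, y: Double) {
  def +(v: Vector2): Vector2 = Vector2(x + v.x, y + v.y)
  def length: Double = √(x * x + y * y)
}

def computeVectorLength(): Double =
  (Vector2(1, 1) + Vector2(2, 3)).length
```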

If you know any programming languages which are good at mathematical notation in plain text, let me know in the comments.

Klang: Program Structure

I thought I’d keep my posts about Klang, the language I’m designing, shorter (at least I’ll try). So in this post I’ll describe the overall program structure and just a few constructs, focusing more on current status rather than goals.

Version 1 of the Klang compiler is going to be pretty basic. I'm not even sure if it'll have separate compilation (well, it kind of does, via externs). As I started moving some functionality from the compiler into a library, I simply made the compiler prepend the library source code to the source being compiled. So at least right now, the compiler takes a single source file (or string) and turns it into a single LLVM assembly module (an .ll file), which can in turn be compiled to machine code using the LLVM toolchain.

Functions

A Klang module is currently made of function definitions and function declarations (externs). Classes will come later. A function named main is the entry point, as usual. A Hello World program in Klang might look like this:

// puts is defined in the C library
extern puts(string: Byte *): Int
def main() = puts("Hello, World!")

Byte * is a pointer to a Byte, but I added pointer types only as a quick hack to interface with native libraries. I might keep them for that reason, but limit their use to extern declarations — extern declares a function defined in another module.
You probably noticed that functions are defined similarly to Scala.

'def', name, argument list, '=', body expression

Expressions

In Klang, like in Scala, most things are expressions, i.e. they evaluate to a value. Here's what the mandatory naive Fibonacci example looks like:

def fib(n: Int): Int =
  if n == 0 then
    0
  else if n == 1 then
    1
  else
    fib(n - 2) + fib(n - 1)

Specifying the return type is necessary here because the function is recursive and the type inferencer can't handle that. Since if is an expression, the then and else branches must have the same type. An if without an else evaluates to (), the empty tuple value, also known as Unit. It is similar to void, except that void is the lack of a value.

def addOneThenSquare(n: Int) = {
  val nPlus1 := n + 1;
  nPlus1 * nPlus1
}

A block is a list of statements surrounded by { and }. Statements are usually separated by semicolons because the current parser is a quick hack implemented with parser combinators and can’t do the semicolon inference well. The statements can be definitions as well as expressions, but the last statement is what is returned and needs to be an expression.

val starts a local value definition. Local values cannot be reassigned after they are defined. I want to avoid mutable local variables for as long as I can, though in the end I will probably have to add them. := is perhaps an unusual choice for assignment and I might revert to using = for that instead.

Loops

I haven’t even implemented Arrays yet, but loops are still useful, especially as there are no tail calls yet and the Fibonacci function above would probably exit with a segmentation fault for a large value when the stack grows too big. But how do you do loops without mutable variables? I’m talking about primitive loops that must be the basis for other constructs such as comprehensions. I thought about it for a couple of days and came up with something like this (there are probably similar constructs in other languages):

def helloHello(n: Int) =
  // loop takes a tuple of parameters similar to a function argument list
  // but it must always specify default (starting) values
  loop(i := 0)
    // and also an expression that ends with continue(...) or break()
    if (i < n) then {
      puts("Hello?");
      // continue "calls" the loop again with the new values
      continue(i + 1)
    } else {
      // break returns from the loop
      break()
    }
  // yield can see the *last* values of the loop parameters
  yield i

def main() = helloHello(3)

This program will print "Hello?" three times and return 3. But this kind of loop seems very verbose, and what it does is essentially tail recursion, implemented with LLVM's phi nodes. I might rethink what kind of loops to have in the future, but for now I'll stick with this. It can be simplified a bit:

def helloHello(n: Int) =
  loop(i := 0) while (i < n) {
    puts("Hello?");
    i + 1
  } yield i

There are some potential programming errors that can easily arise from this kind of loop, but I’ll talk about that later (or maybe you can guess).

while can only come directly after loop(...) and is defined in terms of if, break and continue:

while (condition) expression = if (condition) continue(expression) else break()

This translation is done immediately after parsing, to simplify the compiler phases.
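As noted, the loop is essentially tail recursion in disguise. A hypothetical Scala sketch of what the loop/while/yield form desugars to:

```scala
import scala.annotation.tailrec

// helloHello rewritten as the tail-recursive function the loop stands
// for: the loop parameters become function parameters, `continue`
// becomes a recursive call, and `break` falls through to the yield.
def helloHello(n: Int): Int = {
  @tailrec
  def go(i: Int): Int =
    if (i < n) {
      println("Hello?")
      go(i + 1)   // continue(i + 1)
    } else {
      i           // break(), then `yield i` sees the last value of i
    }
  go(0)           // i := 0
}
```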

With these constructs, and passing functions around, one could already write a program that does something a little bit useful. I think my next focus is on adding Strings and Arrays into the mix, but my next post will probably go more into functions and tuples.

My Language Experiment

Last time I posted about how it came to be that I started designing a new programming language and writing a compiler for it. I think I should continue by explaining a bit about how I started the project, how I’m implementing the compiler, and describe the project’s goals and the language I want to create.

Around the time I started having ideas of a high-level object-functional language that allows a data-oriented programming style and has a runtime closer to the machine than the JVM, Geoff Reedy's Scala on LLVM appeared. I think that's a really interesting project and I hope it turns into something awesome. But it is not quite what I was looking for, because it still needs the Java libraries and must carry some of the baggage that comes from Scala/Java interoperability (such as null).

But LLVM caught my attention, as I had been hearing more and more about it. I went through the Kaleidoscope tutorial and shortly had my mind set on LLVM as the library/toolset I would use as my compiler's back-end for producing machine code. By the way, I suggest that anyone who, like me, is new to implementing compilers try that tutorial — it takes about a day to get through, and at the end you have a simple toy language with a basic REPL and JIT.

I tried to continue adding stuff to Kaleidoscope, but realized yet again that I really don't like C++ (which is what the tutorial used). I found no Java bindings for LLVM, so I thought to learn a language that has bindings: Haskell. I Learned Me a Haskell, or at least part of it. It is another awesome language, but the syntax and purity scare me a bit, and I just couldn't imagine being as comfortable writing Haskell as I am writing Scala. So, back to Scala. I initially thought to output LLVM bitcode from Scala, but after reading some docs, it seemed easier to generate LLVM assembly text files (.ll), similarly to the Scala LLVM project (I even reused some bits of code from there). And indeed, it was much easier to get something running that way, and that's how I'll implement the first compiler.

But let's talk about the language and the project goals as well. I want to split this project into separate phases.

  • The first phase is to create a relatively simple language that has a nice Scala-like syntax.
  • The second phase is trying out various designs and implementations based on that small base language.
  • The third phase, if I get that far, is at the moment an unknown: taking what I've learned in the previous phases and applying it to implement the language I actually want.

At some point, I will try to bootstrap a compiler in the language itself. The project might end up creating yet another esoteric language, but I hope it will turn out to have some usefulness.

The separation into phases is necessary because I still have a lot to learn and don’t want to rush into creating a full-blown general programming language. And also because I don’t want to think too far ahead at the moment. I started reading some books on programming languages and compilers: Programming Language Pragmatics for the introduction, next will be the Dragon Book and Types and Programming Languages (thanks to Daniel Spiewak for the suggestions). I have a lot of theory to go through and progress may be slow.

The language from the first phase is codenamed

Klang

The name comes from Kaleidoscope + language (no relation to Viktor Klang :)), because Kaleidoscope was what I started with, although I altered the syntax to be more like Scala. What features will make it into Klang is not yet set in stone, but I'll list some of them here and describe the language in more detail in the next post, because this one is already pretty long. Anyway, some of the intended features:

  • A Scala-like (but not always identical) syntax
  • Type safety
    • Type inference (somewhat similar to Scala’s)
  • Functions
    • Named and default arguments
    • Passing functions as arguments to other functions
    • Implicit conversions (but they can’t be used for pimping)
    • Anonymous functions
    • Extern function declarations (for linking to native C libraries)
  • Primitive types: Byte, Short, Int, Long, Float, Double, Boolean
    • Considering having Byte be unsigned, but not sure
  • Tuple types with optionally named elements
    • function argument lists are tuples
      • can call a function with any expression that returns a tuple (if the argument list matches)
    • multiple named return values from functions
  • Arrays and UTF-8 Strings
  • Blocks and local values (no mutable local variables)
  • Control structures
    • If expressions
    • Some kind of for-loop like structure, specifics not decided yet
    • && and ||
  • Packages or namespaces
  • A tiny standard library
  • … maybe a few more things

A feature that will probably not make it is runtime polymorphism. There will likely be no Any or Object type that all other types inherit from, and you will not be able to ask for the type of a value at runtime.

At least the above is roughly what I think will be in the first version of Klang. Once that version is done, I might make it available on Github under some liberal license. More in the next post.

Holy Bitcode Batman, You’re Writing a Compiler!

Recently I’ve taken an interest in programming language design and have started working on a compiler for a new language. The reasons for doing that are perhaps less practical than I’d like, because I’m a practical man, or sometimes pretend to be. At least I’m usually more interested in the application of mathematics than the beauty of mathematics itself. The same used to go for programming languages.

As the few readers of this blog may know, I've been experimenting with game programming now and then, but haven't been driven enough to actually complete and publish a game. In my early years I tried Pascal and C++. I did complete some stupid little games in Pascal, but C++ was more a hindrance to me than an enabler. At some point I learned UnrealScript while developing a Deus Ex mod and kind of liked it. Actually, I think that is part of what led me to Java and to jobs in enterprise Java programming. I never really thought to write games in Java, though.

When I discovered Scala I fell in love and thought it was the be-all and end-all of programming languages, and very applicable to games. I tried to write a couple of games in it, wrote some libraries for my own use, and even ported a physics engine from Java and C++ to Scala. I loved doing it: the language features, the type system, the syntax, the preference for immutability over mutability. Perhaps most of all I loved Scala's elegant mixture of object-oriented and functional programming.

Eventually something about making games in it started nagging me, though. The performance was good enough for the type of game I was making, but the game was also using a lot of resources for the type of game it was. It started to feel wasteful. The idea that games are a big part of what is pushing hardware forward and have to take the most out of it was somehow stuck in my head.

Scala runs on the JVM, which has the nice abstraction of a big heap, and the memory management is done for you. My boss Jevgeni has given a really awesome talk on that topic titled “Do you really get memory?”. But whatever tricks the JVM does to make that abstraction work well enough for most applications, it does produce some amount of waste — extra CPU cycles and garbage objects which need to be collected to free the memory for new objects. And that is a big part of what the JVM engineers are continuously improving on. They are probably the most efficient at collecting (virtual) garbage in the world! But there are cases where that kind of heap abstraction doesn’t seem to hold well, and high-performance games are one of those cases.

My game engine used lots of immutable, short-lived objects. Things soon to become garbage, in other words. Garbage, dying a slow death in the heap. Every small Vector2(x, y) tracked by the collector, maybe living in a separate heap region from its closest friends. And looking up bits from here and there in the heap is really expensive from the CPU's perspective. Even when the Sun Java VM started optimizing away heap allocation of very short-lived objects (enabled by escape analysis), that only gave me a small performance boost. The situation has improved since, but back then I decided to try to avoid producing so much garbage. I optimized some functions manually, doing scalar replacement of Vector and Matrix objects. That made the code really ugly and unreadable, because it hid the mathematical formulas.
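To illustrate what that manual scalar replacement meant in practice, here is a hypothetical Scala sketch (the names are made up, not from my actual engine):

```scala
case class Vector2(x: Double, y: Double) {
  def +(v: Vector2): Vector2 = Vector2(x + v.x, y + v.y)
  def *(a: Double): Vector2  = Vector2(x * a, y * a)
}

// Readable: the formula is visible, but `*` and `+` each allocate a
// short-lived Vector2 that the GC must later collect.
def integrate(pos: Vector2, vel: Vector2, dt: Double): Vector2 =
  pos + vel * dt

// Scalar-replaced by hand: only plain doubles until the final result,
// at the cost of hiding the formula.
def integrateScalarized(px: Double, py: Double,
                        vx: Double, vy: Double, dt: Double): Vector2 =
  Vector2(px + vx * dt, py + vy * dt)
```

Multiply that transformation across every vector expression in a physics engine and the readability cost becomes obvious.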

I couldn’t stand it. Neither could I stand all these cycles wasted on GC. So I wrote an optimizer that plugged into the Scala compiler and did the scalar replacement automatically. It worked, and gave a more significant improvement than the JVM’s escape analysis optimizations at that time; garbage production was being reduced, I was going green! But it was hard for me to maintain the optimizer as I knew almost nothing about compilers and was just going by my nose. There were some corner cases that were hard to handle correctly. It only worked on code written against a very specific library, optimizing away well-known constructor and method calls.

Writing that optimizer got me somewhat interested in compilers, though. I remember saying to someone during a job interview a few years ago that I like complex problems, but am not interested in the really complex stuff like compilers. Sometimes things work out as the reverse of what you think.

Anyway, while working on my game engine, I wanted to create a really powerful entity system using mix-ins and other nice Scala features. Reading blog posts about game entity systems, I realized that most people seemed to be moving away from inheritance-based systems towards component-based systems. Reading more about those led me to the topic of data-oriented design (PDF), which is all about thinking about the data first and how the program processes it. A couple of presentations on the topic left an impression on me and made me realize just how expensive it actually is to make the CPU churn through megabytes of random memory.

But I didn’t want to switch to C++ or C to be able to take advantage of the kinds of optimizations that data-oriented programming can give. I had the idea that maybe there should be a language that was object-functional like Scala, but compiled down to very CPU- and cache-friendly data structures and functions, the kind of structures one would use when doing data-oriented programming manually. And I have huge respect for people who do the latter. But I noticed that some of them seemed to be wanting a better language than C++ as well.

So: a language as expressive and type-safe as Scala, similarly object-functional, but with more efficient memory access, friendly to the CPU cache and to parallelization, enabling a data-oriented programming style with less hassle than C++. Perhaps one that could even run some functions on the GPU. What could that look like? I started thinking that maybe I should find out first-hand. [Update to clarify my goals] Well, at least I want to find out what it would be like to try to get there. I'm sure combining the type safety and power of Scala with the raw performance of C is way too ambitious for me, so I'm setting the goal much lower; more about that in the next post.[/Update]

I’ve never really been a language geek. I’ve programmed in several different languages and at the moment am good friends with only two: Java and Scala. There are scores of other programming languages out there and I’m sure there is some language that is at least 2/3 of what I’m looking for.

But I think some bug bit me, because I couldn’t let go of the idea of creating one myself. And this blog post was a lengthy, boring preface to a series of hopefully less boring posts documenting my experiment. I have no illusions — I don’t think I’ll create the “next big language” or anything close — there are people much smarter than me who have been working on programming languages for years and decades. But this will be a fun learning experiment and maybe something useful will come out of it. Next time I will talk about the kind of language(s) I want to create. The (s) is because I want to create a really simple language first, to learn more about compilers during the process, and later expand from that base.