Category Archives: Java

Options for getting JRuby 1.6 to use Ruby 1.9 Syntax

In 2012 and beyond you really want to be using Ruby 1.9 syntax (as per the standard Ruby implementation, MRI). JRuby 1.6 uses Ruby 1.8 syntax by default, but this can be changed to 1.9. There are a bevy of different ways to do this. I’ll outline these here, and give you my recommendation on what to do.

Command Line parameter

The first way to set the Ruby syntax version is to call JRuby with a --1.9 parameter, thusly:
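A minimal invocation (the script name is hypothetical):

```shell
jruby --1.9 my_script.rb
```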

This I don’t like at all because it doesn’t work with irb, and you have to type different things when using MRI and JRuby.

RUBYOPT Environment variable

A second option is to set the RUBYOPT environment variable to --1.9.
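In a Bourne-style shell that looks like:

```shell
export RUBYOPT=--1.9
```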

This works for irb, but will actually prevent KRI from starting up as KRI doesn’t support the –1.9 parameter (KRI is Ruby 1.9 so the parameter doesn’t make sense).

JRUBY_OPTS Environment variable

Another option is to set the JRUBY_OPTS environment variable.
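It is set the same way as RUBYOPT, but only JRuby pays attention to it:

```shell
export JRUBY_OPTS=--1.9
```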

This works great for JRuby, irb in JRuby, and MRI.

.jrubyrc file

The next useful option is a user-specific .jrubyrc file, which JRuby reads before starting up. Keep it at ~/.jrubyrc.
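A minimal ~/.jrubyrc, assuming JRuby’s property-file syntax for this file, would contain:

```
compat.version=1.9
```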

This works well if you are working in an environment where you will always be running 1.9 compatible code. It’s what I currently do.


If you are using rvm there are a number of ways to do things.

First, you can use a .rvmrc to set the JRUBY_OPTS variable, or a PROJECT_JRUBY_OPTS environment variable.
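A sketch of a project .rvmrc taking the first approach (the exact JRuby version string is an assumption):

```shell
rvm use jruby-1.6.7
export JRUBY_OPTS=--1.9
```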

The second option is to enable rvm jruby hooks.

Hooks live in $rvm_path/hooks.

In rvm 1.14.3 there are a series of after_use hooks, including some after_use_jruby hooks. Inspect these to see what they do (after_use runs all the other after_use hooks, and the after_use_jruby scripts will get some JRuby magic happening). You could put one of the earlier options here to help make JRuby work.


In conclusion, my recommendations for getting JRuby 1.6 to use Ruby 1.9 syntax are:

  1. use a .jrubyrc file wherever possible
  2. use a .rvmrc for situations where you need different versions of JRuby.

Executing Play! from outside of Play! code

As I’ve said earlier, I think that the Play! framework is lovely. It makes it easy to develop and write code quickly. One of the ways that it enables this is through performing runtime bytecode enhancement of your code. This makes execution of your code somewhat non-trivial when coming from a non-Play! context. Play! aims to meet all your needs, but use cases exist where it is important to mix non-Play! code with Play! code, and have your non-Play! code call into Play!.

Having said that this is non-trivial, it is reassuring to know that the process to do it is very straightforward.

  1. Create a subclass of play.Invoker.Invocation.
  2. Override the public void execute() method.
  3. Call the run() method of the invocation.

Invoker.Invocation invocation = new Invoker.Invocation() {
    public void execute() throws Exception {
        // do stuff with Play! here
    }
};
invocation.run();
With this simple snippet of code, it is possible to have non-Play! code easily and cleanly call your Play! application code.

Eleven reasons to use the Play! Framework for Java Web Development

The Play! Framework is a great tool for rapidly building Java web applications. Play! takes many of the ideas from the dynamic languages world (Rails and Django) and brings them to Java web development. Reasons to consider Play! for Java development are:

  1. Rapid development via a local development server that automatically compiles your java code for you. It’s amazing how good it is to develop like this, and what a difference the rapid feedback loop makes.
  2. A good clean MVC framework.
  3. Nice testing support baked in.
  4. A useful routing table to make clean URLs easy to work with.
  5. A focus on REST, but no slavish observance of it.
  6. Built-in simple JSON support.
  7. A good module framework with useful modules, including a “CRUD” module and a Scala module currently under development.
  8. An interesting mix of Java class enhancement that makes it easy to work with code, and then have the enhancer do some of the hard work of ensuring that multiple threads are handled well.
  9. Deployment to a range of platforms, including JEE Servlets (Play! 1.0.2 has been tested on containers such as Tomcat, Jetty, JBoss and IBM WebSphere Portal 6.1) and Google App Engine (GAE).
  10. Enhancements to JPA which make it really easy to work with.
  11. An active and supportive community. There is the right balance between having strong opinions about the “Play!” way of doing things, and helping people to get things done.

Play! makes Java web development fun and productive. The feedback loop is really quick, and much of the boilerplate code is removed. It’s well worth considering for any application you want to write in Java.

Take a look at the video, and work through the tutorial to get a feel for what development with Play! is like.

Making the Home and End Keys work in Eclipse 3.4 on Apple Mac OSX

Hidden in the comments of the Starry Hope article Mac Home and End Keys are some instructions for making the Home and End keys work as begin-line and end-line in Eclipse. I've done all the other tricks to make this work on my Mac, so I was getting really frustrated with Eclipse. Double Home and double End are common key combinations for me in IntelliJ and Eclipse on Windows, so the current behaviour of jumping to the beginning or end of the file drives me crazy. The details differ slightly in Eclipse 3.4.1, so I'll list the steps I followed below.

  1. Open the Eclipse preferences pane.
  2. Go to General -> Keys.
  3. In the filter, type "line start" and note that there will be existing bindings for when editing text.
  4. Select Line Start, type Home, and ensure that the "When" field stays as "Editing Text".
  5. Apply.
  6. Follow this process for Select Line Start, Line End, and Select Line End.

After doing this, expect your anger at eclipse on Mac to decrease to much more manageable levels.


A Review of 5 Java JSON Libraries lists 18 different Java libraries for working with JSON (Flexjson gets a double mention). These provide varying levels of functionality, from the simplest (the default org.json packages) to more comprehensive solutions like XStream and Jackson. Join me on a quick review of some of these, focusing on those which have friendly licenses and meet my requirements. If you are lazy, you can fast-forward to my summary.

My Requirements

  1. Serialises and Deserialises JSON
  2. Lightweight and Simple
  3. Runs on Java 1.4
  4. Friendly license

The contenders

  1. org.json
  2. Jackson
  3. XStream
  4. JsonMarshaller
  5. JSON.simple

Serialises and Deserialises JSON

This might sound like an obvious requirement, but I’ve seen at least one library which was completely focused on spitting out JSON, without any support for reading JSON. I’m actually using this as a pre-requisite for inclusion in my comparison. If a library can’t read AND write JSON, I’m not going to consider it.


Lightweight and Simple

I’ll begin by stating that my actual use case is to operate within a plugin for EditLive!. I don’t need an all-singing, all-dancing JSON serialisation/deserialisation library. There are some very cool libraries out there that do awesome stuff, but all I need to do is read and write JSON data.

Coupled with this, I’ll want to keep the memory footprint pretty low, so I want to work with Java streams without necessarily pulling in the whole serialised object if I don’t need it.

Runs on Java 1.4

Yep, it’s still out there. Thankfully Java 1.4.2 has reached its EOL, but businesses can still request patches, and there are most definitely still Ephox clients running on this JRE, even though more recent JREs work so much better. (Side note: if you have the option of upgrading your JRE to Java 6, please do it; the children in Africa will be much happier. Every time someone runs up a 1.4 JRE, a puppy dies.) Java 1.4 is in its final death throes, but it is still kicking.

Friendly License

For Ephox to make money from the product/component that uses JSON (gotta think about the $$$ at the end of the day), I’ll need to make sure that the license is non-viral and enterprise-friendly. Apache license good. GPL bad. (Sorry, FSF.)


So having run through the requirements, we can now consider the options. For each library, I’ll provide a simple table.

The metrics I’m using to judge the libraries are included in the table. The crudest metric I’ve got is the number of classes. I’m more than happy to admit that this is a very crude way to measure how lightweight a library is, but it does provide an OK rough heuristic, particularly given that there are order-of-magnitude differences.


org.json

The granddaddy of them all. This comes pretty close to being a reference implementation. It provides a nice simple API (7 classes), doesn’t try to do any magic, and just makes sense. I’ve used it before when working with small amounts of data. Unfortunately it doesn’t provide any streaming goodness.

Classes: 7
Streaming support: No
Friendly License: Yes
Java 1.4: Yes


Jackson

Jackson advertises itself as a fast, powerful, conformant JSON processor. It provides heaps of features, and looks to be a good tool for reading and writing JSON in a variety of ways (see the Jackson tutorial for more). The drawback of Jackson for my purposes is that it isn’t exactly svelte at around 250 classes.

Classes: ~250
Streaming support: Yes
Friendly License: Yes
Java 1.4: Yes


XStream

XStream gets a mention because it’s cool :). I haven’t really considered it because it provides more of a direct object serialisation format, which isn’t quite what I’m looking for. Also, its heritage as an XML serialisation format shows, and it likes Java 5 much better. The ability to go directly between JavaBeans and JSON is cool, but I don’t need this magic or the 200+ classes that come with it.

Classes: >200
Streaming support: Yes
Friendly License: Yes
Java 1.4: Yes

Json Marshaller

Json Marshaller sells itself (it almost sounds like a boilerplate project description by now) as a “Fast, Lightweight, Easy to Use and Type Safe JSON marshalling library for Java”. It’s been under consistent active development for a number of years, and looks to be headed in the right direction. Unfortunately the current version has three deal-stopping flaws for my environment at the moment.

  1. It requires Java 5.
  2. It has a dependency on ASM (the developers are looking to remove this dependency).
  3. While it hasn’t quite piled on the bulk of XStream or Jackson, it still has a few too many classes for me to consider.

These constraints make it not quite fit for my purposes, but like all decisions, it depends on your own situation.

Classes: ~50
Streaming support: Yes
Friendly License: Yes
Java 1.4: No


JSON.simple

JSON.simple advertises itself as “a simple Java toolkit for JSON”. It provides reading from and writing to JSON streams. It’s lightweight and focused on generating JSON from Java code. The critical feature it provides is support for Java IO readers and writers.

Classes: 12
Streaming support: Yes
Friendly License: Yes
Java 1.4: Yes
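To give a feel for the API, here is a sketch of writing and reading with JSON.simple. It needs the json-simple jar on the classpath, and the data values are made up:

```java
import java.io.StringWriter;

import org.json.simple.JSONObject;
import org.json.simple.JSONValue;

public class JsonSimpleSketch {
    public static void main(String[] args) throws Exception {
        // JSONObject is a Map underneath, so building one is plain Java
        JSONObject obj = new JSONObject();
        obj.put("library", "JSON.simple");
        obj.put("classes", Integer.valueOf(12));

        // Write to any java.io.Writer -- this is the streaming support
        StringWriter out = new StringWriter();
        obj.writeJSONString(out);

        // Read it back; a java.io.Reader works here too
        JSONObject parsed = (JSONObject) JSONValue.parse(out.toString());
        System.out.println(parsed.get("library"));
    }
}
```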


For the interested, here’s a table that summarises my findings.

                     org.json   Jackson   XStream   Json Marshaller   JSON.simple
  Classes            7          ~250      >200      ~50               12
  Streaming support  No         Yes       Yes       Yes               Yes
  Friendly License   Yes        Yes       Yes       Yes               Yes
  Java 1.4           Yes        Yes       Yes       No                Yes



If you are looking for a simple lightweight Java library that reads and writes JSON, and supports Streams, JSON.simple is probably a good match. It does what it says on the box in 12 classes, and works on legacy (1.4) JREs.

Choosing a data storage format

In case you haven’t noticed, XML is not a silver bullet. (google xml+silver+bullet). It is not, and should not be an automatic choice when thinking of a data storage format. The ubiquitous libraries for working with XML are often hard to use, and are often overkill for a simple storage format. In today’s world, I’d suggest that the following options should be considered (at least briefly).

  1. Native Object Serialisation
  2. Custom format
  3. XML – Extensible Markup Language
  4. YAML – YAML Ain’t Markup Language (obviously created by geeks, given the recursive name)
  5. JSON – JavaScript Object Notation

Join me in having a look at these formats, and I’ll let you know some of the issues to consider. The main problem I’m solving is for data that belongs to your own application. I’m not considering databases or interoperability.

Native Object Serialisation

Consider this briefly before running away. I’m particularly familiar with the idea of Java object serialisation. I’ve used Prevayler in the past for storing Java objects, as well as XML (so while I’m having a dig at Java object serialisation in general, I’m not specifically having a go at Prevayler).

While the use of native object serialisation is often easy, it has costs: it makes the content unreadable by humans, couples the data storage to your implementation language, and can create object migration issues. These costs will typically outweigh the benefits. Having human-readable data to aid debugging would be reason enough not to use native object serialisation, even if there were nothing else.

Custom Format

The use of a custom simple text format should not be discarded out of hand. The lack of any third-party dependencies is a useful feature, and should be considered. That said, if you have a library that does the parsing for you, that should not be sneezed at.


XML

As Wikipedia says, “XML is a general-purpose specification for creating custom mark-up languages” (Wikipedia on XML). Parsers and tools exist for many platforms and environments, which makes it a useful tool when you want to share information between different environments. While a good tool, the syntax is verbose, and can be hard for humans to read.

XML has influenced the birth of two more recent notations which are useful for data storage: YAML and JSON.


YAML

YAML purports to be “a human friendly data serialization standard for all programming languages”. It has a well-defined specification (YAML Spec), and makes for an easy-to-understand data storage format. Implementations of YAML exist for a wide range of languages, including Java, C++, Ruby and Javascript. It’s been around for a while, and has a decent amount of uptake. If it wasn’t for JSON, it would probably be a good default choice.


JSON

At first glance JSON seems much less suitable than YAML for languages other than JavaScript. The kicker against it is that it has “JavaScript” in the name, which has always made people feel icky. That said, it does make for a good cross-platform format: it is human readable, and is implemented on a wide range of platforms.

JSON also has the advantage of mindshare, and is slightly more familiar to developers than YAML. Every developer who has had anything to do with the web has done stuff with JavaScript, so the basic format will be familiar to them. Also in JSON’s favour is the fact that JSON and YAML are syntactically very close (see Redhanded); JSON appears to be very close to a subset of YAML (Ajaxian). In addition, the general applicability of JSON is higher: if there is any possibility of your data being touched by JavaScript, JSON is a very good option because of the native support in the language.

These factors combine to make JSON an excellent choice.


Tim Bray makes a good case for JSON being an almost automatic choice, based on your circumstances. You’ll still need to think about the pros and cons of the different technologies for your situation, but you’ll often find that JSON is a good format to use for data storage.

Fixing “No backend servers available” running Vignette on WebLogic

After spending a considerable amount of time looking at WebLogic and Vignette groups and sites on the net, I finally worked out the cause of a recent “No backend servers available” error message which was occurring when trying to access the VCM 7.5 AppConsole on a recently installed version of Vignette. In the end Google didn’t know the answer to my question. It’s time to teach Google a lesson.

So after getting quite frustrated with what was happening, being unable to find documentation, and finding that a reinstall wasn’t helping much, I pulled out a developer’s true friend and started poking around with the runtime services console.

After playing around with the runtime services, I was able to work out that the default install of Vignette was setting up a cluster containing the single server. The combination of suspending and restarting my VMware virtual machine was leading to the WebLogic server not being started correctly, resulting in the “No backend servers available” error when trying to visit the AppConsole.

This problem was resolved by following two simple steps:

  • being patient after a restart, allowing the system to do all its checks (waiting about 5 minutes worked for me), and then
  • simply starting the server using the runtime services console: Clusters -> [cluster name] -> Control -> Start/Stop -> Startup.

So if you end up in a situation where you are seeing “No backend servers available” with Vignette and WebLogic, don’t despair, it can be resolved by simply starting the servers in your cluster (even if it’s a cluster of 1) after patiently waiting (about 5 minutes) for the server to get into a stable state.

Thankfully Google knows this too now.

Reducing the Startup time for IBM WebSphere Portal

After spending too much time today waiting for WebSphere Portal to restart (due to a nasty bug causing Portal to crash when I was working with it) I've been revisiting how to improve the startup time.

The key step in doing this is to uninstall portlets that aren't needed (see for a list). For lazy developers (read: me) there is a series of scripts that have been developed to do this. The article Installing and configuring WebSphere Portal V6.0 Servers for development with Rational Application Developer V7.0 and Rational Software Architect V7.0 includes these scripts in a zip file (search for in the article). The article also has some good suggestions for memory and JVM settings for your server.

I have known about the existence of this process for a little while, but it took me a little while to find the scripts just now (when tuning a development server). Google was having trouble pointing me to the right article, so I thought for my own sake I'd try and help Google find this in the future. (Even if Google did find the right spots, the title of the developerWorks article wouldn't have made me think that it had the nugget of information about IBM WebSphere Portal startup performance.)

SSH Tunnelling is your Friend

I've been using SSH tunnels for a while now. I first encountered the technique when setting up my e-mail client while doing research at the ISI. The ISI are understandably quite security conscious, so an SSH tunnel was required to access my mail on my Mac. I'll admit that I set this up initially without really having a clue, simply following the directions of the guys at the ISI who knew what they were doing. I had a couple of ssh commands sitting in an e-mail (and more often than not in my history), which I would simply refer to and reuse.

More recently at Ephox I've been doing a fair bit of SSH tunnelling. I've been doing this to access the UK demo server for E2, exposing the WebSphere admin and WCM portal URLs via the tunnels. This process has worked relatively well for a couple of weeks, but at the moment I'm going through some pain, as the transfer of my EAR file from the local build to the remote server is taking between 15 and 20 minutes. To get around this for some rapid bug-fixing work, I've entered the next stage of my SSH tunnelling jujitsu:

  • ssh to the remote server
  • ssh back to the local Brisbane office, setting up a tunnel into the Subversion server
  • then use Subversion to check out the codebase.
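The steps above can be sketched as follows; the host names, user names and repository path are all made up for illustration:

```shell
# From my local box, log in to the remote demo server
ssh deploy@demo-server.example.com

# From the demo server, ssh back to the Brisbane office, forwarding
# a port on the demo server to the internal Subversion server
# (svnserve listens on port 3690 by default)
ssh -L 3690:svn.internal:3690 me@office.example.com

# Still on the demo server, check out over the tunnel
svn checkout svn://localhost/project/trunk
```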

After having performed this setup work, I can easily make my changes on my local box, update the remote server via svn, then build and deploy from the remote server. The build and deploy are currently too manual and could use some automation, but I'm happy to wear the pain for now, having dropped my deploy process from 20 minutes to under five.

My current next step is to automate the rest of the process.  Assuming my recent foray into sysadmin land didn't completely kill the server, this should be done very soon.

IntelliJ Analyze Stack Trace

The IntelliJ Analyze Stack Trace feature is a good tool to add to your bag of tricks.

Simply copy a stack trace to the clipboard, then go to IntelliJ -> Analyze menu -> Analyze Stack Trace.

You will see the option to normalize the stack trace (which is worth choosing). After pressing OK, you will see the output in the Run window, which will allow you to navigate to the classes inside your current project.

This is a nice tool for debugging stack traces you receive from support, or stack traces originating from a remote server.