Category Archives: Development

Things to think about when building a new web project

I’ve been thinking and talking with people about building webapps. Every project has its own set of contexts to consider and questions to work through. Here are some questions to start with.

  • Single page vs multi-page
  • JSON HTTP API for data and resources vs generating server side html
  • Big framework vs roll your own
  • Angular/Ember/Knockout
    • testing/learning/state today vs tomorrow
  • javascript module system
    • requirejs/browserify/framework based
  • What server-side frameworks/technologies do you want to use
  • What do you want to learn?
  • CSS Framework for getting started
    • foundation/bootstrap/pure css
  • mobile strategy (responsive design)

For many of the questions and options above I have strong opinions about the answers (as I’m sure many people do). Thinking through them is an important thing to be doing on any project.

What questions do you think about?

How to use any JavaScript library with RequireJS 2.1

I’ve been using RequireJS for a while, having cut my teeth on 0.27.0 and gotten quite familiar with the order plugin that was a part of life when using non-RequireJS libraries.

RequireJS 2.1 provides a different tool for including third-party libraries: a “shim” object that forms part of the requirejs config.

With the shim it is easy to add dependencies on non-RequireJS pieces of code. Take a look at the documentation on the shim config (http://requirejs.org/docs/api.html#config-shim) for details, or see the examples below.

Using Twitter Typeahead with Require.js

http://twitter.github.io/typeahead.js/

shim: {
  "typeahead": ["jquery"]
}

Typeahead depends on jQuery being loaded first. Because its API hangs off jQuery, we don’t need to worry about exporting anything.

Using Twitter Hogan with Require.js

http://twitter.github.io/hogan.js/
shim: {
  "hogan": { exports: "Hogan" }
}

hogan doesn’t depend on anything being loaded before it. It exposes itself on the global namespace as Hogan.

Using Twitter Bootstrap with Require.js

http://twitter.github.io/bootstrap/javascript.html

shim: {
  "bootstrap": ["jquery"]
}

Twitter Bootstrap depends on jQuery being loaded first. Because its API hangs off jQuery, we don’t need to worry about exporting anything.

URI with Require.js

http://medialize.github.com/URI.js/

shim: {
  "URI": ["jquery"]
}

This pushes a jQuery dependency into URI. URI.js has magic in it to detect whether AMD is being used, and will define itself as an AMD module.

SerializeJSON jQuery plugin with Require.js

https://github.com/marioizquierdo/jquery.serializeJSON

shim: {
  "jquery.serializeJSON": ["jquery"]
}

serializeJSON depends on jQuery being loaded first. Because its API hangs off jQuery, we don’t need to worry about exporting anything.

D3 with Require.js

http://d3js.org/

shim: {
  "d3": { exports: "d3" }
}

D3 doesn’t have any dependencies and exports itself with the d3 global object.

Rickshaw with Require.js

http://code.shutterstock.com/rickshaw/

shim: {
  "d3": { exports: "d3" },
  "rickshaw": { exports: "Rickshaw", deps: ["d3"] }
}

rickshaw depends on d3, and exposes itself with the Rickshaw global namespace.
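Putting several of these shims together, a complete configuration might look like the sketch below. The paths values are my assumptions about where the library files live in your project; adjust them to match your layout.

```javascript
// A sketch of a combined requirejs config using the shims above.
// The paths are hypothetical -- point them at your actual files.
var config = {
  paths: {
    jquery: "lib/jquery",
    typeahead: "lib/typeahead",
    d3: "lib/d3",
    rickshaw: "lib/rickshaw"
  },
  shim: {
    // typeahead just needs jquery loaded first; no export needed.
    typeahead: ["jquery"],
    // d3 exposes a global `d3` object.
    d3: { exports: "d3" },
    // rickshaw needs d3 first and exposes a global `Rickshaw`.
    rickshaw: { exports: "Rickshaw", deps: ["d3"] }
  }
};

// In the browser you would then call:
//   requirejs.config(config);
//   require(["jquery", "typeahead", "rickshaw"], function ($, ta, Rickshaw) { /* ... */ });
```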

Notes From Yehuda Katz’s visit to Brisbane

Yehuda Katz (http://yehudakatz.com/) had a brief visit to Brisbane, Australia, giving a public presentation and a more private breakfast meeting. In this blog post I go over some of the things that struck me as particularly interesting or worth thinking about.

For those of you who don’t know Yehuda, he is an opinionated and very active developer. Yehuda is a member of the jQuery and Ruby on Rails core teams, and he is also one of the founders of Ember.js (a framework for building rich MVC JavaScript applications – http://emberjs.com).

In the public talk Yehuda went through Ember.js, talking through the framework’s paradigm and walking through a demo of some of its key features. It looks like an interesting option for JavaScript applications. It’s on the brink of going 1.0, but already has some high-profile applications built on it. Apart from seeing the key elements of Ember and how it’s used, it was very interesting to see the way people are using Ember.js with D3: http://corner.squareup.com/2012/04/building-analytics.html

I’m definitely going to keep my eye on the framework as it moves forward.

In his spare time Yehuda is working to push the future of the web in ways that help facilitate rich applications. He is doing this in a couple of ways:

  1. he is a member of the W3C TAG
  2. he is working to influence members of the chrome team to build things well.

W3C TAG

The Technical Architecture Group works to specify and guide the architecture of the web moving forward. It has the feel of a good internal architecture group in an organisation, filled with smart people trying to make the web better (its membership includes Tim Berners-Lee and representatives of the community, large organisations and browser vendors).

Chrome Team

Through some of the work Yehuda has done, he has had the opportunity to spend time with some of the people building new versions of Chrome, helping to guide their thinking towards APIs and decisions that work well for web developers.

By now I should have convinced you that Yehuda has some things worth listening to. Over the time he was here I had some good opportunities to hear both his public presentations and some of his more informal conversations. Here are some of the things I found particularly interesting about where he sees the web heading.

WebRTC looks like a cool technology for real-time communications (http://www.webrtc.org/). The support for peer connections looks particularly interesting.

The new world being demonstrated by Google Polymer (http://www.polymer-project.org/) looks to be very exciting, and is well worth a look for web developers who want an idea of the way they will be writing applications in the future. Model Driven Views (http://www.polymer-project.org/platform/mdv.html) and custom elements (http://www.polymer-project.org/platform/custom-elements.html) are extremely exciting, and the Shadow DOM (http://www.polymer-project.org/platform/shadow-dom.html) looks like a good tool for supporting and customising the new features being brought in. HTML + CSS is currently the language of the web, with many people speaking it, and with these tools I think the language is moving in good directions.

The mechanisms for doing asynchronous JavaScript have been moving on from the straight callback approach that is familiar to people, particularly through the use of node. There has been much discussion across the web about promises and futures, with things heading towards promises. Martin Fowler has an article describing JavaScript promises (http://martinfowler.com/bliki/JavascriptPromise.html), which is where the W3C TAG is currently headed (http://infrequently.org/2013/06/sfuturepromiseg/). I look forward to this coming into play, giving a standard option that avoids the deep nesting that comes from callbacks.
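As a sketch of the difference, a flat promise chain replaces nesting each callback inside the previous one. The fetchUser/fetchPosts names below are invented for illustration, using the Promise API that later standardised in ES6:

```javascript
// Hypothetical async steps, each returning a promise.
function fetchUser() {
  return Promise.resolve({ name: "alice" });
}

function fetchPosts(user) {
  return Promise.resolve([user.name + "'s first post"]);
}

// Flat chaining: each .then receives the previous step's result,
// instead of burying the next step inside a nested callback.
fetchUser()
  .then(fetchPosts)
  .then(function (posts) {
    console.log(posts[0]);
  });
```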

It was interesting hearing Yehuda’s perspective on computer science and functional topics like Monads and functional reactive programming. The binding approach used in Ember.js takes inspiration from FRP, and Promises allow a transformation to a monadic approach.

One of the interesting new things coming to JavaScript in browsers is Object.observe, a feature which will make it possible to observe any object for modifications to it or its attributes.

All in all there is a bunch of interesting stuff in the web future. It’s a great time to be doing web development, and I look forward to what the future holds.

Comparison of 4 approaches to playing audio in iOS

There are at least four different ways of playing audio in iOS. Each has its own wrinkles and advantages. In this article I briefly compare and contrast the options. For the TL;DR version skip to the table at the end. (To be honest, I’ve had this post in my drafts for a very long time and just wanted to get it published. The body of the article isn’t as good as the table, so skim the article and go to the table.)

  • MPMusicPlayerController(using the iPodMusicPlayer)
  • MPMusicPlayerController(using the applicationMusicPlayer)
  • AVAudioPlayer
  • AVPlayer


The MPMusicPlayerController using the iPodMusicPlayer is the highest level player, and matches closely to the iPod.  The actual iPod music player can be accessed and used with the factory method:

+[MPMusicPlayerController iPodMusicPlayer];

The nowPlayingItem property exposes the currently playing item in the iPod when using the iPodMusicPlayer. Changes to the current item in your application will also update the main iPod music player.

To load your own music player with the same API, use:

+[MPMusicPlayerController applicationMusicPlayer];

In iOS 5.1 and below it is not easily possible to use AVAudioPlayer to play items from the iTunes library. While it is possible to retrieve the URL for the item, it doesn’t really do what you want. When trying this you will see a somewhat unhelpful error:

Domain = NSOSStatusErrorDomain
Code = -43
Description = Error Domain=NSOSStatusErrorDomain Code=-43 “The operation couldn’t be completed. (OSStatus error -43.)”

The AVPlayer, however, allows access to the iTunes library, playing in the background, custom background behaviour, and playing non-library audio.

Of the accessible APIs, AVPlayer offers the most functionality. For the hard-core/determined developer there are lower levels to dig down into, but these 4 options provide a decent set of ways to play audio in iOS. The table below outlines the highlights.

                             iPodMusicPlayer   applicationMusicPlayer   AVAudioPlayer   AVPlayer
Play library audio           Y                 Y                        N               Y
Access iPod play location    Y                 N                        N               N
Play in background           Y                 N                        ?               Y
Custom background behaviour  N                 N/A                      ?               Y
Play non-library audio       N                 N                        Y               Y

Writing Beautiful RSpec Matchers

thoughtbot have created a really nice set of custom matchers for RSpec.  The Shoulda matchers make writing tests for rails models beautiful and clean.

Shoulda matchers make it possible to write one-line specs of the form it { should validate_presence_of(:name) }.

It’s easy to do this for yourself using the friendly matcher DSL that ships with RSpec. Let’s take a look at how.

In the simplest form, all you need to do is to call the define method, passing a name, and a block which in turn calls out to match. The simple case might be:

We’ve just defined a matcher :be_less_than, which allows us to write 4.should be_less_than(5) inside an example.

If we use the describe auto subject feature of RSpec we can also write it { should be_less_than(5) }, and it { should_not be_less_than(3) }, inside a describe 4 block.

With that example we can see that we are getting close to the shoulda matcher syntax. There’s still an extra level that helps make the shoulda matchers nice: chaining. Happily this is easy to do as well. The DSL provides a chain method. Chain methods are called before the match, so you can use them to collect additional information to validate. Using our example above, we might want to add an :and_greater_than chain.

This now gives us the ability to write it { should be_less_than(10).and_greater_than(2) }.

Unfortunately the default descriptions don’t read as nicely when chains are in place, so you’ll want to write a description in the matcher definition.

Unfortunately the description doesn’t automatically get used in the failure messages when the matcher fails, so you’ll also want to specify a failure_message_for_should. Basing it on the description is a reasonable starting point.

Happily RSpec does the right thing with the should_not description, so with the details above the matcher will be complete. If you want a different failure_message_for_should_not, that can also be specified.

That covers how to write simple reusable RSpec matchers to make your test code look as beautiful as your production code. Go ahead and try it yourself.  I’d love to see comments or questions you might have on this.

A vocabulary for product licensing

Getting a shared vocabulary for conversations is always useful. Here’s a pattern/vocabulary that I’ve recently been introduced to for thinking about software product licensing.

  • Product Key
  • License
  • SKU

Product Key

A product key is the cryptographic mechanism used to control who/what can use the software.

License

A license is the concept encapsulating that a customer is allowed to use the software. A customer is licensed to use software. This probably aligns with the legal contract that is used.

SKU

The SKU/stock keeping unit is how sales people can talk about the product. They sell a SKU to a customer.

The customer licenses the SKU (product), and then they are issued a product key which enables them to use the software.

The precise definitions for product keys, licenses and SKUs are valuable when communicating between product, sales and marketing teams. The list above seems to work pretty well. Let me know in the comments if you’ve got better ideas.

A Version Labelling Scheme for Software Product Development

Computer software typically has a dot-separated version number. These numbers help identify which version is being used, and there is a shared understanding that they run from the most significant part of the version to the least. Having some precision when talking about a version labelling scheme is useful for setting up product lifecycles and communicating the details of a product.

I’ve heard numerous ways of thinking about the build numbers, normally involving terms like major and minor. I’ve never quite been able to keep the terms and distinctions clear in my head, so was pretty excited to recently hear a different approach, which I’ll share below.

Version.Release.Modification.Build

In numbers this will look something like:
3.4.5.1234

The idea is:

Version — a major product version. Customers care and know about versions. Versions are where potential breaking changes might happen.

Release — a major product release. Customers will care about releases. New functionality and bug fixes are introduced in releases.

Modification — a publicly available bugfix modification to the software. Users should be able to safely move between modifications of a product without breaking things. Modifications are focused around fixing problems in software.

Build — the internal build number for the software. The build number gives development a unique identifier for the software, and is incremented automatically by the build system. Typically an officially released modification will have only one build number.
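The four parts above split cleanly out of a label like 3.4.5.1234. A throwaway sketch (the helper name is invented):

```javascript
// Hypothetical helper splitting a version label into its four parts.
function parseVersionLabel(label) {
  var parts = label.split(".").map(Number);
  return {
    version: parts[0],       // major product version
    release: parts[1],       // major product release
    modification: parts[2],  // public bugfix modification
    build: parts[3]          // internal, auto-incremented build
  };
}

console.log(parseVersionLabel("3.4.5.1234"));
// { version: 3, release: 4, modification: 5, build: 1234 }
```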

Another factor to consider in the scheme above is that the first two numbers are typically controlled by the sales and marketing arms of a company, and the last two should be controlled by engineering. In fact, if the last two numbers are being handled correctly they should be incremented automatically, changing only through automated systems and well-defined rules.

For the first two numbers, engineering should be able to suggest that new functionality increase at least the release number, but it is a sales and marketing decision how much of an increase it gets.

A Thought On Dynamic Languages

Reading http://www.codecommit.com/blog/ruby/monads-are-not-metaphors got me thinking on a tangent not related to the article. The examples start in Ruby then move to Scala. Daniel points out the difference between some of the Scala and Ruby code, which led me to the following thoughts.

Duck-typed code encourages reading the source of methods to see the type requirements of params. This is a good thing.

For this to work well, methods need to be short and well written, which is another good thing. While this doesn’t negate the value of STRONG types, it does make the case for dynamic languages better.

4 Tips from the Unix Greybeards

So I’ve been using Unix for a number of years, and think of myself as relatively competent in using POSIX-based operating systems. I’ve recently had the great pleasure of working with a few guys who’ve been using Unix for longer than I’ve been able to read, and I’ve been able to add a couple more commands that are extremely useful. It’s always great to watch a master using their tools, and to learn while working with them.

The commands below are ones that I’ve just learnt. I probably should have known them already, but they are lovely and you need to add them to your arsenal.

  1. !!
  2. locate
  3. look
  4. xargs

1. !!

Re-executes the last command.

!! also has friends like !COMM, which re-executes the last command in your history that starts with COMM, e.g. !find will re-execute your last find.

2. locate

With a locate database set up, locate is magic. Think Spotlight for the command line.

3. look

The man page for look talks about it searching a file, then mentions the default location being /usr/share/dict/words. That default location is the key to its best usage: it’s a great little dictionary for looking up how to spell words.

4. xargs

The most masterful developers that I work with have advanced beyond xargs to the next level, but for people continuing on their pathway to true Unix mastery, xargs is a great tool to add in.

I commonly end up using variations of find and xargs together to search the contents of files, or to find out information about a project. If nothing else, the command find . -name "*.java" | xargs wc -l | sort is your friend.
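As a concrete sketch of that find | xargs pattern, here is a self-contained demo in a scratch directory (the file names and contents are invented for illustration):

```shell
# Build a scratch directory so the commands can run anywhere.
demo=$(mktemp -d)
printf 'class A {}\n' > "$demo/A.java"
printf 'class B {}\nclass C {}\n' > "$demo/B.java"

# Count lines in every Java file, sorted numerically,
# with the grand total ending up last.
find "$demo" -name "*.java" | xargs wc -l | sort -n
```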

Eleven reasons to use the Play! Framework for Java Web Development

The Play! Framework is a great tool for rapidly building Java web applications. Play! takes many of the ideas from the dynamic languages world (Rails and Django) and brings them to Java web development. Reasons to consider Play! for Java development are:

1. Rapid development via a local development server that automatically compiles your Java code for you. It’s amazing how good it is to develop like this, and what a difference the rapid feedback loop makes.
2. A good clean MVC framework.
3. Nice testing support baked in.
4. A useful routing table to make clean URLs easy to work with.
5. A focus around REST, but no slavish observance of it.
6. Built-in simple JSON support.
7. A good module framework with useful modules, including a “CRUD” module and a Scala module currently under development.
8. An interesting mix of Java class enhancement that makes it easy to work with code, with the enhancer providing some of the hard work of ensuring that multiple threads are handled well.
9. Deployment to a range of platforms, including JEE Servlet containers (Play! 1.0.2 has been tested on tomcat, jetty, JBoss and IBM WebSphere Portal 6.1) and the GAE.
10. Enhancements to the JPA which make it really easy to work with.
11. An active and supportive community. There is the right balance between having strong opinions about the “Play!” way of doing things and helping people to get things done.

Play! makes Java web development fun and productive. The feedback loop is really quick, and much of the boilerplate code is removed. It’s well worth considering for any application you want to write in Java.

Take a look at the video, and work through the tutorial to get a feel for what development with Play! is like.