Wolfram Alpha: the promise of divine intelligence?

The hype that surrounded Wolfram Alpha bore similarities to the Segway hype machine of seven years ago: not for the first time, the media was abuzz with promises of a revolution in some small part of our lives, of that hackneyed ‘paradigm shift’, the ‘game changer’.

Some weeks later, the news that its traffic levels have plummeted is hardly surprising, and it dovetails nicely with a wonderfully incisive piece about Wolfram Alpha’s UI as an intelligent control interface.

“There is actually a useful tool inside Wolfram Alpha, which hopefully will be exposed someday. Unfortunately, this would require Stephen Wolfram to amputate what he thinks is the beautiful part of the system, and leave what he thinks is the boring part.”

This article centres on the UI problem being solved by Wolfram Alpha, which opted for natural-language interpretation.

“WA is two things: a set of specialized, hand-built databases and data visualization apps, each of which would be cool, the set of which almost deserves the hype; and an intelligent UI, which translates an unstructured natural-language query into a call to one of these tools. The apps are useful and fine and good. The natural-language UI is a monstrous encumbrance, which needs to be taken out back and shot. It won’t be.”

My personal experience with WA has very much been that of a ‘false affordance’, as the article details so well:

“For serious UI geeks, one way to see an intelligent control interface is as a false affordance – like a knob that cannot be turned, or a chair that cannot be sat in. The worst kind of false affordance is an unreliable affordance – a knob that can be turned except when it can’t, a chair that’s a cozy place to sit except when it rams a hidden metal spike deep into your tender parts.

Wolfram’s natural-language query interface is an unreliable affordance because of its implicit promise of divine intelligence. The tool-guessing UI implicitly promises to read your mind and do what you want. Sometimes it even does. When it fails, however, it leaves the user angry and frustrated – a state of mind seldom productive of advertising revenue.”

The full article is a good read and, though relatively long, its conclusion is not: Keep it Simple, Stupid.

JSON Webtests with Grails

I recently figured out how to use WebTest for functional testing of Grails controller actions that render JSON. That said, I’m not convinced it’s the best way – I’m fairly sure the gFunc plugin would do it nicely, though I ran into problems with it clean-compiling the whole app on every run.
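
For context, the kind of controller action under test looks something like this. It’s only a sketch – the controller and action names are hypothetical – but it renders the payload asserted in the webtest further down:

import grails.converters.JSON

class BandController {

    // Hypothetical action rendering the JSON payload asserted later on
    def actionJSON = {
        def results = [
            [id: 16, year: 2009, name: 'ZZ Top'],
            [id: 2, year: 2009, name: 'Aerosmith']
        ]
        render([totalRecords: results.size(), results: results] as JSON)
    }
}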

Custom steps

It’s been possible to add custom steps to WebTest for some time. Assuming you have v0.6 of the plugin (or you use Grails 1.1), this writeup provides some useful background and also a ‘Hello, World’ type example.
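
For a flavour of the shape of a custom step (the names here are mine, not the writeup’s), it’s simply a class extending Step: attributes passed from the test are mapped onto properties, and the work happens in doExecute(). Following the naming convention used by the jsonVerify step later in this post, HelloStep would be invoked from a test as hello(greeting: '...').

import com.canoo.webtest.steps.Step

class HelloStep extends Step {

    String greeting // populated from the attributes passed in the test, eg: hello(greeting: 'Grails')

    void doExecute() {
        println "Hello, ${greeting ?: 'World'}!"
    }
}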

In an ideal world

On the surface it seems that we could therefore have a jsonVerify step which is quite simply:

import com.canoo.webtest.steps.Step
import net.sf.json.JSON

class JsonVerifyStep extends Step {

    String expected

    void doExecute() {
        // Coerce the served response and the expected string to JSON, then compare
        def jsonServed = context.currentResponse.inputStream as JSON
        def jsonExpected = expected as JSON
        assert jsonExpected == jsonServed
    }

}

Annoyingly, it’s not this easy.

Grails, WebTest and (sigh) classpaths

WebTest is spawned by a forked Ant process (see ${pluginDir}/webtest-n.n/scripts/call-webtest.xml), which means you get a limited classpath, kept deliberately minimal to avoid JAR version conflicts.

So it’s not possible to add the Grails classpath (or even just $GRAILS_HOME/dist/grails-web-n.n.jar), which contains all the handy JSON library code that we’re so accustomed to when rendering JSON responses.

Solution (has some camembert)

My “solution” was to add two JARs (json-lib and ezmorph) to your Grails lib and to tweak the call-webtest.xml file with the following change:

<fileset dir="${grailsHome}/lib" includes="commons-cli*.jar,commons-beanutils*.jar"/>

With the resulting custom step you can test your JSON responses:

import com.canoo.webtest.steps.Step
import net.sf.json.JSON
import net.sf.json.groovy.GJson
import net.sf.json.test.JSONAssert
import org.apache.commons.io.IOUtils
import org.apache.log4j.Logger

class JsonVerifyStep extends Step {
    private static Logger log = Logger.getLogger(JsonVerifyStep)

    String expected

    void doExecute() {
        GJson.enhanceClasses() // necessary for the net.sf.json coercions below to work in Groovy
        def jsonServed = IOUtils.toString(context.currentResponse.inputStream) as JSON // the coercion wants a String, not a stream
        def jsonExpected = expected as JSON
        JSONAssert.assertEquals jsonExpected, jsonServed
    }
}

And you would implement your webtest as follows:

    def testSomeJSONResponse() {
        webtest('Example JSON webtest') {
            invoke('/controller/actionJSON')
            jsonVerify(expected: '{"totalRecords":2,"results":[{"id":16,"year":2009,"name":"ZZ Top"},{"id":2,"year":2009,"name":"Aerosmith"}]}')
        }
    }
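
Assuming the plugin’s standard commands, the suite then runs as usual via grails run-webtest.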

Selenium has evolved; enter Bromine

As a long-time user and advocate of Selenium, I have found that using it on large projects ultimately means sinking a lot of time into the following concerns:

  • Organising tests into logical groups
  • Executing testcases against multiple target browsers
  • Parallel execution of testcases

The motivation behind all of these is to speed development: as the number of tests grows, you eventually hit a crunch point where tests are too numerous (ie: in the thousands) for teams to realistically run every last available test prior to a check-in.

Hand-rolling solutions to the points mentioned is entirely possible; indeed, we used Hudson to run parallelised Selenium test groups on multiple slave nodes against Firefox and IE. But wouldn’t it be nice if it were made even easier? Bromine proposes just this.
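
For a flavour of the hand-rolled approach, here’s a minimal sketch of driving the same Selenium RC test against multiple browsers in turn; the host, port, URL and assertion are illustrative assumptions, and a real setup would farm each browser out to a separate Hudson slave rather than loop serially:

import com.thoughtworks.selenium.DefaultSelenium

['*firefox', '*iexplore'].each { browser ->
    // One Selenium RC session per target browser
    def selenium = new DefaultSelenium('localhost', 4444, browser, 'http://localhost:8080/')
    selenium.start()
    try {
        selenium.open('/')
        assert selenium.isTextPresent('Welcome') // illustrative assertion
    } finally {
        selenium.stop()
    }
}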

Whilst I haven’t yet had the chance to use it, this impressive nine-minute screencast has (at the very least) convinced me to try it out.