The sites we build need to work for machines as well as humans, says Greg Roekens
A few weeks ago I came across a visionary piece of writing entitled "Darwin among the Machines" by Samuel Butler. One paragraph struck me the most: "What sort of creature man’s next successor in the supremacy of the earth is likely to be. We have often heard this debated; but it appears to us that we are ourselves creating our own successors: the machine. We are daily adding to the beauty and delicacy of their physical organisation; we are daily giving them greater power and supplying by all sorts of ingenious contrivances that self-regulating, self-acting power which will be to them what intellect has been to the human race". Now what is particularly astonishing about this article is the date it was published: 13th June 1863. To put this in context, that is 17 years before Edison invented the light bulb.
Fast forward to 2011 and this couldn't be closer to the truth. Indeed ‘we are daily adding to the beauty’ of machines and indeed ‘we are giving them greater power’: greater power to act on our behalf. One way we do this is by empowering them to access, retrieve, consume and aggregate content from the web.
Machines, including the likes of search engines, content aggregators and countless applications, are increasingly consuming web content. Take Twitter.com for instance: only 25 per cent of the traffic is generated by people accessing the site directly. The remaining 75 per cent comes through its API and from the 'Machine' which, acting as proxy for humans, is personified by the thousands of third-party applications that exist in the Twittersphere.
So what does this mean for designers and developers? Working recently on a couple of very large web projects, I was surprised by how little forward planning went into designing and developing web solutions that work for machines as well as humans, even though the semantic web is now a well-established, trusted and accepted model.
Research has shown that brands that have jumped on the API bandwagon have seen increases in sales, innovation, reach, partner synergies and customer satisfaction. So why is it that, despite the fact that now even CEOs hear about the need to create 'open platforms', we're still primarily designing websites for humans only?
Then I realised: we are missing a persona. The almighty user-centred design process, which has personas at its heart, is missing a major profile: the machine-based persona.
I also realised that, from a technical point of view, we were missing a trick by not leveraging the presentation layer, where more can be done to create an API organically.
The missing persona
Broadly speaking, the goal of establishing personas is to understand the important tasks in the UI and the user's motivations. Like traditional personas, machine-based personas need to have a name, a picture, some goals, background information and usage scenarios describing how the persona would interact with the interface. It is interesting to note, however, that machine-based personas are functionally led rather than emotionally led, which means a machine persona's descriptions can be more methodical and factual.
Let's take an example in what would be, in many cases, a key machine persona: the web crawler.
Background – Spider is a web crawler, a computer program that browses the World Wide Web in a methodical and automated manner.
Key goals – as a web crawler its main objective is to capture and index as much information about sites on the web as accurately and as fast as possible.
Usage scenario – on a regular basis Spider crawls your site and identifies any changes since its previous visit. Your site publishes recipes, and Spider has new functionality that can deliver your site's recipes straight onto the search engine's results page, ultimately providing more exposure for your site. Spider needs the content to be semantically structured for this feature to work.
This example shows that by adding a simple machine-based persona we are able to identify specific scenarios and opportunities that could be easily missed otherwise.
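To make the Spider scenario concrete, here is a minimal sketch of what "semantically structured" recipe content looks like, and how a crawler could lift the structured fields out of it. The HTML fragment, the property names and the extractor are all illustrative; real crawlers are far more sophisticated, but the principle is the same.

```python
# Illustrative schema.org Recipe markup plus a toy extractor showing how
# a crawler like Spider could read the itemprop fields without guessing
# at the page layout. Stdlib only; all names here are hypothetical.
from html.parser import HTMLParser

RECIPE_HTML = """
<div itemscope itemtype="http://schema.org/Recipe">
  <h1 itemprop="name">Lemon Drizzle Cake</h1>
  <span itemprop="prepTime">PT20M</span>
  <span itemprop="recipeYield">8 slices</span>
</div>
"""

class MicrodataExtractor(HTMLParser):
    """Collects itemprop -> text pairs, the way a crawler might."""
    def __init__(self):
        super().__init__()
        self.current_prop = None
        self.properties = {}

    def handle_starttag(self, tag, attrs):
        # Remember which property, if any, the next text node belongs to.
        self.current_prop = dict(attrs).get("itemprop")

    def handle_data(self, data):
        if self.current_prop and data.strip():
            self.properties[self.current_prop] = data.strip()
            self.current_prop = None

extractor = MicrodataExtractor()
extractor.feed(RECIPE_HTML)
print(extractor.properties)
# {'name': 'Lemon Drizzle Cake', 'prepTime': 'PT20M', 'recipeYield': '8 slices'}
```

The same page still renders perfectly well for humans; the itemprop attributes are the only addition the machine persona needs.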
Create an organic API
While machine-based personas can help with implementing open solutions from the word go, they will not solve the issue alone. Creating dedicated APIs can appear complex and costly, and therefore quite often gets deprioritised during the scoping exercise.
Luckily there are easy ways to open your websites by organically integrating basic API functionalities in your existing or new web interfaces.
One of the easiest ways to create an organic API is to create a ménage-à-trois between microdata, REST and JSON. Firstly, make your HTML machine-readable by adding semantic technologies such as microdata; semantic technologies have come a long way, especially with the increased adoption of microdata. Secondly, organise part or all of your web pages in a REST URL architecture. And thirdly, to achieve the perfect storm, add JSON handlers that can, for instance, be used for handling business rules.
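The three steps above can be sketched in a few lines. In this hypothetical example, the same recipe record is served as microdata-annotated HTML at a REST-style URL and as JSON when ".json" is appended; the URL scheme, data and function names are assumptions for illustration, not a prescribed implementation.

```python
# A sketch of the microdata/REST/JSON trio: one data source feeds both
# the human-readable page and the machine-readable JSON handler.
import json

RECIPES = {"42": {"name": "Lemon Drizzle Cake", "prepTime": "PT20M"}}

def render_html(recipe):
    # Humans get a normal page; machines get the itemprop hooks.
    return (
        '<div itemscope itemtype="http://schema.org/Recipe">'
        f'<h1 itemprop="name">{recipe["name"]}</h1>'
        f'<span itemprop="prepTime">{recipe["prepTime"]}</span>'
        "</div>"
    )

def handle(path):
    """Route a REST-style path to an HTML or JSON representation."""
    wants_json = path.endswith(".json")
    recipe_id = path.rstrip("/").split("/")[-1].removesuffix(".json")
    recipe = RECIPES.get(recipe_id)
    if recipe is None:
        return 404, "Not Found"
    if wants_json:
        return 200, json.dumps(recipe)
    return 200, render_html(recipe)

print(handle("/recipes/42")[1])       # HTML with microdata
print(handle("/recipes/42.json")[1])  # the same record as JSON
```

Because both representations come from one record, the JSON "API" falls out of the site's existing presentation layer rather than being a separate build.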
Do try this at home
Or at work. In my experience, the best scenario in which to try this approach first-hand is on product pages.
Typically product pages have key and trusted information that we often want to reuse in other digital formats (ie technical specifications, features, prices, ratings and reviews). Try to build machine-based personas that are specific to how your product information needs to be consumed across the web. And finally, look at turning the product area of your website into an organic API by using the microdata/REST/JSON technique described above.
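As a final sketch, here is a hypothetical product record driving both a microdata-annotated product page and the payload a JSON handler would return. The field names and nested offer/rating structure loosely follow schema.org's Product vocabulary, but the product, values and function name are invented for illustration.

```python
# One product record, two machine-friendly outputs: schema.org Product
# microdata for crawlers, and JSON for aggregators and partner apps.
import json

product = {
    "name": "Acme Kettle",
    "price": "24.99",
    "priceCurrency": "GBP",
    "ratingValue": "4.2",
}

def product_html(p):
    # Nested itemscope blocks mark up the offer and the aggregate rating.
    return (
        '<div itemscope itemtype="http://schema.org/Product">'
        f'<span itemprop="name">{p["name"]}</span>'
        '<div itemprop="offers" itemscope itemtype="http://schema.org/Offer">'
        f'<span itemprop="price">{p["price"]}</span>'
        f'<meta itemprop="priceCurrency" content="{p["priceCurrency"]}">'
        "</div>"
        '<div itemprop="aggregateRating" itemscope'
        ' itemtype="http://schema.org/AggregateRating">'
        f'<span itemprop="ratingValue">{p["ratingValue"]}</span>'
        "</div></div>"
    )

print(product_html(product))   # what crawlers and aggregators parse
print(json.dumps(product))     # the same record as a JSON handler's payload
```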