It takes little more than a quick scan through popular technology news media to see how AI is shaping 2017’s tech landscape. The world’s biggest consumer technology firms are putting their full research and development efforts into creating “smart” products and tools like Google Home, Amazon Echo and Microsoft Cortana. Even Apple is rumoured to be working on smart speaker hardware to bring the power of digital assistant Siri to a standalone product.
These products are designed to help users automate increasingly complex tasks by combining the power of natural language processing, advanced neural networks and integrations with thousands of web services. In a relatively short space of time, we’ve moved from basic voice commands for skipping music tracks and initiating phone calls, to multi-step conversational queries that would have previously been beyond the capabilities of most home computer systems.
A new digital age
It’s taken a while, but we may now finally be entering a new digital age in which human-computer interaction moves beyond screens and touch inputs towards something once imagined only in science fiction: screenless, conversational interactions with machine intelligence.
As professionals in the digital industry, we recognise that this represents a fundamental shift in the way users will engage with digital content, much as the touchscreen smartphone redefined the mobile web when the iPhone arrived 10 years ago. At the time, the proliferation of touchscreen mobile devices caught both the digital industry and brands off-guard, forcing the rapid development of new responsive frameworks and methodologies for producing flexible digital content that would work just as well on 5-inch screens as on 30-inch desktop monitors.
Preparing for an AI future
How then do we prepare for a future where users can interact with brands and services through voice alone? What steps do we need to take to ensure digital content is accessible both to humans and to their new AI assistants?
The good news is that the ideas and concepts behind a web built for both humans and machines have been around for at least 20 years.
Most people working in the web industry will be familiar with the concept of the Semantic Web. It was popularised by a May 2001 Scientific American article by none other than Tim Berners-Lee, the inventor of the World Wide Web. His vision, first shared in 1994 at the very first World Wide Web conference, described how “a new form of Web content that is meaningful to computers will unleash a revolution of new possibilities.” This Semantic Web would be structured using a standardised syntax, allowing web content to remain readable by humans while also carrying underlying semantic meaning – and therefore utility – for machines.
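To make the idea concrete: modern structured-data formats such as schema.org JSON-LD embed machine-readable facts alongside the human-readable markup of a page. The sketch below is a minimal illustration, not a production crawler – it uses only Python’s standard library and a made-up product page – showing how a machine could pull the embedded data out of the same document a person reads:

```python
import json
from html.parser import HTMLParser

# A fragment of a hypothetical product page: the visible HTML serves human
# readers, while the JSON-LD <script> block carries the same facts for machines.
PAGE = """
<html><body>
  <h1>Acme Anvil</h1>
  <p>Price: $49.99</p>
  <script type="application/ld+json">
  {"@context": "https://schema.org",
   "@type": "Product",
   "name": "Acme Anvil",
   "offers": {"@type": "Offer", "price": "49.99", "priceCurrency": "USD"}}
  </script>
</body></html>
"""

class JsonLdExtractor(HTMLParser):
    """Collects the contents of every application/ld+json script block."""
    def __init__(self):
        super().__init__()
        self.in_jsonld = False
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and ("type", "application/ld+json") in attrs:
            self.in_jsonld = True

    def handle_endtag(self, tag):
        if tag == "script":
            self.in_jsonld = False

    def handle_data(self, data):
        if self.in_jsonld and data.strip():
            self.blocks.append(json.loads(data))

extractor = JsonLdExtractor()
extractor.feed(PAGE)
product = extractor.blocks[0]
print(product["name"], product["offers"]["price"])  # Acme Anvil 49.99
```

The point is not the parser itself, but that the structured block gives a machine unambiguous answers (“this page is about a Product named Acme Anvil priced at 49.99 USD”) that it would otherwise have to guess from free text.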
Unfortunately, the Semantic Web technology stack proved prohibitively complex for non-technical content creators of the time, who were already quite happy working with the comparatively simple technologies of the Web. Without an imminent need to build content for machines, interest in the Semantic Web dwindled.
However, over the last 10 years huge efforts have been made to keep the vision alive. In 2006, Tim Berners-Lee proposed a shift from a web of linked documents to a web of linked data. This Data Web would act as a distributed repository of information, allowing machines to combine data from multiple sources in new, more powerful ways.
Google’s Knowledge Graph and Wikimedia’s Wikidata take this concept further, with massive Linked Data knowledge bases that can be used to serve both humans and machines far more effectively than the traditional document-based web.
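The power of such knowledge bases comes from shared, globally unique identifiers: two independent datasets that both key their records on, say, a Wikidata item URI can be joined mechanically, with no guesswork about whether the “Douglas Adams” in one file is the “D. Adams” in another. A toy sketch of that join (the property values here are assumed sample data, not live query results):

```python
# Two independent "datasets" that both identify their subject by the same
# Wikidata URI (Q42 is Douglas Adams; the facts below are illustrative).
biography = {
    "http://www.wikidata.org/entity/Q42": {"name": "Douglas Adams",
                                           "born": 1952},
}
bibliography = {
    "http://www.wikidata.org/entity/Q42": {
        "notable_work": "The Hitchhiker's Guide to the Galaxy",
    },
}

def merge_linked_data(*sources):
    """Combine records from several sources, joining on shared URIs."""
    merged = {}
    for source in sources:
        for uri, facts in source.items():
            merged.setdefault(uri, {}).update(facts)
    return merged

combined = merge_linked_data(biography, bibliography)
record = combined["http://www.wikidata.org/entity/Q42"]
print(record["name"], "-", record["notable_work"])
```

Real Linked Data systems do this at vast scale with RDF triples and SPARQL queries rather than Python dictionaries, but the underlying move is the same: agree on identifiers, and merging becomes trivial.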
But the World Wide Web as we know it is still a bit of a mess. The perfect utopia of the universal Data Web isn’t here yet, and we have a long way to go before we can fully purge the effects of years of bad coding habits, changing browser standards and proprietary formats like Flash and Silverlight. In the meantime, tech companies continually work to improve their digital assistants’ abilities to sort through our information junk pile by using increasingly intelligent neural networks and algorithms to bridge the semantic gaps we’ve left behind. Most major online services also provide publicly accessible APIs that allow independent software systems to interact with them, ensuring accessibility for both humans and machines.
Is your website ready?
As designers, developers and marketers in the digital industry, our responsibility has always been to represent brands effectively online, ensuring content is accessible and built to the latest web standards. This adherence to coding quality matters more than ever as users shift from interacting with brands directly via web pages to relying on their digital assistants to access and interpret brand content for them.
There are many other challenges to face when considering the rising tide of smart digital assistants, but keeping the code structures behind our digital content clean, accessible and semantically correct should be the bare minimum to aim for moving forward.