It might be hard to believe, with Siri sitting pretty in our pockets and Alexa on the mantelpiece, but the idea of an artificial assistant has been in the public consciousness since the 1800s. What started with Mary Shelley’s arguably mechanical Frankenstein’s monster soon became Samuel Butler’s question of whether such evolution could see machines become our ultimate masters.
But is A.I. really something to worry about? Science fiction’s most memorable villains are artificial in nature, from Skynet’s disdain for humanity, to HAL 9000’s famous “I’m sorry Dave, I’m afraid I can’t do that.” But are we right to feel such trepidation about our mechanical creations? After all, they’re naturally limited by the data we allow them to access.
The dream of a truly artificial intelligence relies heavily on computing capacity, intelligent algorithms, and the ability to learn from patterns in data; and at present, our data is largely isolated.
Of course, all that could change should a Semantic Web ever come to fruition. Coined by Tim Berners-Lee, the term describes a web in which machines can read data as readily as we do, a prospect made possible in part by Linked Data.
Using metadata, data models like the Resource Description Framework (RDF) and ontology languages like the Web Ontology Language (OWL) allow data not just to be categorised, but to form explicit relationships.
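As a small illustration (the restaurant and its details here are invented, though the vocabulary terms are real schema.org ones), a few RDF triples in Turtle notation might look like this:

```turtle
@prefix ex:     <http://example.org/> .
@prefix schema: <https://schema.org/> .

# Each statement is a subject-predicate-object triple.
ex:LaTavola  a                    schema:Restaurant ;
             schema:name          "La Tavola" ;
             schema:servesCuisine "Italian" ;
             schema:address       ex:RomeAddress .

# The object of one triple can be the subject of another;
# that is how isolated facts become linked data.
ex:RomeAddress a                       schema:PostalAddress ;
               schema:addressLocality  "Rome" .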
The idea may not sound all that revolutionary; indeed, anyone who’s lost an afternoon to TV Tropes will know the ease with which we can jump from one subject to another. But while the human mind is brilliant at making intuitive leaps, software needs direction. Machines don’t use the same language we do, and there’s more to linking data than a few extra lines of code. RDF databases use the query language SPARQL to retrieve and manipulate data, and then comes the question of XML vs JSON for storage and organisation. Thinking on a local scale isn’t enough; for a truly Semantic Web to be realised, global solutions need to be agreed upon by everyone.
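To make the querying side concrete, here is a sketch of a SPARQL query (the vocabulary terms are from schema.org; the data it would run against is hypothetical) that retrieves every restaurant along with the city its address sits in:

```sparql
PREFIX schema: <https://schema.org/>

# Find each restaurant together with the locality of its address.
SELECT ?name ?city
WHERE {
  ?restaurant a              schema:Restaurant ;
              schema:name    ?name ;
              schema:address ?addr .
  ?addr schema:addressLocality ?city .
}
```

Because the query follows relationships between resources rather than matching keywords, the same pattern would surface restaurants regardless of which site published them, provided everyone used the shared vocabulary.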
It’s no surprise that companies like Google, Microsoft and Yahoo have been trying. After all, what could make a web search more efficient than returning not just the results you want, but related information you didn’t realise existed? The launch of schema.org gave webmasters a free, shared markup vocabulary for embedding structured metadata in their pages, linking their content to a common set of concepts. And while it’s in use by over 10 million websites, the process is proving slow going.
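In practice, a webmaster might embed a schema.org description as a JSON-LD block in a page’s HTML (the restaurant and its details below are invented for illustration; the `@type` and property names are genuine schema.org terms):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Restaurant",
  "name": "La Tavola",
  "servesCuisine": "Italian",
  "address": {
    "@type": "PostalAddress",
    "addressLocality": "Rome"
  }
}
</script>
```

Search engines that understand the vocabulary can then treat the page as structured data about a restaurant, rather than as an undifferentiated block of text.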
In fact, the philosopher Luciano Floridi argues that the idea, even at its most modest, is destined to remain largely unrealised. While using metadata to link documents is certainly helpful, the ontologies underpinning it rely on levels of abstraction that we may never fully capture.
“One may wish to consider a set of restaurants not only in terms of the type of food they offer, but also for their romantic atmosphere, or value for money, or distance from crowded places or foreign languages spoken… the list of potential desiderata is virtually endless, so is the number of levels of abstraction adoptable and no ontology can code every perspective.”
And unfortunately, these aren’t the only challenges standing in the way of a Semantic Web. The data we produce isn’t just vast; it can be vague or downright misleading. And we’re often hesitant to release raw data at all. But as Berners-Lee emphasised in his TED talk back in 2009, making this data available and allowing it to be linked is central to the ultimate goal.
So while a Semantic Web might well be the catalyst the machine lifeforms from the Matrix were waiting for… it looks like it might still be a while before we need to worry about which colour pill we’d take.
______________________________________________________________________________
References:
Butler, S. (1863). Darwin Among the Machines. Letter to the Editor of The Press, Christchurch, New Zealand, 13 June 1863.
Berners-Lee, T. (2000). Weaving the Web. London: Texere, pp. 157-175.
Floridi, L. (2009). Web 2.0 vs the Semantic Web: A Philosophical Assessment. Episteme, 6(1), 25-37.