Artificial intelligence is no longer the sole preserve of data scientists and science fiction writers, with its technological applications now pervading daily life, says James Fitzgerald.
The “Fourth Industrial Revolution” was a term that tripped off the tongues of movers and shakers at the recent World Economic Forum in Davos.
Harry Elmer Barnes, the historical sociologist, coined the phrase in 1948 to describe what he saw as society’s imminent adoption of atomic energy and supersonic transport.
The media’s coverage of AI has more recently focused on fears about loss of control and on the ethics of its impact on the employment market. But the positive implications for health care have also come to the fore in the public mind.
A recent sitting of the House of Lords Select Committee on Artificial Intelligence heard from experts in an attempt to better formulate a government-wide approach to managing the technology’s myriad effects on society.
“We have this very particular sense of AI as a world-transforming innovation, which I think is an unhelpful way of thinking about any technological advance,” David Edgerton, Hans Rausing Professor of the History of Science and Technology at King’s College London, told the committee.
“I think there’s a problem of elite understanding here. The notion of the Fourth Industrial Revolution is a perfect illustration of that: why do people who are supposed experts in this field talk in this extremely crude, ahistorical, unanalytical, evidence-free way? Not so much about AI, but a whole host of other novelties. So, it’s far too easy to blame the media for this way of thinking.”
In 1963 the future British prime minister Harold Wilson gave a speech that would sound familiar today. “When machine tools have acquired, as they now have, the faculty of unassisted reproduction you have reached a point of no return where, if man is not going to assert his control over machines, then machines are going to assert their control over man.”
So it seems we have been here before. But what evidence is there that this time is different?
“This rhetoric is just reheated nonsense from a hundred years ago,” says Prof Edgerton.
Peter McOwan, a vice-principal at Queen Mary University of London, is not overawed by the technology.
“All that artificial intelligence does in the terms defined today is find patterns in data. And we are constantly finding patterns in data ourselves when we are reading something, for example — we are segmenting the letters from the page — and that is all that artificial intelligence does,” says Prof McOwan.
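Prof McOwan’s point can be made concrete in a few lines of code. The sketch below is a minimal illustration in Python, assuming the scikit-learn library is available; the data and the choice of two clusters are invented for the example. A standard clustering algorithm “finds a pattern” in a cloud of points by grouping them on statistical similarity alone, with no understanding of what they represent.

```python
# A toy illustration of "finding patterns in data": k-means clustering
# groups points purely by similarity, with no grasp of what they mean.
# The data below is invented for the example.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(seed=0)
# Two invented "patterns": points scattered around (0, 0) and (5, 5).
data = np.vstack([
    rng.normal(loc=0.0, scale=0.5, size=(50, 2)),
    rng.normal(loc=5.0, scale=0.5, size=(50, 2)),
])

# The algorithm recovers the two groups from the geometry of the data alone.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(model.cluster_centers_)  # two centres, near (0, 0) and (5, 5)
```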
This raises questions of science fiction versus science fact.
“This is an area that has been full of puff for decade after decade. Now there is some fantastic work going on in machine learning, in algorithms, self-driving cars – all this technology is quite extraordinary. Whether any of it can really be considered as intelligent [is] another matter … I think that AI should be reserved for generalised artificial intelligence; stuff that really does show intelligence,” says Sir David Spiegelhalter, president of the Royal Statistical Society.
“Media representations are incredibly important for any scientific technology, and I think the people working in this area need to take more responsibility for the representation of their subject. If there are puff stories, they need to be called out … so that the stories told are gripping but accurate.”
Academics could learn from Brian Cox and David Attenborough, who have successfully pushed scientific narratives on reductionist astrophysics and Darwinism, respectively, with their engaging and conversational styles.
“When people did surveys of the type of technologies they were scared of in the 1970s, microwave ovens were up there with nuclear power stations, but people got to like microwave ovens … because they are very useful,” says Prof Spiegelhalter. “Mobile phones: is it giving you brain cancer? Well, it’s not a big issue because people like using their mobile phones and they don’t want to get rid of them.”
The “affect heuristic” – a quick way to make decisions based on emotional “gut” feeling – comes into play when people warm to a technology. Once you decide something is good, you are likely to discount criticisms of it out of hand. The opposite happens, says Prof Spiegelhalter, with developments such as fracking, where no discernible benefit is perceived – except to a corporation.
“I think the crucial thing is whether people feel the technology is being useful to them. And it’s already being massively useful … every time they use Google maps or take a picture, where the phone is identifying the eyes, focusing. People are already using this technology [AI] all the time,” says Prof Spiegelhalter. “My feeling is that this is not the type of technology that will have the same intrinsic fear associated with it as some of the others where people don’t feel they are getting any benefit.”
However, in a world where consensus reality is shaped by vested and commercial interests, through marketing, branding and lobbying, questions of choice become broader philosophical ones.
“I think it’s rather concerning that we talk about technology in terms of the final consumer,” says Prof Edgerton. “Technical choices are being made by all kinds of agents who are not the final consumer, so this positing of a certain type of consumer who is inherently distrustful doesn’t capture the problem at all. Lots of different bodies are taking decisions, some openly; some cause controversy, most don’t. If the question is, does the market system and particular research agendas of governments produce the optimum technical development, then I think the answer is almost certainly no, it doesn’t.”
Rather than assume that novel techniques just come out of the ether, Prof Edgerton says we must shift the discussion to ask: what kinds of things would we like as a society, and how do we ensure they come about?
Whether society’s relationship with AI will be guided by commercial imperatives, governmental agendas or hidden vested interests remains to be seen. But the optimal path may start with the questions we ask about ourselves and how this technology might serve us.
It is arguable whether the type of AI currently in the public sphere is truly artificial or intelligent. The everyday devices relying on its algorithms could also be described simply as “smart tech”. In a recent Eurobarometer survey, 74 per cent of respondents in the UK said they would make more use of digital technologies if there were more widespread trust in the providers.
When it comes to AI or algorithms, “the idea of interpretability and explanation is incredibly important,” says Prof Spiegelhalter. “How can you make DeepMind interpretable? People are desperately trying to work out ways to produce explanations for why things are happening [in black-box algorithms].”
There is a debate in the industry as to whether it might be better to sacrifice some predictive accuracy for a simpler model that can be explained to people, and is therefore more trustworthy.
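That trade-off is easy to demonstrate in miniature. The sketch below is an illustrative Python comparison, assuming scikit-learn; the dataset is a standard built-in example, not one discussed in this article. A shallow decision tree, whose rules can be printed and read, is trained alongside a random-forest ensemble, which is usually more accurate but is, in effect, a black box.

```python
# Sketch of the accuracy-versus-interpretability trade-off described above.
# Hypothetical comparison on scikit-learn's built-in breast-cancer dataset:
# a depth-3 decision tree whose rules a person can read, against a
# random-forest ensemble that is usually more accurate but opaque.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

print("interpretable tree accuracy:", tree.score(X_test, y_test))
print("black-box forest accuracy: ", forest.score(X_test, y_test))

# Unlike the forest, the tree's entire decision logic can be printed
# and shown to the person it affects.
print(export_text(tree, max_depth=3))
```

On a typical run the forest scores a little higher; the debate above is whether that margin is worth more than being able to show people the tree.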
The news last week that a Chinese artificial intelligence had outperformed humans in a reading-comprehension test can only add to public anxieties.
A neural network model created by the e-commerce group Alibaba outperformed human participants on a 100,000-question Stanford University reading test known as SQuAD. The system, developed by Alibaba’s Institute of Data Science and Technologies, scored 82.44, while humans scored 82.304. Microsoft’s artificial intelligence model recently achieved 82.65 on the same exam.
However, it may still be possible for humans to manipulate their online car insurance renewals (run by simple black-box algorithms) by reverse-engineering how the answers they give on address, mileage, driving behaviour and so on affect the price: in other words, by lying.
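To see why such probing is feasible, consider a hypothetical sketch in Python. The quote_premium function below is invented as a stand-in for an insurer’s hidden pricing model; the figures mean nothing. Varying one declared answer at a time reveals how the black box responds.

```python
# Hypothetical sketch of probing a black-box quote. The quote_premium
# function below is invented as a stand-in for an insurer's hidden pricing
# model; a real one sits behind a web form and is queried the same way.
def quote_premium(postcode_risk: float, annual_miles: int, claims: int) -> float:
    return 300 + 400 * postcode_risk + 0.02 * annual_miles + 250 * claims

baseline = quote_premium(postcode_risk=0.8, annual_miles=12_000, claims=1)
for miles in (12_000, 9_000, 6_000):
    premium = quote_premium(postcode_risk=0.8, annual_miles=miles, claims=1)
    print(f"declared {miles} miles -> premium {premium:.2f} "
          f"(saves {baseline - premium:.2f})")
# Anyone who learns these sensitivities can under-declare mileage for a
# cheaper quote; the point is how transparent a simple black box is to probing.
```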
A website that briefly appeared in 2010, PleaseRobMe.com, provided a sharp lesson in what even simple AI could achieve with snippets of people’s personal data. The site, which lasted only 48 hours, aggregated the “geotags” on individuals’ pictures and posts online and combined them with “timestamps”, allowing it to identify addresses or locations and work out whether individuals were at home, leaving them open to being targeted for a household robbery.
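Little intelligence is needed for that kind of inference. The sketch below is a hypothetical Python reconstruction of the basic idea; the names, coordinates and thresholds are all invented. It simply combines a post’s geotag with its timestamp to guess whether an account holder is away from home.

```python
# Hypothetical reconstruction of the PleaseRobMe-style inference described
# above. All names, coordinates, posts and thresholds are invented.
from dataclasses import dataclass
from datetime import datetime, timedelta
from math import hypot

@dataclass
class Post:
    lat: float
    lon: float
    timestamp: datetime

HOME = (53.35, -6.26)   # invented "home" coordinates
AWAY_THRESHOLD = 0.01   # roughly a kilometre, expressed crudely in degrees

def likely_away(posts: list[Post], now: datetime) -> bool:
    """True if the freshest recent post is geotagged far from home."""
    recent = [p for p in posts if now - p.timestamp < timedelta(hours=3)]
    if not recent:
        return False  # no recent signal either way
    latest = max(recent, key=lambda p: p.timestamp)
    distance = hypot(latest.lat - HOME[0], latest.lon - HOME[1])
    return distance > AWAY_THRESHOLD

now = datetime(2010, 2, 17, 20, 0)
posts = [Post(53.42, -6.24, now - timedelta(minutes=20))]  # posted across town
print(likely_away(posts, now))  # True: the account looks away from home
```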
Facial recognition methods are evolving rapidly and have expanded out of the security sphere into consumer applications, with Facebook among the adopters. Identity politics is becoming a hot topic in cyberspace.
“It seems to me that it will be impossible to tell everybody in every situation when they are providing data,” says Prof Spiegelhalter. Which brings in questions over data governance. “Rather than each person having to be responsible for everything that’s being extracted from them, it seems to be a scenario where regulation and governance are appropriate. At the same time, data literacy, especially among children, is enormously important,” he says.
“There is nothing free on the web. You give your data and that is your payment,” says Prof McOwan.
Prof Edgerton suggests that the blanket acceptance of driverless transport or full automation of manufacturing does humanity a disservice and may skew the public’s understanding of what the future may be like. “It is to misunderstand the nature of our society to assume that one particular technique will have a transformative effect that is out of proportion to all the other methods in play.”
There may be many benefits from hands-free driving, but why should they eclipse the beneficial social and health effects of walking or cycling?
David Puttnam, the former film producer, expressed concerns about the “productisation” of people. “I spent the first dozen years of my life in advertising, and by the time I left I had no illusions that if you offer advantages to advertisers to find more information about their customers, they will take them.” Lord Puttnam says he is troubled by the possibilities of data misuse.
The UK already has a variety of bodies that provide a regulatory framework. The Information Commissioner’s Office upholds information rights in the public interest, and the Nuffield Convention on Data Ethics has just been signed. A council for data science ethics has also been proposed. But clearly regulatory oversight will need to keep up with a rapidly evolving technology.
Lord Puttnam says the government reacted with “lightning speed” following the release of Vance Packard’s 1957 book The Hidden Persuaders, which revealed methods of subliminal advertising. “The IBA [Independent Broadcasting Authority] moved within a year to set out clearly what you could and couldn’t do on television with the speed of images.”
The parameters of data protection agencies are less defined in a globally connected world, where data havens exist outside national regulations and protections. Indeed, the question of whether an individual supports or benefits from the use of their data will further complicate the issue.
“We need to get back into a position – I think we were there, at least partially, in the years after 1945 – where we can take a collective view as to how to improve society and act on it collectively,” says Prof Edgerton. “I think we should empower ourselves as a collectivity to think through the kind of society that we want and the kind of machines that we want; the type of techniques we want to deploy and, indeed, who should have control over this.”
“Ethics and innovation are two sides of the one coin, where one is about the production and the other is about the control or the criticism. You need both,” says Prof Spiegelhalter.