Stefano Ermon of Stanford University’s Artificial Intelligence Lab speaks to Chief-Exec.com about the hopes and hazards of a world populated by learning machines
As anyone who has ever tried to get a packet of crisps from a vending machine will testify, technology is periodically prone to take your money and not deliver the goods.
In Stanley Kubrick’s 2001: A Space Odyssey we were introduced to the seemingly benign and exquisitely efficient computer, HAL, whose conversational tone and engaging nature belied a deadly glitch in its programming. It is fair to say that the film was, and remains, ahead of its time, not least for its exploration of human fallibility through machines.
The artificial intelligence (AI) of the fictional HAL 9000 system operated as an autonomic nervous system for the spaceship and its occupants. Its creators, of course, had not anticipated the machine’s paranoid response to contradictory mission parameters – on the one hand protecting the crew, but on the other secretly investigating evidence of extra-terrestrial life as part of a military-industrial agenda.
“If we ever get to the point where we have a general artificial intelligence, where we are able to build machines to match human intelligence, then that will create all sorts of problems, in the sense of who will control the technology, what we will do with it – and the risks are pretty big,” says Professor Stefano Ermon at Stanford University’s Artificial Intelligence Laboratory. “There are all sorts of issues, like autonomous weapons.”
The Association for the Advancement of Artificial Intelligence has been pushing for a ban on autonomous weapons. Modern armaments contain a host of technological features, but currently the decision to fire still rests with human controllers.
“If we start putting algorithms into computers to make life-or-death decisions, that creates all kinds of ethical [issues],” says Prof Ermon. “Those are the big problems that will arise if we get to a strong AI position. But we are not there yet.”
Prof Ermon points out that AI is already pervasive in modern life. He cites internet search engines as an example.
“It’s not just autonomous driving. When you type something into Google, there is an AI system – a machine learning system – that will give you answers to your questions. And even that can potentially create all sorts of problems, because it decides what to show and what not to show. Are the results that it provides in any way fair? There are all these examples; for certain search queries you are going to get these very, very biased results – both in terms of race and gender and other issues – because of the way these systems have been developed and trained.”
The degree to which AI systems help us to make decisions already worries academics. In some US states, judges are using AI tools to inform decisions about bail terms.
Prof Ermon says: “They use these predictive systems that tell judges how likely a convict is to repeat a crime, essentially, and they use these predictive measures to make bail decisions. That is extremely dangerous, because we don’t know how these systems have been built and how they make decisions, and whether they are fair or not, or what kind of data they look at.”
AI would seem to be a double-edged sword.
“On the one hand, there are huge opportunities. Autonomous cars and AI tech in general can lead to massive improvements in energy efficiency; they can help us find information – the needle in the haystack. They can help us make better decisions in a lot of cases, but you also need to be careful, because if you delegate decisions to a computer, you need to think hard about the hows and whats.”
Prof Ermon says that machines are not necessarily impartial or unbiased. The way “intelligent” machines are currently trained to carry out tasks is by showing them examples of how that task has been solved in the past.
“If you show a lot of biased training data to the machine, then you are going to get answers that might not be fair or the ones you expect,” he says.
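To make the point concrete, here is a minimal sketch in Python, using synthetic data and scikit-learn; it is not taken from any system Prof Ermon describes, and every number in it is invented for illustration. A model is fitted to historical decisions that systematically disadvantaged one group, and it then reproduces that disadvantage for two otherwise identical candidates.

```python
# A minimal sketch of how a model trained on biased historical
# examples reproduces that bias. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# One legitimate feature (e.g. a skill score) and one sensitive
# attribute (0 or 1) that should be irrelevant to the outcome.
skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Historical labels: past decision-makers approved high-skill people,
# but systematically under-approved group 1 regardless of skill.
approved = (skill + rng.normal(scale=0.5, size=n) - 0.8 * group) > 0

model = LogisticRegression().fit(np.column_stack([skill, group]), approved)

# Two identical candidates who differ only in the sensitive attribute:
candidates = np.array([[0.5, 0], [0.5, 1]])
print(model.predict_proba(candidates)[:, 1])
# The group-1 candidate receives a lower score: the model has
# faithfully learned the bias present in its training examples.
```

The model is not malicious; it is doing exactly what it was asked to do, which is to imitate the past. That is precisely why the provenance of training data matters.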
The race to develop clean, clever and connected transportation is getting wide exposure in the media, as technology groups vie with carmakers to redefine the automotive industry, both socially, through ride sharing schemes, and technically, through sensor-led hands-free travel. The complexity of the manufacturing process has so far deterred tech groups from making their own cars, but AI may eventually dominate that process too.
“This is not the first time that we have developed new technologies that make some kinds of jobs obsolete,” says Prof Ermon. “But in the past this has been relatively slow. It destroyed some jobs but then created others, based on new needs. Is this going to happen with AI? Yes, probably, but it is going to happen much faster. If we get to a strong AI, then you can imagine that any job that is done by a human can be automated, assigned to AI. At that point it is not so obvious if it will create new needs or new jobs.”
Job security was a key issue in the recent presidential election in the United States, with “working-class” voters venting their frustration at low wages and the threat from technology. Hard on the heels of that came Amazon Go, a shop that dispenses with human cashiers.
However, the Wall Street Journal ran an article earlier this month with the headline “Automation can actually create more jobs”. It pointed to the introduction of automated teller machines in banks during the 1970s, and how the number of bank tellers in America had doubled.
James Bessen, an economist at the Boston University School of Law, claims that jobs and automation often go hand in hand.
The WSJ article presents empirical evidence to suggest that increased productivity brought about by automation ultimately leads to more wealth, cheaper goods, increased consumer spending power – and more jobs. The banking example rested on the spread of ATMs, which allowed branches to be smaller and cheaper to run; banks then opened more branches and employed more people.
Beyond the question of jobs, an all-encompassing system of intelligent machinery may incrementally – or suddenly – be in a position to perform tasks and make decisions with a far greater capacity than its human designers. Uber’s ridesharing service, for example, already operates in 500 cities around the world. A degree of added intelligence undoubtedly enhances safety on the road, but when individuals jettison the private space of their own vehicle in favour of a corporately controlled ride-hailing service, with its predetermined routes and monitoring, do they relinquish a fundamental, primordial freedom under a pretext of “efficiency”?
When bodily functions become unfashionable in such a clinical society, do we collectively agree to become cyborgs? Do notions of the “spirit” evaporate under the material blanket of conformity?
We may never be able to replicate the self-perpetuating complexity and interconnectedness of Nature. While the natural world gives of itself unconditionally, what price does a machine demand for its blandishments? The sages and gurus of old retreated to the caves and wilderness to transcend their material bonds precisely because the noise and distractions of society tend to drown out the inner voice of the spirit.
“We don’t know yet if it is possible to build a machine that has a human-level intelligence; that achieves consciousness; that is able to be creative, even. Nature is the closest thing we know to that. We could try to create an artificial brain built exactly that way [like nature], but we don’t know if it’s possible to achieve the same results using a different technology. For a long time we saw birds, and thought maybe flying was possible; but then the first planes were only loosely inspired by birds, and used a different kind of technology,” says Prof Ermon.
So, we are heading towards creating something that looks like intelligence – but not quite the way that evolution has crafted it over millennia.
Prof Ermon’s work at Stanford University encompasses the SAIL-Toyota programme, which focuses on intelligent automobiles. The university’s AI Lab also functions in concert with large companies. Its corporate affiliate programme states that the Lab “is devoted to the design of intelligent machines that serve, extend, expand, and improve human endeavour, making life more productive, safer, and healthier”.
Science fiction writers are fond of portraying corporate influence as sinister and opaque. What’s to stop pure science from becoming tainted by vested interests?
“In the academic community there is a big push to think more seriously about these issues. The idea is that we should start thinking about them now, while a strong AI is still in the future and there is still time to put appropriate rules and regulations in place, so that when the time comes we are prepared. It is possible that for some of these questions there is a technical answer. Maybe there is a way to build a safety switch into these machines, so that we can mathematically prove that they will never harm humans, or that they will follow some kind of ethical rules that we put into them. At the moment it’s not possible to make machines that are provably safe.”
Another camp in the academic community says that being concerned about these things is like worrying about overcrowding on Mars – meaning that it is more important to prove the premise before we fret over its possible effects.
“In the government space the key concern seems to be about what will happen to jobs. Nobody knows what will happen to the jobs that can be automated, but low-skilled jobs would be the first to go,” says Prof Ermon.
The great promise of technology in the post-war era hinged on domestic and industrial efficiencies that would emancipate people from the drudgery of labour and create more leisure time for individual pursuits. Instead the global population has been swept up in the tenets of globalisation, where the duration and intensity of work have increased while inflation has massively outpaced wage growth.
“In my research, I am trying hard to apply AI to problems in the public space. We recently published a paper on a technique that can be used to map poverty in developing countries by creating an AI system that looks at high-resolution satellite images and the features of the landscape, and uses them to predict how poor an area is. We can then create these high-resolution poverty maps that are used by NGOs and governmental organisations. There are many other positive examples, but this does not mean the big push is in this area. Most of the money is coming from corporations and, of course, their interest is monetary, so…”
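In outline, such a pipeline can be sketched in a few lines of Python. The version below is illustrative only: the file names and numbers are invented placeholders, and ResNet-18 plus ridge regression are this sketch’s own choices rather than the published method, which differs in detail (it relies on transfer learning, using night-time light data as an intermediate signal). The core idea is simply that a pretrained image network turns each satellite tile into a feature vector, and a plain regression then maps those features to a survey-measured wealth index.

```python
# A rough, hypothetical sketch of two-stage poverty mapping from
# satellite tiles: CNN features, then a simple regression.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.linear_model import Ridge

# Stage 1: a pretrained CNN used as a fixed feature extractor.
cnn = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
cnn.fc = torch.nn.Identity()   # drop the classifier head -> 512-dim output
cnn.eval()

preprocess = T.Compose([
    T.Resize(224), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def features(path):
    """Embed one satellite tile as a 512-dimensional vector."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return cnn(img).squeeze(0).numpy()

# Stage 2: regress survey-measured wealth on the image features.
# 'tiles' and 'wealth_index' are hypothetical stand-ins for image
# tiles matched to household-survey clusters.
tiles = ["tile_001.png", "tile_002.png"]   # one tile per survey cluster
wealth_index = np.array([0.2, -1.1])       # from household surveys

X = np.stack([features(p) for p in tiles])
reg = Ridge(alpha=1.0).fit(X, wealth_index)

# The fitted model can then score every tile in a country, producing
# a high-resolution poverty map from imagery alone.
```

The appeal of this design is economic: household surveys are expensive and sparse, while satellite imagery is cheap and covers everywhere, so a model trained on the overlap can extrapolate to unsurveyed areas.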
Prof Ermon points to government involvement as a possible happy medium – through state support for research into questions of safety and reliability, of fairness, and the social good.
“The potential of the technology is enormous. You can imagine doing all sorts of things in education, healthcare, sustainability, the environment – even to speed up the scientific discovery process. We can use these AI methods to analyse data for us. The societal potential is so massive that it’s hard to imagine what it could do for us. But like any other technology, there are many ways in which you can use it. It’s up to us – or to the governments or corporations – to decide how to use it. It can be used for good or for bad.”
Morals and ethics are routinely adjusted and twisted to accommodate commercial imperatives – as with religious dogmas and the parameters of imperialism. However, by demonstrating to the public that AI can be used for everyone’s benefit, society may be persuaded to take collective responsibility for its development and application.
Prof Ermon says: “Some of the big societal challenges that we are facing today, like poverty and food security, have better solutions than are currently on offer. I look at the [UN’s] 2030 development agenda, these big problems, and I can see ways of making better decisions based on data, based on AI, based on finding solutions that scale, that are cheap and economic, because there is automation behind it.”
In the sequel to 2001 – 2010: The Year We Make Contact – we are treated to the transcendent technology of advanced ET life. The oblique and geometric Monolith of the first film is revealed as the genesis for seeding life on Jupiter’s moon, Europa, and HAL himself transcends his limited mechanisms to join the collective consciousness of the universe. Heady stuff, but once again, a demonstration that technology is only as smart as the people wielding it.
The philosopher René Descartes coined the phrase, “I think, therefore I am”, as an assertion that thinking is the one thing that cannot be faked; that we know we are real because we think. The technological future beckons with a similar proposition – to think, or not to think, that is the question.
By James Fitzgerald