{"id":1750,"date":"2016-12-28T18:48:30","date_gmt":"2016-12-28T18:48:30","guid":{"rendered":"http:\/\/chief-exec.com\/?p=1750"},"modified":"2017-02-15T15:46:44","modified_gmt":"2017-02-15T15:46:44","slug":"the-big-read-life-under-the-all-seeing-i","status":"publish","type":"post","link":"https:\/\/chief-exec.com\/?p=1750","title":{"rendered":"The big read: Life under the all-seeing \u201cI\u201d"},"content":{"rendered":"<h4><span style=\"color: #333399;\">Stefano Ermon of Stanford University\u2019s Artificial Intelligence Lab speaks to <em>Chief-Exec.com<\/em>\u00a0 about the hopes and hazards of a world populated by learning machines<\/span><\/h4>\n<p>&nbsp;<\/p>\n<p>As anyone who has ever tried to get a packet of crisps from a vending machine will testify, technology is periodically prone to take your money and not deliver the goods.<\/p>\n<p>In Stanley Kubrick\u2019s <em>2001: A Space Odyssey <\/em>we were introduced to the seemingly benign and exquisitely efficient computer, HAL, whose conversational tone and engaging nature belied a deadly glitch in its programming. It is fair to say that the film was, and remains, ahead of its time, not least for its exploration of human fallibility through machines.<\/p>\n<p>The artificial intelligence (AI) of the fictional HAL 9000 system operated as an autonomous nervous system for the spaceship and its occupants. 
Its creators, of course, had not anticipated the machine\u2019s paranoid response to contradictory mission parameters \u2013 on the one hand protecting the crew, but on the other secretly investigating evidence of extra-terrestrial life as part of a military-industrial agenda.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignright size-medium wp-image-1759\" src=\"http:\/\/chief-exec.com\/wp\/wp-content\/uploads\/2016\/12\/Stefano-Ermon-IMG_8217_s_1-240x300.jpg\" alt=\"\" width=\"240\" height=\"300\" srcset=\"https:\/\/chief-exec.com\/wp\/wp-content\/uploads\/2016\/12\/Stefano-Ermon-IMG_8217_s_1-240x300.jpg 240w, https:\/\/chief-exec.com\/wp\/wp-content\/uploads\/2016\/12\/Stefano-Ermon-IMG_8217_s_1-768x961.jpg 768w, https:\/\/chief-exec.com\/wp\/wp-content\/uploads\/2016\/12\/Stefano-Ermon-IMG_8217_s_1.jpg 818w\" sizes=\"auto, (max-width: 240px) 100vw, 240px\" \/>\u201cIf we ever get to the point where we have a general artificial intelligence, where we are able to build machines to match human intelligence, then that will create all sorts of problems, in the sense of who will control the technology, what we will do with it \u2013 and the risks are pretty big,\u201d says Professor Stefano Ermon (pictured) at Stanford University\u2019s Artificial Intelligence Laboratory. \u201cThere are all sorts of issues, like autonomous weapons.\u201d<\/p>\n<p>The Association for the Advancement of Artificial Intelligence has been pushing for a ban on autonomous weapons. Modern armaments contain a host of technological features, but currently the decision to fire still rests with human controllers.<\/p>\n<p>\u201cIf we start putting algorithms to computers to make life or death decisions, that creates all kinds of ethical [issues],\u201d says Prof Ermon. \u201cThose are the big problems that will arise if we get to a strong AI position. But we are not there yet.\u201d<\/p>\n<p>Prof Ermon points out that AI is already pervasive in modern life. 
He cites internet search engines as an example.<\/p>\n<p>\u201cIt\u2019s not just autonomous driving. When you type something into Google, there is an AI system \u2013 a machine learning system \u2013 that will give you answers to your questions. And even that can potentially create all sorts of problems, because it decides what to show and what not to show. Are the results that it provides in any way fair? There are all these examples; if you search and query you are going to get these very, very biased results \u2013 both in terms of race and gender and other issues \u2013 because of the way these systems have been developed and trained.\u201d<\/p>\n<p>The degree to which AI systems help us to make decisions worries academics even now. In some US states, judges are turning to AI to make decisions about bail terms.<\/p>\n<p>Prof Ermon says: \u201cThey use these predictive systems that tell judges how likely a convict is to repeat a crime, essentially, and they use these predictive measures to make bail decisions. That is extremely dangerous, because we don\u2019t know how these systems have been built and how they make decisions, and whether they are fair or not, or what kind of data they look at.\u201d<\/p>\n<p>AI would seem to be a double-edged sword.<\/p>\n<p>\u201cOn the one hand, there are huge opportunities. Autonomous cars and AI tech in general can lead to massive improvements in energy efficiency; they can help us find information \u2013 the needle in the haystack. They can help us make better decisions in a lot of cases, but you also need to be careful, because if you delegate a computer to make decisions for us, you need to think hard about the how\u2019s and what\u2019s.\u201d<\/p>\n<blockquote>\n<h4><span style=\"color: #333399;\">\u00a0&#8230; They use these predictive systems that tell judges how likely a convict is to repeat a crime, essentially, and they use these predictive measures to make bail decisions. 
That is extremely dangerous<\/span><\/h4>\n<\/blockquote>\n<p>Prof Ermon says that machines are not necessarily impartial or unbiased. The way \u201cintelligent\u201d machines are currently trained to carry out tasks is by showing them examples of how that task has been solved in the past.<\/p>\n<p>\u201cIf you show a lot of biased training data to the machine, then you are going to get answers that might not be fair or the ones you expect,\u201d he says.<\/p>\n<p>The race to develop clean, clever and connected transportation is getting wide exposure in the media, as technology groups vie with carmakers to redefine the automotive industry, both socially, through ride-sharing schemes, and technically, through sensor-led hands-free travel. The complexity of the manufacturing process has so far deterred tech groups from making their own cars, but AI may eventually dominate that process too.<\/p>\n<p>\u201cThis is not the first time that we have developed new technologies that make some kinds of jobs obsolete,\u201d says Prof Ermon. \u201cBut in the past this has been relatively slow. It destroyed some jobs but then created others, based on new needs. Is this going to happen with AI? Yes, probably, but it is going to happen much faster. If we get to a strong AI, then you can imagine that any job that is done by a human can be automated, assigned to AI. At that point it is not so obvious if it will create new needs or new jobs.\u201d<\/p>\n<p>Job security was a key issue in the recent presidential election in the United States, with \u201cworking-class\u201d voters venting their frustration at low wages and the threat from technology. Hard on the heels of that came Amazon Go, a shop that dispenses with human cashiers.<\/p>\n<p>However, the <em>Wall Street Journal<\/em> ran an article earlier this month with the headline \u201cAutomation can actually create more jobs\u201d. 
It pointed to the introduction of automated teller machines in banks during the 1970s, and how the number of bank tellers in America had doubled.<\/p>\n<p>James Bessen, an economist at the Boston University School of Law, claims that jobs and automation often go hand in hand.<\/p>\n<p>The WSJ article presents empirical evidence to suggest that increased productivity brought about by automation ultimately leads to more wealth, cheaper goods, increased consumer spending power \u2013 and more jobs. The example of banks was based on the spread of ATMs, which meant banks could be smaller and cheaper. The banks then opened more branches and employed more people.<\/p>\n<blockquote>\n<h4><span style=\"color: #333399;\">In the government space the key concern seems to be about what will happen to jobs. Nobody knows what will happen to the jobs that can be automated, but low-skilled jobs would be the first to go<\/span><\/h4>\n<\/blockquote>\n<p>Beyond the question of jobs, an all-encompassing system of intelligent machinery may incrementally \u2013 or suddenly \u2013 be in a position to perform tasks and make decisions with a much higher capacity than its human designers. Uber\u2019s ridesharing concept, for example, is already in 500 cities around the world. A degree of added intelligence undoubtedly enhances safety on the road, but when the individual jettisons the private space of their own vehicle in favour of a corporately controlled ride-hailing service, with its predetermined routes and monitoring, do they relinquish a fundamental, primordial freedom under a pretext of \u201cefficiency\u201d?<\/p>\n<p>When bodily functions become unfashionable in such a clinical society, do we collectively agree to become cyborgs? Do notions of the \u201cspirit\u201d evaporate under the material blanket of conformity?<\/p>\n<p>We may never be able to replicate the self-perpetuating complexity and interconnectedness of Nature. 
While the natural world gives of itself unconditionally, what price does a machine demand for its blandishments? The sages and gurus of old retreated to the caves and wilderness to transcend their material bonds precisely because the noise and distractions of society tend to drown out the inner voice of the spirit.<\/p>\n<p>\u201cWe don\u2019t know yet if it is possible to build a machine that has a human-level intelligence; that achieves consciousness; that is able to be creative, even. Nature is the closest thing we know to that. We could try to create an artificial brain built exactly that way [like nature], but we don\u2019t know if it\u2019s possible to achieve the same results using a different technology. For a long time we saw birds, and thought maybe flying was possible; but then the first planes were only loosely inspired by birds, and used a different kind of technology,\u201d says Prof Ermon.<\/p>\n<p>So, we are heading towards creating something that looks like intelligence \u2013 but not quite the way that evolution has crafted it over millennia.<\/p>\n<p>Prof Ermon\u2019s work at Stanford University encompasses the SAIL-Toyota programme, which focuses on intelligent automobiles. The university\u2019s AI Lab also functions in concert with large companies. Its corporate affiliate programme states that the Lab \u201cis devoted to the design of intelligent machines that serve, extend, expand, and improve human endeavour, making life more productive, safer, and healthier\u201d.<\/p>\n<p>Science fiction writers are fond of portraying corporate influence as sinister and opaque. What\u2019s to stop pure science from becoming tainted by vested interests?<\/p>\n<p>\u201cIn the academic community there is a big push to think more seriously about these issues. 
They think that we should start thinking about them now, while a strong AI is still in the future [while\u00a0there is still time to put in place appropriate rules and regulations], so that when the time comes we are prepared. It is possible that to some of these questions there is a technical answer. Maybe there is a way to build a safety switch into these machines, so that there is a method to mathematically prove that they will never harm humans or that they will follow some kind of ethical rules that we put into them. At the moment it\u2019s not possible to make machines that are provably safe.\u201d<\/p>\n<blockquote>\n<h4><span style=\"color: #000080;\">But like any other technology, there are many ways in which you can use [AI]. It\u2019s up to us \u2013 or to the governments or corporations \u2013 to decide how to use it. It can be used for good or for bad<\/span><\/h4>\n<\/blockquote>\n<p>There is another side of the academic community that says being concerned about these things is like worrying about over-crowding on Mars \u2013 meaning that it is more important to prove the premise before we fret over its possible effects.<\/p>\n<p>\u201cIn the government space the key concern seems to be about what will happen to jobs. Nobody knows what will happen to the jobs that can be automated, but low-skilled jobs would be the first to go,\u201d says Prof Ermon.<\/p>\n<p>The great promise of technology in the post-war era hinged on domestic and industrial efficiencies that would emancipate people from the drudgery of labour and create more leisure time for individual pursuits. Instead the global population has been swept up into the tenets of globalisation, where the duration and intensity of work has increased while inflation has massively outpaced wage growth.<\/p>\n<p>\u201cIn my research, I am trying hard to apply AI to problems in the public space. 
We recently published a paper on a technique that can be used to map poverty in developing countries by creating an AI system that looks into high resolution satellite images, and the features of the landscape, and uses them to predict how poor an area is. We can then create these hi-res poverty maps that are used by NGOs and governmental organisations. There are many other positive examples, but this does not mean the big push is in this area. Most of the money is coming from corporations and, of course, their interest is monetary, so\u2026\u201d<\/p>\n<p>Prof Ermon points to government involvement as a possible happy medium \u2013 through state support for research into questions of safety and reliability, of fairness, and the social good.<\/p>\n<p>\u201cThe potential of the technology is enormous. You can imagine doing all sorts of things in education, healthcare, sustainability, the environment \u2013 even to speed up the scientific discovery process. We can use these AI methods to analyse data for us. The societal potential is so massive that it\u2019s hard to imagine what it could do for us. But like any other technology, there are many ways in which you can use it. It\u2019s up to us \u2013 or to the governments or corporations \u2013 to decide how to use it. It can be used for good or for bad.\u201d<\/p>\n<p>Morals and ethics are routinely adjusted and twisted to accommodate commercial imperatives \u2013 as with religious dogmas and the parameters of imperialism. However, by demonstrating to the public that AI can be used for everyone\u2019s benefit, society may be persuaded to take collective responsibility for its development and application.<\/p>\n<p>Prof Ermon says: \u201cSome of the big societal challenges that we are facing today, like poverty and food security, have better solutions than are currently on offer. 
I look at the [UN\u2019s] 2030 development agenda, these big problems, and I can see ways of making better decisions based on data, based on AI, based on finding solutions that scale at the work level, that are cheap and economic, because there is automation behind it.\u201d<\/p>\n<p>In the sequel to 2001 \u2013 <em>2010: The Year We Make Contact <\/em>\u2013 we are treated to the transcendent technology of advanced ET life. The oblique and geometric Monolith of the first film is revealed as the genesis for seeding life on Jupiter\u2019s moon, Europa, and HAL himself supersedes his limited mechanisms to join the collective consciousness of the universe. Heady stuff, but once again, a demonstration that technology is only as smart as the people wielding it.<\/p>\n<p>The philosopher Ren\u00e9 Descartes coined the phrase, \u201cI think, therefore I am\u201d, as an assertion that thinking is the one thing that cannot be faked; that we know we are real because we think. The technological future beckons with a similar proposition \u2013 to think, or not to think, that is the question.<\/p>\n<p style=\"text-align: right;\"><em>By James Fitzgerald<\/em><\/p>\n<hr \/>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignright size-medium wp-image-1660\" src=\"http:\/\/chief-exec.com\/wp\/wp-content\/uploads\/2016\/12\/Fitzgerald-VB1-300x135.jpg\" alt=\"\" width=\"300\" height=\"135\" srcset=\"https:\/\/chief-exec.com\/wp\/wp-content\/uploads\/2016\/12\/Fitzgerald-VB1-300x135.jpg 300w, https:\/\/chief-exec.com\/wp\/wp-content\/uploads\/2016\/12\/Fitzgerald-VB1.jpg 371w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/p>\n<p style=\"text-align: right;\"><em>\u00a0<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Stefano Ermon of Stanford University\u2019s Artificial Intelligence Lab speaks to Chief-Exec.com\u00a0 about the hopes and hazards of a world populated by learning machines &nbsp; As anyone who has ever tried to 
get&#8230;<\/p>\n","protected":false},"author":5,"featured_media":1758,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[105,61],"tags":[113,36,26,114,58,35,18,38,25],"class_list":["post-1750","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-editors-picks","category-innovation","tag-ai","tag-automobiles","tag-education","tag-employment","tag-innovation","tag-manufacturing","tag-opinion","tag-productivity","tag-universities"],"_links":{"self":[{"href":"https:\/\/chief-exec.com\/index.php?rest_route=\/wp\/v2\/posts\/1750","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/chief-exec.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/chief-exec.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/chief-exec.com\/index.php?rest_route=\/wp\/v2\/users\/5"}],"replies":[{"embeddable":true,"href":"https:\/\/chief-exec.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1750"}],"version-history":[{"count":11,"href":"https:\/\/chief-exec.com\/index.php?rest_route=\/wp\/v2\/posts\/1750\/revisions"}],"predecessor-version":[{"id":1822,"href":"https:\/\/chief-exec.com\/index.php?rest_route=\/wp\/v2\/posts\/1750\/revisions\/1822"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/chief-exec.com\/index.php?rest_route=\/wp\/v2\/media\/1758"}],"wp:attachment":[{"href":"https:\/\/chief-exec.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1750"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/chief-exec.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=1750"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/chief-exec.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1750"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}