Will “intelligent” machines “take over”?

by Hans-Georg Stork (h-gATcikon.de)

A popular question? A burning question? Apparently, yes, judging from a plethora of journalistic products, of article and book size alike. It is a question whose variations have, for quite some time, gripped the imagination of many a science fiction enthusiast, and not only theirs. It is a question that of late has become even more stirring, given the mind-boggling increase in computer power: processing speed, memory and storage capacities, transmission bandwidth and connectivity. An increase that has entailed not only the emergence of Big Brother structures across our planet but also the feasibility of sophisticated adaptive control of big and small machines and machine tools, of vehicles and weapons, and of entire sociotechnical systems.

Artificial Intelligence (AI), Artificial Cognitive Systems, Big Data and Smart Robots have become the buzzwords of the day, tickling many a journalist's fancy. One rising, hitherto rather arcane branch of Theoretical Computer Science in particular, called "Machine Learning", appears to be gaining appeal rapidly. Sadly, many journalists (and Popular Science writers) who commit their thoughts on this type of question to paper or the Web seem to be out of their depth.
Often, the results of their ruminations are rather superficial doom and gloom prophecies, mixed with standard hyperbole, uncritically attributing to machines superhuman faculties and power - which, of course, they trivially have in some respects, while lacking typically human capabilities. (Any pocket calculator has superhuman power when it comes to mental arithmetic.) Journalists also like quoting high-tech entrepreneur celebrities, dubbed "thinkers", from Silicon Valley and similar real or virtual locations, in support of the odd claim. Even Stephen Hawking may help.
While doom and gloom may well be justified when discussing new technologies,[1] many proponents of negative scenarios seem to believe that it is only now that the products of human ingenuity can be turned against us. In fact, like early predecessors in the 19th century,[2] they prefer the active voice: that our products turn against us, that they take over, make the human species superfluous and obsolete, enslave us, destroy us - and similar bleak fantasies.

And here lies the rub: Man, not Machine, is Man's worst enemy (and always has been: homo homini lupus). Machines - “intelligent” or not - are not bad in themselves. They never have “taken over” and never will. They have no Self, no soul, no volition of their own and no natural autonomy, regardless of inbuilt Artificial Intelligence, Machine Learning, et cetera. Being made by Man, they are used by Man. If it appears to be the other way round, then someone of flesh and blood, conspicuous or not, is hiding behind the ostensibly oppressive machine.

But what kind of machines are we talking about, actually? Dishwashers? Digital cameras? Motorbikes? Networked information and control systems? Welding devices? Tanks? Unmanned Aerial Vehicles? Nanobots? Whatever they are: we are talking about machines, devices and technical systems that have the potential to empower people to gain, retain and wield power over other people. This holds in particular (but not only) for machines as means of production (or capital goods) and for machines as weapons. The more sophisticated (“intelligent”) they become - for instance, because scientists and engineers incrementally add Information Technology (IT) based capabilities to them - the easier and more profitable it will be for the few (the owners of capital and their minions) to take over, largely to the detriment of the many. And it will make military power more effective and “safer” to use against perceived enemies, in taking over their territories, resources and infrastructures.

Technology products - “intelligent” or not - are indeed a potential threat to mankind, including its very existence. Not because they (the “intelligent” ones) might take over or - in dystopian fantasies - enslave mankind (or for whatever other reasons), but because they can in many ways be grossly misused by the powers-that-be. Gross misuse includes ignoring the risk of failure, possibly due to unmanageable, uncontrollable complexities. Examples abound, among which nuclear technology provides the most conspicuous ones so far. Genetic engineering, Synthetic Biology, et cetera, may be next in line and, yes, Robotics as well.

A more immediate threat, however, lies in the increasingly patent incompatibility of the prevailing socioeconomic order, and its underlying paradigms, with the latest technologies that very order has spawned. It becomes manifest in the widening gap between the haves and the rest of humanity, both globally and on regional scales, between private wealth and the impoverished commons. It is not A(rtificial) I(ntelligence) that kills - as some journalists seem to believe (and want to make us believe). These same journalists should take a lesson from the current Pope, who famously points the finger at the real problem: “An unfettered pursuit of money rules. The service of the common good is left behind.
Once capital becomes an idol and guides people’s decisions, once greed for money presides over the entire socioeconomic system, it ruins society, it condemns and enslaves men and women, it destroys human fraternity, it sets people against one another and, as we clearly see, it even puts at risk our common home.”[3] And: “Such an economy kills.”

Apart from all that, we (at all levels of decision-making) may be all too eager (and too lazy?) to allow "intelligent" machines or systems to "take over", deliberately giving up making choices that only we - as humans, with a human mind, and with human understanding - ought to make. There is indeed a fine line between letting a technical device or system render a useful service and uncritically accepting whatever results some obscure algorithm yields. But a genuinely human understanding of matters of life and death, of love, empathy and solidarity cannot be fully formalised and hence cannot be programmed. And it cannot be learned by a machine, simply because machines are not human. Why should we relinquish it?

Footnotes:

[1] One of the more serious contributions to this discussion is Bill Joy's
essay “Why the future doesn't need us” at https://www.wired.com/2000/04/joy-2/. In it, he quotes one of the most
sinister scenarios imaginable in this context, made up by Ted
Kaczynski, known as the Unabomber
(https://www.wikiwand.com/en/Ted_Kaczynski).