Artificial Intelligence: When will the machines be smarter than us?
Artificial Intelligence (AI) is a specialised field of computer science dedicated to creating systems or machines capable of performing tasks that would otherwise require human intelligence. The merging of computer science, social science and biology has enabled the creation of algorithms capable of learning, recognising patterns, analysing huge amounts of data and even communicating with humans. Current examples of AI include driverless cars and autonomous military technology such as drones and bomb disposal robots.
A concern inevitably raised in any debate around AI is whether humans could be outclassed. Films such as The Terminator and The Matrix depict a dystopian future in which machines gain consciousness, conclude that humans are endangering the future of the Earth, and decide to kill or control them to ensure their own survival. While this may seem like unrealistic science fiction, vast numbers of jobs once performed by humans are already being replaced with computers and AI. At the Guangzhou Second Provincial Central Hospital in China, for example, AI currently performs patient pre-diagnosis, CT scans, the transportation of operating theatre supplies and the organisation of patient records. Every year jobs are lost to automation, and the future of the vast majority of jobs remains highly uncertain as the technology develops.
Recently, a humanoid robot named Sophia became the world’s first android citizen when ‘she’ was granted Saudi Arabian citizenship. Sophia was created by Hanson Robotics in Hong Kong and can animate human expressions (Sophiabot.com). However, for Sophia to learn and understand the emotions behind those expressions, she must interact with real people. This is an example of machine learning: Sophia is programmed with certain protocols to perform a task, but she can then generalise those protocols to new data or novel situations. This may create the illusion that Sophia is a sentient being, but in reality her programming would need to advance considerably before such a claim could be made.
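To make the idea of "applying learned protocols to novel situations" concrete, here is a deliberately toy sketch in Python. It is a hypothetical illustration of the general principle only (a simple nearest-neighbour classifier), not a description of Sophia's actual software; the feature names and training values are invented for the example.

```python
# Toy illustration of machine learning: the program is never told the answer
# for every possible input. It learns from labelled examples, then applies
# what it learned to data it has not seen before.
# (Hypothetical example only; not how Sophia's software actually works.)

def train(examples):
    """'Training' here simply stores the labelled (features, label) pairs."""
    return list(examples)

def classify(model, features):
    """Label a new input by its closest training example (1-nearest-neighbour)."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(model, key=lambda ex: distance(ex[0], features))
    return nearest[1]

# Invented training data: (mouth curvature, eyebrow height) -> expression label
training = [
    ((0.9, 0.8), "smile"),
    ((0.8, 0.9), "smile"),
    ((0.1, 0.2), "frown"),
    ((0.2, 0.1), "frown"),
]

model = train(training)
# A novel face the system was never trained on:
print(classify(model, (0.85, 0.7)))  # -> smile
```

The point is that the rule (choose the label of the nearest known example) is fixed in advance, but the outputs depend on experience: feed the system different training interactions and it will respond differently to the same new input, which is what gives learning systems their appearance of adaptability.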
Attempts to replicate the human mind are also gathering momentum. Google, among other research institutions, has built neural networks that have, to date, reached roughly the scale of a cat’s brain. Yet the efficiency and speed of, for example, image recognition in the human brain remain unmatched by the likes of Google’s DeepMind. This is, in part, because the architectures of the human brain and of a computer are vastly different.
A far deeper investigation into the possibilities of AI is also under way. Google’s director of engineering, Ray Kurzweil, is optimistic that advances in technology will enable the human brain and AI to merge by the 2030s. Kurzweil believes that the human neocortex could connect directly to the cloud, on the premise that the AI component will replicate the pattern recognisers involved in human thought and eventually surpass the abilities of the biological brain. Effectively, the non-biological AI component will be so sophisticated that it will “model, simulate and understand fully the biological part”. As Kurzweil puts it, “We will be able to fully back up our brains”.
Advances in this technology may allow for brain–machine interfaces – what neurobiologist Mikhail Lebedev refers to as an “exo-brain” – within around 20 years. For now, though, brain augmentation research focuses on improving memory and concentration, and on restoring sensory and motor function lost to injury or paralysis.
Should brain augmentation and AI research progress as rapidly as Kurzweil and Lebedev predict, the bioethics of such endeavours will need urgent attention. Neuroprosthetics capable of compensating for, bypassing or replacing injured or flawed neural circuits can greatly improve an individual’s quality of life. However, these potential benefits must be weighed against the risks of neurophysiological and psychological harm. Questions of informed consent, identity, agency, autonomy - and even neural privacy in the case of brain-to-brain interfaces (BTBIs) - become especially salient for neuroprosthetic treatments that can mediate mental states or, as with BTBIs, disrupt motor functions.
It has been proposed that if adequate protections are put in place, unreasonable risk of harm is minimised, and the likely benefit of the research exceeds the risk, then experimenting with neuroprosthetics on research subjects can be justified. One specific protection involves clinicians ensuring that their subjects have realistic expectations about the benefits of the treatment.

There is a particular philosophical fear that neuroprosthetics may one day replace the biological circuitry of the brain, turning humans into cyborgs. Some argue that neuroprosthetics will merely remedy neural dysfunction, supplementing rather than replacing circuits that function normally. However, there is a growing trend of 'neurohacking', and it seems likely that while the mainstream use of the technology will be for people with brain-based disorders, some people will always be tempted to experiment with neuro-enhancement.

While the ethics of these issues can be debated, we have still not managed to control the current misuse of stimulant medications and drugs such as cocaine by students and executives seeking performance enhancement. There are certainly arguments that adults should be allowed to make these decisions for themselves. It is questionable that people can purchase and drink as much alcohol as they choose - placing themselves and others at risk in pursuit of pleasure - yet cannot legally obtain a stimulant to help them perform better at work.