Artificial Intelligence & The Coming Singularity

Artificial intelligence has attracted renewed interest thanks to a wave of new innovations reaching the market, supported by increasingly open development and collaboration between technology companies and research institutions. These innovations can learn and become more efficient without human intervention. The principal goal is to develop systems that learn from experience with human-like breadth and surpass human performance in most cognitive tasks, with major consequences for business and society.

From a business intelligence perspective, the advent of AI as a means to process Big Data is significant. Disciplines such as marketing face major, largely positive disruption as AI techniques are used to learn, model, and predict human and market behaviours. The explosion of data is driven in part by sensors that enable ambient intelligence, along with speech and facial recognition, intelligent environments, and motion capture. Much of this is possible even today using surveillance cameras and AI software, e.g. systems developed by Google’s DeepMind.

A global effort to develop AI is likely to bring about the Singularity by 2045 or earlier – the inflection point at which machines surpass human intelligence and emerge as the dominant force behind future innovation. There is an expectation of a critical transition at which humans will have to cede control of the technology to the machines. From that point, machines will self-improve recursively, and each succeeding technological iteration is likely to be exponentially superior to the last, changing at a rate that is arguably beyond our understanding.

In the years prior, there will be widespread integration of technology, data, and human experience, with considerable advances in areas such as the health sciences, genetics and gene editing, job and task automation, and military capability. Our present systems already point the way forward: today’s computers are not just capable of giving answers in real time, they can even predict what we are going to ask. Indeed, we are already living a symbiotic life with our intelligent devices, through products such as self-driving, self-learning cars and voice-enabled assistants such as Amazon’s Alexa and Apple’s Siri, which use a comparatively basic form of machine learning.

The potential impact of AI on society is a key concern when considering the Singularity, chief among the worries being the mass displacement of workers, many of whom will be rendered redundant and seen as less productive than AI-enabled machine workers. There is also anxiety about the consequences of unchecked development in AI: Will systems work to preserve and advance society, or will they eventually pose an existential threat to the world and its inhabitants? How will privacy, ethics, and security be protected given the automation of Big Data and deep learning? And who is ultimately legally responsible for the actions of automated systems and services, e.g. self-driving cars? Such questions will likely remain unanswered for years to come.

At present, we know that automation continues to affect society, even at this early stage of AI’s development. Because of their superior cognitive, mathematical, and data-processing capabilities, intelligent systems and even robots are already replacing workers in medicine, manufacturing, stock trading, insurance, banking, and more. IBM’s Watson, for example, is being deployed in major hospitals around the world to help doctors better diagnose patients. In addition, a Japanese medical insurance company has recently reduced its workforce in favour of AI-capable systems that not only gather information about customers in seconds, but also flag past payouts, fraud, and many other nuanced details with a high degree of accuracy. Hedge funds are beginning to replace traders and financial advisers with systems that process and learn from Big Data and can offer superior investment recommendations compared to their human counterparts. China, a major developer and importer of robotics, is well known for its labour-intensive manufacturing facilities; nevertheless, robotic manufacturing there is growing rapidly and is likely to drive significant social pressures in the near future.

There’s no doubt that AI will usher in a new era of technological benefits for society and business. Productivity is expected to rise, and society stands to enjoy a much higher standard of living as a result of the socio-economic benefits created by highly intelligent machines, which are considerably more productive, reliable, and efficient than human workers at cognitive tasks and many other types of work, e.g. production and manufacturing. Humans will likely retain the advantage in interpersonal, emotional, creative, and other high-level aspects of thinking and producing. One author supports this by describing AI as “carrying out tasks in a zombie fashion without any commitment to intrinsic experience”. However, some suspect that machines may eventually catch up even here, mimicking creativity and learning to read emotions as they evolve through self-learning and self-improvement.

This is likely to create enormous pressure on governments around the world, well before the point of the Singularity. The focus of education will need to shift, and the entire notion of a career will change. Unfortunately, as futurist Satish Tyagi explains, “Silicon Valley is way faster than political bodies”. Leading up to this point, societies will have to invest in retraining large numbers of people for areas where they can be more valuable and effective than AI, e.g. roles requiring emotional intelligence and social work. In anticipation of widespread job displacement, countries such as Norway and Canada have begun experimenting with a universal basic income for individuals and families as a way to maintain a first-world standard of living (as discussed in The Economist, 2016).

The potential risks of AI – and ultimately the Singularity – include scenarios once restricted to the realm of science fiction: that machines will eventually turn against us. Indeed, Stephen Hawking has called for a dialogue on such risks, including a discussion of future regulation to best manage development. Other thinkers, such as Steven Pinker of Harvard, counter the notion that AI will lead to the Singularity, on the grounds that Moore’s Law will soon end and computing power will reach a ceiling, or saturation point, causing progress on AI to stagnate.

Raymond Kurzweil and others, however, disagree. They argue that the exponential growth in processing power described by Moore’s Law will continue, even if silicon scaling is replaced by another technology. Others take issue with the dispassionate nature of Singularity studies and the notion that intellect can be reduced to mere calculation, given that human achievements are, by definition, measured by the quality of human satisfaction they generate. Toby Walsh supports this view: “Intelligence is much more than thinking faster or longer about a problem than someone else”.
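To give a sense of the scale of growth the Kurzweil camp is assuming, here is a minimal illustrative sketch – not drawn from any of the sources above – of the arithmetic behind a Moore’s-Law-style doubling, using a hypothetical two-year doubling period:

```python
# Illustrative sketch of compounding under a Moore's-Law-style doubling.
# The 2-year doubling period is an assumption for illustration only.

def growth_factor(years: float, doubling_period: float = 2.0) -> float:
    """Return the multiplicative growth after `years` of steady doubling."""
    return 2 ** (years / doubling_period)

# From 2017 to Kurzweil's projected 2045 is 28 years:
print(round(growth_factor(2045 - 2017)))  # 16384, i.e. roughly 16,000x
```

The point of the sketch is simply that under steady doubling, 28 years compounds to a factor of 2^14 – which is why the two sides of the debate (continued doubling versus a hard ceiling) lead to such radically different forecasts.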

Cybersecurity is another key concern with regard to AI, and it cuts both ways. From a defence point of view, machine learning provides significant advantages in detecting intrusions and maintaining security. On the other hand, AI-based cyberattacks may be extremely effective, though the advantage is thought to lie with defence. Further, as AI develops, it will almost certainly be used for cybercrime, and cyber-sabotage could be directed at critical infrastructure. Such threats are a compelling reason to pursue defence research wholeheartedly.

Some feel that it is important to have a centralised governing body that lays down a framework for prioritising the positive outcomes of AI. Meanwhile, interest is steadily increasing, and investment is now focused on developing new AI technologies that advance transhumanism and technological convergence, making people healthier, happier, higher-performing, and more intelligent. George Petelin urges caution, however, noting a “cult of optimism” surrounding AI and suggesting that this convergence of technology is akin to putting all of our resources “into one ultimately fragile basket”. The growing capabilities of AI clearly bring an increased potential for impact on human society, and it is therefore the duty of AI researchers to ensure that this future impact is beneficial. Finally, Satish Tyagi believes that change should be embraced regardless, whether it concerns employment, privacy, or eventually the very existence of humanity.

In a future post I will discuss the impact of AI on marketing practice specifically.

© 2017 James McDowall
