The Hybrid Age is mankind’s next industrial revolution, in which humans and machines connect digitally with the help of AI applications. After the biological and cultural stages, this will be the technological stage of human life.
The technological phase includes the possibility that people will be outsmarted by artificial general intelligence eventually reaching human level, or by superintelligence. The upcoming questions for mankind are therefore so difficult that we must start discussing them now, so that we have the answers ready by the time we need them.
Global AI ranking shifting to Asia
In the meantime, Europe is suffering from an AI research brain drain and has been losing talent mostly to corporations in the United States (U.S.). In terms of basic and applied AI research, Europe is still the largest, most diverse and most collaborative region. But China has already overtaken the U.S. and is fast closing in on Europe’s lead in research. Japan ranks third in research publications, the UK fourth and Germany fifth. China, while still operating in relative isolation from the wider research and business community, is set to become the global leader in AI by 2030. Its continent-sized population alone gives the country a unique advantage: a sheer quantity of AI business and research experts.
From digital utopian myth to dystopian future scenarios
The early years of digitalization were marked by a digital-utopian view, led by U.S. scientists and Silicon Valley, of the great potential that digital technologies are said to hold for human progress. And it is true: digital technologies such as AI are giving human life the potential to flourish like never before – or to self-destruct, if they are not aligned with our goals.
Reflecting this, media coverage of AI applications in 2017 and 2018 was the first of its kind. The question of what it means to be human in the Hybrid Age was raised by portraying the insecurity of cyberspace, the world’s increasing connectivity, and AI applications mainly as negative developments, painting a dystopian future for mankind.
One reason for this was incidents such as the 2015-2016 Cambridge Analytica scandal. Scientists at Cambridge’s Psychometrics Centre had signed a cooperation contract with Facebook. The research project, centered on AI, was later hijacked and turned into an unethical and criminal operation that misused the private data of millions of Facebook users for economic gain. The incident proved Facebook’s inability to monitor the implications of its own business model; the company has significant questions to answer when it comes to privacy.
In reaction to these developments, most European governments hastily came up with AI policies. The German government announced its AI Strategy in November 2018. A key measure is to channel a large share of the three-billion-euro program into one hundred AI professorships, to spread AI knowledge in higher education and research.
But is the science our main problem with AI?
Over the last four years, dramatic success in the AI sub-field of ‘machine learning’ has led to a torrent of AI applications in economies worldwide. The potential of these technologies has even triggered the fantasies of authoritarian politicians such as China’s Xi Jinping and Russia’s Putin, who wish to capture this power to lead the world with the help of unleashed AI applications that might serve their goals. These years have also seen an explosion of concrete economic interest, especially in machine learning that applies statistical and probabilistic methods to large data sets, and in ‘deep learning’ models.
The complexity of Deep Learning
A technical reason for today’s widespread feeling of insecurity and for dystopian imaginings is that these third-wave machine learning models are opaque, non-intuitive, and difficult even for experts to understand. Machine learning’s own sub-field of deep learning is especially cryptic to humans because of its immense complexity. While deep learning techniques are incredibly good at finding patterns in data, it can be impossible for humans to understand how they reach their conclusions.
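One way researchers probe such opaque models is with model-agnostic explanation techniques. The sketch below illustrates one of the simplest, permutation importance: shuffle one input feature at a time and measure how much the model’s accuracy drops. A large drop means the model relied on that feature. The “black box” here is a deliberately trivial stand-in for an opaque network, and all names and data are hypothetical, chosen only to make the idea concrete.

```python
import random

# A toy "black-box" model: a hypothetical stand-in for an opaque
# deep network. It predicts 1 when feature 0 exceeds a threshold.
def black_box(row):
    return 1 if row[0] > 0.5 else 0

# Synthetic dataset: feature 0 determines the label, feature 1 is noise.
random.seed(0)
data = [[random.random(), random.random()] for _ in range(200)]
labels = [1 if row[0] > 0.5 else 0 for row in data]

def accuracy(rows):
    return sum(black_box(r) == y for r, y in zip(rows, labels)) / len(labels)

baseline = accuracy(data)

# Permutation importance: shuffle one feature column at a time and
# measure the resulting accuracy drop. A large drop means the model
# relied on that feature; no drop means the model ignored it.
drops = []
for i in range(2):
    shuffled_col = [row[i] for row in data]
    random.shuffle(shuffled_col)
    permuted = [row[:i] + [v] + row[i + 1:] for row, v in zip(data, shuffled_col)]
    drops.append(baseline - accuracy(permuted))
    print(f"feature {i}: accuracy drop = {drops[i]:.2f}")
```

Running this shows a large drop for feature 0 and none for feature 1, revealing, without opening the box, which input the model actually uses. Real explainable-AI toolkits apply the same idea to far larger models.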
The remote control of our life
The more reliant we humans become on these AI applications, and the more impact AI has on our lives, the more important it becomes that AI is robust and aligned with our goals. At the moment, Silicon Valley scientists, Chinese companies and other startups are applying technologies in an experimental state: developed in a research lab, they enter the real world directly. To avoid unforeseen consequences, it is important that a goal-led, beneficial AI does what humans want it to do. This leaves us with the key question of who decides what those goals should be.
Enabling goals with AI safety research
Since the 1990s, experts have been working on standards for understanding how the technology behind deep learning – neural networks – makes decisions. In 2004, the concept of explainable AI (XAI / ex AI) was introduced. Today, an ex AI is an AI whose actions can be trusted and easily understood by humans. 2015 was the year AI safety research went mainstream, after the world’s first AI safety conference took place. Until that moment, talk of AI risks was often misunderstood as an attempt to impede AI progress. In August 2016, the U.S. Defense Advanced Research Projects Agency (DARPA) initiated an Explainable Artificial Intelligence Program (XAI) for military purposes: DARPA does not want agents and military operatives blindly trusting any algorithm. The program’s final deliverable will be a toolkit library that can be used to develop future ex AI systems. After the program is complete, these toolkits would be available for further refinement and transition into defense or commercial applications in the U.S.
The old world’s way
In Europe, the High-Level Expert Group on Artificial Intelligence (HLEG on AI) – with the general objective of supporting the implementation of the 2018 European strategy on AI – announced its draft Ethics Guidelines on trustworthy AI. For the HLEG on AI, achieving trustworthy AI means that the general and abstract principles derived from human rights need to be mapped into concrete requirements for all AI applications.
Don’t let people be outsmarted
In areas like law enforcement, medicine and the media, however, the consequences of mistakes and abuse of goal-led AI applications may be serious for democracy. The main risk for Europe’s democracy isn’t malice but a lack of human competence and the further loss of talent. An important step in fighting this is to empower citizens through life-long AI education, drawing on findings from ex AI research, so that they can take an informed position in the discussion about how to be human in the Hybrid Age and how to develop and apply AI.
Denise Feldner is a Lawyer, Tech-Enthusiast & Science Manager, Senior Partner at KAIROS Partners and Member of Atlantik-Brücke.
This article is the edited and shortened version of the introduction of a book titled “Redesigning Organizations — Concepts for the Connected Society”. It will be published in Spring 2019 by Springer Nature, Switzerland.