Olaf Groth, Tobias Straube and Johannes Glatz - Cambrian Group (http://cambrian.ai)

Remember the moment you fell in love for the first time – that electrifying, giddy feeling that made the sun on your skin just a little warmer and the song on the radio just a little more poignant? Imagine feeling that same sensation on demand, simply by pressing a button. Or imagine seeing a blind relative come out of routine brain surgery having fully recovered her eyesight. How about hopping onto a conference call with 100 people, with everyone able to tune in to any conversation in the room, selectively or simultaneously?

It sounds like the stuff of far-off science fiction, but it’s not. In the coming decade, significant breakthroughs in quantum computing, artificial intelligence and brain-computer interfaces will enable machines and humans to model and understand entirely new dimensions of the world and significantly augment our capabilities. Already, each of these fields has developed applications that hint at futuristic scenarios. But as these technologies converge, interact and integrate with one another, we will see an even deeper symbiosis of human and machine that alters the fabric of our economies, our societies and our everyday lives. These new sparks and the directions they send us could redefine reality, truth and our engagement with both. So, even though we have yet to fully digest the radical changes induced by early generations of these technologies, we need to start preparing for the disruptive power shifts that will emerge when AI, quantum and brain-computer interfaces converge—a new cognitive era in which transformations unfold “inside-out,” from our brains, rather than merely outside-in, as we adapt to the new technologies we work with.

You might argue that artificial intelligence has already penetrated much of our social and economic life, as we explore in Solomon’s Code: Humanity in a World of Thinking Machines[1]. Yet, despite the increased capabilities of AI and the growing attention paid to it, it is still based on statistical methods that restrict it to narrowly defined tasks. Because of that, we might not be well equipped to analyze and understand the coming avalanche of sensory data from this exceedingly complex world, captured by sensors embedded in nearly everything around us. According to IDC, humanity is expected to create a “datasphere” of 175 zettabytes (1.75 × 10^14 gigabytes) per year by 2025, up from roughly 33 zettabytes in 2018.[2] Currently, about 80 percent of the world’s overall public and private data is still unstructured (90 percent for corporations).[3] So, we’ve already outstripped the capabilities of even the most advanced algorithms and computing power, leaving us with a lot of data noise – an uncaptured treasure trove. With Moore’s Law reaching its limits, how will we make sense of all that additional data and use it to meaningfully improve our lives?
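To give those figures a sense of scale, a quick back-of-envelope conversion works through the arithmetic (a sketch only; the constants are simply the IDC numbers cited above):

```python
# Rough scale of the projected "datasphere" (IDC figures cited above).
ZETTABYTE_IN_GB = 10**12  # 1 ZB = 10^21 bytes = 10^12 gigabytes

datasphere_2018_zb = 33   # roughly 33 ZB created in 2018
datasphere_2025_zb = 175  # projected 175 ZB per year by 2025

growth_factor = datasphere_2025_zb / datasphere_2018_zb
gb_2025 = datasphere_2025_zb * ZETTABYTE_IN_GB

print(f"2025 datasphere: {gb_2025:.2e} GB")          # -> 1.75e+14 GB
print(f"Growth 2018 -> 2025: {growth_factor:.1f}x")  # -> roughly 5.3x
```

In other words, annual data creation is projected to grow more than fivefold in seven years, while the fraction of it that is structured enough for conventional analytics stays small.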

Enter the recent advances in quantum computing, such as the January 2019 launch of the world’s first integrated quantum computer for scientific and commercial use. New AI applications fueled by immensely powerful quantum computing technologies could begin to make sense of these massive data pools. Put simply: if we speed up computing, we enhance AI. Because quantum computers can hold many connected yet distinct states simultaneously, they can work across a far richer, multidimensional state space that, for some applications, vastly surpasses the binary choices between zeros and ones on which traditional computing is based. While still in its infancy, quantum computing research has made significant progress, triggering ever greater emphasis on commercialization.
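The "multidimensional state space" point can be made concrete with a tiny sketch: describing an n-qubit register requires 2^n complex amplitudes, whereas n classical bits hold exactly one of those 2^n values at a time. (Illustrative arithmetic only; this is not a quantum simulator.)

```python
# n classical bits store ONE of 2**n values; an n-qubit register is
# described by 2**n complex amplitudes at once, which is why the state
# space explodes so quickly.
def state_space_size(n_qubits: int) -> int:
    """Number of complex amplitudes needed to describe n qubits."""
    return 2 ** n_qubits

for n in (1, 10, 50):
    print(f"{n:>3} qubits -> {state_space_size(n):.3e} amplitudes")
# 50 qubits already require ~1.1e+15 amplitudes -- far beyond what a
# classical machine can track explicitly.
```

That exponential growth is exactly why even modest qubit counts can, for suitable problems, outrun classical state tracking.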

In the near term, we’ll see quantum deployed to optimize well-known processes. Pharmaceutical companies will use it to research new drug molecules; banks will develop powerful FinTech applications; and automakers will deploy it to improve traffic management. Even in these narrower use cases, though, we can begin to see both the opportunities that quantum and AI offer and the risks they present. For example, quantum availability will likely follow the path of supercomputers in the not-too-distant past: in the early phase, its computing power will reside not in your firm’s data center but in centralized locations, mainly because of the tremendous cost of early prototypes. Government agencies or tech giants will own and host quantum computer systems and make processing available via the cloud. This consolidation of computing power will require close attention from policymakers and citizens alike, especially in terms of security: Shor’s algorithm, for instance, executed on a sufficiently large quantum computer, is expected to break many current public-key encryption systems.[4] Given that, it should come as no surprise that the current U.S. administration named “leadership in AI” and “quantum information sciences and strategic computing” as the second-highest R&D priorities for fiscal 2020, trailing only the security of the American people. On the other side of the Pacific, the Chinese government has also made quantum a top focus area, reportedly investing at least $10bn.[5]
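Why does Shor’s algorithm threaten public-key encryption? It reduces factoring — the hard problem behind RSA — to finding the multiplicative order of a number, which a quantum computer can do exponentially faster than any known classical method. For a toy modulus, the classical reduction can be sketched directly (brute-force order finding stands in for the quantum step):

```python
from math import gcd

def order(a, n):
    """Smallest r > 0 with a**r % n == 1 (brute force; this is the
    step a quantum computer performs exponentially faster)."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical_toy(n, a):
    """Classical sketch of Shor's reduction from factoring to order-finding."""
    g = gcd(a, n)
    if g != 1:
        return g, n // g              # lucky guess: a shares a factor with n
    r = order(a, n)
    if r % 2:
        return None                   # need an even order; retry with new a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None                   # trivial square root; retry with new a
    p = gcd(y - 1, n)
    return p, n // p

print(shor_classical_toy(15, 7))      # order of 7 mod 15 is 4 -> (3, 5)
```

The brute-force `order` loop is what blows up classically for 2048-bit moduli; swap in quantum period finding and the whole attack becomes tractable, which is why post-quantum cryptography is already a policy concern.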

The immense capabilities in this combination of AI and quantum computing—and our chance to capitalize on its remarkable potential if we simultaneously mitigate its risks—expand even further when you consider these developments colliding with yet another key technological development: brain-computer interfaces (BCI). BCI refers to technologies that directly connect a human or animal brain with an external device to record or stimulate brain signals. Technologically speaking, making BCI work is no longer a question of basic research; those riddles are solved. BrainQ, an Israeli startup, already combines machine learning with neuroscience, following the Hebbian theory that “cells that fire together, wire together.”[6] BCI has thus become a question of engineering—specifically, building up the bandwidth to make the connection worthwhile. Computers can communicate with other connected devices seamlessly, at speeds up to 20,000 megabits per second on a 5G network. Humans, so far, can communicate with a computer only by speaking or typing—the latter, according to neuroscientist Moran Cerf, equivalent to a bandwidth of just 0.63 megabits per second.[7] So, to reap the full potential of AI-powered services and products, humans need to increase the bandwidth between their brains and smarter, faster machines. This is a staggering task, if for no other reason than the sheer complexity of human gray matter.
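Taking the cited figures at face value, the gap between machine-to-machine and human-to-machine bandwidth is easy to quantify:

```python
# Back-of-envelope comparison using the figures cited in the text.
MACHINE_MBPS = 20_000   # peak 5G link, machine to machine
TYPING_MBPS = 0.63      # human typing, per Moran Cerf (as cited above)

gap = MACHINE_MBPS / TYPING_MBPS
print(f"Machines out-communicate human typing by ~{gap:,.0f}x")
# -> a gap of roughly 31,700x
```

A gap of four to five orders of magnitude is the engineering target BCI bandwidth would have to close.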

Yet we’re already seeing applications that hint at the benefits we might reap from the nascent combination of BCI, AI and quantum computing. For example, physicist Matthew Fisher of the University of California, Santa Barbara, has suggested that the concept of superposition in quantum computing will help us better understand and influence core brain functions. Scientists at the University of Pittsburgh have managed to control a robotic arm via a human brain, and experts at Stanford University have found a way for paralyzed people to operate functions on a tablet. General Electric, Kernel and other private-sector members of the BRAIN Initiative launched by the U.S. government are taking these developments to the next level, aiming to help patients suffering from epilepsy, Parkinson’s disease or post-traumatic stress disorder (PTSD).

The potential benefits for the human condition from each of these technologies—and especially the integration of all three—establish an almost moral imperative to keep pushing the frontiers of this research. Our ability to cure disease and heal trauma, make us more knowledgeable about and respectful of the wondrous complexity of the natural world around us, increase human wellbeing and enhance happiness across the globe should compel us to keep seeking new development and opportunity. Yet, let us not be naïve: the second- and third-order effects of widespread adoption of AI-integrated, quantum-powered BCI technologies will eventually trigger mainstream discussions about the risks as well, and rightly so. Make no mistake, these risks begin to gnaw at the very essence of our humanity—even calling into question the nature of free will. For starters, scientific evidence shows that the brain initiates movements before an individual consciously acts, so connecting the brain to supercharged AI that can stimulate it to act even faster will naturally raise questions about human agency and accountability. Furthermore, various interfacing components in the brain could potentially insert stimuli into the anterior cingulate cortex, the area of the brain responsible for decision-making, morality and emotions. Thus, external manipulation of those stimuli, driven by the agendas of others, might very well be concealed under the guise of “free will.” Phillip Alvelda, a former program manager at DARPA’s Biological Technologies Office, says that scientists using BCI devices can already induce feelings of touch, pressure, pain and about 250 other sensations, a number that is likely to increase with ever better mappings of the brain.[8] Electrodes placed right below the dura mater (the first and thickest of three membranes under the skull) can induce relaxation and potentially stimulate complex emotions.
Connect the dots and you begin to create a hyper-personalized experience that exposes users to real emotions and employs them to anchor personal development, whether as gamers entertaining themselves or professionals seeking to increase their productivity and influence. This could produce virtual worlds that are indistinguishable from and in competition with the physical truth.

For now, discussions about these increasingly intimate, existential forces are largely confined to academic circles, but they will spread rapidly. After all, another former DARPA official expects initial commercial applications of BCI implants to hit the market around 2025. So, while broader conversations about security and privacy have emerged around BCI, AI and quantum technologies alike, we need to begin considering our ethical and policy responses to a still-hard-to-imagine future—one that’s not as far off as we might like to think. These developments will undoubtedly open opportunities for everything from how we work, learn and get rewarded to how we relax after a long day and how we relate to others in society. But if we can find refuge in a constantly comfortable, real-feeling virtual reality, will the messy physical truth still reign supreme? Who will have access to these experiences, and will they be affordable to the masses or a luxury for the few? How will we secure our agency and self-determination?

Although AI currently receives the most attention from mainstream media and political and business representatives, quantum computing and BCI are on the rise as well. We’re only beginning to understand the interweaving of these technologies and what it produces, and further breakthroughs are expected across all three fronts in the coming decade. Although we cannot say exactly where and when these three trends will collide, they are coming hard and fast. We need to pay more attention to the confluence of these trends and not get distracted by hype or dystopian visions in narrowly delineated areas. Let’s instead embrace the potential, but guide and shape it, because — as we’ve learned from the past — the future is negotiated at the intersections of different technological, economic, political and social trends. Only this time, that convergence might happen “inside-out,” from our own brains to the rest of the world.

About the Author

Dr. Olaf Groth is a futures & strategy adviser, capacity builder and new-ventures executive for disruption through DeepTech and the 4th Industrial Revolution toward a Society 5.0. From a home base in San Francisco and Silicon Valley, Dr. Groth serves as a Professor of Global Strategic Management, Innovation & Economics and Program Director for Disruption Futures at Hult International Business School, teaching across a global campus network in San Francisco, Boston, New York, London, Dubai and Shanghai. As CEO of the think tank network Cambrian.ai, Dr. Groth advises on reconceiving futures, strategies, networks and organizations for clients seeking to initiate or adjust to disruption globally. He has served on advisory boards of Fortune 100 companies as well as startups, having held managerial roles at or consulted with the likes of Bechtel, Boeing, Caruma, Chevron, Ferrari, GE ecomagination, GKN, Monitor, NASA, Phillips 66, Qualcomm, Q-Cells, Siemens, Vodafone, VW and various VCs.

Dr. Groth holds PhD and MALD degrees in international affairs with a business, economics and technology focus from the Fletcher School at Tufts University, and MAIPS and BA degrees with an economics focus from the Middlebury Institute of International Studies at Monterey.


[1] Groth, O.; Nitzberg, M. (2018): Solomon’s Code: Humanity in a World of Thinking Machines.

[2] https://www.seagate.com/files/www-content/our-story/trends/files/idc-seagate-dataage-whitepaper.pdf

[3] https://www.ibm.com/blogs/watson/2016/05/biggest-data-challenges-might-not-even-know, https://www.wired.com/insights/2014/07/rewiring-tackle-unstructured-data/

[4] https://arxiv.org/pdf/1804.00200.pdf

[5] https://www.wired.co.uk/article/quantum-computing-china-us

[6] Groth, O.; Nitzberg, M. (2018): Solomon’s Code: Humanity in a World of Thinking Machines.

[7] https://waitbutwhy.com/2017/04/neuralink.html

[8] Groth, O.; Nitzberg, M. (2018): Solomon’s Code: Humanity in a World of Thinking Machines.


The views, opinions and positions expressed here are those of the authors and do not necessarily represent Samsung or its employees.