Artificial intelligence (AI) has been around since the 1950s, capturing popular imagination and igniting passionate dialogue over the promise and peril of intelligent machines. In 2015, for example, Bill Gates expressed grave concern and wondered why others hadn’t caught on to the dangers of unchecked AI.
In 2018, Elon Musk sounded alarm bells regarding AI’s potential to destroy humanity and become a super intelligent, immortal, and inescapable dictator. Needless to say, when it comes to AI—or more specifically, artificial general intelligence (AGI) and artificial super intelligence (ASI)—the mix of relentless technological progress and ethical concerns is daunting.
Today’s AI Landscape
This year’s frenzy over ChatGPT and generative AI has led more than thirty-three thousand people, including Musk once again, to sign an open letter calling for a pause on training large language models (LLMs) and AI systems.
The United Nations has urged world leaders to develop the first global standards framework for ethical AI development, the US Congress has amped up efforts to regulate AI, the Department of Commerce’s National Institute of Standards and Technology released an AI risk management framework, the White House published an AI Bill of Rights, and business leaders are calling for risk-based rules of the road. Conversely, a group of AI developers and leaders, including Bill Gates, and separately China, have pushed back on those calls for a pause, arguing that they exaggerate the urgency of the threat.
Nevertheless, the AI market is a massive and structural platform shift, akin to, but more profound than, the internet platform shift of the 1990s and the mobile shift of the 2010s. In 2023 alone, over $40 billion of risk capital has been invested in AI-centered start-ups (including a $10 billion investment by Microsoft in OpenAI, which Musk cofounded), and there are no signs of slowing down. Despite serious concerns, the AI market is potentially worth trillions of dollars and is a field ripe with wonder, excitement, and opportunity.
The Biggest Concerns About AI
If the potential benefits to society, economic value capture, and business opportunities are so promising, why the unease? The main reason centers on the singularity.
Singularity is a hypothetical point in time when machine intelligence surpasses human intelligence in all areas, essentially AGI improving itself exponentially and transitioning to ASI. This could lead to such a rapid acceleration of technological progress that it fundamentally and permanently changes the nature of human civilization. The fast takeoff scenario is the most concerning because it yields a supercharged and unbounded capability to “solve problems” that would far outstrip our ability to govern the machines dispensing it.
Fast takeoff happens when AI becomes so effective that it rapidly self-improves to reach a level of intelligence far beyond that of humans, suddenly and exponentially as opposed to gradually and linearly. People who have dedicated their lives to thinking about machine intelligence and the singularity have been speaking up for years, painting terrifying worst-case scenarios.
Of course, there are interesting counterpoints fleshing out the limits and constraints on achieving the exponential growth required to reach AGI and eventually ASI. And the potential for discovery and unleashed capability is tremendous in endeavors that include prolonging human life, democratizing innovative capacity, broadening economic participation, cleaning the environment, disintermediating financial transactions, bridging the digital divide, improving data analysis, and automating tasks to increase productivity.
However, we just don’t know enough yet. We aren’t even close to having a comprehensive set of questions about the negative implications of AI, let alone answers. The potential end state of things like artificial morality, cyberdemocracy, misinformation terrorism, bioengineering, brain hacking, exponential criminality, swarm warfare, automated policymaking, and the networked power of hundreds of billions of interconnected intelligent devices is staggering to ponder.
AI doesn’t even need to reach a point of super intelligence in order to wreak havoc. For example, when autonomous weapon systems become fully functional (intelligent), they could operate beyond human control. We’re not talking about less human intervention in a call center, assembly line, or bond trading desk, but rather about machines deciding on their own to wage war.
Possibility and Plausibility
The potential consequences of irretrievable missteps could be existential. For starters, we don’t fully understand how our own brains work, and yet, we are trying to create artificial ones that are infinitely more powerful and that potentially can’t be controlled.
AGI may not yet be possible, and machines with consciousness even less so. Researchers do not yet know how to program consciousness into a machine, and computers remain relatively primitive at demonstrating adaptive capacities such as empathy. Much of what goes on in our brain and its 86 billion neurons with their 100 trillion synaptic connections remains a mystery.
So are the neural connections in machines, which presents us with a transparency problem. Data goes in and information comes out, but the “in between” is a black box. Even if a particular set of algorithms solving a problem is perfect, garbage data going in yields supercharged garbage coming out the other side.
So, What Should Executives Do?
My advice to business executives and leaders: ask questions. AI development and deployment are happening fast, and it is leaders like us who shape the future, so we need to be well versed in this field no matter what our organizations do. The crux of the ethical complexity of artificial intelligence is that raw knowledge and intelligence horsepower devoid of wisdom or consciousness—the human factor—may optimize for unknowable and undesirable outcomes.
In our quest for economic growth and progress, our inclination is to step on the gas, but this instinctive march forward could be detrimental, and as leaders we must think proactively and ask questions. Dartmouth’s Dr. James Moor, a pioneer in studying the ethics of technology, believes that ethical questions regarding the use and development of technologies like AI matter most when those technologies have transformative effects on societies. The reason, he concludes, is that the more complex a technology, the more complex and magnified its vectors of action, which in turn overwhelm our ability to govern and guardrail it.
Navigating the intersection of technology and ethics as an executive has never been more important. Companies are perpetually seeking technology-driven competitive advantages, but the current speed and scale of AI development could dislocate the resources needed to ensure bounded and aligned progress. Safety, security, privacy, transparency, accountability, fairness, and reliability have always been touchstones of technology development and deployment. These pillars are more critical than ever with the potential of achieving the theoretical culmination of AI.
Many years ago the father of modern management, Peter Drucker, had some thoughts about the production assembly line: “it does not use the strengths of the human being but, instead, subordinates human strength to the requirements of the machine.” This applies equally, if not more urgently, to artificial intelligence. For more information and an interesting framework for thinking about AI, check out An Executive’s Guide to AI from one of my former employers, McKinsey & Company.
Javier Saade is the founder and managing partner of Impact Master Holdings and venture partner at Fenway Summer Ventures. He is chairman of the board of a Rothschild- and Presidio-owned financial services firm and serves on the Board of Trustees for both The Nature Conservancy and The Organization of American States’ Pan American Development Foundation. Javier also holds seats on the Global Board of Advisors of DocuSign, the Corporate Social Responsibility Board of Univision, and the Board of Advisors of Harvard University’s Arthur Rock Center for Entrepreneurship. He is a founding member of Fast Company’s Impact Council and a member of the National Association of Corporate Directors (NACD) and the Latino Corporate Directors Association (LCDA).