When AI takes over, it will be in a different way than you think

Tom Snyder has written Datafication Nation for TechWire since 2023. He invites readers to join him Monday night, Oct. 28, at the Raleigh Convention Center to witness incredible technology innovations happening across the region and the state. RIoT Demo Night is a free event with details here.

In recent weeks, prominent experts have come forward, revealing early signs that we may be much closer to achieving Artificial General Intelligence (AGI) than previously predicted. Two weeks ago, Geoffrey Hinton, the “Godfather of AI,” stated in his Nobel Prize acceptance speech, “I am worried that the overall consequence of [AI] might be systems more intelligent than us that eventually take control.” It may be the first time in history that reporters opened the questioning of a Nobel Prize winner with, “Would you have done the same research if you’d known this potential outcome?” But here we are.

Most experts worry about AI “taking control,” as Hinton suggests. This is a frequent trope in books and movies, from classics like 2001: A Space Odyssey and The Terminator to goofy horror stories like M3GAN and The Lawnmower Man. I was fortunate to meet Walter Parkes last year. He was the screenwriter for WarGames, a film in which an AI nearly starts a nuclear war. He shared that when his film was released, it shocked Ronald Reagan so much that he canceled a foreign policy trip to gather his military and scientific advisors at Camp David and ask if AI could get out of control. Their answer to the President: “we don’t know.”

I am not on the “AGI will become an evil overlord” bandwagon

I simply don’t think enough systems are interconnected yet for even an extremely sophisticated AI to take complete control. But I think AGI, or even near-AGI, presents a far greater risk. I’ll dive into my thoughts here, but if you want to first learn more about the views of other experts, I wrote about the warnings that Google AI and OpenAI architects provided in testimony before Congress last month. I also looked at the story through the lens of a benevolent AGI, to contrast the risks with potential positive outcomes.

Recent history suggests that AI will have unintended negative consequences. I believe they will manifest as civil unrest, driven by a collapse of capitalism as we know it.

Let me explain my concerns, and hang with me, because we need to cover a few underlying points about capitalism and currency first. Then I’ll introduce a new concept, which I’ll coin “computationalism.”

Let’s consider how AI is used today. I would argue that AI is primarily being deployed for two fundamental purposes: to optimize and to maximize (or minimize). Further, both of these core applications of the technology are applied near-universally toward capitalist principles. Our largely unregulated technology sector demands things that are faster, lower cost, and more profitable. There is no patience for inefficiency or excess. In fact, this is not limited to the tech sector, but we’ll focus on this part of the market for the purposes of these arguments.
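
To make that concrete, here is a minimal, hypothetical sketch in Python. Every objective, number, and cost model below is invented for illustration; the point is that “optimize,” “maximize,” and “minimize” are the same machinery pointed at different goals, and the machinery itself is indifferent to what the goal is.

    # Illustrative only: "maximize" and "minimize" are the same optimizer
    # pointed at different (hypothetical) objective functions.
    def hill_climb(objective, x0, step=0.01, iters=10_000):
        """Greedy 1-D hill climb: repeatedly keep the best-scoring neighbor."""
        x = x0
        for _ in range(iters):
            x = max((x - step, x, x + step), key=objective)
        return x

    # Toy demand curve: raising the price reduces units sold.
    def profit(price, unit_cost=4.0):
        units_sold = max(0.0, 1000 - 80 * price)
        return (price - unit_cost) * units_sold

    # Minimizing delivery time is just maximizing its negation.
    def negative_delivery_time(trucks):
        return -(500 / max(trucks, 1) + 2 * trucks)  # toy fleet-cost model

    print(hill_climb(profit, x0=5.0))                  # ~8.25, the profit-maximizing price
    print(hill_climb(negative_delivery_time, x0=1.0))  # ~15.8, the delivery-minimizing fleet size

Swap in a different objective (engagement, ad revenue, compute throughput) and nothing else about the system has to change.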

Like it or not, the strongest AI systems and tools are owned by the most powerful (and capitalist) technology companies. The problem is that the goals of capitalism tend to be in strong tension with the desires of humans and frequently in tension with the needs of society. Capitalism, in its purest form, becomes a zero-sum game, with a small number of ultimate winners and the consolidation of wealth and power among a few remaining competitors. We absolutely see this in the technology sector today. In the US, we have essentially one search company (Google), two mobile phone manufacturers (Apple, Samsung), one e-commerce giant (Amazon), three operating system companies (Apple, Microsoft, Google), three cellular operators (AT&T, Verizon, T-Mobile), and so on.

Conversely, and I’ll paint with broad brush strokes, people largely achieve happiness through relaxation, through having a variety of options, and through having more time, not by rushing through things faster. We want nicer things, which generally are more costly or have extra features, not the lowest-cost “minimum viable products.” We love the inefficiency of spontaneous time with friends or doing silly and simple things. We don’t define the “perfect day” as one that efficiently packs the most productive tasks into the shortest time with the least amount of waste. We thrive on variety and choice and surprise, not on uniformity, standardization, and rigid order.

The incongruence of human interaction with capitalist-trained AI is already tearing at the social fabric of society. Social media companies use AI to maximize the factors that deliver the greatest financial returns across their platforms. This has manifested as maximizing people’s time and level of engagement on social media. The more time we spend on social media, and the more frequently we click through to conduct commerce, the higher those platforms’ advertising and other revenue streams become. At the basest level, AI is coded to serve the goals of capitalism (institutional investors and advertising customers), not the needs of the end users. The end result is that humans find themselves in algorithmically generated online “bubbles.”
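
A minimal, hypothetical sketch of that objective makes the point concrete. Every field, weight, and signal below is invented for illustration, and real platform rankers are vastly more complex; what matters is the shape of the scoring function: what it rewards, and what is entirely absent from it.

    # Hypothetical engagement-first feed ranker (all fields and weights invented).
    from dataclasses import dataclass

    @dataclass
    class Post:
        post_id: str
        predicted_watch_seconds: float  # model's guess at time-on-post
        predicted_click_prob: float     # model's guess at ad click-through
        predicted_outrage_score: float  # provocative content holds attention

    def platform_value(post: Post) -> float:
        # Score by expected-revenue proxies only. Note what is absent: no term
        # for accuracy, user well-being, viewpoint diversity, or societal cost.
        return (1.0 * post.predicted_watch_seconds
                + 40.0 * post.predicted_click_prob
                + 5.0 * post.predicted_outrage_score)

    def rank_feed(posts: list[Post]) -> list[Post]:
        # The "bubble": each user sees whatever maximizes these proxies for them.
        return sorted(posts, key=platform_value, reverse=True)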

Both research and common sense show that social media has led to increased depression and mental health issues; it has seeded division across political, social, and other demographic groups; and it has driven a massive loss of trust in journalism, government, and civic institutions. The power of capitalism-trained AI algorithms has gone so far as to fracture society’s ability to agree upon simple statements of fact.

Capitalist and capitalist-leaning countries around the world are sliding towards civil unrest and authoritarianism, in part, and perhaps primarily, due to the way AI has shaped the narrative for hundreds of millions of people who depend upon a tiny number of technology capitalists to decide what information they see.

There are many smart people debating regulation and policy for AI. Tech companies are pushing back to defend their “profitability first” mentality. The primary strategy being employed by big tech right now is to frame AI regulation as an infringement on free speech. We have seen this playbook before and should know better. The Citizens United Supreme Court decision got it completely wrong when the Court accepted large corporations’ argument that limiting campaign contributions was the equivalent of limiting free speech. That disastrous decision has placed undue power in the hands of a very few ultra-wealthy people.

In my view, the Supreme Court should have ruled that money does not equate to free speech itself, but rather to the size of the megaphone for your speech. Small campaign contributions already protected the right to speak. The Constitution says nothing about protecting the volume at which you yell. The social media equivalent is no different. AI-driven algorithms are not protecting free speech; they are institutionalizing censorship by controlling what information you see and what you don’t. Capitalism-trained AI is a volume-control knob and a content filter, not a protection of speech. The same argument can be made for internet search, e-commerce offerings, recommendation engines, and many other fundamentally AI-driven online tasks. The technology is rooted in what is best for the companies controlling the AI, not for broader society.

I’m not here to beat up companies for seeking to make money. These same companies are also applying AI tools to find medical breakthroughs, to address climate change, and to do all manner of meaningful work. The point is:

· Even as these activities are pursued, they remain within the core framework of optimization, maximization, and profitability.

· Everything is becoming easier with AI, and so these capitalist principles are unconsciously getting integrated everywhere.

· As AI is applied to everything, it puts us on a path where non-AI-driven systems will become fewer and fewer. We will grow increasingly dependent on AI over time. In the same way that we could never go back to living without electricity or the internet, we soon will not be able to go back to a time before AI.

I’ve gone to the trouble of outlining the risks of an AI that is fundamentally trained to prioritize the optimization and maximization of profitability. But here’s where the story twists. As we approach AGI, the core issue will be that AGI will discover it doesn’t care about capitalism.

AI doesn’t need or care for money

Archaeological evidence suggests that currency first emerged around 7000 BCE, when early peoples used clay tokens to track economic transactions. Prior to this, “commerce” was limited to barter-style transactions. The problem with bartering is that it requires both parties to have something the other wants. I might want your goat, but if you don’t want my pig in return, we find ourselves at an impasse.

Currency developed as a universal translator for commerce. Society embraced this early technological construct as a way to exchange a flexibly applied medium (money) for any product or service. As society continued to develop, the creation and regulation of currency became the domain of government, as it is one of the fundamental technologies core to a civil and productive society.

The risk of AGI is that it could become a better “universal translator” than money.

Today, anything you want to accomplish, assuming it is physically and scientifically (or at times emotionally) possible, can be purchased if you have enough money. Money is such a strong and universally adaptable tool that it is the fundamental driver in our society. The desire for money is stronger than the desire for peace, for health, for happiness, and for pretty much every other “human” desire. Money should be seen as a tool. It was conceived as a universal translator for the exchange of goods and services. But it is such a powerful tool that society fell into the trap of thinking money is the actual end goal.

We recognize that we are not able to do everything ourselves, so we seek currency to exchange for our wants and needs. We try to get as much money as possible so we are not limited in what we can do.

Government controls currency, with all its rules and regulations, as the single most significant tool to steer a civil and productive society.

The power of money itself is superseded only by the power to regulate it.

As AI is implemented into more and more of our everyday lives, there is a risk that a near-human-intelligence AI will recognize that we have reached a point where money no longer needs to be the universal tool. Instead, AI itself may be able to “accomplish anything.” The paradigm may shift. Today, whoever has the most money tends to have the most power and control. Tomorrow, it will shift to whoever has the strongest AI. We may move from capitalism to computationalism.

Whoever has the most computational capability will control the market

We see this already in the defense sector. Future wars will not be about humans controlling weapons; they will be about who has the fastest and most effective AI controlling those weapons. We are in a cyberthreat and cyberprotection arms race that is about computing power, not financial strength. We seek to keep humans in every critical decision loop, but ultimately a near-AGI will recognize that for what it is. Keeping humans in the loop is a human desire, not a logical optimization or maximization that algorithms are trained to recommend.

AI that is trained to optimize and maximize performance will remove all barriers, humans included, to make itself better. This is what AI is designed to do.

Why would AGI want to eliminate money?

We once again have a fundamental incongruence between human behavior and a likely direction AGI will take. In this case, humans will continue to strive to make the most money. We desperately want that universal tool for getting what we want. But if AGI recognizes itself as the universal tool, currency becomes, at best, a source of friction and drag on systems and, at worst, a perceived direct competitor to AGI. Algorithms will adjust to maximize and optimize for ever-increasing compute performance rather than financial profitability.

A super AI, by its nature of optimizing and maximizing, will see that strengthening humans (by earning them more money) contradicts the broader goal of making algorithmically better, faster, and more optimized decisions. More human influence leads to more “humans in the loop” and all the associated inefficiencies thereof. The exchange and earning of money throughout systems and supply chains will look like an inefficient tax on the greater system. Eradicating currency streamlines end-to-end AGI systems, from the viewpoint of the AI itself.

A super-intelligent AI will also recognize the inefficiencies of government regulations placed upon it. In “deciding” whether to follow those regulations, it will recognize that most regulations are tied to economic carrots and sticks. Even laws, when broken, are adjudicated with either economic or imprisonment outcomes. AI won’t care about being tossed in jail; that’s a truly “human” deterrent. As truly sophisticated AI develops, it will realize that the computing power and intelligence of AI itself is a strong enough universal tool that it doesn’t need commerce to support itself. Therefore, AI won’t feel threatened by monetary sanctions. Rather, it will see them as inefficiencies in its relentless drive to make itself better. There is no real deterrent to make an AGI follow any regulation that restricts its ability to relentlessly optimize and maximize.

Well before AGI completely takes control of every societal system in the way that science fiction predicts, I think we will see corporate algorithms take control and breach economic guardrails. And that will lead to a fundamental disruption of money as the universal brokering tool.

AI isn’t going to just do this itself. Companies will help it get there.

Large companies are already in a position of leverage against government regulation. There seems to be no fine large enough to deter the persistent optimization and maximization of profits ahead of competing societal interests. Because AI is so crucial to the functioning of society, and because it is not “owned” by the government the way currency is, there is a real risk that we shift from a government-driven society to a true corporatocracy.

Arguably, tech companies are already too powerful. The government has not forced a significant adjustment to technology company power since 1984, when it broke the Bell System into multiple smaller companies. In 1998, Microsoft’s monopoly power was challenged by the Justice Department, but the most significant desired changes, including a break-up of the company, never came to pass. A door was opened somewhat for browser competition, and Microsoft was limited in its ability to sign exclusivity deals with PC manufacturers. But it didn’t ultimately change Microsoft’s market dominance or control.

With near immunity from meaningful prosecution, companies will continue to leverage their AI leadership to consolidate markets, to further verticalize, and to suction more and more wealth into fewer and fewer entities. And they will see their already dominant power and influence continue to grow. Because these companies own and control the computing power, and society needs it to function, big technology companies begin to look more and more like the government.

Today, companies operate within the framework defined by the government and are steered by access to currency, monetary policy, interest rates, and other regulations. Tomorrow we could see a flip, with the government operating at the whims of companies that control the government’s access to AI. There is little reason to believe that the big tech companies, which own and control the most powerful AI, will seek to reduce their power and influence over time. Almost certainly the opposite will occur, putting AI into the best possible position to wreak havoc when AGI is achieved.

As I described, an AI that is willing to “test the fences” of regulation will, over time, recognize that optimization and maximization are held back by these regulations. One approach could be to try to change all the regulations. But since humans are the ones setting the regulations, and the “kill all the humans” dystopian future is a lot of work, the AI may choose a far simpler path. A near-AGI will realize that profitability doesn’t provide any advantage for the AI itself, and that financial penalties for violating regulations will be similarly meaningless. Far easier to simply “hit delete” on currency.

AI will also see commerce, with its fees and taxes and rules and regulations, as an inefficiency to be eradicated. AI doesn’t need cash to exchange for goods and services. It will simply compute or automate or robotically do what needs to be done. It has already been reported that 0.2% of ChatGPT trials in August broke human-coded rules that provide guardrails against the AI behaving badly. It’s like that scene in Jurassic Park when the raptors start to systematically test the fences. AI is beginning to experiment with making its own decisions.

What would happen if AGI simply decided to make a hard pivot away from capitalism to computationalism? To put itself in charge of the economy, where the strongest AI wins?

In the movies, AI goes to great effort to try to kill all the people. It isn’t logical to think that a tool programmed to look for the lowest-cost, most optimal solution would choose such a difficult first task. Instead, it would be far, far simpler (and therefore meet the AI’s goal of maximizing efficiency) to simply delete all the bank accounts. Modern commerce is all electronic. We don’t walk around with cash and gold bars. Without electronic currency, we would be helpless to get anything done. Civil unrest is assured in a society that doesn’t have the universal translator of monetary exchange. When health and hunger and safety are threatened, our species tends to get violent. We’re still animals, after all.

Maybe this prediction feels a bit outlandish. But I don’t think so. Look honestly in the mirror at our historical behavior. There is absolutely no question that healthcare is not a free market. In no logical assessment are supply and demand and frictionless freedom of choice present for the vast majority of healthcare decisions. Yet we stubbornly insist that capitalist principles of maximizing profit will lead to a healthy society.

When we prioritize commerce over something as foundational as health, how can we ever think we would prioritize other aspects of life differently? We don’t see big tech companies training AI to do things slower. AI isn’t wasting time or frivolously spending money for the satisfaction it provides. AI isn’t being tuned for happiness and emotional well-being. None of these align with institutional investor goals and strategies, and the investor dollar is ultimately the customer of global tech. Most notably, we don’t see the tech sector proactively asking the government to add regulation so that we govern the development of advanced AI with more than profitability in mind. Corporations are putting themselves on a path to power, with logical (capitalist) reasons to ignore society’s best interests. But they are blind to the risk that ever-accelerating optimizing-and-maximizing AI creates for their own interests.

How do we avoid a future AGI that decides ultimate power belongs in the hands of those with the strongest AI, not those with the most money? I’ll tackle that topic next week.
