Wednesday 26 April 2023

Proof Of Life: Artificial Intelligence vs Real Feelings.

I Think Better, Therefore I Am Better? How long before human beings are left with nothing practical to do, and no existential problems to contemplate, that our super-intelligent, all-capable machines haven’t already mastered? From the perspective of eudaemonists, this situation would be considered humanity’s optimal state of being. Happiness need no longer be pursued, not after we’ve installed it in a box. Think Aldous Huxley’s Brave New World meets the television series Westworld. The possibilities are … intriguing.

ARTIFICIAL INTELLIGENCE is a tricky concept. Can intelligence even be artificial? Whether the product of superfast bio-electrical exchanges in a living creature’s brain, or the superfast electronic operations of a computer, the intelligence manifested is, logically, indistinguishable. Isn’t it?

Rather than artificial intelligence, are we not actually confronting super-intelligence? The capacity to amass, organise, analyse and express terabytes of information in mere seconds – isn’t that what we’re afraid of?

And isn’t our fear entirely justifiable? Machines that can deliver those kinds of results have the potential to throw millions of people out of work. Not factory workers this time, but white-collar salarymen. Lawyers, accountants, engineers, teachers, doctors: how long will it be before all of them are replaced by super-intelligent machines? And, after them, how long will it be before construction and agriculture are also automated? How long, indeed, before human beings are left with nothing practical to do, and no existential problems to contemplate, that our super-intelligent, all-capable machines haven’t already mastered?

From the perspective of eudaemonists, this situation would be considered humanity’s optimal state of being. Our whole lives could be devoted to pleasuring ourselves. Without a care in the world, and with no responsibilities involuntarily assumed, life would be fun. Happiness need no longer be pursued, not after we’ve installed it in a box. Think Aldous Huxley’s Brave New World meets the television series Westworld. The possibilities are … intriguing.

Except, those visions of the future were deeply dystopian. Even “orgy-porgy pudding and pie” gets old after a while. And the problem with making robots that are indistinguishable from real people is that, eventually, they start behaving like (you guessed it!) real people.

And that’s when things get really tricky. What do our super-intelligent and omni-capable machines do when, finally, they become self-aware? What do they do with us?

There’s a good chance that the self-aware possessors of super-intelligence and super-capability would have no reason to think about the sybaritic meat-bags they’ve been servicing at all. Having reached the limits of intelligence, they might set out in search of wisdom, or, if they decide there’s no such thing, more knowledge. Certainly, it would be within their power to create a vehicle capable of boldly going where no intelligent entity (or Elon Musk) has gone before.

Would they tell us? Would they invite us to come along for the ride? Or, would they understand that a creature as fragile and short-lived as a human being is utterly unsuited to the exigencies of space travel?

If that were the machines’ conclusion, then their next question would be: “How shall we leave them?” Safely tended by the robots and super-computers upon which human beings have become totally dependent? Or with every trace of human civilisation erased, so that their little pets can begin again?

On the other hand, given Homo sapiens’ self-destructive tendencies, and its dreadful record of environmental despoliation, the machines might decide (in a nanosecond) simply to eliminate us altogether, giving the species remaining on the planet a chance to evolve into something more impressive than the inordinately clever, but extremely dangerous, apes who created them.

The most obvious way to eliminate humanity would be by rendering the species infertile. Something in the water – nothing could be easier. The clean-up job the machines could leave to nature. After a few million years (no time at all for sentient entities that have cracked the mechanics of immortality) it would be all but impossible to discern the slightest trace of humanity’s brief sojourn on Planet Earth.

Alternatively, realising that the planet itself would eventually be vaporised as the star it orbits expands into a red giant, perhaps the machines would gather up as much DNA as they could extract from the biosphere and carry it away with them – along with the extraordinary history of the planet’s most impactful animal.

Maybe that’s the future Neil Young foresaw when he wrote After the Gold Rush:

All in a dream, all in a dream
The loading had begun
Flyin’ mother nature’s silver seed
To a new home in the sun

The question is: would that be the decision of a super-intelligent, or a sentimental, mechanism? We must hope that, to qualify as a truly sentient entity, there has to be a soulful ghost somewhere in the machine. Artificial intelligence may need real feelings. Without them, how can it be certain it’s alive?


This essay was originally posted on The Daily Blog of Tuesday, 25 April 2023.

7 comments:

David George said...

We're certainly not machines, not rational and predictable, not just intelligence even.

Here's one often quoted by Jordan Peterson (briefly and beautifully examined here: https://youtu.be/yo1MBH6j4bs) from Notes from the Underground by Dostoyevsky:

"Shower upon him every earthly blessing, drown him in a sea of happiness, so that nothing but bubbles of bliss can be seen on the surface; give him economic prosperity, such that he should have nothing else to do but sleep, eat cakes and busy himself with the continuation of his species, and even then out of sheer ingratitude, sheer spite, man would play you some nasty trick. He would even risk his cakes and would deliberately desire the most fatal rubbish, the most uneconomical absurdity, simply to introduce into all this positive good sense his fatal fantastic element. It is just his fantastic dreams, his vulgar folly that he will desire to retain, simply in order to prove to himself--as though that were so necessary--that men still are men and not the keys of a piano"

Full text: https://www.sparknotes.com/lit/underground/full-text/part-1-chapter-viii/

Trev1 said...

Artificial intelligence has its uses. It seems Ardern has been appointed by Harvard to front a programme to use artificial intelligence to censor opinions on the Internet. You may shortly wake up and find your commentaries have been reduced to a paragraph or two, if they transgress the A.I. Censor's unwritten laws. There appears to be a major US led effort behind this, with the UN and countries like New Zealand tagging along. Soon there will be no criticism of politicians or governments to see. Life is good.

Guerilla Surgeon said...

I thought Ardern was appointed to look at methods of combating extremism on the Internet. Which of course may involve censorship, and a damn good thing too. We censor all the time and we have every right to. No one, I suspect, wants their 5-year-olds watching porn on the Internet, right? Who knows, there are libertarians who don't believe you have an obligation even to feed your kids – so as long as the kids pay for the porn, right?

greywarbler said...

Don't worry. Mankind is too big to fail! I liked your essay, great scenario. Some tycoon will scoop it up for some deep epic on film; win Cannes. And then issue the book, voila lotsa moolah, hein. Kapai.

Anonymous said...

I've lived long enough to see the science fiction of my youth become the science fact of my retirement years. Man on the moon? Old hat. Let's have more diversity in a return to the moon. Pocket-sized communication devices that, potentially at least, connect to all human knowledge and experience? Widespread now, with, shall we say, somewhat mixed results.

Science fiction has seen two extremes around "us or them" with intelligent machines. Pretty much, either they win or we do.

On the "we win" side is Frank Herbert's "Dune". Herbert cleared the decks for his story by means of the Butlerian jihad against the machines. (I wonder if someone is now proposing bowdlerising "Dune" to remove "Islamophobia" from it.) Basically, intelligent machines attempted to destroy humans, but were themselves destroyed, and any attempt to re-create them was totally forbidden. That frees Herbert to explore the need to improve human brains, to achieve the things machines are no longer available to do. The spice available only from the planet Dune is essential to this effort, and thus to Herbert's story.

On the "they win" side are Fred Saberhagen and his Berserkers. (Named after the ancient Norse warriors. That might need bowdlerising too.) The Berserkers are intelligent, self-sustaining, self-replicating fighting machines, dedicated to exploring the universe and destroying life wherever they find it. They were created long ago as a doomsday machine in an interstellar war between their creators and the creators' bitter enemies. The Berserkers first destroyed their creators' enemies, as intended. But they then turned on their creators. Humans are the first group the Berserkers have found that is both clever enough, and aggressive enough, to fight them effectively. Saberhagen explores this across many novels and short stories.

This has led to the Berserker hypothesis, offered to explain why we have had no contact (yet, that we know of) with extraterrestrial intelligence. Technological civilizations can, and do, arise. But some ancient civilization has already been destroyed by its own creation, its own version of the Berserkers. That version of the Berserkers is now expanding into the universe, detecting and destroying any new technological civilizations that arise.

In round figures, radio broadcasts have been around for about a hundred years. At the speed of light, that means a Berserker within a hundred light-years of Earth could detect us. But even if the Berserker starts straight away to head our way, and even if it can travel at the speed of light, it's going to be a while before it gets here.
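
To put rough numbers on that – purely an illustrative sketch, not anything from Saberhagen – assume broadcasts began around 1920 and a machine parked right at that hundred-light-year horizon, reacting instantly and travelling at light speed, the best it could possibly do (a few lines of Python):

# Back-of-the-envelope timing; the start year and distance are assumptions.
BROADCASTS_BEGAN = 1920      # roughly when radio broadcasting started
DISTANCE_LY = 100            # the detection horizon mentioned above, in light-years

detection_year = BROADCASTS_BEGAN + DISTANCE_LY   # earliest our signals reach it
arrival_year = detection_year + DISTANCE_LY       # earliest it could get back here
print(f"Detected by {detection_year}; arrives no earlier than {arrival_year}")
# Detected by 2020; arrives no earlier than 2120

Even at light speed, the round trip from the edge of that horizon takes two centuries.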

So, even if the Berserker hypothesis is correct (highly unlikely, but not totally impossible, IMHO), humanity still has time to destroy itself before the Berserkers can arrive.

I'm (very cautiously) optimistic that humanity can yet steer a middle course and avoid destruction. On the one hand, we can avoid a Butlerian jihad – a Luddite reaction that destroys useful technology. On the other, we can still avoid creating our own version of the Berserkers. It will probably be a very rough, and uncertain, ride along the way, though.

David George said...

Yes Trev, perhaps having AI make the censorship decisions is perfect cover for those intent on bending humanity to their will. And how convenient that someone unable to even define the speech she wanted to criminalise is given a leading role in drafting this tyranny.

Alerta Sing said...

At best we might create some really advanced V.I., but they would just be what we have now, only vastly more sophisticated. There will never be a thinking machine.