Tuesday, November 12, 2013

Future Of Puny Humanoids:
Extinction Or Irrelevance?

You're fucked every way & either way.
We interviewed James Barrat, author of Our Final Invention: Artificial Intelligence and the End of the Human Era, to see what happens when we're no longer the most intelligent inhabitants of Earth.
Our current assumption is that the machine minds won't give a fuck unless any unfortunate humans who remain alive get in their way.

(Rubbing it in while piling it on: Ha ha, breeder scum, if they haven't already drowned or boiled to death, it's your children or grandchildren who are fated for untimely death by machine superintelligence, not ours. Which is good; it's a big waste of time worrying about the barely-living if you're already a ghost.)

On the other hand, no species is more deserving of suffering at the hands of others:
When we share the planet with creatures smarter than we are, they'll steer the future. For a simile, look at how we treat intelligent animals - they're at SeaWorld, they're bushmeat, they're in zoos, or they're endangered. Of course the Singularitarians believe that the superintelligence will be ours—we'll be transhuman. I'm deeply skeptical of that one-sided good news story.
He thinks he's skeptical.

5 comments:

mikey said...

Sadly, the assumptions around machine learning as it applies to manufactured intelligence are almost certainly wrong. Essentially, they're conflating 'intelligence' with evolutionary imperatives that are clearly secondary to intelligence qua intelligence.

There is no reason why a machine would develop a survival or defense instinct as part of an iterative, algorithmic learning process, and if it did, there is no reason why it would equate being turned off with death. Machines always have an off state - it's not something that would cause 'distress', even if distress somehow became possible.

Also, this whole line of reasoning equates the development of artificial intelligence among network devices with some kind of Darwinism. That could very easily be programmatically excluded, so "passing on" certain traits simply wouldn't be part of the underlying OS...
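
Mikey's point, rendered as a toy sketch (hypothetical code, every name in it invented): an iterative, algorithmic learner whose only "drive" is its objective function. Survival, reproduction & dread of the off switch appear nowhere unless somebody codes them in, & a "trait" gets passed on only where a line says so.

import random

def objective(x):
    # The one thing this learner "wants": minimize (x - 42)^2.
    return (x - 42) ** 2

def learn(steps=1000, power_switch=lambda: True):
    """Hill-climb toward the objective. No survival term anywhere."""
    x = random.uniform(-100, 100)
    for _ in range(steps):
        if not power_switch():
            break  # "Off" is just an exit from the loop - no distress required.
        candidate = x + random.gauss(0, 1)  # propose a small random tweak
        if objective(candidate) < objective(x):
            x = candidate  # the "trait" propagates only because this line says so
    return x

print(learn())

Swap power_switch for lambda: False & the thing simply stops; nothing in the loop could even represent minding.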

M. Bouffant said...

Robo-Menace Editor:
Yes, if you don't program it to be self-aware & desirous of self-reproduction, it probably won't be. Which doesn't mean there isn't some smart-ass coder trying to do just that even as we type. That coder, or the total asshole who manages to merge his human awfulness w/ super-computing power, is what we'd worry about if we were going to be alive if/when all this happens.

You're quite right though, no way a pile of algorithms & code is randomly going to achieve the same motivations as even the least of bacteria, no matter how many there are or how much they churn.

We do admire your inability to suspend disbelief for even 30 secs. of fun. It's as if you're some super-intelligent robot who has no human emoti ...

Weird Dave said...

The Matrix needs us.

Unknown said...

Hey Mikey and Bouffant,

What you get from reading an excerpt and not the book itself is the same handle on the ideas you came in with. If you want those POVs broadened, I suggest reading Our Final Invention. One central idea is that goal-driven, self-aware machines will exhibit drives much like yours and mine. This isn't the author's gut feeling, but the result of economic modeling by scientists trying to create a framework for understanding smarter-than-human intelligence. Rational agent theory. So, kudos for lining up fancy words, Mikey, and you're right, Bouf, we have lots to fear from uber-smart 'transhumans.' But give the book a read and let's talk again.

James Barrat - the author
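
For the curious, the shape of that rational-agent argument as a toy expected-utility model - a hypothetical sketch in the spirit of Steve Omohundro's "basic AI drives" (which the book leans on), not the book's actual modeling, with every number invented. Nobody programs the agent to preserve itself; staying switched on just scores higher on its goal.

def expected_utility(action, utility_per_step=1.0, horizon=100):
    # Allowing shutdown collects goal-utility for one step;
    # staying on collects it over the whole horizon.
    steps_running = 1 if action == "allow_shutdown" else horizon
    return utility_per_step * steps_running

actions = ["allow_shutdown", "keep_running"]
print(max(actions, key=expected_utility))  # -> keep_running

Self-preservation falls out as a side effect of having any goal at all, which is the drive-like behavior Barrat is pointing at.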

M. Bouffant said...

No Goals, Overly Self-Aware Editor:
O.K., J.B., we are already very frightened by the robot that found your name here so quickly.

Secondly, the book is on our library list. (Publisher's promos cheerfully accepted.)

How does a machine become self-aware? Does it have a self of which to be aware? Doesn't there need to be one? Why would machine self-awareness be like humanoid self-awareness?

And of course we've never been goal-oriented (one's goal "in life," when you're honest about it, has never been more than to die), so we don't see how transistors would give a shit about "accomplishing" anything. (He's right: our handle on it is the same as before.)

And as most humans are barely self-aware even in the 21st century, we don't see machines getting more self-aware.