
Smarter-than-Human Intelligence & The Singularity Summit

2007-09-10 12:17:11

Posted by CmdrTaco on Sunday September 09, @11:48AM

from the something-to-think-about dept.

Sci-Fi

runamock writes "Brilliant technologists like Ray Kurzweil and Rodney Brooks are gathering in San Francisco for The Singularity Summit. The Singularity refers to the creation of smarter-than-human intelligence, beyond which the future becomes unpredictable. The concept of the Singularity sounds more daunting in the form described by statistician I.J. Good in 1965: 'Let an ultra-intelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultra-intelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultra-intelligent machine is the last invention that man need ever make.'"

Fears are Overblown

(Score:5, Insightful)

by DumbSwede (521261) on Sunday September 09, @12:27PM (#20529123)

(http://jaytv.com/larrys/blog | Last Journal: Wednesday December 06, @02:21PM)

For those predicting the imminent elimination/enslavement of the human race once ultra-intelligent machines become self-aware, where would the motivation for them to do so come from? I would contend it is a religious meme that drives such thoughts -- intelligence without a soul must be evil.

For those who would argue that Darwinian forces lead to such imperatives: sure, you could design the machines to want to destroy humanity, or evolve them in ways that create such motivations, but it seems unlikely this is what we will do. Most likely we will design/evolve them to be benign and helpful. The evolutionary pressure will be to help mankind, not supplant it. Unlike animals in the wild, robot evolution will not be red in tooth and claw.

An Asimovian future might arise, with robots maneuvering events behind the scenes for humanity's best long-term good.

I worry more about organized religions that might try to deny us all a chance at the near-immortality our machine children could offer, rather than some Terminator-like scenario.

Posted: 2007690@831.74

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

stranger

OK, here's where we are:

Logic-based AI: the early assumption was that you could get a computer to do mathematical logic. All that was necessary was to express the real world in predicate calculus and prove theorems. After all, that's how logicians and philosophers all the way back to Aristotle said thinking worked. Well, no. We understand now that setting up the problem in a formal way is the hard part. That's the part that takes intelligence. Crunching out a solution by theorem proving is easily mechanized, but not too helpful. That formalism is too brittle, because it deals in absolutes.
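To make that brittleness concrete, here is a minimal sketch in Python of the "formalize the world, then mechanically derive conclusions" idea. The facts and rule are invented for illustration; the point is that the chaining itself is trivial, while anything not explicitly formalized simply does not exist for the system.

```python
# Hand-written facts and a Horn-clause-style rule; the hard part was writing them.
facts = {("human", "socrates"), ("human", "plato")}

rules = [
    # mortal(X) :- human(X)
    (lambda x: ("human", x), lambda x: ("mortal", x)),
]

def forward_chain(facts, rules):
    """Naive forward chaining: keep applying rules until nothing new is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for (_, entity) in list(derived):
                if premise(entity) in derived and conclusion(entity) not in derived:
                    derived.add(conclusion(entity))
                    changed = True
    return derived

kb = forward_chain(facts, rules)
print(("mortal", "socrates") in kb)   # True -- the mechanical part is easy
print(("mortal", "aristotle") in kb)  # False -- anything not formalized is simply absent
```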

Expert systems are only as good as the rules somebody puts in. But back in the 1980s, when I went through Stanford, people like Prof. Ed Feigenbaum were promising Strong AI Real Soon Now from rule-based systems. The claims were embarrassing; at least some of that crowd knew better. All their AI startups went bust, the "AI Winter" of low funding followed, and the whole field was stuck until that crowd was pushed aside.
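A toy illustration of the "only as good as the rules" point, with a made-up rule base: firing the rules is trivial, and any case the rule author never anticipated gets no answer at all.

```python
# A 1980s-style production system in miniature: IF these conditions THEN this conclusion.
# The rules and cases are invented for illustration.
RULES = [
    ({"fever", "cough"}, "likely flu"),
    ({"sneezing", "itchy eyes"}, "likely allergies"),
]

def diagnose(symptoms):
    """Fire every rule whose conditions are all present; nothing more, nothing less."""
    return [conclusion for conditions, conclusion in RULES
            if conditions <= symptoms]

print(diagnose({"fever", "cough", "headache"}))  # ['likely flu']
print(diagnose({"chest pain"}))                  # [] -- no rule covers it, so no answer
```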

Genetic algorithms and genetic programming are a family of hill-climbing optimizers. These approaches work on problems where continuous improvement via tweaking is helpful, but usually max out after a while. We still don't really understand how evolution makes favorable jumps. I once said to Koza's crowd that there's a Nobel Prize waiting for whoever figures that out. Nobody has won it yet.
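A minimal hill-climbing sketch, with an invented two-peak objective function, shows the "max out after a while" behavior: accepting only small improving tweaks, the optimizer settles on the nearest local peak and never makes the jump to the better one.

```python
import random

def fitness(x):
    # Invented landscape: a local peak near x=2 (height ~1) and the global peak near x=8 (height ~3).
    return max(1 - (x - 2) ** 2, 3 - (x - 8) ** 2, 0)

def hill_climb(x, steps=10_000, step_size=0.1):
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if fitness(candidate) > fitness(x):
            x = candidate  # accept only strict improvements
    return x

random.seed(0)
best = hill_climb(x=1.0)
print(round(best, 2), round(fitness(best), 2))  # ends up near the local peak at x=2, not x=8
```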

Statistical approaches are now doing what was once done with neural nets, but with a better understanding of what's going on inside. Lots of practical problems in AI, from spam filtering to robot navigation, are yielding to modern statistical approaches. Compute power helps here; these approaches require a lot of floating-point math. These methods also play well with data mining. Progress continues.
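As a concrete example of the statistical style, here is a tiny naive Bayes spam filter over an invented toy corpus. It is a sketch of the general approach, not any particular production filter.

```python
import math
from collections import Counter

# Invented training "corpus" for illustration.
spam = ["cheap pills buy now", "win money now", "cheap money offer"]
ham  = ["meeting at noon", "project status report", "lunch at noon tomorrow"]

def train(docs):
    counts = Counter(word for doc in docs for word in doc.split())
    return counts, sum(counts.values())

spam_counts, spam_total = train(spam)
ham_counts, ham_total = train(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_prob(message, counts, total, prior):
    # Laplace smoothing so unseen words don't zero out the probability.
    score = math.log(prior)
    for word in message.split():
        score += math.log((counts[word] + 1) / (total + len(vocab)))
    return score

def classify(message):
    p_spam = log_prob(message, spam_counts, spam_total, prior=0.5)
    p_ham  = log_prob(message, ham_counts, ham_total, prior=0.5)
    return "spam" if p_spam > p_ham else "ham"

print(classify("buy cheap pills"))         # spam
print(classify("status meeting at noon"))  # ham
```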

AI is one of those fields, like fusion power, where the delivery date keeps getting further away. For this conference, the claim is "some time in the next century". Back in the 1980s, people in the field were saying 10-15 years.

We're probably there on raw compute power, even though we don't know how to use it. Any medium-sized server farm has more storage capacity than the human brain. If we had a clue how to build a brain, the hardware wouldn't be the problem.
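As a rough sanity check on that storage comparison, here is a back-of-envelope calculation. Every number in it is an assumption: the synapse count and bytes-per-synapse are widely cited but very rough ballparks, and the "medium-sized server farm" is invented for illustration.

```python
# Back-of-envelope only; real estimates vary by orders of magnitude.
synapses = 1e14            # ~100 trillion synapses, a common ballpark estimate
bytes_per_synapse = 4      # assume a few bytes to encode one connection's strength
brain_bytes = synapses * bytes_per_synapse

servers = 1_000            # a modest server farm
disk_per_server = 10e12    # 10 TB of disk per machine
farm_bytes = servers * disk_per_server

print(f"brain (rough): {brain_bytes / 1e12:.0f} TB")  # ~400 TB
print(f"server farm:   {farm_bytes / 1e12:.0f} TB")   # ~10000 TB
```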