Kurzweil on 2010

blunt

shot by both sides
Kurzweil is a really interesting guy, and clearly has a brain the size of at least 5 very large rooms. But processing power only gets you so far, and he's proof positive that logic doesn't operate in a political or philosophical vacuum.

To be sure, his prognoses always operate on a ridiculously short timeline. But my main problem with him is that he sells humanity so short. It seems that, for him, sentience is just a by-product of advanced processing power; but I'd say that it's highly significant that computers are no closer to self-awareness than they were in Babbage's time.

Computers are still just dumb machines. Anything they do, they do because someone has told them to. It's certainly very easy to be seduced by a calculator's ability to resolve complex mathematical equations, but it would be more truthful to marvel at the programmer's ingenuity in instructing it how to do so.

But then, people have always been willing to be seduced by the illusion of mechanical intelligence.

I find it strange for someone like Kurzweil, given his background, to fall into this trap, but whenever I hear him speak about the singularity, I see him going for it head first.

Simon Schaffer writes about all this stuff, and more, with more clarity and insight than I will ever muster :)
 

noel emits

a wonderful wooden reason
It seems that, for him, sentience is just a by-product of advanced processing power; but I'd say that it's highly significant that computers are no closer to self-awareness than they were in Babbage's time.

Maybe not. Maybe there is a tipping point of complexity, like those phase-shifts in chaotic systems where things spontaneously restabilise on a higher level of organisation. I think Kurzweil writes about complexity theory in The Age of Spiritual Machines IIRC. It's well hip at the moment along with emergence theory (http://en.wikipedia.org/wiki/Emergence) and I think that's partly where he's coming from. See also Teilhard de Chardin and his Noosphere (www?) which he talks about as becoming self aware upon reaching a certain level of complexity.
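The classic toy version of that idea is Conway's Game of Life: two local rules, nothing else, and yet stable higher-level "objects" appear. A glider, for instance, behaves like a coherent thing that travels, even though no rule mentions movement. A minimal sketch (nothing Kurzweil-specific about it, just emergence in miniature):

```python
from collections import Counter

def step(live):
    """One Game of Life generation on an unbounded grid.
    live is a set of (row, col) cells that are currently alive."""
    counts = Counter(
        (r + dr, c + dc)
        for r, c in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # Birth with exactly 3 neighbours; survival with 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider: five live cells obeying only the two rules above.
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)

# After 4 generations the same shape reappears, shifted one cell
# diagonally -- the "object" has moved, though no rule says "move".
print(state == {(r + 1, c + 1) for r, c in glider})  # → True
```

Whether scaling that kind of thing up ever tips over into awareness is, of course, exactly the open question.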
 

blunt

shot by both sides
See also Teilhard de Chardin and his Noosphere (www?) which he talks about as becoming self aware upon reaching a certain level of complexity.

Ah, yes. The Noosphere. But to be honest, this is exactly what I mean about logic not operating in a vacuum.

The Noosphere/web comparisons just reinforce my suspicion that people are always willing to appropriate old ideas and remix them for a contemporary audience. There's no empirical evidence to support it. As with the coming of the Noosphere, the notion of 'the network' achieving self-awareness is always posited as being just around the corner. Hasta mañana! You'll be digging out some Erik Davis techgnostics next ;)

Kevin Kelly tried to resurrect the idea of the sentient network in the 10th anniversary edition of 'Wired' (last year?). I just think it's all smoke and mirrors. It sounds exciting, but ultimately serves to disassociate the tool from the operator; or, in other words, alienate the worker from the commodity ;)
 

noel emits

a wonderful wooden reason
It sounds exciting, but ultimately serves to disassociate the tool from the operator; or, in other words, alienate the worker from the commodity ;)

That sounds intriguing but I'm not sure I follow. Do you mean making self-awareness a product of material complexity makes humans less special?
 

noel emits

a wonderful wooden reason
You'll be digging out some Erik Davis techgnostics next ;)

Erik Davis is well skeptical about all of this isn't he?

Another fictional mindblower on both nanotech and strong AI is Greg Bear's Blood Music. Pretty haunting that one.
 

blunt

shot by both sides
Erik Davis is well skeptical about all of this isn't he?

Yeah, absolutely. Sorry, ambiguous (or maybe just poorly formed) sentence :)

Great read, though. And I see on the Techgnosis website he writes...

"TechGnosis peels away the utilitarian shell of technology to reveal the mystical and millennialist expectations that permeate the history of technology"

... which, again, summarises what I was trying to say about the Noosphere/web confusion with a little more brevity.

The only Greg Bear I've read is Eon, but I suspect I was too young to really get it. My fave (fictional) book on nanotech is Stephenson's The Diamond Age. But there's also this, which is actually incredibly accessible, and one of the most thought-provoking books I think I've ever read.
 

blunt

shot by both sides
That sounds intriguing but I'm not sure I follow. Do you mean making self-awareness a product of material complexity makes humans less special?

No, I just mean that machines do what we tell them to; and as such, if a machine did attain self-awareness, it's likely to be because we told it how to (which in itself has all sorts of interesting implications).

The Turing Test is a case in point. It's the thing that's always referenced as the definitive test of artificial intelligence. But the Turing Test is predicated on the illusion of intelligence. All that is required is that a human conduct a mediated conversation with, on the one hand, a machine and, on the other, a human operator; and not be able to distinguish which is which.
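That setup is simple enough to sketch (toy respondents and judge entirely made up, obviously):

```python
import random

def imitation_game(judge, machine, human, questions):
    """One round of the imitation game: the judge sees two anonymised
    transcripts, 'A' and 'B', and must name the machine.
    Returns True if the machine fooled the judge."""
    respondents = [machine, human]
    random.shuffle(respondents)  # hide who is who from the judge
    transcripts = {label: [resp(q) for q in questions]
                   for label, resp in zip("AB", respondents)}
    machine_label = "A" if respondents[0] is machine else "B"
    return judge(transcripts) != machine_label

# Hypothetical respondents: a canned-answer bot vs. a 'human'.
canned = {"Hello": "Hi there!", "How are you?": "Fine, thanks."}
machine = lambda q: canned.get(q, "Interesting question.")
human = lambda q: "Hmm, " + q.lower() + " ... let me think."

# A judge who happens to know the bot's canned tell is never fooled:
sharp_judge = lambda t: "A" if "Hi there!" in t["A"] else "B"
print(imitation_game(sharp_judge, machine, human, ["Hello"]))  # → False
```

Note that nothing in the game ever inspects what's going on inside either respondent; it measures indistinguishability and nothing else, which is precisely the point I'm making.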

Of course, this all points towards the question: how do you prove sentience? Which is a question that, to my mind, is impossible to answer definitively (which is not to say that it's pointless to try). And without the answer, I'd say creating a sentient being will continue to be a pretty tough brief.

Does that make sense?
 

DJ PIMP

Well-known member
Computers are still just dumb machines. Anything they do, they do because someone has told them to. It's certainly very easy to be seduced by a calculator's ability to resolve complex mathematical equations, but it would be more truthful to marvel at the programmer's ingenuity in instructing it how to do so.
I was discussing this with my partner last night - and it's entirely why I prefer Windows to MacOS. With Windows it's inevitable that every so often I will end up marvelling at the clunky bits of the interface, or banging my head on the desk in frustration; it's impossible to forget that I'm using a crappy computer. Without the rough edges OSX is that little bit more intoxicating.

And the OSs are entire environments where the user feels in control. As you point out, any illusion of choice or the semblance of genuine (human) interaction has already been defined by someone else.

As Anthony Rother said, "friends are not electric".
 

DJ PIMP

Well-known member
also the religion vs. technology dichotomy is pretty BS. apples and oranges, plus there's a whole lot of other fruit involved.
I agree it's a false dichotomy that we ourselves create - but that's not how it appears in the public consciousness.

"My mind is open to the most wonderful range of future possibilities, which I cannot even dream about, nor can you, nor can anybody else. What I am skeptical about is the idea that whatever wonderful revelation does come in the science of the future, it will turn out to be one of the particular historical religions that people happen to have dreamed up.

When we started out and we were talking about the origins of the universe and the physical constants, I provided what I thought were cogent arguments against a supernatural intelligent designer. But it does seem to me to be a worthy idea. Refutable--but nevertheless grand and big enough to be worthy of respect.

I don't see the Olympian gods or Jesus coming down and dying on the Cross as worthy of that grandeur. They strike me as parochial. If there is a God, it's going to be a whole lot bigger and a whole lot more incomprehensible than anything that any theologian of any religion has ever proposed." - Dawkins

http://www.time.com/time/magazine/article/0,9171,1555132-1,00.html
 

turtles

in the sea
No, I just mean that machines do what we tell them to; and as such, if a machine did attain self-awareness, it's likely to be because we told it how to (which in itself has all sorts of interesting implications).

The Turing Test is a case in point. It's the thing that's always referenced as the definitive test of artificial intelligence. But the Turing Test is predicated on the illusion of intelligence. All that is required is that a human conduct a mediated conversation with, on the one hand, a machine and, on the other, a human operator; and not be able to distinguish which is which.

Of course, this all points towards the question: how do you prove sentience? Which is a question that, to my mind, is impossible to answer definitively (which is not to say that it's pointless to try). And without the answer, I'd say creating a sentient being will continue to be a pretty tough brief.

I think you're working with a bit of an outdated model of AI here. Firstly, I don't think there's a single serious AI researcher out there who considers the Turing Test to be an important goal for AI; there are already plenty of chatbots that can pass the Turing Test, at least for some observers, and all of your criticisms are perfectly valid and hard to get around. I think pretty much everyone agrees with you that actually proving sentience is a very tricky thing, though they are probably more optimistic that it will be possible.

And there are plenty of techniques out there to get computers to learn and behave in unexpected ways and generally do things that you didn't tell them to do. If you set up a self-modifying Bayesian or neural network properly and give it a good chunk of learning data (or let it loose in the real world), it can quickly start doing things that you did not expect or have an explanation for: you essentially have to ask the program itself why it did something in order to find out. Of course you did set up the initial conditions, but it can quickly grow outside of your control or understanding (don't worry though, mostly the unexpected behaviour isn't intelligent or anything...just weird and unusual...the machines aren't taking over yet... :) ).
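To make that concrete at the crudest possible level: here's a single perceptron (about the simplest learner there is, nowhere near a real neural network) that picks up logical OR from examples rather than being told the rule. All names and numbers here are just illustrative:

```python
def train(samples, epochs=10, lr=0.1):
    """Perceptron learning rule: nudge weights towards each
    misclassified example. Nobody ever writes down 'OR' itself."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# The truth table of OR, given only as labelled examples.
samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train(samples)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in samples])  # → [0, 1, 1, 1]
```

Here you can read the final weights off easily; the point is that in a big self-modifying network the learned parameters stop being interpretable, which is exactly the "ask the program itself" situation.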
 

adruu

This Is It
Speaking of dichotomies, why does there tend to be no middle ground between the "Spiritual Machine" AI fantasy and the just as relevant, but maybe less sexy, bio-mods (more efficient red blood cells, insulin production, blood clot accelerants for surgery, mechanisms for cleaning cooking oil...etc...etc...)?

I know Kurzweil doesn't help, but you don't really become the "go-to guy" on this type of research, lecturing, pleading for money and attention, by playing the middle ground, do you?
 

blunt

shot by both sides
If you set up a self-modifying Bayesian or neural network properly and give it a good chunk of learning data (or let it loose in the real world), it can quickly start doing things that you did not expect or have an explanation for: you essentially have to ask the program itself why it did something in order to find out.

For sure, and this is where the 'magic' happens, eh? It's the point at which, for contemporary viewers, the illusion of intelligence is at its most seductive.

Of course you did set up the initial conditions, but it can quickly grow outside of your control or understanding

And that's the clincher for me - it may be a different kind of instruction set, but the machine is still following a set of instructions issued by the operator.

Of course, you could say that God set the basic conditions of the physical universe, and we just operate within that context. This, to my mind, is a pretty neat half-way house between free will and pre-determination.

'God doesn't just make the world; he does something much more wonderful. He makes the world make itself.'

But equally, I'm aware that people have long been prone to map prevailing technological wisdom onto their understanding of the universe; just as the universe was regarded as something like a clockwork mechanism in the time of Newton and Huygens.

To bastardise bleep's Dawkins quote: the reality is no doubt infinitely more complex than any theologian or scientist has ever proposed.

Ah, I love this shit :)
 
here's one I made up

Nature is aught but the will of God

I think when machines do become aware they will not just become aware of 4D, they will become aware of the bigger picture, because I don't see why they would limit their sight and magnification to just the visible spectrum
 

noel emits

a wonderful wooden reason
I think when machines do become aware they will not just become aware of 4D, they will become aware of the bigger picture, because I don't see why they would limit their sight and magnification to just the visible spectrum

Makes sense. A wider spectrum doesn't necessarily mean more dimensions though.
 
if string theory has any cred left and there are curled up extra dimensions then...

...regardless, I've always thought of interchangeable spatial dimensions swapping at superluminal speeds
 

MATT MAson

BROADSIDE
This just in -

http://www.physorg.com/news82910066.html

A baby step closer to intelligent machines perhaps?

There is the possibility that we will develop computers more complex than the brain, but they will still just be computers; they won't be 'intelligent'. We already have machines that can do calculations way more powerful than anything we can do (calculator watches), but programming a machine to appreciate art, for example, is another matter.

The middle ground guy on this stuff is James Martin - he's more concerned about self replicating nanobots taking over and eating all carbon-based matter within 30 minutes of becoming aware - the 'grey goo' scenario and such. But he makes the point that we can control computer viruses, ok, and nanotech shouldn't be that different.
 