Page 2 of 4
Results 16 to 30 of 50

Thread: Technological Singularity

  1. #16
    Join Date
    May 2008
    Location
    CHI
    Posts
    3,219

    Default

    it sounds like the worst thing ever, and Kurzweil is just... I dunno, it all comes off like Ponce de Leon in the digital era, this bollocks about conquering death and bringing his dad back to life (as a clone or whatever), just... ugh. which may be only tangentially related to the self-improving AI, I guess. but it still sounds like the worst thing ever. count me 100% out, at least. (I mean, it also sounds a bit too much like 3rd-rate William Gibson)

    Vimothy, I'm curious tho, why exactly is self-improving AI so inevitable? is it one of those things where it's like "oh if you don't understand highly advanced math & whatever then you'll never get it"? or is there a relatively concise explanation for laypeople? I seem to remember there's some debate over whether or not the rate at which data storage capacity increases is actually beginning to level off.

    you have to admit also that these Kevin Drum types sound a wee bit self-important (I seem to remember a quote from a paper of his you linked along the lines of "getting to the Singularity is the single most important thing in the world"). which doesn't necessarily add to or detract from the substance of their arguments, I guess.
    Last edited by padraig (u.s.); 03-11-2009 at 02:09 PM.

  2. #17
    Join Date
    Jul 2006
    Location
    Merseyside
    Posts
    3,546

    Default

    I have a pal who did Comp Sci at PhD level and wrote his thesis on AI. I've asked him before if all this way cool Neuromancer-type stuff could happen and he reckons it's possible, but even if we did have computers with as much processing power as the human brain, it would still be a tall order to program or teach them to think in the same way as humans. And that there might actually be physical, universal limits on how smart an artificial mind could be.

  3. #18

    Default

    Barring disaster, or some limit we don't understand yet, at some point on a long enough timeline CPU power will equal the human brain. Don't think that's very controversial.

    In the article I just linked to, Kevin Drum was criticising Kurzweil and the Transhumanists because in general their programme is trivial and the specifics of their programme are stupid. Pretty sure he's not written any papers on the singularity, but I could be wrong.

  4. #19
    Join Date
    Jun 2006
    Posts
    15,675

    Default

    Quote Originally Posted by vimothy View Post
    At some point, we should get self-improving AI. The rest is details.
    Haven't we had this, in the form of neural nets, for years already? Or is this a more specialised informatic usage of "self-improving"? (Like it learns a language in its spare time or volunteers at a cat shelter?)
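    For what it's worth, the "self-improvement" a neural net does is quite narrow: it adjusts its own weights from examples, but the rule doing the adjusting never changes. A toy perceptron (a hypothetical sketch, not anything from the linked articles) shows the shape of it:

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    """Train a single perceptron: weights change with experience,
    but the update rule that changes them stays fixed."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # the fixed learning rule: nudge weights toward the target
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# learn the AND function from its four input/output examples
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

    So the net "improves" in the sense of getting better at its task, which isn't obviously the same thing as improving the machinery that does the improving.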
    Doin' the Lambeth Warp New: DISSENSUS - THE NOVEL - PM me your email address and I'll add you

  5. #20

    Default

    Transhumanist people mean AI that is effectively intelligent enough to build better AI than humans can. Computers that can build better computers than humans. That's one of Kurzweil's possible singularities. Eliezer Yudkowsky as well. Probably lots of others.

  6. #21

    Default

    Doubtless most of these people will be wrong. Prediction is hard, especially about the future and all that. Still, it promises to be pretty weird.

  7. #22
    Join Date
    Jun 2006
    Posts
    15,675

    Default

    Quote Originally Posted by vimothy View Post
    Computers that can build better computers than humans.
    Again, aren't we doing this already? Surely ICs have been designed using computer algorithms for decades? I can't imagine there are huge rooms full of engineers painstakingly deciding where each and every transistor in a Pentium Dual Core has to go.

    And I don't see that this whole concept is really, at base, all that groundbreaking. It's obvious that you can make a better axe if you have some elementary metalworking and woodworking tools lying around than if you're using your bare hands to manipulate a lump of flint, a stick and some twine. An axe is a tool for chopping stuff up and a computer is a tool for performing calculations very quickly.

    Sorry if this is sounding at all facetious, it's really not meant that way!

  8. #23
    Join Date
    Aug 2009
    Posts
    1,227

    Default

    Quote Originally Posted by swears View Post
    "Yeah, well quantum physics is like, really weird and counterintuitive... a bit like the love of our lord Jesus."
    hahaha

  9. #24
    Join Date
    May 2009
    Location
    London
    Posts
    1,453

    Default

    I don't think the concept is supposed to be groundbreaking as such; it's attempting to be a prediction, to say this is where it looks like things are going. And part of that is based on extrapolating from observations of where we are now, like you above saying that computers are already being used to design computers.

  10. #25
    Join Date
    May 2009
    Location
    London
    Posts
    1,453

    Default

    the only serious arguments I've ever heard against the eventual development of genuinely intelligent machines all boil down to a thinly veiled belief that there just has to be something more to human intelligence than mere neurons and biochemistry
    Even this question doesn't necessarily matter so much. No absolute reason why AIs should be based on human intelligence.

    Maybe the issue for AI coming up is one of Initiative (or Will)?

  11. #26
    Join Date
    Aug 2009
    Posts
    1,227

    Default

    Does anyone think that the hard problem of AI is one that we'll ever get over - i.e. do you think we'll ever build robots that can actually think rather than respond as if they can? http://en.wikipedia.org/wiki/Chinese_room

  12. #27
    Join Date
    Feb 2008
    Posts
    1,445

    Default

    Thanks for all the info here. I wish I could comment more but I'm still finding out stuff on all of this. Will read the thread and the links in detail later.

  13. #28

    Default

    Quote Originally Posted by Mr. Tea View Post
    Sorry if this is sounding at all facetious, it's really not meant that way!
    Of course it's true that humans use technology to design and build more technology. But I think that the transhumanists would make a distinction between that and...

  14. #29
    Join Date
    Dec 2004
    Location
    Vancouver, BC
    Posts
    408

    Default

    The singularity for AI is not about making an AI that can learn, it's about making an AI that can write an AI smarter than itself (which then makes an AI smarter than itself, etc etc). Neural nets learn, but they haven't, as yet, learned how to make something better at learning than a neural net.
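    As a loose caricature of that loop (a hypothetical toy, all numbers made up): a plain learner just runs a fixed procedure, while a "self-improving" one first searches for a better version of itself (here, just a better step size) before running it:

```python
TARGET = 100.0

def learn(x, step, iters=10):
    """A fixed learner: nudge x toward TARGET with a fixed step size."""
    for _ in range(iters):
        x += step if x < TARGET else -step
    return x

def self_improve(x, step, rounds=3):
    """Each round the learner first picks a better version of itself
    (a better step size) and only then runs it: a very loose analogue
    of an AI improving the AI that does the improving."""
    for _ in range(rounds):
        step = min((step * 0.5, step, step * 2.0),
                   key=lambda s: abs(TARGET - learn(x, s)))
        x = learn(x, step)
    return x
```

    With the same starting point, the self-modifying version pulls far ahead of the fixed one, which is the intuition the singularity argument leans on; whether anything like it scales to real intelligence is exactly what's in dispute.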

    The main problem with this idea as applied to the singularity is that there's no proof that intelligence increases in any sort of continuous fashion (there could be large, discontinuous jumps in between levels) and that there isn't an upper maximum to intelligence. It's some weak induction plus wishful thinking.

    Would be cool though. I'd upload my brain no problem.

  15. #30
    Join Date
    Feb 2008
    Posts
    1,445

    Default

    AI and NNs have been around a while. I think genetic algorithms may hold the key, in that they generate loads of solutions and then assess if they are fit for purpose, keeping the nearest fits and basing new, better solutions on the previous sets of logic. Kinda like they self-analyse. I don't think they will replicate human intelligence and intuition; I'm not even sure that would be a requirement in machine intel. I think some of what these TS guys talk about could happen, but perhaps not in the time frames they talk about, i.e. by 2035... Still feeling my way around this...
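    That loop (generate candidates, assess fitness, keep the best, breed new ones from them) can be sketched in a few lines. This is a hypothetical minimal example, not any particular GA library, evolving bitstrings toward whatever fitness function you hand it:

```python
import random

def evolve(fitness, length=20, pop_size=30, generations=100, mutation_rate=0.02):
    """Toy genetic algorithm over bitstrings."""
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # assess every candidate and keep the fitter half
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        # breed replacements from the survivors: crossover plus mutation
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, length)
            child = a[:cut] + b[cut:]
            child = [bit ^ 1 if random.random() < mutation_rate else bit
                     for bit in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# "OneMax" toy problem: fitness is simply the number of 1s in the string
best = evolve(fitness=sum)
print(sum(best))  # count of 1s in the best individual found
```

    Note the "self-analysis" here is still against a fitness function a human wrote, which is one reason people argue over whether this route leads to open-ended machine intelligence.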
