vimothy

yurp
get land going on the subject of AI or bitcoin and his "excursion" into eliminative nihilism quickly ends, to be replaced by childish enthusiasm and uninformed optimism
 

droid

Well-known member
Land is right about one thing there - it is quite an achievement for an AI to win at Go against a decent player, especially an AI that uses a goal-oriented rather than a rule-oriented training model. Have you ever played Go?

Would be curious to hear Vim's critique. I have serious reservations about the potential of the AI project.
 

luka

Well-known member
Wasting his time looking after his kids or something. Get back here. Explain computers to us.
 

droid

Well-known member
I've been working on a generative adversarial network in my spare moments. Terrifying to think of how advanced AlphaGo is in comparison, but nonetheless, I don't think they'll ever evolve beyond very capable tools. They will still need goals to work towards.
 
droid said:
I've been working on a generative adversarial network in my spare moments. Terrifying to think of how advanced AlphaGo is in comparison, but nonetheless, I don't think they'll ever evolve beyond very capable tools. They will still need goals to work towards.

AlphaZero could be set the task of finding the best goal to set itself ;)

One of the most interesting characteristics of AlphaZero is its entirely novel approach to chess - really weird opening moves, apparently reckless "willingness" to lose pieces early on to secure victory, counter-intuitive positioning of the King. It's quite something.

https://deepmind.com/blog/alphazero-shedding-new-light-grand-games-chess-shogi-and-go/

Similar to how an earlier DeepMind agent taught itself to walk. Weird at first, but rapidly effective.
Perhaps the surprising twists and turns of the timeline we're all enjoying living in so much are a manifestation of weird AI guidance?

Terence McKenna wanged on about "ingression of novelty" into the timewave that would come to an omega point in 2012. I think he was a few years out. The question is how does this wild novelty ingress into your nan's kitchen while she's toasting a crumpet and listening to The Archers? What would that look like?

 

vimothy

yurp
Go was supposed to be very different [to Chess - previously, "the acme of intelligence testing"]. It was even, in important respects, the final fallback line. No greater formal challenge obviously occupied the horizon. This was the last chance to understand what supremacy over artificial intelligence was like.

This is how Land situates the success of AlphaGo Zero -- as the fall of human intelligence before the now ascendant machines. It makes for good sci-fi -- especially this notion of self-play, as if the machine were "learning" without any human input -- but it's hysterical otherwise.

In reality AlphaGo Zero is not an epochal event, except perhaps in the lives of professional Go players and the team that developed it.

AlphaGo Zero and its predecessors use standard techniques and are not a breakthrough in terms of machine or deep learning methodology.

For a computer to win a game of Go is impressive in one sense -- the space of possible game states is very large and so computing "good" moves is potentially a very computationally expensive operation. However, if you can bring a lot of computational resources to bear on the problem, then it's a matter of working through the possible moves from a given state, picking out the best one, and repeating.
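the "work through the moves and pick the best one" idea is easiest to see on a game small enough to search exhaustively. a minimal sketch in Python, with tic-tac-toe as a hypothetical stand-in for Go (the game, names, and scoring here are illustrative, not anything DeepMind actually used):

```python
# Exhaustive negamax: from a given state, recurse through every legal
# move and pick the one with the best guaranteed outcome. Feasible only
# because tic-tac-toe has a tiny state space.

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def best_move(board, player):
    """Return (score, move) from `player`'s point of view: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w:  # the previous player just completed a line
        return (1 if w == player else -1), None
    moves = [i for i, s in enumerate(board) if s == ' ']
    if not moves:
        return 0, None  # board full: draw
    other = 'O' if player == 'X' else 'X'
    best = (-2, None)
    for m in moves:
        child = board[:m] + player + board[m+1:]
        score, _ = best_move(child, other)  # opponent's best reply
        if -score > best[0]:
            best = (-score, m)
    return best

score, move = best_move(' ' * 9, 'X')
print(score, move)  # perfect play from an empty board is a draw: score 0
```

the same recursion is hopeless for Go directly, which is why the statistical shortcut described below matters.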

However, Go is not really that difficult a game from the POV of a deep learning network - except for the large number of board states, Go is incredibly tractable. It has many convenient properties such as:
- determinism;
- perfect information;
- discrete actions;
- can be perfectly simulated;
- short games;
- easy to evaluate outcomes.

All of these mean that it is possible for a network to play millions of games of Go over and over again very quickly to "learn" which moves are effective -- i.e., to work out statistically which moves are more likely to lead to an eventual victory (a counting problem). Then your AI god is simply a function or "model" from (an encoding of) a given board state to (an encoding of) the next move which maximises its chance of victory.
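the "counting problem" above -- play lots of games, then estimate each move's value as its empirical win rate -- can be sketched on a toy. below, a hypothetical race-to-10 game (add 1, 2, or 3 each turn; first to reach 10 wins) stands in for Go; the playout-and-count machinery, not the game, is the point:

```python
# Monte Carlo move evaluation via self-play: finish many games with
# random play after a candidate move, and score the move by how often
# the side that made it goes on to win.
import random

TARGET = 10  # first player to reach 10 wins (toy game, not Go)

def playout(total, first_move):
    """Apply `first_move`, then play randomly to the end.
    Returns True if the side that made `first_move` wins."""
    total += first_move
    if total >= TARGET:
        return True
    side = False  # opponent moves next
    while True:
        total += random.choice((1, 2, 3))
        if total >= TARGET:
            return side
        side = not side

def value(total, move, n=4000):
    """Empirical win rate of `move` from state `total` over n playouts."""
    return sum(playout(total, move) for _ in range(n)) / n

random.seed(0)
# From a running total of 7, playing 3 reaches 10 and wins immediately.
scores = {m: value(7, m) for m in (1, 2, 3)}
best = max(scores, key=scores.get)
print(best)  # 3 -- the counting finds the winning move
```

the resulting `scores` table is exactly the "function from board state to best next move" described above, just for a game with a handful of states instead of ~10^170.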
 

vimothy

yurp
even chess has more moves than the universe has atoms - at least according to shannon. fact is the number of atoms in the universe is just not that big compared to the number of possible states in a typical game (complexity in a game can grow very fast). football, for example, is far more complicated, with an infinite number of possible game states.
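shannon's arithmetic is easy to reproduce: game-tree size is roughly (moves per position) ** (plies per game), and that dwarfs any physical count almost immediately. a rough sketch (the 30-moves and 80-plies figures are the usual back-of-envelope inputs, not exact values):

```python
# Back-of-envelope game-tree size for chess vs. atoms in the universe.
chess_tree = 30 ** 80          # ~30 legal moves per position, ~80 plies per game
atoms_in_universe = 10 ** 80   # common order-of-magnitude estimate

print(chess_tree > atoms_in_universe)  # True
print(len(str(chess_tree)))            # 119 digits, i.e. ~10^118
```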
 

version

Well-known member
" ... his ballyhooed philosophy appears to consist of little more than gothic sci-fi-flavored intellectualizations of his fantasies of oppressing the poor by remote control from an automated oriental massage parlor."
 

vimothy

yurp
The first hints of that vocabulary emerged in 1962, when Torsten Wiesel and David Hubel showed that specific neurons in the brain’s visual centers are tuned to specific stimuli—lights moving in particular directions, or lines aligned in particular ways. Since then, other neuroscientists have identified neurons that respond to colors, curvatures, faces, hands, and outdoor scenes. But here’s the catch: Those scientists always chose which kinds of shape to test, and their intuition might not reflect the actual stimuli to which the neurons are attuned. “Just because a cell responds to a specific category of image doesn’t mean you really understand what it wants,” says Livingstone.

So why not ask the neurons what they want to see?

That was the idea behind XDREAM, an algorithm dreamed up by a Harvard student named Will Xiao. Sets of gray, formless images, 40 in all, were shown to watching monkeys, and the algorithm tweaked and shuffled those that provoked the strongest responses in chosen neurons to create a new generation of pics. Xiao had previously trained XDREAM using 1.4 million real-world photos so that it would generate synthetic images with the properties of natural ones. Over 250 such generations, the synthetic images became more and more effective, until they were exciting their target neurons far more intensely than any natural image. “It was exciting to finally let a cell tell us what it’s encoding instead of having to guess,” says Ponce, who is now at Washington University in St. Louis.
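the generate-score-breed loop the article describes is a classic evolutionary search. a minimal sketch, with a hidden random target pattern standing in for the neuron and flat lists of floats standing in for images (everything here is illustrative -- the real XDREAM used a deep generative network and live recordings from monkey cortex):

```python
# Evolutionary image search: keep a population of candidate images,
# score each one with the "neuron", and breed the strongest responders
# into the next generation.
import random

SIZE = 16           # tiny 4x4 "images" as flat lists of floats in [0, 1]
POP = 40            # the article's population of 40 images
GENERATIONS = 250   # the article's ~250 generations

random.seed(1)
target = [random.random() for _ in range(SIZE)]  # the neuron's hidden preference

def response(img):
    """Stand-in neuron: fires harder the closer the image is to `target`."""
    return -sum((a - b) ** 2 for a, b in zip(img, target))

def mutate(img):
    """Small Gaussian tweak to each pixel, clamped to [0, 1]."""
    return [min(1.0, max(0.0, x + random.gauss(0, 0.05))) for x in img]

def crossover(a, b):
    """Shuffle two parents together pixel by pixel."""
    return [random.choice(pair) for pair in zip(a, b)]

pop = [[random.random() for _ in range(SIZE)] for _ in range(POP)]
for _ in range(GENERATIONS):
    pop.sort(key=response, reverse=True)
    parents = pop[:POP // 4]  # strongest responders survive unchanged
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(POP - len(parents))]

best = max(pop, key=response)
print(response(best))  # far higher than a random image scores
```

the loop never needs to know what the neuron "wants" -- selection pressure alone drags the population toward it, which is the whole trick.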

 