luka

Well-known member
we all know about the dubstepforum diaspora and how it gave us corpsey and version and others, but we've heard less about the alt-right 'New Rationalist' website that Gus, Biscuits and more cut their teeth on. I want to know all about life on that forum. Gus said they had a grading system for members and he was a 'knight-paladin' and biscuits was an 'aspirant', which is the entry-level grade.
 

mixed_biscuits

_________________________
But I had diasporated from here to Less Wrong before diasporising back again.

If Less Wrong is alt-right because it discusses taboo topics, then this place is far-right.
 

dilbert1

Well-known member
Begging 👏 y’all 👏 to 👏 recognize 👏 this thread is serving potential tea-spill of the century
 
Reactions: sus (Haha)

Clinamenic

Binary & Tweed
I have been quietly assembling an oral history of rationalism, so I will have things to contribute when my schedule is quieter.
You actually should spend some time at our grouphouse; you'd appreciate dimensions of some of these scenes which are probably totally lost on me.
 

sus

Moderator
Clinamenic said:
I have been quietly assembling an oral history of rationalism, so I will have things to contribute when my schedule is quieter.
You laugh now, but you won't when I reveal an equivalent dossier to LessWrong.
 

sus

Moderator
Memory is power. Be nice to Gus or history will spurn you. That's the lesson to take away.
 

sus

Moderator
Where does LessWrong come from? That's the place to start.

For the last decade we have been in the post-LessWrong era. The board shut down in the early 2010s, and yes, it rebooted and has been a meaningful cultural force again since the late 2010s, but it's just not the same. The diaspora had spread out and moved on.

In the 90s, there was a group called the Extropians. Extropy as in the inverse of entropy. There was an interest in transhumanism and science fiction and artificial intelligence and longevity research. It was a very West Coast movement. Rationalism proper has a strong East Coast component, but the origins are all very Silicon Valley, very Berkeley, very Santa Cruz.

There was also a mailing list called SL4, short for Shock Level 4. Shock levels refer to the idea of "future shock"—that there is a horizon beyond which we struggle to make sense of life. "A Shock Level measures the high-tech concepts you can contemplate without being impressed, frightened, blindly enthusiastic."

  • SL0: The legendary average person is comfortable with modern technology - not so much the frontiers of modern technology, but the technology used in everyday life. Most people, TV anchors, journalists, politicians.
  • SL1: Virtual reality, living to be a hundred, "The Road Ahead", "To Renew America", "Future Shock", the frontiers of modern technology as seen by Wired magazine. Scientists, novelty-seekers, early-adopters, programmers, technophiles.
  • SL2: Medical immortality, interplanetary exploration, major genetic engineering, and new ("alien") cultures. The average SF fan.
  • SL3: Nanotechnology, human-equivalent AI, minor intelligence enhancement, uploading, total body revision, intergalactic exploration. Extropians and transhumanists.
  • SL4: The Singularity, Jupiter Brains, Powers, complete mental revision, ultraintelligence, posthumanity, Alpha-Point computing, Apotheosis, the total evaporation of "life as we know it." Singularitarians and not much else.

Many of the early members of LessWrong were part of these communities. Eliezer Yudkowsky, who wrote the Future Shock Levels article, went on to found LessWrong. He was born in 1979, which puts him in his teens in the 1990s and in his late twenties when he started LessWrong.
 

sus

Moderator
The other important strand is a blog called Overcoming Bias. Nowadays it seems to have migrated to Substack, but it's been around for nearly two decades, most of that time as a standard HTML/RSS blog. The two main writers on Overcoming Bias were Robin Hanson (a George Mason University economist) and Eliezer Yudkowsky.

This intellectual strand of rationalism originates with the behavioral-economics work of Kahneman and Tversky on cognitive biases. The idea was that if you familiarized yourself with the literature around bias, you could take active steps to mitigate that bias and make better decisions.

Better decision-making in general was a focus of LessWrong. One of the areas where Yudkowsky has contributed genuinely novel, academically adopted ideas is timeless decision theory. There are lots of weird paradoxes that come up around this stuff—Newcomb's Paradox, where a near-infallible predictor has already filled two boxes based on its prediction of your choice, is a bit of a mind-bender. Roko's Basilisk is related and more famous (although Yudkowsky did not originate it, and has been publicly dismissive of it).
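To see why it bends minds, here's a toy expected-value calculation for the standard setup (the payoffs and predictor accuracy are the usual illustrative numbers, not anything from this thread):

```python
# Newcomb's problem: a predictor puts $1,000,000 in an opaque box only if it
# predicted you will take just that box; a transparent box always holds $1,000.
ACCURACY = 0.99  # assumed predictor reliability (illustrative)

# Expected value if you take only the opaque box:
ev_one_box = ACCURACY * 1_000_000 + (1 - ACCURACY) * 0

# Expected value if you take both boxes:
ev_two_box = (1 - ACCURACY) * (1_000_000 + 1_000) + ACCURACY * 1_000

print(f"one-box: ${ev_one_box:,.0f}")  # one-box: $990,000
print(f"two-box: ${ev_two_box:,.0f}")  # two-box: $11,000
```

The expected-value calculation says take one box, while dominance reasoning says taking both is always $1,000 better whatever the prediction was; hence the paradox, and the appetite for decision theories that handle it cleanly.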

Finally, there's an emphasis on replacing frequentist statistics with Bayesian statistics. That's a whole can of worms—it's still an ongoing debate in the community whether anyone actually calculates Bayesian priors in the real world, whether it's just a metaphor, or whether it's meaningless jargon. (See e.g. this lengthy dive into lab leak debate epistemology, which is a fascinating narrative in its own right.)
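Since "updating your priors" is the move that underwrites half the jargon here, a minimal worked sketch of Bayes' rule (illustrative numbers only, nothing from the thread):

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
# Toy example: how much should one piece of evidence shift a low prior?

prior = 0.01            # P(H): initial credence in the hypothesis
p_e_given_h = 0.90      # P(E|H): chance of seeing the evidence if H is true
p_e_given_not_h = 0.05  # P(E|~H): chance of seeing it anyway if H is false

# Total probability of the evidence (law of total probability)
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)

# Posterior credence after conditioning on the evidence
posterior = p_e_given_h * prior / p_e
print(f"posterior = {posterior:.3f}")  # 0.154: a big update, but H is still unlikely
```

The frequentist-vs-Bayesian argument is not about this arithmetic, which everyone accepts, but about whether the prior is ever more than a number you made up.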
 

craner

Beast of Burden
sus said:
The other important strand is a blog called Overcoming Bias. […]

Have you been replaced by an AI robot as well?
 