There are other strands and influences, histories of early commentators and volunteers who shaped the intellectual and social direction of the site, but those are the big ones.
Nearly a decade before he started LessWrong, in 2000 (at age 21), Yudkowsky founded the Singularity Institute, now MIRI (the Machine Intelligence Research Institute).
I think it's hard to overstate how important Yudkowsky has been in mainstreaming AI alignment concerns. Yudkowsky didn't come up with the notion of the singularity (that credit usually goes to John von Neumann), but he and Ray Kurzweil were among the very few seriously thinking about it around the turn of the century. Musk, famously, met Grimes via a Rococo Basilisk joke (a pun on Roko's Basilisk, the thought experiment that originated on LessWrong); his interest in AI alignment, and the large donations he's made to alignment projects, are largely a result of rationalist efforts.
In the 2000s, in an interview for Luke Muehlhauser's podcast Conversations from the Pale Blue Dot, Yudkowsky quips that humanity spends less money on AI safety research than New York City spends on lipstick. By 2020, there is so much money (billions, maybe tens of billions) going towards AI alignment and safety that some rationalists begin discouraging further donations, and it becomes a running community joke that you can get money for any project so long as you call it an alignment project.