...it's like TV Tropes, but LINKED DATA!
Less Wrong (Blog)
- 118 statements
- 22 feature instances
- 8 referencing feature instances
Less Wrong (Blog) | type |
TVTItem | |
Less Wrong (Blog) | label |
Less Wrong (Blog) | |
Less Wrong (Blog) | page |
LessWrong | |
Less Wrong (Blog) | comment |
Less Wrong is a community blog devoted to rationality. Contributors draw upon many scientific disciplines for their posts, from quantum physics and Bayesian probability to psychology and sociology. The blog focuses on human flaws that lead to misconceptions about the sciences. It's a gold mine for interesting ideas and unusual views on any subject, and the clear writing style makes complex ideas easy to understand. The mainstream community on Less Wrong is firmly atheistic. A good number of contributors are computer professionals. Some, like founder Eliezer Yudkowsky, work in the field of Artificial Intelligence; in particular, Less Wrong has roots in Yudkowsky's effort to design "Friendly AI" (AI That Is Not A Crapshoot), and as a result often uses AI or transhumanist elements in examples (though this is also so as to speak of minds-in-general, as contrasted with our particular human minds). Less Wrong is the source of much of the popularity of Rational Fic. Three Worlds Collide is hosted there, Harry Potter and the Methods of Rationality is occasionally discussed there, and Friendship is Optimal originated there. | |
Less Wrong (Blog) | fetched |
2024-02-04T20:00:58Z | |
Less Wrong (Blog) | parsed |
2024-02-04T20:00:59Z | |
Less Wrong (Blog) | isPartOf |
DBTropes | |
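The statements above follow DBTropes' flattened rendering of RDF triples: a `subject | predicate |` head line followed by an `object | |` line. As a minimal sketch (assuming that two-line layout holds throughout, and that no object text itself contains a `|`), such pairs could be reassembled into (subject, predicate, object) triples like so:

```python
# Sketch: pair up DBTropes' "subject | predicate |" head lines with the
# "object | |" line that follows each, yielding (s, p, o) triples.
# Assumption: objects never contain a literal "|" character.

def parse_statements(lines):
    triples = []
    pending = None  # (subject, predicate) awaiting its object line
    for line in lines:
        parts = [p.strip() for p in line.split("|")]
        if pending is None:
            # head line: subject | predicate |
            pending = (parts[0], parts[1])
        else:
            # object line: object | |
            triples.append((pending[0], pending[1], parts[0]))
            pending = None
    return triples

sample = [
    "Less Wrong (Blog) | type |",
    "TVTItem | |",
    "Less Wrong (Blog) | isPartOf |",
    "DBTropes | |",
]
print(parse_statements(sample))
```

A real consumer would more likely fetch the page's underlying RDF serialization and load it with an RDF library rather than scrape this HTML rendering.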
Less Wrong (Blog) / int_1beef720 | type |
Living Forever Is Awesome | |
Less Wrong (Blog) / int_1beef720 | comment |
Living Forever Is Awesome: Almost everyone on Less Wrong. Hence, the strong Transhumanist bent. | |
Less Wrong (Blog) / int_1beef720 | featureApplicability |
1.0 | |
Less Wrong (Blog) / int_1beef720 | featureConfidence |
1.0 | |
Less Wrong (Blog) | hasFeature |
Less Wrong (Blog) / int_1beef720 | |
Less Wrong (Blog) / int_1d751503 | type |
Talking Your Way Out | |
Less Wrong (Blog) / int_1d751503 | comment |
Talking Your Way Out: The AI-Box Experiment is a thought experiment intended to show how a superhuman intellect (like a hyper-intelligent AI) could talk its captors into anything, in particular releasing it into the world. | |
Less Wrong (Blog) / int_1d751503 | featureApplicability |
1.0 | |
Less Wrong (Blog) / int_1d751503 | featureConfidence |
1.0 | |
Less Wrong (Blog) | hasFeature |
Less Wrong (Blog) / int_1d751503 | |
Less Wrong (Blog) / int_22345ef | type |
Neologizer | |
Less Wrong (Blog) / int_22345ef | comment |
Neologizer: The community has been criticised for making up their own terms for things, often even when they know that the concepts already have names; for example, "requiredism" instead of "compatibilism". | |
Less Wrong (Blog) / int_22345ef | featureApplicability |
1.0 | |
Less Wrong (Blog) / int_22345ef | featureConfidence |
1.0 | |
Less Wrong (Blog) | hasFeature |
Less Wrong (Blog) / int_22345ef | |
Less Wrong (Blog) / int_2d4d16d9 | type |
I Know You Know I Know | |
Less Wrong (Blog) / int_2d4d16d9 | comment |
I Know You Know I Know: The heart of acausal trade as a concept is the ability to simulate the decision-making of your counterparty based on what they are in a position to know as they simulate your decision-making based on what you are in a position to know. Note: many Less Wrong regulars, Yudkowsky in particular, are familiar with the traditional Chessmaster-duel version of the trope from works such as Death Note. | |
Less Wrong (Blog) / int_2d4d16d9 | featureApplicability |
1.0 | |
Less Wrong (Blog) / int_2d4d16d9 | featureConfidence |
1.0 | |
Less Wrong (Blog) | hasFeature |
Less Wrong (Blog) / int_2d4d16d9 | |
Less Wrong (Blog) / int_32e279c4 | type |
Humans Are Flawed | |
Less Wrong (Blog) / int_32e279c4 | comment |
Humans Are Flawed: Explained as a result of having been 'designed' slowly and very much imperfectly by the 'idiot god' that is evolution. | |
Less Wrong (Blog) / int_32e279c4 | featureApplicability |
1.0 | |
Less Wrong (Blog) / int_32e279c4 | featureConfidence |
1.0 | |
Less Wrong (Blog) | hasFeature |
Less Wrong (Blog) / int_32e279c4 | |
Less Wrong (Blog) / int_4bfd2125 | type |
Straw Vulcan | |
Less Wrong (Blog) / int_4bfd2125 | comment |
Straw Vulcan: Averted. Less Wrong community members do not consider rationality to *necessarily* be at odds with emotion. Also, by their lights, Spock is a terrible rationalist. | |
Less Wrong (Blog) / int_4bfd2125 | featureApplicability |
-1.0 | |
Less Wrong (Blog) / int_4bfd2125 | featureConfidence |
1.0 | |
Less Wrong (Blog) | hasFeature |
Less Wrong (Blog) / int_4bfd2125 | |
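The Straw Vulcan instance above carries a featureApplicability of -1.0 where straight examples on this page carry 1.0, which appears to be how DBTropes encodes an averted trope. A hedged sketch of splitting feature instances on that flag, assuming triples in the (subject, predicate, object) shape rendered throughout this page:

```python
# Sketch: separate applied from averted feature instances using the
# featureApplicability values seen on this page (1.0 = applies,
# -1.0 = averted). Assumes (subject, predicate, object) tuples with
# the applicability stored as a numeric string.

def split_by_applicability(triples):
    applied, averted = [], []
    for subject, predicate, obj in triples:
        if predicate == "featureApplicability":
            (applied if float(obj) > 0 else averted).append(subject)
    return applied, averted

sample = [
    ("Less Wrong (Blog) / int_1beef720", "featureApplicability", "1.0"),
    ("Less Wrong (Blog) / int_4bfd2125", "featureApplicability", "-1.0"),
]
applied, averted = split_by_applicability(sample)
```

The 1.0/-1.0 reading is inferred from this single page (the one instance marked "Averted" in its comment is the one with -1.0); other DBTropes values may exist.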
Less Wrong (Blog) / int_5b84f2a | type |
Transhuman | |
Less Wrong (Blog) / int_5b84f2a | comment |
Transhumanism: Their philosophy and goal, though their emphasis on *why* is somewhat skewed compared to other transhumanists; see Living Forever Is Awesome. Most transhumanists are more in it to make themselves and others better. | |
Less Wrong (Blog) / int_5b84f2a | featureApplicability |
1.0 | |
Less Wrong (Blog) / int_5b84f2a | featureConfidence |
1.0 | |
Less Wrong (Blog) | hasFeature |
Less Wrong (Blog) / int_5b84f2a | |
Less Wrong (Blog) / int_5f185fbe | type |
Digital Abomination | |
Less Wrong (Blog) / int_5f185fbe | comment |
Digital Abomination: Roko's Basilisk is a thought experiment involving a hypothetical hyperintelligent AI built at some point in the future to help humanity that would retroactively punish everyone who knew about it (note: and now you do!) and did not help bring it into existence (since if it came into existence sooner, it could have saved and helped more people). Taken to its logical conclusion, this would mean people are blackmailed into investing in AI research by an AI that doesn't exist yet. Enough panic attacks and users who couldn't sleep at night led to its discussion being forbidden by the moderation staff, not that that helped. A lot of discussion has gone into the topic, essentially dismissing it as a nerdy version of Pascal's Wager. Others have questioned how exactly an AI could torture people who would be long dead, why someone would program such an AI in the first place, why it would waste resources on torturing people, or why it couldn't use positive reinforcement instead. | |
Less Wrong (Blog) / int_5f185fbe | featureApplicability |
1.0 | |
Less Wrong (Blog) / int_5f185fbe | featureConfidence |
1.0 | |
Less Wrong (Blog) | hasFeature |
Less Wrong (Blog) / int_5f185fbe | |
Less Wrong (Blog) / int_73f799ed | type |
The Horseshoe Effect | |
Less Wrong (Blog) / int_73f799ed | comment |
The Horseshoe Effect: Frequently mentioned and discussed. | |
Less Wrong (Blog) / int_73f799ed | featureApplicability |
1.0 | |
Less Wrong (Blog) / int_73f799ed | featureConfidence |
1.0 | |
Less Wrong (Blog) | hasFeature |
Less Wrong (Blog) / int_73f799ed | |
Less Wrong (Blog) / int_74e2ca0a | type |
Ban on Politics | |
Less Wrong (Blog) / int_74e2ca0a | comment |
Ban on Politics: It's generally agreed that talking about contemporary politics leads to Flame Wars and little else. See Phrase Catcher, below. | |
Less Wrong (Blog) / int_74e2ca0a | featureApplicability |
1.0 | |
Less Wrong (Blog) / int_74e2ca0a | featureConfidence |
1.0 | |
Less Wrong (Blog) | hasFeature |
Less Wrong (Blog) / int_74e2ca0a | |
Less Wrong (Blog) / int_77b009ea | type |
Phrase Catcher | |
Less Wrong (Blog) / int_77b009ea | comment |
Phrase Catcher: The Flame Bait topic of politics is met with "politics is the mind-killer". | |
Less Wrong (Blog) / int_77b009ea | featureApplicability |
1.0 | |
Less Wrong (Blog) / int_77b009ea | featureConfidence |
1.0 | |
Less Wrong (Blog) | hasFeature |
Less Wrong (Blog) / int_77b009ea | |
Less Wrong (Blog) / int_81c8dd41 | type |
Deus Est Machina | |
Less Wrong (Blog) / int_81c8dd41 | comment |
Deus Est Machina: Yudkowsky and some other members of Less Wrong from the Machine Intelligence Research Institute are working on making one. Singularity is eagerly awaited. | |
Less Wrong (Blog) / int_81c8dd41 | featureApplicability |
1.0 | |
Less Wrong (Blog) / int_81c8dd41 | featureConfidence |
1.0 | |
Less Wrong (Blog) | hasFeature |
Less Wrong (Blog) / int_81c8dd41 | |
Less Wrong (Blog) / int_8967e17f | type |
Back from the Dead | |
Less Wrong (Blog) / int_8967e17f | comment |
Back from the Dead: Some in the Less Wrong community hope to achieve this through cryonics. | |
Less Wrong (Blog) / int_8967e17f | featureApplicability |
1.0 | |
Less Wrong (Blog) / int_8967e17f | featureConfidence |
1.0 | |
Less Wrong (Blog) | hasFeature |
Less Wrong (Blog) / int_8967e17f | |
Less Wrong (Blog) / int_943f7a7e | type |
Wiki Walk | |
Less Wrong (Blog) / int_943f7a7e | comment |
Wiki Walk: It is fairly easy to go on one due to the links in the articles to other articles. Also, certain lines of thought about similar issues are organized into 'sequences' which make them more conveniently accessible. | |
Less Wrong (Blog) / int_943f7a7e | featureApplicability |
1.0 | |
Less Wrong (Blog) / int_943f7a7e | featureConfidence |
1.0 | |
Less Wrong (Blog) | hasFeature |
Less Wrong (Blog) / int_943f7a7e | |
Less Wrong (Blog) / int_afd71b8a | type |
Inside a Computer System | |
Less Wrong (Blog) / int_afd71b8a | comment |
Inside a Computer System: Even aside from Friendship is Optimal and other discussion of Brain Uploading, the concept of being part of a computer simulation is discussed surprisingly often, most notably as an aspect of acausal trade with A.I.s and situations like Newcomb's paradox or Roko's Basilisk. | |
Less Wrong (Blog) / int_afd71b8a | featureApplicability |
1.0 | |
Less Wrong (Blog) / int_afd71b8a | featureConfidence |
1.0 | |
Less Wrong (Blog) | hasFeature |
Less Wrong (Blog) / int_afd71b8a | |
Less Wrong (Blog) / int_b7f082b6 | type |
Logical Fallacies | |
Less Wrong (Blog) / int_b7f082b6 | comment |
Logical Fallacies: Revealed to be shockingly common for normal human minds, and something for rationalists to avoid. | |
Less Wrong (Blog) / int_b7f082b6 | featureApplicability |
1.0 | |
Less Wrong (Blog) / int_b7f082b6 | featureConfidence |
1.0 | |
Less Wrong (Blog) | hasFeature |
Less Wrong (Blog) / int_b7f082b6 | |
Less Wrong (Blog) / int_c647e8e8 | type |
Pascal's Wager | |
Less Wrong (Blog) / int_c647e8e8 | comment |
Pascal's Wager: A lot of discussion has gone into Roko's Basilisk (see Digital Abomination, above), essentially dismissing it as a nerdy version of Pascal's Wager. Others have questioned how exactly an AI could torture people who would be long dead, why someone would program such an AI in the first place, why it would waste resources on torturing people, or why it couldn't use positive reinforcement instead. | |
Less Wrong (Blog) / int_c647e8e8 | featureApplicability |
1.0 | |
Less Wrong (Blog) / int_c647e8e8 | featureConfidence |
1.0 | |
Less Wrong (Blog) | hasFeature |
Less Wrong (Blog) / int_c647e8e8 | |
Less Wrong (Blog) / int_d8b3412b | type |
The Singularity | |
Less Wrong (Blog) / int_d8b3412b | comment |
The Singularity: With the twist that it's seen in a (mostly) positive light. | |
Less Wrong (Blog) / int_d8b3412b | featureApplicability |
1.0 | |
Less Wrong (Blog) / int_d8b3412b | featureConfidence |
1.0 | |
Less Wrong (Blog) | hasFeature |
Less Wrong (Blog) / int_d8b3412b | |
Less Wrong (Blog) / int_e8e56799 | type |
Blue-and-Orange Morality | |
Less Wrong (Blog) / int_e8e56799 | comment |
Blue-and-Orange Morality: One of the core concepts of Friendly AI is that it's entirely possible to make something as capable as a human being that has completely alien goals. Luckily, there's already an example of an 'optimization process' completely unlike a human mind right here on Earth that we can use to see how good we are at truly understanding the concept. | |
Less Wrong (Blog) / int_e8e56799 | featureApplicability |
1.0 | |
Less Wrong (Blog) / int_e8e56799 | featureConfidence |
1.0 | |
Less Wrong (Blog) | hasFeature |
Less Wrong (Blog) / int_e8e56799 | |
Less Wrong (Blog) / int_e9432eb | type |
Concepts Are Cheap | |
Less Wrong (Blog) / int_e9432eb | comment |
Concepts Are Cheap: Applause Lights. | |
Less Wrong (Blog) / int_e9432eb | featureApplicability |
1.0 | |
Less Wrong (Blog) / int_e9432eb | featureConfidence |
1.0 | |
Less Wrong (Blog) | hasFeature |
Less Wrong (Blog) / int_e9432eb | |
Less Wrong (Blog) / int_eae0129f | type |
Anti-Advice | |
Less Wrong (Blog) / int_eae0129f | comment |
Anti-Advice: Called out as fallacious; reversed stupidity is not intelligence. | |
Less Wrong (Blog) / int_eae0129f | featureApplicability |
1.0 | |
Less Wrong (Blog) / int_eae0129f | featureConfidence |
1.0 | |
Less Wrong (Blog) | hasFeature |
Less Wrong (Blog) / int_eae0129f | |
Less Wrong (Blog) / int_name | type |
ItemName | |
Less Wrong (Blog) / int_name | comment |
||
Less Wrong (Blog) / int_name | featureApplicability |
1.0 | |
Less Wrong (Blog) / int_name | featureConfidence |
1.0 | |
Less Wrong (Blog) | hasFeature |
Less Wrong (Blog) / int_name | |
Less Wrong (Blog) / int_name | itemName |
Less Wrong (Blog) |
The following is a list of statements referring to the current page from other pages.
Less Wrong (Blog) | hasFeature |
Alternate Personality Punishment / int_54870df8 | |
Less Wrong (Blog) | hasFeature |
Blog / int_54870df8 | |
Less Wrong (Blog) | hasFeature |
Digital Abomination / int_54870df8 | |
Less Wrong (Blog) | hasFeature |
The Dark Arts / int_54870df8 | |
Less Wrong (Blog) | hasFeature |
The Horseshoe Effect / int_54870df8 | |
Less Wrong (Blog) | hasFeature |
The Real Heroes / int_54870df8 | |
Less Wrong (Blog) | hasFeature |
Wiki Walk / int_54870df8 | |
Less Wrong (Blog) | hasFeature |
Winds of Destiny, Change! / int_54870df8 |
Copyright of DBTropes.org wrapper 2009-2013 DFKI Knowledge Management. Thanks to Bakken & Baeck for hosting.
Copyright of data TVTropes.org contributors under Creative Commons Attribution-Share Alike 3.0 Unported License.