Deletion policy


This is a summary of which LessWrong posts or other content may be deleted by administrators and moderators, with some explanation of why.

Background:

Although in many cases the restrictions are not obvious, in almost any given forum speech is not free. An academic conference may pride itself on vigorously protecting the right of an undergraduate to ask difficult questions during a professor's Q&A, and yet even this act potentially exposes the undergrad to loss of status if the question is stupid. More importantly, the conference probably requires a fee to attend - the vaunted 'free speech' doesn't extend to allowing mere mortals to wander in off the street and talk. And anyone who started talking during someone else's presentation, or swearing into the microphone during Q&A, might be quickly removed - the apparently free plains are really carefully walled gardens. Even 4chan, which in many ways provides a much better picture of what real 'free speech' would look like, still has administrators who delete spam.

In the era of the Internet, your attention is a commodity; everyone wants it, and some people are willing to outright steal it. Everyone might enjoy it, perhaps, if they themselves could speak with complete freedom in every forum - the way you would like to be able to go on national television any time you wanted. In this sense, anyone can be expected to object when they want to say something to an audience and are somehow prevented from doing so. But it wouldn't actually be possible to run radio, television, or the Internet, and your cellphone would be inundated with spam calls, if 'free speech' meant everyone could talk in any forum at any time.

The real danger is when an entity with sufficient power steps in to prevent a sentence from being spoken anywhere, by anyone, to anyone. But Internet message boards are not all-powerful entities, and alternatives exist everywhere (like, you know, 4chan); and the fact that not everyone is allowed to say anything at any time, even if those individuals themselves might prefer to be able to say anything at any time in any communications medium anywhere, does not constitute a terrible threat to civil liberties.

Most of the burden of moderation on LessWrong is carried by upvotes and downvotes - comments the community doesn't think were a good use of its time will be downvoted. We encourage you to downvote any comment that you'd rather not see more of - please don't feel that this requires being able to give an elaborate justification.

Even so, administrators and moderators may, in certain cases, entirely delete comments.

Specific reasons for deletion:

Prolific trolls

Prolific trolls can post enough comments that users grow tired of downvoting them. Once a commenter has been downvoted sufficiently strongly sufficiently many times, an administrator or moderator may go through and delete that commenter's other comments at their discretion, whether or not those comments have also been downvoted.

Trollfeeding

Trolls thrive on attention. A few users who are unable to restrain themselves from feeding trolls represent a threat to the community, because they provide troll-food and make LW a more attractive place for trolls. People who reply to trolls, even if their comments are amazingly intelligent, even if their comments are upvoted, may find their comments deleted. We strongly recommend downvoting all replies to troll comments. A 5-point karma toll applies if you try to reply anywhere inside a thread beneath a sufficiently downvoted comment. Sufficiently downvoted comments are not expanded by default. Replies to descendants of sufficiently downvoted comments do not appear in Recent Comments. (This used to be a large problem. Can you tell?)

Spam

Spam will be deleted outright, to prevent it from serving any useful purpose for spammers.


The following reasons for deletion are somewhat more controversial. Please keep in mind that this board is owned and run by organizations which consider it to be their property, and that our trying to explain what may get deleted and why does not mean that these decisions are subject to public vote.

Posts rewarding external malefactors with attention

It's worth noting that the entire phenomenon of terrorism exists simply because the media rewards terrorists with attention - a larger-scale analogue of feeding trolls. If we lived in a world where nobody knew anything about the group that felled the World Trade Center - not who they were, not their names to be made famous, not their goals to receive publicity - then the World Trade Center would not have been felled in the first place. Aliens watching the whole affair might have justly concluded that Earth's public media were the ones responsible for most terrorism, since they are the key component in a symbiotic relationship whereby terrorists commit deeds that sell lots of newspapers, and are rewarded with publicity for their causes. Of course no actual conspiracy is necessary for this situation to obtain - it is merely a Nash equilibrium of the private incentives for media and for terrorists.

The point remains that rational agents would not reward people who do or say stupid things with attention, publicity, or, as the case may be, links which contribute LessWrong's Google PageRank to their websites.

Hypothetical violence against identifiable targets

Posts or comments purporting to discuss 'hypothetical' violence against identifiable real people or groups, or to 'ask' whether that violence is a good idea, may be deleted by administrators or moderators.

This means that if you write an elaborate post exploring the potential great reasons to kill Brittany Fleegelburger or burglarize the homes of people with purple eye color (note that this page is being written so as not to identify any individuals as targets of violence), the post or comment, along with any replies to it that contain sufficient information to identify such an individual, may be deleted without ado.

In general, grownups in real life tend to walk through a lot of other available alternatives before resorting to violence. To paraphrase Isaac Asimov, having your discussion jump straight to violence as a solution to any given problem is a strong sign of incompetence - a form of juvenile locker-room talk. (We emphasize to the casual reader that this situation has arisen very rarely over the multi-year lifetime of a message board with thousands of commenters; by far the vast majority of LW commenters appreciate this point.) Pleading, "But how can we gather information if we can't talk about it?" is not an excuse, both for the reason above and because talking about violence against identifiable people can cause them to feel justly threatened, and to justly complain. Would you like it if there were lots of 'hypothetical' scenarios being talked about in which it was a great idea to shoot you? No, because you wouldn't want that idea on the minds of thousands of people, at least one of whom might be crazy. This is why - in addition to obvious legal and public-image issues - we think it is actually harmful to jump straight to violence as a solution, and then talk about it. People who really cared about an issue would realize the obvious knock-on negative effects of violence itself, and the negative effects on the image of that issue of discussion that dwells on violence, and would talk about the many other alternatives instead.

This similarly applies to someone who, in reply to a careful argument that e.g. a substance called congohelium is harming the environment, responds, "Ah, but if you go around saying that congohelium harms the environment, then someone may assassinate congohelium manufacturers! How terrible!" This is a form of the logical fallacy of appeal to consequences - someone trying to make the thesis "Congohelium harms the environment" look bad by hypothesizing that if this idea is true and believed, some crazy person might go assassinate congohelium manufacturers, which is icky violence. But in this case, the only person who raised and publicized the idea of violence against congohelium manufacturers was the person trying to make the idea look bad - in fact, they're demonstrating that they're willing to throw congohelium manufacturers under a truck for the sake of winning an online argument. So if you're the first one to allege that some idea you don't like implies violence (as the first and only alternative) toward identifiable targets, do not be too surprised if that comment itself is deleted.

Information hazards

When a certain episode of Pokemon accidentally contained a pattern of red and blue flashes capable of inducing epileptic seizures, 618 children were taken to hospitals. (Afterward, several news programs showed the seizure-inducing pattern in their reports, and more children went to the hospital. Remember, it's not idiocy, it's incentives!) Another example of an information hazard: if you look at the Wikipedia entries of people rumored to be major players in the Russian mafia, you will see no mention of their putative criminal activities. This is because, among other reasons, the people who run Wikipedia do not want to actually really get assassinated. We would delete such information if posted, even if it were both true and important, because it is not (for unusual reasons, not mental health reasons) physically healthy to know it. Actual information hazards are pretty darned hard to invent, but rest assured that if you post or comment something that could only harm the reader in some odd fashion, we reserve the right to delete it. (Warning! The previous link is to TV Tropes, and if you don't want to lose an hour, you shouldn't click on it!)

Toxic mindwaste

In the same way that deliberately building a fission pile can produce new harmful isotopes not usually present in the biosphere, and in the same way that sufficiently advanced computer technology can allow for more addictive wastes of time than anything which existed in the 14th century, so too a forum full of people trying to produce amazing new kinds of rationality can produce abnormally awful and/or addictive ideas - toxic mindwaste.

The following topics are permanently and forever banned on LessWrong:

  • Emotionally charged, concretely utopian or dystopian predictions about life after a 'technological singularity'. (Exception may be made for sufficiently well-written fiction.)
  • Arguments about how an indirectly-normative or otherwise supermoral intelligence would agree with your personal morality or political beliefs. (This says nothing beyond the fact that you think you're right.)
  • Trying to apply timeless decision theory to negotiations with entities beyond present-day Earth (this topic bears a dreadful fascination for some people but is more or less completely useless, and some of what's been said slides over into reinventing theology, poorly).

More ordinarily mind-killing topics like discussions of contemporary politics (e.g. Republicans vs. Democrats in the US) may be suppressed at will if they seem liable to degenerate into standard internet flamewars. Discussion of pickup artistry and some gender arguments are on dangerous ground - a passing mention probably won't be deleted, but if it turns into a whole thread, the whole thread may well be razed. Ordinary downvoting usually takes care of this sort of thing, but the mods reserve the right to step in if it doesn't.

Some discussions of criminal activities

We live in a society with many stupid laws (such as the US's ban on marijuana) which are not actually enforced, or laws which are selectively enforced against racial minorities only (such as the US's ban on marijuana). Thus there is not a blanket injunction against discussing things on LessWrong that happen to be illegal in one or more jurisdictions. On the other hand, some things are illegal for reasonably good reasons. And on yet another hand, if there actually is good reason to take the very serious step of breaking a law, then by publicly discussing your lawbreaking on the Internet, you fail forever as a conspirator. In general, apply the following twin test before discussing usually-illegal activities on LW: "Is it true that whether this was a good idea or a bad idea, it would in either case probably be a bad idea to discuss it on LessWrong?" In the case where something is actually a bad idea, discussing it may waste people's time, cause unfavorable publicity, give a tiny fraction of the population the impetus to do something stupid, etcetera; and in the case where something is a good idea, discussing your intended crime on the Internet is still stupid. We do not attach a particularly high value to the wonderful advice you can get by asking the Internet before breaking a law. Please think twice about both sides of this question before discussing something illegal.

Topics we have asked people to stop discussing

LessWrong is focused on rationality and will remain focused on rationality. There's a good deal of side conversation which goes on and this is usually harmless. Nonetheless, if we ask people to stop discussing some other topic instead of rationality, and they go on discussing it anyway, we may enforce this by deleting posts, comments, or comment trees.

Harassment of individual users

If we determine that you're e.g. following a particular user around and leaving insulting comments directed at them, we reserve the right to delete those comments. (This has happened extremely rarely.)

Revelation of users' personal information

If you somehow find out that LordVoldemort's real name is Tom Marvolo Riddle and that he lives in London, well, we can't stop you from posting that information anywhere, but we can stop you from posting it here. Be polite - if someone wants to be pseudonymous, let them be. (This has also happened extremely rarely.)


None of the above guidelines are binding on moderators or administrators. We aspire to have large amounts of common sense, and are not forced by this wiki page to delete anything.