Friday, January 28, 2011

Hating The Andromeda Galaxy

I'm discussing (well, a monologue so far) the nature of politics at Libertarians Are Not Anarchists!. Commenter Peter Jackson thinks he has it figured out.

There are three types of politics: the politics of social justice, the politics of public order/social virtue, and the politics of individual freedom.
But he misses something deeper.

There is another unspoken force in politics:

It does not actually want to solve problem xxxx. It wants an outlet for feelings of moral superiority. That is, someone to punish. A scapegoat.

Here is another take on the same idea:

“Distrust anyone in whom the desire to punish is powerful.” - Friedrich Nietzsche

Such folks are immune to reason. Why? Because what they do is not based on reason. It is based on a need for what George Orwell called the “Two Minutes Hate”. I acknowledge that it is a powerful force in human nature. So I propose hating the Andromeda Galaxy. It should be safe from our hate for at least another few decades. Maybe longer.

H/T Instapundit

Cross Posted at Classical Values


tomcpp said...

Since reason doesn't actually work, none of these policies are any more reasonable than any of the others.

The more robots you try to program to be reasonable, the more failures of reasoning AI you see: the disastrous but perfectly reasoned decision that leads to broken bots; the equally disastrous bot falling to its demise from what should have been a perfectly tolerable 1m drop, but isn't, thanks to a thoroughly reasoned-about feedback loop; or, even more humiliating in the evaluation, the robot's inaction.

Reason doesn't work. I've tried for years and utterly failed to build a reasoning robot (though in the university's eyes it "succeeded": we've proven it can't be done) ...

Even ridiculously simplified versions of rationality lead nowhere.

Proof aside, CS researchers around the world have tried the same thing for decades.

In order to book even minor successes, we've had to lower the program's goal to "attempting to predict the future by generalizing over what we've seen in the past". This is useless as a form of "rationality" in discussion, since it means that no two humans, or robots, have the same idea of rationality.
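To illustrate the point (this is a minimal hypothetical sketch, not tomcpp's actual code): if "rationality" is reduced to generalizing over past observations, then two agents with different pasts will reach different "rational" conclusions from the same method.

```python
def predict(history):
    """Predict the next value by generalizing over the past:
    here, simply the mean of everything observed so far."""
    return sum(history) / len(history)

# Two agents running the identical "rational" procedure,
# but with different histories, disagree about the future.
agent_a = [1.0, 1.0, 1.0]  # has only ever seen 1.0
agent_b = [1.0, 2.0, 3.0]  # has seen a rising trend

print(predict(agent_a))  # 1.0
print(predict(agent_b))  # 2.0
```

The disagreement is baked in: the procedure is shared, the data is not, so "what is rational" differs per agent.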

So humans are not reasonable. If we were, we'd have killed ourselves to the last man, woman, and child long ago.

Instead, we do what historically worked ... and guess what ?

Social justice politics works beautifully - if you want to become a dictator (though you're more likely to end up one of his victims).

The politics of social order/public virtue has produced monumental successes and equally huge failures.

Unfortunately, the politics of individual liberty fares no differently from social order/public virtue politics: huge successes, and failures where one's bafflement at the stupidity involved is matched only by amazement at the scale of the failure.

So, imho, America would probably be wise to respect the deal that made it what it is: public virtue, private freedom.

Just a thought.

M. Simon said...

Reason doesn't work?

Ah. I get it. Whatever Allah wills.

Reason and food are all that separate us from barbarians. And I have noticed a strange connection: the more reason, the more food.

BTW, send me the plans and code for your reasoning robot. Maybe I can find the bug(s).