When a chain of logic leads to an extreme scenario, it is referred to as a slippery slope: like slowly tipping an object over the edge of a plateau, after which it slides to the bottom under its own momentum. Frequently the chain of logic is tenuous and the outcome undesirable, creating fear of the initial step or steps such that it is, or they are, not taken. When the chain of logic is not tenuous, the slippery slope is a useful tool, but one that is rarely used.
When used erroneously, slippery slope logic becomes a fallacy simply because the chain of logic from the first action to the last does not hold up. When used correctly, a slippery slope argument can demonstrate reasonably dire consequences, but only a few steps along. If the number of steps is too great (perhaps five or more), the interactions between events become too complex for the outcome to be predicted reliably. This is generally the error introduced by the slippery slope fallacy.
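The way reliability drains out of a long chain can be sketched with some simple arithmetic. The 80% figure below is an invented illustration, and the calculation assumes each step holds or fails independently, which is itself a simplification:

```python
# Illustrative only: if each step in a chain of reasoning holds with
# probability p, and the steps are independent (a simplifying assumption),
# the whole chain holds with probability p ** n.

def chain_probability(p_per_step: float, n_steps: int) -> float:
    """Probability that every step in an n-step chain holds."""
    return p_per_step ** n_steps

for n in (1, 2, 5, 10):
    print(f"{n} steps at 80% each: chain holds {chain_probability(0.8, n):.0%} of the time")
```

Even with each individual step looking quite plausible, five steps at 80% each leaves the full chain holding only about a third of the time, and ten steps barely one time in ten.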
Additionally, the outcome is generally exaggerated and dire. This evokes fear or repulsion in the recipient of the argument, distorting their value system when it comes to evaluating the validity of each step in the slippery slope.
There are two main forms of the slippery slope fallacy: implied and explicit. In the implied form, the steps between the initial event and the dire outcome are implied rather than explicitly stated. For example: drinking alcohol leads you to become an alcoholic, and therefore you will kill yourself and your friends in a horrible drink-driving accident. In some instances this may even turn out to be correct, yet in the vast majority of situations, not only do most people not become alcoholics (in the medical sense), but relatively few drunk people are involved in horrific car accidents, let alone ones that kill their friends. The dire consequences evoke a fear of the outcome, prompting you to overvalue the tenuous chain of logic leading to it.
The explicit version of this fallacy lists all of the necessary steps in the chain. Each step can seem feasible, and even the outcome can seem feasible, but the slippery slope mechanism overstates the likelihood of that outcome. In a mechanistic universe, a long chain of actions and reactions can indeed lead to a predictable outcome (Laplace's demon), but in a chaotic, probability-driven universe like ours, each successive step becomes less and less predictable. A highly predictable mechanistic model can be demonstrated with a gravity chute releasing a billiard ball at a set angle and speed onto a billiard table, such that the ball bounces off three walls and goes into a pocket. It works every time. Yet if we arrange for 20 or 30 bounces, even this model begins to break down. More realistically, crumple a can and place it on a table. Slowly push that can off the table until it falls, then mark the place where it ends up (stops moving). Put the can back on the table and slowly push it off again. It won't land where the last one did. Heck, it won't even tip off the table the same way.
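The breakdown over many bounces can be sketched numerically. The toy model below traces an idealised point ball in a 1x1 table using the standard "unfolding" trick (reflections become straight-line travel folded back into the box); the starting point, angles and distances are invented for illustration:

```python
import math

def fold(u: float) -> float:
    """Map an 'unfolded' coordinate back into the 1x1 table,
    as if the path had reflected off the walls."""
    u %= 2.0
    return u if u <= 1.0 else 2.0 - u

def position(angle: float, distance: float):
    """Ball position after rolling `distance` from (0.2, 0.2) at `angle`,
    bouncing elastically off the walls of a 1x1 table."""
    return (fold(0.2 + distance * math.cos(angle)),
            fold(0.2 + distance * math.sin(angle)))

# Two launch angles that differ by just a tenth of a degree.
a1 = math.radians(40.0)
a2 = math.radians(40.1)
for distance in (3, 30, 300):  # roughly a few, dozens, and hundreds of bounces
    x1, y1 = position(a1, distance)
    x2, y2 = position(a2, distance)
    gap = math.hypot(x1 - x2, y1 - y2)
    print(f"after distance {distance}: the two balls are {gap:.4f} table-widths apart")
```

After three bounces the two nearly identical launches are still practically on top of each other, so the three-wall trick shot works every time. Hundreds of bounces later, the same tiny aiming difference has grown to a sizeable fraction of the table, and the "same" shot no longer ends in the same pocket.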
When dealing with life forms, the can example above is far closer to reality than the billiard ball example. To translate choice into the billiard ball example, each step (each bounce of the ball on the table) now has a number of angles it can bounce off into, depending on the chooser, and each choice changes every subsequent bounce, such that the final pocket (if any) the ball ends up in differs from one run to the next.
If chains of events are so tenuous, how does science allow us to make predictions at all? Shouldn’t we just give up? Doesn’t that just debunk the whole point of everything?
No. Here is why: the study of nature is far more probabilistic than not. In general, things happen roughly the same way, yet each event is different. Consider our can example above. The exact tipping point depends on a number of factors: the orientation of the crumpled can, the speed at which it is pushed, wind currents, the temperature of the can, the surface, the air and so on. Once it tips, we then need to contend with the air currents on the way down, the rotational spin imparted on the crumpled can as it falls off the edge and so on. Eventually it will strike a spot on the ground and bounce a few times. Where it strikes the ground, how it strikes the ground, the velocity of the can, the angular momentum of the can and the type of material of the ground will all play a part in defining where and how the can will bounce. Each successive bounce involves a similar set of calculations. If we know enough, we can actually mostly work out where the can will land, much like the billiard balls. Yet each push off the edge will be a new problem, a unique problem. The repeated problem, though, has predictable components: the can will slide on the table, it will fall, it will land and bounce, and it will end up somewhere.
So let's do this experiment 100 times. We will discover that the final locations of the crumpled can cluster densely close to the average landing zone, petering out to sparse landings far away from it (by density, we mean the probability of landing there). There will be an outer boundary that the crumpled can will not go past, sparsely populated with landings, while the closer-in section will be more densely populated. The initial impact zone will be a smaller version of the final landing zone, and the fall location at the edge of the table smaller again. Each step from the push to the final resting location becomes less predictable, but still retains a level of predictability. On average, the location can be predicted and the likely outcome known. If we changed the table and floor to a set of stairs, the complexity would go up, but we could still work out the area in which the crumpled can will rest, rather than its specific location this time. That area is highly predictable, even if the stairs are 100 steps long. Often in science, the area of final rest is the outcome we are after, rather than the specific location.
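We can run a toy version of those 100 pushes in code. Every number below (the spread of the first impact, the scatter per bounce, the number of bounces) is invented purely to illustrate the shape of the result, not measured from a real can:

```python
import random
import statistics

random.seed(1)  # fixed seed so the illustration is reproducible

def final_resting_spot() -> float:
    """One push of the can: a first impact plus a handful of erratic
    bounces, each adding its own scatter. All figures are invented."""
    spot = random.gauss(0.0, 5.0)          # where it first strikes the floor (cm)
    for _ in range(random.randint(2, 5)):  # each bounce scatters it further
        spot += random.gauss(0.0, 8.0)
    return spot

spots = [final_resting_spot() for _ in range(100)]
mean = statistics.mean(spots)
near = sum(1 for s in spots if abs(s - mean) < 15)
far = sum(1 for s in spots if abs(s - mean) > 30)
print(f"within 15 cm of the average resting spot: {near} of 100 landings")
print(f"more than 30 cm out:                      {far} of 100 landings")
```

No single push is predictable, yet the dense cluster near the average and the sparse outer fringe show up every time the experiment is rerun: exactly the "area rather than specific location" kind of prediction described above.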
If we introduce people walking up the stairs, we can’t easily predict if a person will step on the can, kick it, pick it up and so on. The variables have made it too hard to predict.
Using the knowledge of Newtonian physics (which is superseded by Einstein's relativity, but is still pretty much good enough for nearby astronomy), satellites are launched from Earth, spun around gravity wells (such as moons and planets) and whizzed off with high precision all over the solar system. The calculations for this are on the one hand monstrous, yet on the other hand elegant. Most probes also have a fudge factor built into them: a slight miss can be corrected for and realigned en route.
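The value of that en-route realignment can be sketched with a toy straight-line model. The model itself, the trip distance and both error figures are invented for illustration; real trajectory design uses full orbital mechanics:

```python
import math

def lateral_miss(aim_error_deg: float, distance: float) -> float:
    """Sideways drift after travelling `distance` with a small aiming
    error (a toy straight-line model, not real orbital mechanics)."""
    return distance * math.tan(math.radians(aim_error_deg))

TRIP = 400_000_000.0  # an illustrative interplanetary leg, in km

# Launch with a hundredth of a degree of aiming error and never correct...
no_fix = lateral_miss(0.01, TRIP)

# ...versus re-aiming halfway, after which only the (much smaller)
# residual error of the correction burn accumulates.
with_fix = lateral_miss(0.001, TRIP / 2)

print(f"no correction:   off target by {no_fix:,.0f} km")
print(f"with correction: off target by {with_fix:,.0f} km")
```

A tiny aiming error compounds into an enormous miss over a long trip, but a single mid-course check shrinks it dramatically, which is why the fudge factor is built in.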
Your computer's central processing unit (CPU) has error correction built in, so when the high-speed electrons misbehave and do some of that quantum stuff, the non-average result is detected and adjusted. This happens remarkably fast, all things considered, allowing you to read this from the web. If these errors weren't corrected, we would be back in the mechanical calculator days. We only need this error correction because of the precision we demand from our electronics; simpler electronics don't need it, because they rely less on the accuracy of the result.
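The principle can be shown with the simplest error-correcting scheme there is, a majority vote over redundant copies (triple modular redundancy). Real hardware uses more compact codes such as Hamming/SECDED rather than this scheme, so treat it as an illustration of the idea, not of an actual CPU:

```python
import random

def majority(triples):
    """Triple modular redundancy: each bit is stored three times and
    a majority vote recovers it, so any single flipped copy is fixed."""
    return [1 if a + b + c >= 2 else 0 for a, b, c in triples]

data = [1, 0, 1, 1, 0, 0, 1, 0]
stored = [(b, b, b) for b in data]  # three redundant copies of every bit

# Simulate one "misbehaving electron": flip one copy of one random bit.
random.seed(0)
i = random.randrange(len(stored))
a, b, c = stored[i]
stored[i] = (a ^ 1, b, c)  # corrupt the first copy of bit i

print(majority(stored) == data)  # the vote repairs the flip -> True
```

The non-average result (the flipped copy) is simply outvoted by the two copies that behaved, which is the same detect-and-adjust idea at work in real error-corrected hardware.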
One may think, at this point, that the slippery slope seems kind of reasonable. If the arguer took enough factors into consideration when predicting the result, it would be. The problem is that the arguer generally hasn't, so they are making faulty, value-laden judgements about each next step. It is like predicting that the crumpled can will land at one particular extreme position. It may do so once, but the odds of a repeat are very low, yet the arguer is suggesting that this will happen every time, or at least that the consequences are so dire that we can't risk pushing the can off the edge of the table. That is the error: the odds are overstated because the feared consequences are so extreme.