I read the question someone posted a while back on the "precautionary principle" here: http://objectivistanswers.com/questions/3880/what-is-the-objectivist-view-of-the-precautionary-principle .
There seemed to be a clear position that the precautionary principle is far too vague to be applied in any real sense and could even be abused by governments to stop innovation. The author Nassim Nicholas Taleb speaks about "risk of ruin" as a concept. This is elucidated at length here: https://medium.com/the-physics-arxiv-blog/genetically-modified-organisms-risk-global-ruin-says-black-swan-author-e8836fa7d78 but briefly the idea is that we need to be careful when playing with or testing technologies whose damage could spread globally and quickly (thus potentially impacting huge swathes of people).
My question is: how do Objectivists see "risk of ruin"? Is this a real concept, or do Objectivists believe that all potential harm from new technology/innovations is always "localized" rather than global, and that the whole concept is bunk? If so, how do you respond to Taleb's hypothesis on GMO foods? Should all scientific "playing around" be considered proper even if it could have massively damaging potential impacts? In this bucket, I'd place GMOs, atmospheric engineering and, perhaps, strong AI.
If the answer is "this is a wrong concept and all scientific exploration is fine," then my question is: what happens if you're wrong? Won't it be too late to "fix" things then?
asked Nov 15 '14 at 16:40
The title question asks whether "risk of ruin" is a "real concept" (by which I assume the questioner means an objectively valid concept). However, the explication of the question appears to be more about whether Objectivists would agree with the views of author Nassim Nicholas Taleb. These questions are different, and I encourage the questioner to update the question to make clear which one they are actually asking. I will address both questions below to the extent possible given the information in the question, but more detailed answers will require the questioner to provide additional information.
Is "Risk of Ruin" a Valid Concept?
To answer whether "risk of ruin" is a valid concept we would first need to know what the purported concept is supposed to mean (i.e., what units the purported concept allegedly integrates). The question does not provide a definition of the purported concept, and neither does the cited article. A Wikipedia article describes risk of ruin as follows:
I will assume for purposes of my answer that this definition corresponds to the concept the questioner is asking about. If the questioner is asking about a different concept, then I encourage the questioner to update the question to include a definition.
Given the definition in the Wikipedia article, it appears to me that "risk of ruin" could be a valid concept.
A concept is valid when it is formed on the basis of observations of reality and through an objective method that respects the nature of concepts and the requirements of human cognition. ("The requirements of cognition determine the objective criteria of conceptualization." The Analytic-Synthetic Dichotomy in Introduction to Objectivist Epistemology, 96.)

Remember that concepts are tools--mental tools for helping us think better. They help us primarily by condensing multiple discrete but essentially similar concretes into a single unitary mental entity, thereby improving the efficiency of our thinking and allowing us to transcend the natural limit on how many discrete concretes our mind can simultaneously consider (a limitation Objectivists refer to as "the crow epistemology"). See the Ayn Rand Lexicon entry "Unit Economy" for more on the purpose and primary benefit of concepts. A grouping of concretes that does not improve our thinking, and instead actually hinders it, is an invalid concept. For example, when the grouped concretes are not actually essentially similar to each other (considering the context, of course, which sets the requirements for what is "essential"), the purported concept is invalid. As another example, even some groupings of similar concretes may be invalid when the grouping is not necessary given human purposes. Leonard Peikoff explained:
The Analytic-Synthetic Dichotomy in Introduction to Objectivist Epistemology, 96.
In considering the concept of "risk of ruin" as presented by the Wikipedia article, it appears that the concretes integrated by the concept are reality-based. The concretes integrated by such a high-level concept are themselves concepts, such as the concept of risk of loss or harm and the concept of irrecoverable loss or harm. In addition, the concretes appear to be essentially similar--they share an essential similarity with regard to the magnitude of loss. In particular, the concept appears to be a narrowing of the wider concept of "risk of loss" to the subset of risks whose associated loss is either total or irrecoverable (i.e., the magnitude of the loss is sufficiently large that recovery from it is impossible). In addition, there appears to be a real-world need for this concept: judging risks is a significant part of human life, and distinguishing risks whose loss is total or irrecoverable from other, less severe risks is clearly useful. Accordingly, I think the concept is a valid one.
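To make the gambling/finance usage of the term concrete, here is a small simulation of my own (purely illustrative--not part of the Wikipedia article or of Taleb's argument): a bettor with a fixed bankroll makes repeated even-money bets, and the "risk of ruin" is the estimated probability that the bankroll ever hits zero, i.e., that the loss becomes total and irrecoverable.

```python
import random

def ruin_probability(bankroll: int, win_prob: float, bet: int = 1,
                     max_rounds: int = 5_000, trials: int = 500,
                     seed: int = 0) -> float:
    """Monte Carlo estimate of the chance that the bankroll ever reaches zero."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(trials):
        money = bankroll
        for _ in range(max_rounds):
            money += bet if rng.random() < win_prob else -bet
            if money <= 0:          # total, irrecoverable loss: "ruin"
                ruined += 1
                break
    return ruined / trials

# In a slightly unfavorable game, ruin is nearly certain in the long run;
# in a favorable one, it is rare but still possible.
print(ruin_probability(bankroll=10, win_prob=0.48))
print(ruin_probability(bankroll=10, win_prob=0.60))
```

The point of the sketch is only that "risk of ruin" picks out a distinct, measurable quantity--the probability of a loss from which no recovery is possible--which is exactly the kind of real-world referent a valid concept requires.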
Any Merit to Taleb's Views?
However, just because "risk of ruin" may be a valid concept, that does not mean that Taleb or others who use it are correct in their policy conclusions. Taleb is an environmentalist who is arguing against GMOs by appealing to the precautionary principle. The precautionary principle was nicely refuted in the answer here. However, Taleb attempts to defend it by arguing that it should be applied only when the potential harm is "global" as opposed to "local," and "scale independent" as opposed to "scale dependent." According to Taleb, the precautionary principle is unnecessary for local harm, but "When global harm is possible, an action must be avoided unless there is scientific near-certainty that it is safe." Taleb argues that the refutations of the precautionary principle apply only to local harm, and that for global harm things are different. The connection to "risk of ruin" is that Taleb sees "global harm" as corresponding to "ruin"--i.e., irrecoverable loss.
It is undoubtedly correct that, all other things being equal, global harm is worse than local harm, and it is eminently rational to take this distinction into account when making risk assessments. However, the distinction between local harm and global harm does not validate the precautionary principle. The precautionary principle is still based on the arbitrary, because it assumes without evidence that the harm will be irrecoverable merely because it is metaphysically possible for the harm to be irrecoverable. All Taleb's argument does is limit the application of this bad policy to fewer cases; it does not address the core problem with the precautionary principle.
Moreover, the linked article does not address some glaring problems with Taleb's argument. For example, how does one know whether the harm of an action will be local as opposed to global, or scale-independent as opposed to scale-dependent? The only way one could know this is by having some actual knowledge about the harm that could result from the action. But if one has actual knowledge about that harm, then one does not need the precautionary principle. The precautionary principle says that when the potential harm of an action is unknown, you should avoid the action--if you have actual knowledge about the harm, then it is not unknown (at least, not totally unknown). It appears to me that Taleb is simply trying to give a veneer of respectability to the obviously flawed precautionary principle.
Based on observations of how environmentalists have operated in the past, I presume that assertions that an action causes "global" harm would be made arbitrarily, without any actual knowledge of the potential harm aside from what "might be." Basically, if the action is one the enviros oppose, they would claim it has "global" harms and proceed to concoct tenuous arguments about what "could be" to support that claim. Indeed, the enviros have already indoctrinated an entire generation of school kids to swallow, lock, stock, and barrel, a ready-made "global harm" argument that can be applied anytime they need one--the argument that any disturbance to "the ecosystem" will have widespread, unknowable effects, not only on the local ecosystem but on the world. In essence, all the enviros would have to do is claim that an action disturbs some local ecosystem, and they thereby have an automatic tie-in to a "global" harm. Thus, Taleb's "limiting" of the application of the precautionary principle would likely not be very limiting at all.
Finally, it should go without saying, but I’ll say it anyway: the government should not initiate force against people based on arbitrary claims about what “could be”. Thus, laws banning products or actions based on the precautionary principle—whether taken in its original form or as modified by Taleb—would be completely wrong. Whether a made-up harm is local or global does not change the fact that it is made up. Governments can only properly address actual harms, not made up harms.
The precautionary principle relies on the idea from economics that an action should be taken only when the potential reward outweighs the risk of loss. To determine the risk of loss, you multiply the amount of harm resulting from the action by the probability of the harm occurring--so if taking action A has a 20% chance of resulting in a $1,000 loss (or a harm equivalent to $1,000), then the risk of loss is $200 (0.2 x $1,000). Under the theory, then, action A should be taken only if the potential reward is greater than $200. What the precautionary principle says, in effect, is that we should assume the harm is infinite (or some very large amount) when we do not actually know how much harm an action might cause. If the harm is infinite (or very large), then the risk becomes infinite (or very large) regardless of how small the probability of the harm occurring, and thus no amount of reward can justify taking the action. The assumption that the harm will be infinite (or very large), without any evidence suggesting as much, is clearly arbitrary. Taleb's modification of the precautionary principle is essentially that we should consider the harm to be infinite only when the potential harm is global and scale-independent, in addition to being unknown.
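The expected-loss arithmetic described above can be sketched in a few lines (my own illustration; the numbers are the ones from the example in the text, and the infinite-harm case is the precautionary assumption being criticized):

```python
# Expected loss = probability of the harm times its magnitude.
def expected_loss(probability: float, harm: float) -> float:
    return probability * harm

# The example from the text: a 20% chance of a $1,000 loss gives a
# risk of loss of $200.
print(expected_loss(0.2, 1_000))

# The precautionary move: when the harm is unknown, treat it as infinite.
# Then even a vanishingly small probability yields an infinite risk,
# so no finite reward can ever justify the action.
print(expected_loss(1e-9, float("inf")))
```

The second line of output is the whole trick: once "unknown" is silently converted into "infinite," the cost-benefit comparison is rigged before any evidence is consulted.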
This question is well answered by Eric, but a new comment on Eric's Answer highlights a specific aspect that evidently needs further emphasis. The comment states:
...if humans want to avoid extinction (make an infinite number of bets [?]), then humans can't accept any non-zero risks of extinction.
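For context, the arithmetic behind the comment's claim is straightforward compounding (a sketch of my own, not taken from the comment; the per-bet ruin probability of 0.001 is a made-up value): if every "bet" carries some fixed nonzero probability p of ruin, the chance of surviving n independent bets is (1 - p)^n, which falls toward zero as n grows without bound.

```python
# Illustrative: any fixed nonzero per-bet risk of ruin compounds toward
# certain ruin as the number of independent bets grows.
def survival_probability(p_ruin: float, n_bets: int) -> float:
    """Chance of never being ruined across n independent bets."""
    return (1.0 - p_ruin) ** n_bets

for n in (10, 1_000, 100_000):
    print(n, survival_probability(0.001, n))
# Survival is roughly 0.99 after 10 bets, roughly 0.37 after 1,000,
# and essentially zero after 100,000.
```

That arithmetic is not in dispute; what the reply that follows challenges is the premise that human life is such a series of fixed-risk "bets" in the first place.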
In the Objectivist view, life is not an endless series of "bets" in the face of pervasive, chronic uncertainty. Man possesses the faculty of reason, which gives him the cognitive capacity to understand the world he lives in and to initiate life-sustaining and life-enhancing actions accordingly. Chronic human inaction in the face of incomplete knowledge is a sure prescription for human death.
Although the comment (and the original question) refer to "extinction," there is a closely related, widespread view today concerning merely "great harm," not necessarily total extinction. Often there is great confusion over the issue of "harm" to what or whom. Note, however, that this view cannot escape the premise that man possesses the faculty of reason, which potentially gives him great power over his environment. He can learn (and has learned) how to use his power rationally, to identify objectively any cases of recklessness leading to demonstrable danger to others, and to intervene by means of objective law (and proper application of individual rights) when a danger to the lives of others truly warrants it. But again, chronic inaction and suppression of individual initiative in the face of possibly incomplete knowledge is a sure prescription for pervasive human suffering and death.
Crafting a question in the language of probability and gambling may be an attempt to avoid philosophical presuppositions in attacking reason, science, technology, and industry. But the metaphysical view of life as an endless series of "bets" in the face of pervasive, chronic uncertainty is exactly that -- a metaphysical view. It is an expression of the long, deeply rooted philosophical tradition of mysticism, altruism, collectivism, and statism -- as against the historically newer but also long-standing tradition of reason, egoism, individualism, and capitalism. It is an expression specifically of "Kant's gimmick," i.e., "attempting to negate reason by means of reason":
A man's protestations of loyalty to reason are meaningless as such: "reason" is not an axiomatic, but a complex, derivative concept—and, particularly since Kant, the philosophical technique of concept stealing, of attempting to negate reason by means of reason, has become a general bromide, a gimmick worn transparently thin. Do you want to assess the rationality of a person, a theory or a philosophical system? Do not inquire about his or its stand on the validity of reason. Look for the stand on axiomatic concepts. It will tell the whole story.
(Quoted from ITOE2, end of Chap. 6, "Axiomatic Concepts." Refer also to the topic of "Kant, Immanuel" in The Ayn Rand Lexicon.)
Update: Evidence and Proof
A new comment by the questioner begins:
I think the crux of the issue is "demonstrability".
No. The essence of the issue is proof, not concrete "demonstration." Man's rational faculty does not compel him to wait for a concrete "demonstration" of harm before he can rationally conclude, from evidence and reasoning, that there is a real danger. Yet a concrete-bound epistemology is, indeed, what Taleb proposes. It is a form of empiricism: if man cannot know anything without direct, concrete observation, then he must naturally be superstitious about all manner of "unknown" potentialities, including GMOs.
Incidentally, the cited Internet article says that Taleb does not classify nuclear energy as posing a risk of global catastrophe, unlike GMOs (in Taleb's view). Yet GMOs and non-GMOs are perfectly capable of coexisting, giving buyers in a free market the opportunity to choose one or the other as they see fit--with the added potential for non-GMOs and "organics" to survive any calamity (of the kind Taleb suggests) that might befall the GMOs.
Objectivist philosophy teaches that the most crucial political priority is to preserve individual freedom--removing coercion from man's life--not to construct an FDA or other governmental agency that makes decisions on individuals' behalf by banning substances deemed harmful, leaving individuals no opportunity to exercise their own independent rational judgment. Objectivism denies and rejects the collectivist premise that "society" and all its members are to be ruled coercively by government. Taleb, in contrast, argues entirely from the "social" premise rather than from the independent rational nature of individuals. The implicit, unchallenged premise is that individuals "belong" to "society," and that "society" is supreme.