
I read the question someone posted a while back on the "precautionary principle" here: http://objectivistanswers.com/questions/3880/what-is-the-objectivist-view-of-the-precautionary-principle .

There seemed to be a clear position that the precautionary principle is far too vague to be used in any real sense and could even be abused by governments to stop innovation. The author Nassim Nicholas Taleb speaks about "risk of ruin" as a concept. It is elucidated at length here: https://medium.com/the-physics-arxiv-blog/genetically-modified-organisms-risk-global-ruin-says-black-swan-author-e8836fa7d78 but briefly, the idea is that we need to be careful when playing with/testing technologies whose damage could spread globally and quickly (thus potentially impacting huge swathes of people).

My question is: how do Objectivists see "risk of ruin"? Is this a real concept, or do Objectivists believe that all potential harm from new technology/innovations is always "localized" rather than global and that the whole concept is bunk? If so, how do you respond to Taleb's hypothesis on GMO foods? Should all scientific "playing around" be considered proper even if it could have massively damaging potential impacts? In this bucket, I'd place GMOs, atmospheric engineering, and perhaps strong AI.

If the answer is "this is a wrong concept and all scientific exploration is fine", then my question is: what happens if you're wrong? Won't it be too late to "fix" things then?

asked Nov 15 '14 at 16:40



1) Risk of ruin is indeed a real concept, but Taleb's use of it is highly nonstandard.

2) The question of the risks of genetically modified organisms is a scientific one. The current scientific consensus is that there is nothing inherently unsafe about genetic modification, and depending on your definition humans have been practicing it for decades (direct manipulation of genes), centuries (cross-breeding), or even millennia (selective breeding).

3) The article you linked to doesn't define the concept of "risk of ruin", nor does it present any evidence regarding risks of GMOs.

(Nov 16 '14 at 14:45) anthony

4) This whole thing appears to be another attempt by the environmentalism movement to send mankind back to the stone ages.

(Nov 16 '14 at 14:53) anthony

That's exactly what worries me: on one hand I can't figure out what the risk he speaks of really is versus a "premonition" or "mathematical possibility" that the world's food supply could be extinguished. On the other hand, I certainly don't want to support something that could actually ruin things for all humans on Earth. It's puzzling, thus my query.

(Nov 16 '14 at 14:54) Danneskjold_repo

If the answer is "this is a wrong concept and all scientific exploration is fine", then my question is: what happens if you're wrong? Won't it be too late to "fix" things then?

I was going to link you to an objectivistanswers.com discussion of Pascal's Wager, but apparently we haven't yet had one.

(Nov 16 '14 at 14:56) anthony

Is that the "believe in God" wager? To me that's pretty abstract and limited (even in the case God exists) to your own "soul". This one seems more concrete and in the realm of scientific reason. Theoretically, for example, you could bioengineer a virus that wipes out 10,000,000 people.

(Nov 16 '14 at 15:35) Danneskjold_repo

Is that the "believe in God" wager?


This one seems more concrete and in the realm of scientific reason.

As far as I can tell, Nassim Nicholas Taleb isn't even a scientist.

Theoretically for example you could bioengineer a virus that wipes out 10,000,000 people.

Yeah, but not by genetically modifying a tomato to make it stay fresh longer.

(Nov 16 '14 at 16:05) anthony

The title question asks whether "risk of ruin" is a "real concept" (by which I assume the questioner means an objectively valid concept). However, the explication of the question appears to be more about whether objectivists would agree with the views of author Nassim Nicholas Taleb. These questions are different, and I encourage the questioner to update the question to make it clear which one they are actually asking about. I will address both questions below to the extent possible given the information in the question, but more detailed answers will require the questioner to provide additional information.

Is "Risk of Ruin" a Valid Concept?

To answer whether "risk of ruin" is a valid concept we would first need to know what the purported concept is supposed to mean (i.e., what units the purported concept allegedly integrates). The question does not provide a definition of the purported concept, and neither does the cited article. A Wikipedia article describes risk of ruin as follows:

Risk of ruin is a concept in gambling, insurance, and finance relating to the likelihood of losing all one's capital or impacting one's bankroll to the point that it cannot be recovered. For instance, if someone bets all their money on a simple coin toss, the risk of ruin is 50%.

I will assume for purposes of my answer that this definition corresponds to the concept that the questioner is asking about. If questioner is asking about a different concept, then I encourage the questioner to update the question to include a definition.

Given the definition in the Wikipedia article, it appears to me that "risk of ruin" could be a valid concept.

A concept is valid when it is formed based on observations of reality and through an objective method that respects the nature of concepts and the requirements of human cognition. ("The requirements of cognition determine the objective criteria of conceptualization." The Analytic-Synthetic Dichotomy in Introduction to Objectivist Epistemology, 96.) Remember that concepts are mental tools for helping us think better. They help us primarily by condensing multiple discrete but essentially similar concretes into a single unitary mental entity, thereby improving the efficiency of our thinking and allowing us to transcend the natural limit on how many discrete concretes our mind can simultaneously consider (a limitation referred to by objectivists as "the crow epistemology"). See the Ayn Rand Lexicon entry "Unit Economy" for more on the purpose and primary benefit of concepts.

A grouping of concretes that does not improve our thinking, and instead actually hinders it, is an invalid concept. For example, when the grouped concretes are not actually essentially similar to each other (considering the context, of course, which sets the requirements for what is "essential"), the purported concept is invalid. As another example, even some groupings of similar concretes may be invalid when the grouping is not necessary given human purposes. Leonard Peikoff explained:

The requirements of cognition forbid the arbitrary grouping of existents, both in regard to isolation and to integration. They forbid the random coining of special concepts to designate any and every group of existents with any possible combination of characteristics. For example, there is no concept to designate “Beautiful blondes with blue eyes, 5’5” tall and 24 years old.” Such entities or groupings are identified descriptively. If such a special concept existed, it would lead to senseless duplication of cognitive effort (and to conceptual chaos): everything of significance discovered about that group would apply to all other young women as well. There would be no cognitive justification for such a concept—unless some essential characteristic were discovered, distinguishing such blondes from all other women and requiring special study, in which case a special concept would become necessary.

. . .

In the process of determining conceptual classification, neither the essential similarities nor the essential differences among existents may be ignored, evaded or omitted once they have been observed. Just as the requirements of cognition forbid the arbitrary subdivision of concepts, so they forbid the arbitrary integration of concepts into a wider concept by means of obliterating their essential differences—which is an error (or falsification) proceeding from definitions by non-essentials.

The Analytic-Synthetic Dichotomy in Introduction to Objectivist Epistemology, 96.

In considering the concept of "risk of ruin" as presented by the Wikipedia article, it appears that the concretes integrated by the concept are reality-based. The concretes integrated by such a high-level concept are themselves concepts, such as the concept of risk of loss or harm and the concept of irrecoverable loss or harm. In addition, the concretes appear to be essentially similar--they share an essential similarity with regard to the magnitude of loss. In particular, the concept appears to be a narrowing of the wider concept of "risk of loss" to the subset of risks whose associated loss is either total or irrecoverable (i.e., the magnitude of the loss is sufficiently large that recovery from it is impossible). Finally, there appears to be a real-world need for this concept: judging risks is a significant part of human life, and distinguishing risks whose loss is total or irrecoverable from less severe ones is clearly useful. Accordingly, I think the concept is a valid one.

Any Merit to Taleb's Views?

However, just because "risk of ruin" may be a valid concept, that does not mean that Taleb or others who use it are correct in their policy conclusions. Taleb is an environmentalist who argues against GMOs by appealing to the precautionary principle. The precautionary principle was nicely refuted in the answer here. However, Taleb attempts to defend the precautionary principle by arguing that it should only be applied when the potential harm is "global" as opposed to "local" and "scale independent" as opposed to "scale dependent". According to Taleb, the precautionary principle is unnecessary for local harm, but "When global harm is possible, an action must be avoided unless there is scientific near-certainty that it is safe." Taleb argues that the refutations of the precautionary principle apply only to local harm, and that for global harm things are different. The connection to "risk of ruin" is that Taleb sees "global harm" as corresponding to "ruin"--i.e., irrecoverable loss.[1]

It is undoubtedly correct that, all other things being equal, global harm is worse than local harm, and it is eminently rational to take this distinction into account when making risk assessments. However, the distinction between local harm and global harm does not validate the precautionary principle. The precautionary principle is still based on the arbitrary, because it assumes without evidence that the harm will be irrecoverable merely because it is metaphysically possible for the harm to be irrecoverable. All Taleb's argument does is limit the application of this bad policy to fewer cases; it does not address the core problem with the precautionary principle.

Moreover, the linked article does not address some glaring problems with Taleb's argument. For example, how does one know whether the harm of an action will be local as opposed to global, or scale-independent as opposed to scale-dependent? The only way to know this is to have some actual knowledge about the harm that could result from the action. However, if you have actual knowledge about the harm, then you do not need the precautionary principle. The precautionary principle says that when the potential harm of an action is unknown, you should avoid the action--but if you have actual knowledge about the harm, then it is not unknown (at least, not totally unknown). It appears to me that Taleb is simply trying to give a veneer of respectability to the obviously flawed precautionary principle.

Based on observations of how environmentalists have operated in the past, I presume that assertions that an action has "global" harm would be made arbitrarily, without any actual knowledge of the potential harm aside from what "might be". Basically, if the action is one that the enviros oppose, then they would claim it has "global" harms, and they would proceed to concoct tenuous arguments about what "could be" to support the claim. Indeed, the enviros have already indoctrinated an entire generation of school kids to swallow, lock, stock, and barrel, a ready-made "global harm" argument that could be applied anytime they need one--the argument that any disturbance to "the ecosystem" will have widespread, unknowable effects, not only on the local ecosystem but also on the world. In essence, all the enviros would have to do is claim that an action disturbs some local ecosystem, and thereby they have an automatic tie-in to a "global" harm. Thus, Taleb's "limiting" of the application of the precautionary principle would likely not be very limiting at all.

Finally, it should go without saying, but I’ll say it anyway: the government should not initiate force against people based on arbitrary claims about what “could be”. Thus, laws banning products or actions based on the precautionary principle—whether taken in its original form or as modified by Taleb—would be completely wrong. Whether a made-up harm is local or global does not change the fact that it is made up. Governments can only properly address actual harms, not made up harms.

[1] The precautionary principle relies on the idea from economics that an action should be taken only when the potential reward outweighs the risk of loss. To determine the risk of loss, you multiply the amount of harm resulting from the action by the probability of the harm occurring--so if taking action A has a 20% chance of resulting in a $1,000 loss (or a harm that is equivalent to $1,000), then the risk of loss is $200 (0.2 × $1,000). Under the theory, then, action A should only be taken if the potential reward is greater than $200. What the precautionary principle says, in effect, is that we should assume the harm is infinite (or some very large amount) when we do not actually know how much harm an action might cause. If the harm is infinite (or very large), then the risk becomes infinite (or very large) regardless of how small the probability of the harm occurring, and thus no amount of reward can justify taking the action. The assumption that the harm will be infinite (or very large), without any evidence suggesting as much, is clearly arbitrary. Taleb's modification of the precautionary principle is essentially that we should only consider the harm to be infinite when the potential harm is global and scale-independent, in addition to being unknown.
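The footnote's arithmetic can be sketched in a few lines. This is only an illustration: the 20%/$1,000 figures come from the footnote itself, while the other probabilities and dollar amounts are made up to show how an assumed unbounded harm swamps any finite reward.

```python
def expected_loss(p_harm: float, harm: float) -> float:
    """Risk of loss = probability of harm x magnitude of harm."""
    return p_harm * harm

# The footnote's example: a 20% chance of a $1,000 loss.
risk = expected_loss(0.2, 1_000)
print(f"risk of loss = ${risk:,.0f}")  # action A needs a reward above this

# The precautionary principle, in effect, lets the assumed harm grow
# without bound: even at a tiny (hypothetical) 0.1% probability, the
# computed risk grows past any finite reward.
for harm in (1e6, 1e9, 1e12):
    print(f"assumed harm ${harm:,.0f} -> risk ${expected_loss(0.001, harm):,.0f}")
```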

answered Nov 17 '14 at 12:14


ericmaughan43 ♦

edited Nov 17 '14 at 12:33

Thank you very much. Really thought-provoking answer.

(Nov 17 '14 at 13:08) Danneskjold_repo

Anthony, why the downvote? If you think you have a better answer, then why don't you provide an actual answer rather than just lurking in the comments and sniping other people's answers? A little positive contribution rather than constant contrarianism would be refreshing.

(Nov 18 '14 at 09:55) ericmaughan43 ♦

Anthony, my comment was more about a pattern than a particular occurrence. While I think you often have interesting insights, I have never seen you post an actual answer to a question--a comment is not an answer. Moreover, in my estimation your comments are negative more often than not (i.e., directed to challenging those who have provided an answer about some detail of their analysis rather than to contributing your own positive analysis). This is what I referred to as "sniping".

(Nov 21 '14 at 13:21) ericmaughan43 ♦

I did not mean to be hostile to you, and I apologize if I came across that way. My intention was rather to encourage you to start providing answers of your own. I think you have an interesting perspective and that answers from you could enrich the site.

Also, I do not wish to imply that comments that challenge aspects of an answer are inappropriate--they are perfectly fine. However, when such negative comments become dominant, I think that is problematic.

(Nov 21 '14 at 13:25) ericmaughan43 ♦

As for downvotes, you are free to use them however you would like. Personally, I think downvotes should serve to highlight to non-objectivists who are perusing the site which answers are out-of-line with established objectivist principles, so that no person reading the answer would mistake it for an actual "Objectivist Answer". Thus, when someone who doesn't know what they are talking about presumes to present an "Objectivist Answer" that is clearly contrary to the principles of Objectivism, I would downvote that answer (and would hope that others did so as well!).

(Nov 21 '14 at 13:33) ericmaughan43 ♦

However, I do not downvote answers merely because I do not like the author's style of presentation, or because I disagree with the author on some peripheral detail, or even because I disagree with the author on a central pillar of their analysis, when that pillar is in an area where reasonable disagreement is possible and no official objectivist position exists.

In any event, if I downvote an answer I will be ready to explain to the author why I did so should they ask.

(Nov 21 '14 at 13:39) ericmaughan43 ♦
My intention was rather to encourage you to start providing answers of your own. I think you have an interesting perspective and that answers from you could enrich the site.

Those interested in applying to become Answer providers can find additional information in the OA FAQ (link). In the past, I believe Anthony may have had objections to one or more of the requirements.

(Nov 23 '14 at 15:08) Ideas for Life ♦

3) I did explain to you why I downvoted your answer. I hesitate to go into more detail publicly other than to say “don't feed the trolls.”

If you'd like to discuss any of this privately, let me know how to contact you privately.

(Nov 28 '14 at 17:06) anthony

This is a well-reasoned answer, but it draws on the single-bet case referred to in the Wikipedia article. The relevant case for this question is the multiple-bet case. A very important part of this concept is that as the number of bets increases to infinity, the risk of ruin approaches 100%. This can be verified with any risk-of-ruin calculator, such as http://www.forexscamalerts.com/risk-of-ruin-risk-of-drawdown-calculator

The implication here is: if humans want to avoid extinction (make an infinite number of bets), then humans can't accept any non-zero risks of extinction.

(Mar 14 '15 at 16:26) Lavoie
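The multiple-bet point in this comment can be sketched numerically. If every bet carries the same independent nonzero ruin probability p, the chance of surviving n bets is (1 - p)^n, which goes to zero as n grows; the 1% per-bet figure below is purely illustrative.

```python
def survival_probability(p_ruin: float, n_bets: int) -> float:
    """Chance of avoiding ruin across n independent bets, each
    carrying the same probability p_ruin of causing ruin."""
    return (1.0 - p_ruin) ** n_bets

p = 0.01  # hypothetical 1% chance of ruin per bet
for n in (10, 100, 1_000, 10_000):
    print(f"n = {n:>6}: P(survive) = {survival_probability(p, n):.6f}")
```

The survival probability shrinks monotonically toward zero, which is the sense in which cumulative risk of ruin approaches 100% over an unbounded number of bets.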

This question is well answered by Eric, but a new comment on Eric's Answer highlights a specific aspect that evidently needs further emphasis. The comment states:

...if humans want to avoid extinction (make an infinite number of bets [?]), then humans can't accept any non-zero risks of extinction.

In the Objectivist view, life is not an endless series of "bets" in the face of pervasive, chronic uncertainty. Man possesses the faculty of reason, which gives him the cognitive capacity to understand the world he lives in and to initiate life-sustaining and life-enhancing actions accordingly. Chronic human inaction in the face of incomplete knowledge is a sure prescription for human death.

Although the comment (and the original question) refer to "extinction," there is a closely related, widespread view today concerning merely "great harm," not necessarily total extinction. Often there is great confusion over the issue of "harm" to what or whom. Note, however, that this view cannot escape the premise that man possesses the faculty of reason, which potentially gives him great power over his environment. He can learn (and has learned) how to use his power rationally, to identify objectively any cases of recklessness leading to demonstrable danger to others, and to intervene by means of objective law (and proper application of individual rights) when a danger to the lives of others truly warrants it. But again, chronic inaction and suppression of individual initiative in the face of possibly incomplete knowledge is a sure prescription for pervasive human suffering and death.

Crafting a question in the language of probability and gambling may be an attempt to avoid philosophical presuppositions in attacking reason, science, technology, and industry. But the metaphysical view of life as an endless series of "bets" in the face of pervasive, chronic uncertainty is exactly that -- a metaphysical view. It is an expression of the long, deeply rooted philosophical tradition of mysticism, altruism, collectivism, and statism -- as against the historically newer but also long-standing tradition of reason, egoism, individualism, and capitalism. It is an expression specifically of "Kant's gimmick," i.e., "attempting to negate reason by means of reason":

A man's protestations of loyalty to reason are meaningless as such: "reason" is not an axiomatic, but a complex, derivative concept—and, particularly since Kant, the philosophical technique of concept stealing, of attempting to negate reason by means of reason, has become a general bromide, a gimmick worn transparently thin. Do you want to assess the rationality of a person, a theory or a philosophical system? Do not inquire about his or its stand on the validity of reason. Look for the stand on axiomatic concepts. It will tell the whole story.

(Quoted from ITOE2, end of Chap. 6, "Axiomatic Concepts." Refer also to the topic of "Kant, Immanuel" in The Ayn Rand Lexicon.)

Update: Evidence and Proof

A new comment by the questioner begins:

I think the crux of the issue is "demonstrability".

No. The essence of the issue is proof, not concrete "demonstration." Man's rational faculty does not compel him to wait for concrete "demonstration" of harm before he can rationally conclude, from evidence and reasoning, that there is a real danger. Yet the concrete-bound epistemology is, indeed, what Taleb proposes. It is a form of empiricism: if man can't know without direct, concrete observation, then he must naturally be superstitious about all manner of "unknown" potentialities, including GMOs.

Incidentally, the cited Internet article says that Taleb does not classify nuclear energy as posing a risk of global catastrophe, unlike GMOs (in Taleb's view). Yet GMOs and non-GMOs are perfectly capable of coexisting, giving buyers in a free market the opportunity to choose one or the other as they see fit--along with the potential for non-GMOs and "organics" to survive any calamity (of the kind Taleb suggests) that might befall the GMOs.

Objectivist philosophy teaches that the most crucial political priority is to preserve individual freedom--removing coercion from man's life--not constructing an FDA or other governmental agency to make decisions on individuals' behalf by banning substances deemed harmful, leaving individuals no opportunity to exercise their own independent rational judgment. Objectivism denies and rejects the collectivist premise that "society" and all its members are to be ruled coercively by government. Taleb, in contrast, argues entirely from the "social" premise rather than from the independent rational nature of individuals. The implicit, unchallenged premise is that individuals "belong" to "society," and that "society" is supreme.

answered Mar 15 '15 at 11:13


Ideas for Life ♦

edited Mar 17 '15 at 23:16

I think the crux of the issue is "demonstrability". Some people think that the potential harm of certain technology (in Taleb's case, GMOs) is so huge that demonstrating it would be tantamount to causing irreparable harm. This is where things get squirrely: to ban something terrible, you have to demonstrate that it is terrible, but how can you do that? To kill/hurt people is clearly not moral or practical. So what do you do? In many ways this reminds one of the nuclear issue: it is a technology that both provides energy and can be used to kill hundreds of thousands of people.

(Mar 17 '15 at 11:08) Danneskjold_repo
