


Single Fault Critical Failures In Society Through Technologies

By Roy D. Follendore III

Copyright (c) 2003 by RDFollendoreIII


I believe that I have a particularly intelligent wife, one who certainly recognizes and appreciates important concepts that most people take for granted.  I think that was one of the reasons why I fell in love with and married her.  Last night she told me that while driving home from work she was listening to the news about the great blackout of the Northeast.  She said that the thought had run across her mind that civilization may have reached its technological peak.  I have been thinking about her thoughts today.  I am sure that with the blackout, computer viruses, the shuttle disaster, weapons of mass destruction, and all of the other ways we have recently been shown we can cause ourselves to fail, this idea has run across the back of everyone's mind.

August 15, 2003

The fundamental thing that we should consider at this moment in our history is not that technology is failing us, but that it can't.  Regardless of our television fantasies, technology still simply does what it does, and mostly what we tell it to, however complex it may be.  We attribute a kind of humanity to technology as though it has intent.  Technology at this stage of our civilization simply doesn't have intent and therefore does not have its own moral, ethical, or technical capability to act independently.  Through our empathy it is easy for humanity to forget that technology does not really act upon us; we act through it.  Because of this, technology is not so much a "thing" that acts on us as it is a communication medium through which we reveal ourselves to each other.  Technology does not act independently of our consent because it is explicitly created.  We must implicitly or explicitly endorse, through design, everything that technology does and is capable of doing.  The failures that we are seeing are therefore not simply technological.  On our part they are intellectually moral, not so much by our tacit choices as by the implications of our assumptions, with short term needs mismatched against long term obligations.  We are causing our own single fault failures in our society through our technologies, but not because of them.

The problem is that we are simply not consistently and constantly listening to ourselves.  When we see overcrowded highways connecting our cities, we are being told by the aggregation of humanity, through transportation technology, that there is something wrong with our organizational philosophy.  After a while we ignore the message.  When we see aircraft that can be taken over and flown into our greatest buildings, we are being told through aircraft technology that something is wrong with our organizational philosophy.  We are ignoring that message.  When we see a fantastic spacecraft disintegrate across our sky because of foam, we are being told through space technology that there is something wrong with our organizational philosophy.  We have certainly ignored that message more than once.  When we see power blackouts, we are being told through power technology that there is something wrong with our organizational philosophy.  There is a common thread, and it is simply that humanity has not been successful in maintaining its technological agenda.

It happens that in the early 1980s I authored one of the first white papers about the potential of creating complex cooperative computer viruses.  I presented the concept that it would be possible to create tiny scraps of code that could act through the side effects of existing channels of communication.  From what I observed, my paper had absolutely no effect on the community I was working in at the time.  The reason, I later found, was that the organization was simply not prepared to listen.  The coin of authority within the organizational realm has always been funding.  There was no funding for new ideas and initiatives, and therefore no opportunity to fix the basis of the technical problems before they began to embed and ingrain themselves into the technologies that we have today.

The tendency within modern bureaucracies is that original ideas are increasingly forced to come from elsewhere to be validated.  My fellow workers had their own investments and personal agendas in the greater schemes of the organization, and they were not particularly interested in a competitor's, regardless of its importance.  Of course, there is now a powerful and growing industry based on the technical oversight and control of viruses.  There are now many writers who openly discuss these ideas, and they generate a great deal of noise in the press.  The point is that it often takes expensive calamities to get on, and stay on, the technological agenda.  The true organizational impetus for new industries is external concepts pushed by external fears.  Technology, and the problems it communicates, always seems to be out there, not in here.  The failure of technology is therefore really a systemic failure of our acceptance of internalized responsibilities even as we accept all of the advantages and possibilities.

People are listening to the technological issue of computer viruses striking our computers because viruses have been deliberately induced to cause millions of our computers, containing billions of man hours of work, to fail.  There is a clear message being sent, both literally and figuratively, from the originators of viruses to humanity.  Inside the code of the most recent virus was a question addressed to Bill Gates, asking why he continues to make so much money while allowing such simple code to cause so many computers to fail.  Even as we hate the idea of 'hackers' writing viruses that damage so many people's lives, we have to admit that the question was legitimate.  The advancement of technology simply does not work the way that it should.  It is humanity that allows that, not technology.

Maybe it is time that we all began to consider the implications of what we mean by technology, as well as what it means to us.  For many of us, technology is a necessary and natural part of our lives.  We choose to ignore the depth of the technologies that completely surround us.  New Yorkers certainly no longer considered bridges and roads to be technology until they had to walk across them en masse to escape their city when their buildings collapsed or the power grid went out.  The simple fact that the bridges and highways exist, and that people can simply walk out by the millions, is not only amazing; it should communicate to us that there are basic physical underpinnings to what civilized society has become.  It reinforces the idea that perhaps man can and should walk to and from work through the existing technological systems that we have created for ourselves.  Our transportation industry is not supposed to work like that either, but we have seen that it can.  Perhaps our telecommunication industry should also rediscover itself.

What we are seeing at this moment of history is not the collapse of civilization because of technology, but a definite shift in the philosophies of what we know as civilization.  It is a revolution of relationships between technologies as well as a revelation of what we are choosing technology to be.  As a communication medium, technology has always been a building process.  We build things, and when they no longer work the way we desire, we must rethink, destroy, and rebuild new things based on what we think we want.  Technology is a communication process because it takes constancy in communication to make it work for us.  The communication that goes into what is broadcast is reflected in the results at the receiving end.  The destruction of technological ideals is systemic to the act of creating and exploiting all technology.  The concept of technological immortality and absolute perfection has never been part of the equation of technological reality within our physical universe.  Even in the abstract notions of rational and logical thinking, contextual entropy affects the potential of reasoning.

All of this high flying thinking brings us back to the topic of this essay: the idea of single fault critical failures within our technology.  The reality is that everything we do involves physical processes, but when we begin to actually examine the facts there is really no such thing as a "single point failure."  Single point failures are not really singular because, through cause and effect, they always coexist with other things.  We really cannot isolate single faults without also proposing that the relationship of the fault to other causal properties is irrelevant.  The dilemma is that if something is to be defined as a fault it must also be relevant to history.  We must therefore categorically state the obvious fact that technology is always genuinely systemic.  A technological fault can never really default to an absolutely single, isolated state, because it is the reasoning within our array of choices that has become brittle when something fails.  We choose to rationally and semantically redefine the boundaries of failure in order to escape responsibility for creation.
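The point that no fault is truly singular can be made concrete with a small sketch.  The component names below are hypothetical, chosen only to echo the blackout example; the code simply walks a dependency graph to show that a "single" fault surfaces as failures everywhere downstream of it.

```python
# Minimal sketch (hypothetical component names): a "single" fault
# is never isolated, because failure propagates along dependencies.

DEPENDS_ON = {
    "traffic_lights": ["power_grid"],
    "water_pumps": ["power_grid"],
    "hospital": ["power_grid", "water_pumps"],
    "power_grid": ["control_software"],
    "control_software": [],
}

def affected_by(fault, graph):
    """Return every component that transitively depends on the faulty one."""
    hit = set()
    changed = True
    while changed:
        changed = False
        for comp, deps in graph.items():
            if comp not in hit and (fault in deps or hit & set(deps)):
                hit.add(comp)
                changed = True
    return hit

print(sorted(affected_by("control_software", DEPENDS_ON)))
# → ['hospital', 'power_grid', 'traffic_lights', 'water_pumps']
```

One fault in the control software takes out everything above it, which is exactly why labeling the event a "single point failure" redraws the boundary of responsibility rather than describing the system.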

Within our universe, the potential always exists for a component, device, or system to fail through environment, design, or intent.  As a society we have not yet fully integrated into our psyche the reality espoused by Claude Shannon: the simple idea that information, and therefore knowledge, operates through the laws of physics.  Universal entropy is therefore a fundamental law of perspective, whether the technology is physically or logically oriented.  This means that ideas and ideals eventually decay and fail, just as the physical objects around us decay and fail.  Any system whose design does not include relative entropy as part of the complexity associated with technical knowledge is doomed to critical failure.
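Shannon's measure of information is concrete enough to compute in a few lines.  As an illustrative sketch (the function name and sample strings are mine, not the essay's): the entropy H = -Σ p·log₂(p) of a message quantifies its unpredictability in bits per symbol.

```python
import math
from collections import Counter

def shannon_entropy(message: str) -> float:
    """Shannon entropy in bits per symbol: H = -sum(p_i * log2(p_i))."""
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# A message with no variety carries no information per symbol,
# while four equiprobable symbols carry two bits each.
print(shannon_entropy("abcd"))  # 2.0
```

The connection to the essay's claim is that this quantity is not a metaphor: it obeys hard mathematical limits, which is why information, and the systems built on it, cannot be exempted from entropy by wishful design.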

Humanity is constantly organizing and reorganizing itself to explore the possibilities of technologies relative to the economics of our local minima.  It is easy to forget that the reasons why technologies evolve are often very different from the reasons why technologies are successful, and that this is often a good thing.  Innovative individuals perceive technological value as more than the potential of failure.  This is what allows innovative technologies to constantly expand.  This is what drives Moore's law.  By the time organizations are prepared to incrementally innovate, the issues have been explored and the perceived risks have been considered.  Organizations can simply steal the best original ideas of individuals within society because they have the power to ignore the consequences.  This is tolerated perhaps because in many ways it has been a good thing for humanity.

But this does not mean that the organizations of society are using the best technologies that become available to us.  Within our society, important future technologies that would otherwise protect and preserve the interests of humanity are ignored or even intentionally suppressed because they are incrementally competitive with the local "corporate" perception of existing investments.  Managed technical risks are considered locally, not globally.  Our organizations as a whole are not considered to be part of the critical technical risks that are being taken.  While it may be a fact that we use technology to affect how we are organized, it is also a fact that we organize in order to influence human decision making.  We do not choose to manage the implications of our social structure within our technical decisions.

The potential of critical failure therefore exists at each and every decision point in the creation and operation of technology.  The reason why technology at this stage of human development cannot fail humanity is that it does not care if it fails us.  The fundamental difference in perception between the idea of man and machine is the fact that machines function because they can, while humans function as they desire.  This means that at any moment a human being may decide to give up living, but machines can never really choose to do that.  Technology simply performs correctly until it fails, even if it is constantly producing the wrong result for human beings.  At the lower biological levels human beings operate in the same way, and this has been the essence of the justification for some academic intellectuals to describe mankind in terms of our technology.

What is not being considered by such a rationale is that the cognitive functions of the human mind are innately human.  This is admittedly a recursive philosophical argument, but it is also true that recursion is part of the intellectual issue with respect to the definition of humanity.  This recursion is also the reason why, while we may choose to use technologies to emulate humanity, we can never really artificially create humanity.  At this moment in our development, human beings can and do freely choose to trust others and to independently discover and accept new philosophies for their rational, and often irrational, interactions as they evolve.  Machines cannot, for they are not in our evolutionary loop.  The difference between man and machine is therefore the relationship of inside to outside.

For humanity, evolution is the fundamental core of our biological and social mindset.  When we attempt to model our cognitive selves within machines, we are externally inducing a simulation of what we believe we want.  It is a belief model, and as a consequence it is quite different from what we are as biological products of evolution.  Independent self definition and self evolution are the basis of all biological life, and the weight of historical trial and error is embedded within our genes.  Today's technology is simply not internally motivated or internally capable of becoming more than what it is when it is created, and that is probably a good thing.

Let us hope that humanity is capable of managing our ultimate potential technological failure and its consequences as we discover the technological means to embed the evolutionary survival instincts of human beings into our technologies.  If we were to find the way to create technology that could emulate us as human beings, it would by definition be designed to compete with what we are and what we shall become.  We would be intentionally designing technology built to catastrophically fail us.  We would be creating the true single fault critical failure technology.

Through this discourse we have arrived at the original issue of this essay.  Can single fault failures in society exist through technologies?  The answer is that they can, because at this point in our evolution, it is we who are the technology.  We have reached a critical point in our evolution.

As humanity evolves, our relationship to technology must evolve with respect to what we choose to be as a society.  Society has had, and may always have, difficulty accepting the idea that because we technologically exist, we are responsible for our technological existence.  A fundamental scientific aspect of taking charge of our relationships with nature is accepting responsibility for our relationship within nature.  Perhaps humanity's physical concept of single fault failure may never be taken in context with our systemic acts of creation.  It may even be that we cannot see our responsibilities even when we accept responsibility.  Single fault critical failures within society through technologies are not necessarily singularly apparent.  It is too easy to forget that as humanity evolves we specify the singular meaning of things.  Single fault critical failures are definable as a part of our intellectual, philosophical, moral, and ethical ethos as a technological species as much as they are of specifically isolatable technological functions.

Our underpinning ideals of logical perfection are the seeds embedded within the fabric of our constant technological media events.  Like a sailing ship driven by the constancy of a hurricane wind toward an unseen but inevitable reef, our ability to create and manage technology is on a critical path within our societal evolution as a species.  At this critical point, we desperately need to evolve better ways to discriminate the noise from the knowledge of technologies, so that we may listen to our problems better, accept responsibilities sooner, and act more quickly upon what we discover.  As it stands, when humanity listens to technology we are really not listening to its message; we are worshiping the risks.




Copyright (c) 2001-2007 RDFollendoreIII All Rights Reserved