In Defense of Error

A Strategy for Coding Biomorphic Machines

Several universities, governments, research centers, and private ventures are racing to simulate a human brain in a supercomputer, to create a digital mind. With exponential growth in processing power, this effort to translate humanity into ones and zeroes may produce something more capable than we can imagine. But the key word here is simulation. What engineers and computer scientists are trying to create, an improved version of our minds, may become the victim of its own flawed pursuit of perfection.

The problem is that humans, for the most part, are not logical thinkers. What will happen when brain simulations arise that model the interconnectivity, elasticity, and parallel processing of the human mind, but ignore its often paradoxical or conflicting reasoning, is anyone’s guess. Ray Kurzweil, who has made leaps in digitizing music and language, mentions conflicting information in his book How to Create a Mind, but then seems to brush it off, proposing that logic can overcome this hiccup. Mr. Kurzweil is a director of engineering at Google, so his oversight deserves attention as he continues guiding the digital giant toward neural nets and machine learning. He concedes that a digital brain will be superior to human brains in at least one way: its treatment of conflicting information. But humans often accept and work with this very conflict in an illogical way. This non-logic could even be a manifestation of the ever-elusive free will that we lay claim to.

The danger of hyper-logical thinking at unprecedented levels of intelligence is real. Moore’s Law predicts a continual doubling of processing power, at least until machines are able to engineer their own descendants. At that point, the exponential explosion of IT engineering will mean an overwhelming amount of knowledge created from what is effectively a single source, one that is inherently hyper-logical.

So what? Won’t creating purely logical intelligence provide a necessary separation from humanity? We don’t want machine learning to be like human learning because we want to distance ourselves from cyborgs, robots, and concepts we don’t identify with on an emotional level. However, treating strong AI and self-designing IT as an extension of humanity, rather than a separate entity, is essential to its success, as well as our own. Superintelligence is inevitable, so we may as well do our best to ensure it will understand our way of thinking on some level, even if that thinking is rife with error.

Although it is an oversimplification, we can in some ways view psychopathic behavior as the result of purely logical thought processes, with diminished or entirely absent emotional temperance. In the psychopathic brain, means to ends are coldly calculated and cause-and-effect relationships are parsed literally. Linguistic analysis of psychopaths describing their crimes bears this out: their accounts show an increased use of subordinating conjunctions, implying heavier cause-and-effect reasoning. Sometimes described simply as lacking a conscience, psychopaths are in fact often just overly rational.

A milder manifestation of highly logical thinking is addressed by the “extreme-male-brain” theory of autism. While gender-marking brains is a sketchy practice at best, the characteristics of gender-stereotyped thinking do place brains on a kind of logical-emotional spectrum. In the 1960s and 70s, IT companies hired programmers based on what were considered male-stereotyped characteristics, including antisocial tendencies and task orientation, thus weeding out artistic or abstract thinkers. These early coders laid the groundwork for our communication with machines, and the subsequent evolution of computer coding has only proliferated the hyper-logical nature of that communication.

Just as natural languages evolve, so too does the syntax of IT. Distinct languages like C++ and Pascal coexist, and we can also see changes that resemble those between dialects, as in the case of Python 2 and Python 3. But these are all created by humans, and all prescribe the same purely logical way of thinking to a computer. The emergence of strong AI will create a feedback loop of intelligence, since such a system will be capable of engineering intelligence greater than its own. We urgently need ways to introduce creativity, flexibility and, yes, even error into our programming languages. This is no easy task, and there may be no clear consensus on how or when to implement such changes. But the future of humanity may depend on it.
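The dialect analogy can be made concrete with two well-known differences between Python 2 and Python 3: the print statement became a function, and integer division became true division. (The snippet below runs under Python 3, with the older forms shown as comments.)

```python
# Python 2 "dialect":
#   print "hello"   # a statement; a SyntaxError in Python 3
#   7 / 2           # -> 3, because / floors between integers

# Python 3 "dialect" of the same two ideas:
print("hello")            # print is now an ordinary function
assert 7 / 2 == 3.5       # / is now true division
assert 7 // 2 == 3        # the old flooring behavior needs //
```

Mutually unintelligible in places, yet recognizably the same language, much as dialects are.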

Computer coding allows for creativity in how it’s used, but not in its structure. The programming language is prescriptive and rigid. What would be poetry in human language results in an error when the code is run. In order to make room for “thinking” that is not purely logical, a language that evolves through interaction, more like a natural language, should be employed. This creativity would allow for alternative machine learning conclusions, much like the varied voices of human society in response to the same sets of data.
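The rigidity is easy to demonstrate with Python's own parser: the prescribed phrasing compiles, while a "poetic" rendering of the very same idea is a hard error rather than an alternative reading. (The two sample sentences below are invented for illustration.)

```python
import ast

def accepts(source: str) -> bool:
    """Return True if Python's grammar accepts the source verbatim."""
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False

assert accepts("total = price + tax")          # the one prescribed form
assert not accepts("total be price plus tax")  # a 'poetic' variant: rejected
```

A human listener would parse both sentences; the machine admits exactly one.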

Flexibility can be achieved through mechanisms that introduce randomness, rounding, and intentional imprecision into both code and data. Although anathema to modern research, these errors may end up being what saves us from an information technology dominated by the machine equivalent of psychopathic thought. We can take comfort in the self-correcting nature of big data, where larger amounts of data act much like natural selection: truly aberrant data is pruned or neutralized, yet it may still leave its mark on the data set, much as experiences do in a human life. Optimizing computer knowledge through the introduction of error may be just what our own survival requires.
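As a minimal sketch of what such deliberate imprecision might look like, the function below perturbs a value with random jitter and then coarsely rounds it; the `fuzz` name and its `jitter` and `digits` parameters are illustrative assumptions, not drawn from any existing system.

```python
import random

def fuzz(value: float, jitter: float = 0.05, digits: int = 2) -> float:
    """Return value with a small random perturbation, coarsely rounded.

    Hypothetical example of intentional imprecision: jitter is the
    maximum relative error, digits the rounding precision.
    """
    noisy = value * (1 + random.uniform(-jitter, jitter))
    return round(noisy, digits)

random.seed(0)  # fixed seed so the example is reproducible
readings = [fuzz(100.0) for _ in range(5)]

# Each reading stays near 100 but the copies no longer agree exactly,
# giving downstream learning a varied rather than uniform signal.
assert all(95.0 <= r <= 105.0 for r in readings)
assert len(set(readings)) > 1
```

The point is not the mechanism itself but the bound: the error is confined, like aberrant data in a large set, yet it still leaves its mark.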