Lazy fall colors

my writing and musings

unpublished thoughts, poems, experiences, stories, and science on their way to becoming a book

Friday, January 29, 2010

Measuring Complexity -- my mathematics applied to emergent intelligence -- this got me started on the mathematical complexity of DNA -- March 2006

Feature Article - March 2006



Emerging Complexity

Is there a natural process by which complex systems can emerge from simple components?
Living things are complex. Even the so-called “simple cell” is complex. How did such complexity arise? The answer one sometimes hears from evolutionists is that it arose through some poorly understood process called “emergence” or “self-assembly.” The idea is that simple components will naturally assemble themselves into more complex structures.

The self-assembly literature usually talks about the formation of crystals or the accumulation of charged particles on a surface. Certainly particles can form interesting patterns, and these patterns are understandable in terms of thermodynamics. The molecules or particles are simply falling into lower energy states. As we will soon see, they aren't complex systems.

Another common example of emergence or self-assembly is the Internet. Despite Al Gore’s claim, he didn’t invent the Internet. No one can claim to be the sole designer of the Internet. In a sense, the Internet did just happen without a single designer; but it didn’t happen apart from conscious volition. There were committees that set standards for communication protocols (TCP/IP, FTP, etc.). Networks were intentionally installed, and computers were connected to them by people who had a goal in mind. The Internet isn’t a good example of a complex system arising naturally by chance.

Emergence Defined

In 2005, Professor Robert M. Hazen produced a 24-lecture course for The Teaching Company titled Origins of Life. Lecture 8 is titled “Emergence.” In the notes for Lecture 8 there is an outline, portions of which are shown below, that gives some background information on emergent systems.
I. What is an emergent system? Think of it as a process in which many interactions among lots of simple objects give rise to behaviors that are much more complex. Lots of sand grains interact to form sand dunes, and lots of nerve cells interact to form our conscious brain.

II. Complexity appears to be the hallmark of every emergent system, especially life. Indeed, that’s one way we recognize emergent systems. A simple mathematical equation would be helpful, one that relates the state of a system (the temperature or pressure, for example, expressed in numbers) to the resultant complexity of the system (also expressed as a number).

a. If we could formulate such an equation, it would constitute that elusive missing “law of emergence.” That law would serve as a guide for modeling the origin of life—the ultimate complex system.

b. To formulate a law of complexity requires a quantitative definition of the complexity of a physical system. No one has such a definition, though scientists have thought about complex systems and ways to model them mathematically.

III. ...

a. Sand, for example, provides common and comprehensible examples of the kind of structures that arise when an energetic flow—wind or water—interacts with lots of agents.

i. Below a critical concentration of particles, we see no patterns of interest.

ii. As the particle concentration increases, so does complexity.

iii. Yet complexity only increases to a point: Above some critical density of particles, no new patterns emerge. 1

Notice that there is no “law of emergence”—it’s missing! But Hazen believes, by faith, that it must exist.

Professor Hazen says we can’t measure complexity. That’s particularly surprising in light of the fact that he made a big deal in Lecture 1 about how important it is to have a multi-disciplinary team to study the origin of life. Clearly he didn’t have any engineers or computer scientists on his team. Software engineers and computer scientists have known how to measure complexity for 30 years. The McCabe metric is perhaps the most commonly used way to measure software complexity.
Cyclomatic complexity is the most widely used member of a class of static software metrics. Cyclomatic complexity may be considered a broad measure of soundness and confidence for a program. Introduced by Thomas McCabe in 1976, it measures the number of linearly-independent paths through a program module. This measure provides a single ordinal number that can be compared to the complexity of other programs. Cyclomatic complexity is often referred to simply as program complexity, or as McCabe's complexity. It is often used in concert with other software metrics. As one of the more widely-accepted software metrics, it is intended to be independent of language and language format [McCabe 94]. 2
Basically, the McCabe metric involves counting the number of inputs, outputs, and decision points in a computer program. The more different ways there are to get from an input to an output, the more complex the program is. There are commercially available software tools that analyze a computer program to determine its McCabe complexity.
Table 4: Cyclomatic Complexity 3

Cyclomatic Complexity   Risk Evaluation
1-10                    a simple program, without much risk
11-20                   more complex, moderate risk
21-50                   complex, high-risk program
greater than 50         untestable program (very high risk)
Engineers are interested in complexity because the more complex a system is, the greater the chance that something will go wrong with it. Software engineers use McCabe (or possibly one of several other algorithms) to measure the overall complexity of a computer program, as well as the complexity of portions of the program. Some software development organizations have a rule that if the complexity of a computer program exceeds a certain McCabe value, the program must be simplified.
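
To make that counting concrete, here is a minimal sketch (my own illustration, not one of the commercial tools) that approximates the McCabe number of a Python function by counting the branch points in its syntax tree and adding one. The function name and the particular set of node types counted are simplifying assumptions on my part:

```python
import ast
import inspect

def cyclomatic_complexity(func):
    """Approximate the McCabe metric: one plus the number of
    decision points found in the function's syntax tree."""
    tree = ast.parse(inspect.getsource(func))
    # Each of these nodes opens another independent path.
    decisions = (ast.If, ast.For, ast.While, ast.IfExp,
                 ast.And, ast.Or, ast.ExceptHandler)
    return 1 + sum(isinstance(node, decisions)
                   for node in ast.walk(tree))

def classify(n):
    if n < 0:        # decision point 1
        return "negative"
    elif n == 0:     # decision point 2
        return "zero"
    return "positive"

print(cyclomatic_complexity(classify))  # prints 3: two branches plus one
```

Real analyzers do essentially this bookkeeping, only more carefully, over every module of a program.
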
Roche Applied Science publishes a huge two-part chart called Metabolic Pathways 4. Part 1 is 54 inches by 38 inches. Part 2 is only 44 inches by 38 inches. It is packed with fine lines and very fine print showing metabolic reactions in living things. Part 1 is shown at the left.
Of course, to get it on the page it has to be reduced so much that you can’t read it, but you can get an idea of how complex it is.
Here is what the lower-right corner (section L-10 of the chart) looks like.

[image: detail of section L-10 of the Metabolic Pathways chart]

The metabolic process takes lots of inputs, produces lots of outputs, and makes lots of decisions about what to do based upon combinations of inputs. I would hate to have to compute the McCabe complexity metric for this chart. If I did, however, I am sure that it would far exceed the complexity of the most complex computer program I have ever written. As Professor Hazen says, life really is the ultimate complex system.

Ordinary Differential Equations

Imagine an automobile on a highway. It is in one place now, but it will be in a different place in a few minutes (unless it is rush hour). So, we can describe its motion using differential equations. That is, the equations describe how its position differs with time. That’s where the name “differential” comes from.
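
For the simplest possible case, a car moving at a constant speed v, the equation and its solution fit on one line (a standard textbook example; x_0 is just the starting position):

```latex
\frac{dx}{dt} = v, \qquad x(t) = x_0 + v\,t
```

The left half says “the position changes at rate v”; the right half is the familiar distance-equals-rate-times-time answer.
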
One could say that Isaac Newton, in studying the motions of the planets, invented the field of differential equations by writing down the first one. 5
Of course, the laws of motion have existed since the beginning of time, but it wasn’t until the 17th century that Newton invented a way to describe motion using differential equations. If differential equations had been invented today by someone other than Newton, they would no doubt have been called “evolutionary equations,” because evolutionists love to confuse the argument by calling all kinds of changes “evolution.” Then, after proving that one kind of “evolution” is true, they try to use that as proof that everything they call evolution is true.
Differential equations are important because they allow us to model physical processes. The differential equations describing the aerodynamic response of a full-size airplane are the same as the differential equations describing a scale-model airplane. Since it is often easier to build a scale model than a full-size airplane, models can be tested in a wind tunnel to determine how the actual airplane would perform.

Sometimes physical limitations even make it impractical to build an accurate scale model, but there are many things that obey the same kind of differential equations. The equations that describe what happens when you push a weight suspended from a spring are the same equations that describe what happens when you shove some electricity into a capacitor through a coil of wire. Engineers often take advantage of this because it is usually easier to build and modify an electrical circuit in the laboratory than it is to build and modify a mechanical system in a machine shop.

For example, if a mechanical engineer is designing an airplane, he computes all the forces on the wings, estimates how springy the wing will be, and comes up with some differential equations describing how the wing will respond to those forces. Then, using straightforward techniques that relate velocity to voltage, force to electric current, mass to capacitance, etc., he builds an electrical circuit whose voltages and currents are described by the same differential equations as the ones that describe the forces and motion he is interested in. He then tests the circuit to see if it performs the way he expected it to. He might make some changes to see what would happen if he made the wing stiffer, or heavier, etc. That’s much easier and cheaper than building a full-size airplane.
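
Written out, one common version of this correspondence (the so-called force-current analogy, which matches the mapping just described) pairs a mass on a damped spring with a parallel RLC circuit driven by a current source; b here is the damping coefficient, and the two equations have exactly the same form:

```latex
F(t) = m\,\frac{dv}{dt} + b\,v + k\!\int v\,dt
\qquad \Longleftrightarrow \qquad
i(t) = C\,\frac{dV}{dt} + \frac{V}{R} + \frac{1}{L}\!\int V\,dt
```

Force F plays the role of current i, velocity v the role of voltage V, mass m the role of capacitance C, the damper b the role of the conductance 1/R, and the spring constant k the role of 1/L.
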

Decades ago, this process was greatly simplified by analog computers, which were specially designed to use electrical circuits to compute the solutions to differential equations. Today we can solve differential equations on a digital computer. If an engineer wants to confirm how a mechanical system or electrical circuit will perform, he will more likely use a digital computer program whose outputs are described by the same differential equations that govern the mechanical system or electrical circuit.
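
To illustrate that last point, here is a minimal sketch of solving the weight-on-a-spring equation numerically with SciPy’s solve_ivp routine; the mass, damping, and stiffness values are arbitrary placeholders, not numbers from any real design:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Mass-spring-damper: m*x'' + b*x' + k*x = 0, rewritten as a
# first-order system in (position, velocity).
m, b, k = 1.0, 0.2, 4.0   # arbitrary illustrative values

def rhs(t, y):
    x, v = y
    return [v, -(b * v + k * x) / m]

# Release the weight from x = 1 at rest and watch it ring down.
sol = solve_ivp(rhs, t_span=(0.0, 10.0), y0=[1.0, 0.0],
                t_eval=np.linspace(0.0, 10.0, 101))
print(sol.y[0][:5])   # the first few positions of the decaying oscillation
```

The same dozen lines, with different constants and a different rhs function, would simulate the capacitor-and-coil circuit instead.
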

Simulated Complexity

By now you are no doubt wondering why we have gone off on this tangent. Here’s why this little background discussion was relevant: Mechanical and electrical systems are described by differential equations. The more complex the system, the more complex the differential equations that describe that system, and the more complex the computer program that simulates those differential equations. Therefore, we can quantify the complexity of electrical and mechanical systems by quantifying the complexity of the computer program necessary to simulate them. We know how to quantify the complexity of computer programs. Therefore, we can compute the complexity of mechanical and electrical systems.
This approach has been taken in the analysis of biological systems, too.
The programs that control development in the embryos of multicellular animals are thought to be complex. But what does that mean? Azevedo et al. have addressed this question based on the similarity between developmental and computer programs. Looking at the embryologies of animals such as roundworms whose cell lineages can be precisely determined, they find the course of development less complex than one would expect by chance. In fact, given the necessity of placing precise numbers of cells in particular positions in developing embryos, these cell lineages could not be much simpler than they are. Evolution has selected for decreased complexity. 6 [emphasis supplied]
The really remarkable statement is that embryonic development is “less complex than one would expect by chance.” We wonder how one makes an a priori expectation of the degree of complexity produced by chance. Azevedo didn’t make that very clear in his paper. We don’t know of any functional computer program that has been generated ENTIRELY by chance, and we don’t think any exist. 7 If one does exist, we don’t know what its McCabe complexity would be. We can’t compare the complexity of a program that simulates embryonic development to the complexity of a program that doesn’t exist, so there is no basis for saying it is “less complex.”

When it comes to computer programs, the better the programmer, the lower the complexity. If the McCabe value is too high, management will generally make the programmer go back and simplify the program, to make it more reliable. Azevedo’s studies showed, “these cell lineages could not be much simpler than they are.” In other words, the programmer did a darn good job. It is much better “than one would expect by chance.” The logical conclusion is that chance wasn’t involved. Yet, Azevedo’s conclusion is, “Evolution has selected for decreased complexity.” What is the basis for this conclusion? The basis is simply the unfounded belief that evolution did everything.

Simulation Limits

One has to be cautious when using the complexity of a computer program to determine the complexity of a system. I have written some simple, “quick and dirty” simulations of guided missiles just to see if it would be feasible to build them. After they were built, I wrote more accurate simulations. The guided missile didn’t get more complex just because I wrote a more comprehensive program to describe its behavior.

Decades ago someone wrote a program called Life (John Conway’s “Game of Life”) that generates some interesting patterns that roughly represent growth. A cell “died” if it was surrounded by too many other cells, but was “born” if it was adjacent to the right number of living cells. It was much simpler than real life. One should not use an overly simple program to determine the complexity of life.
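
For reference, the rules are easy to state precisely: in Conway’s version, a live cell survives only if it has two or three live neighbors, and a dead cell is born if it has exactly three. A minimal sketch in plain Python (the set-of-coordinates representation is just one convenient choice):

```python
from collections import Counter

def step(live):
    """Advance one generation of Conway's Game of Life.
    `live` is a set of (x, y) coordinates of living cells."""
    # Count the live neighbors of every cell adjacent to a living cell.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly three neighbors; survival on two or three.
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "glider": a five-cell pattern that crawls diagonally forever.
pattern = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for generation in range(4):
    pattern = step(pattern)
print(sorted(pattern))   # the same shape, shifted one cell diagonally
```

Twenty lines of code, which is exactly the point: a program this simple tells us nothing about the complexity of real life.
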

Any program that simulates a complete living organism will have to make simplifying assumptions. The complexity of the program will depend upon how many simplifying assumptions are made. So, we might not be able to assign an exact number to a living entity, but we can be sure that the complexity will be very high.

Systems

There is an important distinction that Professor Hazen seems to miss when he tries to treat sand ripples as a naturally emergent complex system. He has forgotten the definition of “system.” Look up “system” in a dictionary and you will find a long list of specific definitions. The common theme in all these definitions is the notion of a group of different things assembled together for a purpose.

Sand ripples and snowflakes are made up of the same kinds of things (grains of sand, or molecules of water) and have no purpose. The leaf of a tree is made up of all kinds of different molecules which convert sunlight into sugars for use by the rest of the tree. A leaf is a system. A snowflake is not.

Of course, from an evolutionary point of view, the fundamental problem with the definition of “system” is that nasty notion of “purpose.” To their way of thinking, there is no purpose to anything. Everything is the result of random chance. The notion of “purpose” implies “intention,” which implies conscious volition on the part of someone or something.
Automobiles, televisions, computers, bird nests, spider webs, and honeycombs are systems because they serve a purpose. Some living thing put them together to satisfy a need for transportation, entertainment, information, shelter, or food. Systems satisfy needs. But the impersonal, natural world of the evolutionists has no needs.

Aggregate Behavior

Occasionally, when discussing emergent patterns, the discussion turns to schools of fish, flocks of birds, or swarms of insects. The individual creatures naturally form a pattern when in a group. Group behavior emerges.

Historically, there has been considerable interest in figuring out how a collective consciousness emerges from a large group of individuals. Perhaps the first scientist to ponder aggregate behavior was Agur, the son of Jakeh, who observed locusts around 961 BC. 8 He noted that locusts attacked crops in a coordinated manner, just like soldiers. This is not the behavior one would expect if the theory of evolution were true. Natural selection rewards competition, not cooperation. Natural selection is based on survival of the fittest—that is, obtaining food when others around you can’t. Natural selection can explain the aggressive, competitive behavior of shoppers at a day-after-Christmas sale, but not the polite cooperation of locusts as they totally destroy a field.

One of the most striking patterns in biology is the formation of animal aggregations. Classically, aggregation has been viewed as an evolutionarily advantageous state, in which members derive the benefits of protection, mate choice, and centralized information, balanced by the costs of limiting resources. Consisting of individual members, aggregations nevertheless function as an integrated whole, displaying a complex set of behaviors not possible at the level of the individual organism. Complexity theory indicates that large populations of units can self-organize into aggregations that generate pattern, store information, and engage in collective decision-making. This begs the question, are all emergent properties of animal aggregations functional or are some simply pattern? Solutions to this dilemma will necessitate a closer marriage of theoretical and modeling studies linked to empirical work addressing the choices, and trajectories, of individuals constrained by membership in the group. 9 [emphasis supplied]

Complexity theory “begs the question” because it is based on the premise that cooperative behavior must have evolved because cooperative behavior exists. There must be a natural explanation for cooperative behavior because, by the evolutionists’ new definition of “science,” there can be no explanation other than a natural one. If one arbitrarily rules out any supernatural force whose eye is on the sparrow, then one must find a natural force that makes sparrows flock together. If there is no natural force, then the search for one is doomed to failure.

Again, the problem for evolutionists, when examining coordinated behavior, is purpose. The school of fish changes direction and swims away when a shark appears because the school of fish has a goal in mind. Specifically, the goal is not to be eaten. There is a reason why birds fly south for the winter, even if the birds aren’t consciously aware of it.
From the evolutionists’ perspective, there is no purpose to life. All life is just the result of random, purposeless changes. But locusts and fish and birds seem to be purpose-driven, just as they seem to be designed.

The cardio-vascular system appears to be designed. One could argue that it appears to be designed because it was designed. On the other hand, evolutionists, such as Richard Dawkins, argue that the appearance of design is merely an illusion. Although it is more likely that appearance reflects reality, one cannot rule out the possibility that things that have not been designed simply appear to have been designed.

But the “appearance of purpose” argument is harder to make than the “appearance of design” argument. There is no question that the cardio-vascular system has a purpose. Its purpose is to supply oxygen to internal cells. It does not merely appear to supply oxygen to internal cells. One need not speculate upon whether or not a system performs a function. Unlike design, which can only be inferred, function is an observable characteristic which can be examined in the laboratory.

Evolutionists have to explain away the “illusion” of purpose and design because the existence of purpose and design imply some sort of intelligence and intention outside the physical, material world. Rather than following the data to its logical conclusion, they are forced to explain why things aren’t really as they seem.

Any non-biological system you can name (an automobile, a computer, a television, a bird’s nest, a spider’s web, etc.) consists of components intentionally assembled for a purpose. Biological systems (the immune system, the cardio-vascular system, the digestive system, etc.) consist of components that function together for a purpose. Buzzwords like “emergence,” “complexity theory,” and “self-assembly” are used to try to explain how diverse biological components just happened to join together and fulfill some useful purpose without any intention to do so.

Yes, ice crystals form snowflakes, and sand grains drift into dunes, but they don’t serve any purpose. They are just particles seeking the lowest energy state in accordance with the second law of thermodynamics. The formation of crystals and dunes doesn’t explain how random changes to cells in the body of a reptile could form a mammalian breast which supplies milk for the purpose of nourishing newborn offspring. The lactation system in mammals is made up of several different physical components which respond in harmony to hormonal changes with remarkable results.

Yes, geese fly in a discernible pattern (which reduces aerodynamic drag), insects swarm, and fish swim in schools. But this behavior is conscious and purpose-driven. It doesn’t teach us anything about the mythical process by which amino acids and proteins naturally formed the first living cell by accident.

No non-biological system, from the first wheeled vehicle to the most recent space probe, has ever emerged through some self-assembly process. All functional systems are the result of conscious design. It is illogical and unscientific to believe that any biological system could have emerged through any unguided self-assembly process. 
