Lazy fall colors

my writing and musings

unpublished thoughts, poems, experiences, stories, and science on their way to becoming a book

Friday, January 29, 2010

Section III -- Science: Mathematical Analysis of DNA -- 2006 through 2010



This third section is about science -- specifically, applying mathematical techniques to try to capture the inherent complexity of organic systems.  My initial interest was piqued by other scientists' writing that used my cyclomatic software complexity measure to capture a notion of emergent intelligence, or even cellular complexity.

The first two articles serve as general background -- including a letter I wrote describing my initial thoughts. The third article is an introduction to my research on a mathematical characterization of DNA complexity.



Measuring Complexity ....... published on the web. About using the McCabe cyclomatic complexity measure on the DNA transcription process. It can be used to classify and compare the complexity of different species.

Complexity of Emergent Intelligence ............ a letter about the same. What's fun here is the connection between molecular biology and computer science.


A Mathematical Analysis of DNA Double Helix Complexity ..... an introduction; it suggests that the regularity of DNA/RNA/protein synthesis can be modeled as three isomorphic mathematical spaces.  I have since focused on the AIDS genome.

Measuring Complexity -- my mathematics applied to emergent intelligence -- this got me started on the mathematical complexity of DNA -- March 2006

Feature Article - March 2006



Emerging Complexity

Is there a natural process by which complex systems can emerge from simple components?
Living things are complex. Even the so-called “simple cell” is complex. How did such complexity arise? The answer one sometimes hears from evolutionists is that it arose through some poorly understood process called “emergence” or “self-assembly.” The idea is that simple components will naturally assemble themselves into more complex structures.

The self-assembly literature usually talks about the formation of crystals or the accumulation of charged particles on a surface. Certainly particles can form interesting patterns, and these patterns are understandable in terms of thermodynamics. The molecules or particles are simply falling into lower energy states. As we will soon see, they aren't complex systems.

Another common example of emergence or self-assembly is the Internet. Despite Al Gore’s claim, he didn't invent the Internet. No one can claim to be the sole designer of the Internet. In a sense, the Internet did just happen without a single designer; but it didn’t happen apart from conscious volition. There were committees that set standards for communication protocols (TCP/IP, FTP, etc.). Networks were intentionally installed, and computers were connected to them by people who had a goal in mind. The Internet isn’t a good example of a complex system arising naturally by chance.

Emergence Defined

In 2005, Professor Robert M. Hazen produced a 24-lecture course for The Teaching Company, titled, Origins of Life. Lecture 8 is titled, Emergence. In the notes for Lecture 8 there is an outline, portions of which are shown below, that gives some background information on emergent systems.
       I. What is an emergent system? Think of it as a process in which many interactions among lots of simple objects give rise to behaviors that are much more complex. Lots of sand grains interact to form sand dunes, and lots of nerve cells interact to form our conscious brain.
      II. Complexity appears to be the hallmark of every emergent system, especially life. Indeed, that’s one way we recognize emergent systems. A simple mathematical equation would be helpful, one that relates the state of a system (the temperature or pressure, for example, expressed in numbers), to the resultant complexity of the system (also expressed as a number).
           a. If we could formulate such an equation, it would constitute that elusive missing “law of emergence.” That law would serve as a guide for modeling the origin of life—the ultimate complex system.
           b. To formulate a law of complexity requires a quantitative definition of the complexity of a physical system. No one has such a definition, though scientists have thought about complex systems and ways to model them mathematically.
     III. ...
           a. Sand, for example, provides common and comprehensible examples of the kind of structures that arise when an energetic flow—wind or water—interacts with lots of agents.
                i. Below a critical concentration of particles, we see no patterns of interest.
               ii. As the particle concentration increases, so does complexity.
              iii. Yet complexity only increases to a point: Above some critical density of particles, no new patterns emerge. 1

Notice that there is no “law of emergence”—it’s missing! But Hazen believes, by faith, that it must exist.

Professor Hazen says we can’t measure complexity. That’s particularly surprising in light of the fact that he made a big deal in Lecture 1 about how important it is to have a multi-disciplinary team to study the origin of life. Clearly he didn’t have any engineers or computer scientists on his team. Software engineers and computer scientists have known how to measure complexity for 30 years. The McCabe software complexity metric is perhaps the most commonly used means to measure software complexity.
Cyclomatic complexity is the most widely used member of a class of static software metrics. Cyclomatic complexity may be considered a broad measure of soundness and confidence for a program. Introduced by Thomas McCabe in 1976, it measures the number of linearly-independent paths through a program module. This measure provides a single ordinal number that can be compared to the complexity of other programs. Cyclomatic complexity is often referred to simply as program complexity, or as McCabe's complexity. It is often used in concert with other software metrics. As one of the more widely-accepted software metrics, it is intended to be independent of language and language format [McCabe 94]. 2
Basically, the McCabe metric involves counting the number of inputs, outputs, and decision points in a computer program. The more different ways there are to get from an input to an output, the more complex the program is. There are commercially-available software tools that analyze a computer program to determine its McCabe complexity.
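To make the counting concrete, here is a minimal sketch (not one of the commercial tools just mentioned) of the formula behind the metric, V(G) = E - N + 2P, where E is the number of edges, N the number of nodes, and P the number of connected components of a program's control-flow graph. The tiny example graph is an assumption chosen purely for illustration.

```python
# A minimal sketch of the cyclomatic complexity formula V(G) = E - N + 2P,
# computed from a hand-built control-flow graph.  The graph below is an
# illustrative example, not output from any real analysis tool.

def cyclomatic_complexity(nodes, edges, connected_components=1):
    """V(G) = E - N + 2P for a control-flow graph."""
    return len(edges) - len(nodes) + 2 * connected_components

# Control-flow graph of a tiny program:
#   entry -> decision -> (then | else) -> exit
nodes = ["entry", "decision", "then", "else", "exit"]
edges = [
    ("entry", "decision"),
    ("decision", "then"),   # if-branch
    ("decision", "else"),   # else-branch
    ("then", "exit"),
    ("else", "exit"),
]

print(cyclomatic_complexity(nodes, edges))  # -> 2
```

A single if/else yields a complexity of 2: there are two linearly independent paths from input to output.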
Table 4: Cyclomatic Complexity 3

    Cyclomatic Complexity      Risk Evaluation
    1-10                       a simple program, without much risk
    11-20                      more complex, moderate risk
    21-50                      complex, high risk program
    greater than 50            untestable program (very high risk)
Engineers are interested in complexity because the more complex a system is, the greater the chance that something will go wrong with it. Software engineers use McCabe (or possibly one of several other algorithms) to measure the overall complexity of a computer program, as well as the complexity of portions of the program. Some software development organizations have a rule that if the complexity of a computer program exceeds a certain McCabe value, the program must be simplified.
Roche Applied Science publishes a huge two-part chart called Metabolic Pathways 4. Part 1 is 54 inches by 38 inches. Part 2 is only 44 inches by 38 inches. It is packed with fine lines and very fine print showing metabolic reactions in living things. Part 1 is shown at the left.
Of course, to get it on the page it has to be reduced so much that you can’t read it, but you can get an idea of how complex it is.
Here is what the lower-right corner (section L-10 of the chart) looks like.



The metabolic process takes lots of inputs, produces lots of outputs, and makes lots of decisions about what to do based upon combinations of inputs. I would hate to have to compute the McCabe complexity metric for this chart. If I did, however, I am sure that it would far exceed the complexity of the most complex computer program I have ever written. As Professor Hazen says, life really is the ultimate complex system.

Ordinary Differential Equations

Imagine an automobile on a highway. It is in one place now, but it will be in a different place in a few minutes (unless it is rush hour). So, we can describe its motion using differential equations. That is, the equations describe how its position differs with time. That’s where the name, “differential,” comes from.
One could say that Isaac Newton, in studying the motions of the planets, invented the field of differential equations by writing down the first one. 5
Of course, the laws of motion have existed since the beginning of time, but it wasn’t until the 17th century that Newton invented a way to describe motion using differential equations. If differential equations had been invented today by someone other than Newton, they would no doubt have been called “evolutionary equations” because evolutionists love to confuse the argument by calling all kinds of changes, “evolution.” Then, after proving that one kind of “evolution” is true, they try to use that as proof that everything they call evolution is true.
Differential equations are important because they allow us to model physical processes. The differential equations describing the aerodynamic response of a full-size airplane are the same as the differential equations describing a scale-model airplane. Since it is often easier to build a scale-model than a full-size one, models can be tested in a wind tunnel to determine how the actual airplane would perform.

Sometimes physical limitations even make it impractical to build an accurate scale-model, but there are many things that obey the same kind of differential equations. The equations that describe what happens when you push a weight suspended from a spring are the same equations that describe what happens when you shove some electricity into a capacitor through a coil of wire. Engineers often take advantage of this because it is usually easier to build and modify an electrical circuit in the laboratory than it is to build and modify a mechanical system in a machine shop.
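To make that correspondence concrete, here is the standard textbook form of the two equations (a sketch of the analogy, not a quotation from any source cited here):

```latex
% Spring-mass-damper: mass m, damping c, stiffness k, driving force F(t)
m\,\ddot{x} + c\,\dot{x} + k\,x = F(t)

% Series RLC circuit: inductance L, resistance R, capacitance C, source voltage V(t)
L\,\ddot{q} + R\,\dot{q} + \tfrac{1}{C}\,q = V(t)

% The analogy: x \leftrightarrow q,\; m \leftrightarrow L,\;
%              c \leftrightarrow R,\; k \leftrightarrow 1/C,\; F(t) \leftrightarrow V(t)
```

Because the two equations have the same form, measuring the charge q in the circuit tells you how the displacement x of the mechanical system will behave.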

For example, if a mechanical engineer is designing an airplane, he computes all the forces on the wings, how springy the wing will be, and comes up with some differential equations describing how the wing will respond to those forces. Then, using straightforward techniques that relate velocity to voltage, force to electric current, mass to capacitance, etc., he builds an electrical circuit whose voltages and currents are described by the same differential equations as the equations that describe the forces and motion he is interested in. He then tests the circuit to see if it performs the way he expected it to. He might make some changes to see what would happen if he made the wing stiffer, or heavier, etc. That’s much easier and cheaper than building a full-size airplane.

Decades ago, this process was greatly simplified by analog computers, which were specially designed to use electrical circuits to compute the solutions to differential equations. Today we can solve differential equations on a digital computer. If an engineer wants to confirm how a mechanical system or electrical circuit will perform, he will more likely use a digital computer program whose outputs are described by the same differential equations that govern the mechanical system or electrical circuit.
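As a rough illustration of the digital approach, the sketch below steps the unforced spring-mass equation from above forward in time with the simplest possible method (explicit Euler); the parameter values are arbitrary assumptions, not taken from any real design.

```python
# Minimal sketch: solving m*x'' + c*x' + k*x = 0 numerically with
# explicit Euler steps.  Parameter values are illustrative assumptions.

m, c, k = 1.0, 0.2, 4.0     # mass, damping, stiffness
x, v = 1.0, 0.0             # initial displacement and velocity
dt = 0.001                  # time step in seconds

for step in range(10000):   # simulate 10 seconds
    a = (-c * v - k * x) / m   # acceleration from the differential equation
    x += v * dt                # update position
    v += a * dt                # update velocity

print(f"displacement after 10 s: {x:.4f}")
```

Real engineering codes use more accurate integrators, but the idea is the same: the program's outputs obey the same differential equation as the physical system.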

Simulated Complexity

By now you are no doubt wondering why we have gone off on this tangent. Here’s why this little background discussion was relevant: Mechanical and electrical systems are described by differential equations. The more complex the system, the more complex the differential equations that describe that system, and the more complex the computer program that simulates those differential equations. Therefore, we can quantify the complexity of electrical and mechanical systems by quantifying the complexity of the computer program necessary to simulate them. We know how to quantify the complexity of computer programs. Therefore, we can compute the complexity of mechanical and electrical systems.
This approach has been taken in the analysis of biological systems, too.
The programs that control development in the embryos of multicellular animals are thought to be complex. But what does that mean? Azevedo et al. have addressed this question based on the similarity between developmental and computer programs. Looking at the embryologies of animals such as roundworms whose cell lineages can be precisely determined, they find the course of development less complex than one would expect by chance. In fact, given the necessity of placing precise numbers of cells in particular positions in developing embryos, these cell lineages could not be much simpler than they are. Evolution has selected for decreased complexity. 6 [emphasis supplied]
The really remarkable statement is that embryonic development is “less complex than one would expect by chance.” We wonder how one makes an a priori expectation of the degree of complexity produced by chance. Azevedo didn’t make that very clear in his paper. We don’t know of any functional computer program that has been generated ENTIRELY by chance, and we don’t think any exist. 7 If one does exist, we don’t know what its McCabe complexity would be. We can’t compare the complexity of a program that simulates embryonic development to the complexity of a program that doesn’t exist, so there is no basis for saying it is “less complex.”

When it comes to computer programs, the better the programmer, the lower the complexity. If the McCabe value is too high, management will generally make the programmer go back and simplify the program, to make it more reliable. Azevedo’s studies showed, “these cell lineages could not be much simpler than they are.” In other words, the programmer did a darn good job. It is much better “than one would expect by chance.” The logical conclusion is that chance wasn’t involved. Yet, Azevedo’s conclusion is, “Evolution has selected for decreased complexity.” What is the basis for this conclusion? The basis is simply the unfounded belief that evolution did everything.

Simulation Limits

One has to be cautious when using the complexity of a computer program to determine the complexity of a system. I have written some simple, “quick-and-dirty simulations” of guided missiles just to see if it would be feasible to build them. After they were built, I wrote more accurate simulations. The guided missile didn’t get more complex just because I wrote a more comprehensive program to describe its behavior.

Decades ago someone wrote a program called Life (Conway's Game of Life) that generates some interesting patterns that roughly represent growth. A cell “died” if it was surrounded by too many other cells, but was “born” if it was adjacent to the right number of living cells. It was much simpler than real life. One should not use an overly simple program to determine the complexity of life.
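For readers who have never seen it, the rules fit in a few lines. The sketch below is a minimal version of Conway's rules on a small wrap-around grid; the starting pattern is an arbitrary example.

```python
# Minimal sketch of Conway's Game of Life on a small wrap-around grid.
# A live cell survives with 2 or 3 live neighbors; a dead cell is
# "born" with exactly 3 live neighbors; every other cell dies or stays dead.

def step(grid):
    rows, cols = len(grid), len(grid[0])
    def live_neighbors(r, c):
        return sum(grid[(r + dr) % rows][(c + dc) % cols]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0))
    return [[1 if (grid[r][c] and live_neighbors(r, c) in (2, 3))
                  or (not grid[r][c] and live_neighbors(r, c) == 3)
             else 0
             for c in range(cols)]
            for r in range(rows)]

# A "blinker": three live cells in a row oscillate forever.
grid = [[0, 0, 0, 0, 0],
        [0, 0, 1, 0, 0],
        [0, 0, 1, 0, 0],
        [0, 0, 1, 0, 0],
        [0, 0, 0, 0, 0]]
for _ in range(2):
    grid = step(grid)
    print(grid)
```

The entire simulation is a few lines of logic, far simpler than even one corner of the metabolic chart discussed above, which is exactly the point.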

Any program that simulates a complete living organism will have to make simplifying assumptions. The complexity of the program will depend upon how many simplifying assumptions are made. So, we might not be able to assign an exact number to a living entity, but we can be sure that the complexity will be very high.

Systems

There is an important distinction that Professor Hazen seems to miss when he tries to treat sand ripples as a naturally emergent complex system. He has forgotten the definition of “system.” Look up “system” in a dictionary and you will find a long list of specific definitions. The common theme in all these definitions is the notion of a group of different things assembled together for a purpose.

Sand ripples and snowflakes are made up of the same kinds of things (grains of sand, or molecules of water) and have no purpose. The leaf of a tree is made up of all kinds of different molecules which convert sunlight into sugars for use by the rest of the tree. A leaf is a system. A snowflake is not.

Of course, from an evolutionary point of view, the fundamental problem with the definition of “system” is that nasty notion of “purpose.” To their way of thinking, there is no purpose to anything. Everything is the result of random chance. The notion of “purpose” implies “intention,” which implies conscious volition on the part of someone or something.
Automobiles, televisions, computers, bird nests, spider webs, and honeycombs are systems because they serve a purpose. Some living thing put them together to satisfy a need for transportation, entertainment, information, shelter, or food. Systems satisfy needs. But the impersonal, natural world of the evolutionists has no needs.

Aggregate Behavior

Occasionally, when discussing emergent patterns, the discussion turns to schools of fish, flocks of birds, or swarms of insects. The individual creatures naturally form a pattern when in a group. Group behavior emerges.
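A minimal agent-based sketch of how such grouping is often modeled follows; the steering rule, neighborhood radius, and weights are illustrative assumptions, not taken from any particular study of fish or birds.

```python
# A minimal agent-based sketch of aggregation: each agent drifts toward the
# average position of its nearby neighbors.  The rule, radius, and pull
# strength are illustrative assumptions only.
import random

N, RADIUS, PULL = 30, 0.3, 0.05
agents = [[random.random(), random.random()] for _ in range(N)]

def step(agents):
    new = []
    for x, y in agents:
        nbrs = [(ax, ay) for ax, ay in agents
                if (ax - x) ** 2 + (ay - y) ** 2 < RADIUS ** 2]
        cx = sum(ax for ax, _ in nbrs) / len(nbrs)  # neighbors always include self
        cy = sum(ay for _, ay in nbrs) / len(nbrs)
        new.append([x + PULL * (cx - x), y + PULL * (cy - y)])
    return new

for _ in range(100):
    agents = step(agents)
# After enough steps the agents have drifted into one or more clusters --
# a pattern produced by a purely local rule.
```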

Historically, there has been considerable interest in figuring out how a collective consciousness emerges from a large group of individuals. Perhaps the first scientist to ponder aggregate behavior was Agur, the son of Jakeh, who observed locusts around 961 BC. 8 He noted that locusts attacked crops in a coordinated manner, just like soldiers. This is not the behavior one would expect if the theory of evolution were true. Natural selection rewards competition, not cooperation. Natural selection is based on survival of the fittest—that is, obtaining food when others around you can’t. Natural selection can explain the aggressive, competitive behavior of shoppers at a day-after-Christmas sale, but not the polite cooperation of locusts as they totally destroy a field.

One of the most striking patterns in biology is the formation of animal aggregations. Classically, aggregation has been viewed as an evolutionarily advantageous state, in which members derive the benefits of protection, mate choice, and centralized information, balanced by the costs of limiting resources. Consisting of individual members, aggregations nevertheless function as an integrated whole, displaying a complex set of behaviors not possible at the level of the individual organism. Complexity theory indicates that large populations of units can self-organize into aggregations that generate pattern, store information, and engage in collective decision-making. This begs the question, are all emergent properties of animal aggregations functional or are some simply pattern? Solutions to this dilemma will necessitate a closer marriage of theoretical and modeling studies linked to empirical work addressing the choices, and trajectories, of individuals constrained by membership in the group. 9 [emphasis supplied]

Complexity theory “begs the question” because it is based on the premise that cooperative behavior must have evolved because cooperative behavior exists. There must be a natural explanation for cooperative behavior because, by the evolutionists’ new definition of “science,” there can be no explanation other than a natural one. If one arbitrarily rules out any supernatural force whose eye is on the sparrow, then one must find a natural force that makes sparrows flock together. If there is no natural force, then the search for one is doomed to failure.

Again, the problem for evolutionists, when examining coordinated behavior, is purpose. The school of fish changes direction and swims away when a shark appears because the school of fish has a goal in mind. Specifically, the goal is not to be eaten. There is a reason why birds fly south for the winter, even if the birds aren’t consciously aware of it.
From the evolutionists’ perspective, there is no purpose to life. All life is just the result of random, purposeless changes. But locusts and fish and birds seem to be purpose-driven, just as they seem to be designed.

The cardio-vascular system appears to be designed. One could argue that it appears to be designed because it was designed. On the other hand, evolutionists, such as Richard Dawkins, argue that the appearance of design is merely an illusion. Although it is more likely that appearance reflects reality, one cannot rule out the possibility that things that have not been designed simply appear to have been designed.

But the “appearance of purpose” argument is harder to make than the “appearance of design” argument. There is no question that the cardio-vascular system has a purpose. Its purpose is to supply oxygen to internal cells. It does not merely appear to supply oxygen to internal cells. One need not speculate upon whether or not a system performs a function. Unlike design, which can only be inferred, function is an observable characteristic which can be examined in the laboratory.

Evolutionists have to explain away the “illusion” of purpose and design because the existence of purpose and design imply some sort of intelligence and intention outside the physical, material world. Rather than following the data to its logical conclusion, they are forced to explain why things aren’t really as they seem.

Any non-biological system you can name (an automobile, a computer, a television, a bird’s nest, a spider’s web, etc.) consists of components intentionally assembled for a purpose. Biological systems (the immune system, the cardio-vascular system, the digestive system, etc.) consist of components that function together for a purpose. Buzzwords like “emergence,” “complexity theory,” and “self-assembly” are used to try to explain how diverse biological components just happened to join together and fulfill some useful purpose without any intention to do so.

Yes, ice crystals form snow flakes, and sand grains drift into dunes, but they don’t serve any purpose. They are just particles seeking the lowest energy state in accordance with the second law of thermodynamics. The formation of crystals and dunes doesn’t explain how random changes to cells in the body of a reptile could form a mammalian breast which supplies milk for the purpose of nourishing newly-born offspring. The lactation system in mammals is made up of several different physical components which respond in harmony to hormonal changes with remarkable results.

Yes, geese fly in a discernable pattern (which reduces aerodynamic drag), insects swarm, and fish swim in schools. But this behavior is conscious and purpose-driven. It doesn’t teach us anything about the mythical process by which amino acids and proteins naturally formed the first living cell by accident.

No non-biological system, from the first wheeled vehicle to the most recent space probe, has ever emerged through some self-assembly process. All functional systems are the result of conscious design. It is illogical and unscientific to believe that any biological system could have emerged through any unguided self-assembly process. 

The complexity of emergent systems -- a mathematical complexity theory of life -- August 2006

email - August 2006

Measuring Complexity

We got a response from a famous expert in the field of complexity.
In March we wrote an essay on Emerging Complexity. In that essay we discussed Dr. Robert Hazen’s speculation about how complex systems might emerge from simple components through purely natural, unguided processes. He says,
A fundamental law of nature, the law describing the emergence of complex ordered systems (including every living cell), is missing from our textbooks. 1
The law is missing, of course, because it doesn’t exist. He then goes on to say,
All emergent systems display the rather subjective characteristic of “complexity”—a property that thus far lacks a precise quantitative definition. 2
He then goes on to try to invent some measure of complexity involving “the concentration of interacting particles (n), the degree of those particles’ interconnectivity (i), the time-varying energy flow through the system [VE(t)], and perhaps other variables as well.” 3

He admitted that he doesn’t really know how to measure complexity, but it would certainly be helpful if we had a precise quantitative definition.

Our response was that software engineers have known how to measure the complexity of software programs for decades, and we suggested that the same general technique could be applied to biological systems. The best known, most widely used complexity measure is the McCabe software complexity metric. In our feature article, we gave the link to the Software Engineering Institute website that describes it. 4

We were delighted to receive this email from Thomas McCabe himself.
Dear Dave,
Like you, my previous life had been [involved with] computer software. I had built a company on the application of math to computer algorithms. With your background in both computers and science I thought you might enjoy these first thoughts on DNA algorithm analysis.

This is a letter that I am sending to several people and it may eventually evolve into something of substance. I have been reading about DNA and was pleasantly surprised to find some scientists using my theory in molecular genetics arguing about evolution and the beginnings of emergent intelligence.

I have been thinking about DNA and complexity as you'll [see] in the attachment. This I find, like the beginning of my algorithm complexity research, to be captivating. The power of my original research was the leverage of mathematical rigor to software. The power of this idea is the leverage of hundreds of technology man-years to the new promising field of DNA. If we could find a connection we could take all the years of McCabe complexity experience and directly apply it to DNA. The mathematical theory, the various metrics, the visualization, the testing methodology, the empirical studies, the 30 years of industrial experience, ... All could be applied to DNA. That, would be a blast. [ellipsis his]

Tell me what you think,
Hope this finds you well,
Tom
He enclosed a draft of a paper he is writing on the subject. We won’t share that paper with you for several reasons. First, our audience is the general public, and the technical detail of his paper assumes familiarity with biological concepts that most readers don't have. Second, and more importantly, we don’t want to steal his thunder by publishing his work before it is finished. When finished, his paper will be worthy of being published in Science or Nature, but since it argues so powerfully against evolution, there is no chance that either journal will publish it.

We will tell you that he is taking a different approach than the one we suggested. We proposed looking at all the metabolic processes in a living cell, and attempting to compute the complexity of the cell.

He is taking a much more achievable approach. At the risk of oversimplifying his idea, we will say that instead of looking at all the metabolic processes in the cell, he is looking at just one. Specifically, there is a process in living cells that decodes the genetic information in the DNA molecule and builds biological structures accordingly. Conceptually, this process is not much different from the software program in a CD player that reads a compact disk and converts the information into music. Since we can compute the complexity of a program that reads a CD, one should also be able to compute the complexity of the biological process that reads and processes genetic information.
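To make the CD-player analogy concrete, here is a minimal sketch of a program that "reads" a genetic message codon by codon. The abridged codon table and the toy sequence are assumptions for illustration, and the real decoding machinery is of course vastly more involved; the point is that each lookup and branch in a program like this is exactly the kind of decision point a McCabe-style count would tally.

```python
# A minimal sketch of "reading" genetic information the way a CD player
# reads a disc: scan an mRNA-like string codon by codon and translate it
# with a heavily abridged codon table.  The table covers only a few of the
# 64 real codons, purely for illustration.

CODON_TABLE = {
    "AUG": "Met",  # start codon
    "UUU": "Phe", "GGC": "Gly", "UGC": "Cys",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna):
    protein = []
    for i in range(0, len(mrna) - 2, 3):       # one codon per step
        amino = CODON_TABLE.get(mrna[i:i+3])   # table lookup
        if amino == "STOP":                    # decision points like these are
            break                              # what a cyclomatic complexity
        if amino:                              # count would tally up
            protein.append(amino)
    return protein

print(translate("AUGUUUGGCUGCUAA"))  # -> ['Met', 'Phe', 'Gly', 'Cys']
```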
We are looking forward to learning his results.

Thursday, January 28, 2010

Complexity of the DNA double helix -- introduction -- September 2006


A Mathematical Analysis of the DNA Double Helix

  Thomas J McCabe
  Copyright © September 2006


How to read this

Don't worry about not understanding all of it.  See the direction, the motivation, and the spirit.  There is enough here to get mathematicians interested in genetics, and there's plenty to get geneticists interested in the possibilities of applied mathematics.  There is more than enough to get the general reader interested in the new breakthroughs in genetics and excited by the possibility that the underlying genetic machinery may be mathematically tractable.

Introduction

There is a seduction here.  It seduced me.  I hope it seduces you.

This is a story that interrelates two branches of study -- molecular biology and pure mathematics.  Molecular biology is about genes and the biological structure that holds them -- the DNA double helix.  The new science of molecular genetics has had momentous breakthroughs in the last several years: sequencing the human genome, identifying and locating over 30,000 human genes, identifying various morbid genes that lead to fatal diseases.  The breakthroughs in molecular genetics are happening every day; for example, today, October 30, 2006, there was an announcement in the news that geneticists have identified a 100-million-year-old honeybee and have sequenced the genome of current honeybees to understand their social behavior.  The breakthroughs in molecular genetics concerning our DNA and genes will be the defining event of the 21st century.  Many diseases presently fatal will have genetic prescriptions, lifespan will be extended, and a host of chronic disabling diseases will be both prevented and given more effective palliative treatments. More broadly, the striking genetic similarity of many species is being identified, our evolutionary ancestry is being clarified, and the underlying genetic machinery common to all life will be fully understood and replicated.

The universality of DNA and genes, the common machinery of cell division and protein synthesis, is stunning.  Every living species has such DNA and genes.  Likewise, every extinct species had its own DNA and genes.  The breakthroughs being made in genetics, as often as not, are pertinent to several species.  In fact, the similarities across species are striking -- the underlying mechanism for the splitting of cells and the protein synthesis of genes within each of many varied creatures is almost identical. The genome of a mouse is 99 per cent similar to a human’s; our genome is only 15 per cent larger than a mustard plant’s.

The structure of DNA is intriguing; besides its universality as the code book for all living species, it is also mathematically symmetrical and regular. There are two north-south backbones: one male, oriented north to south, and one female, oriented south to north. The horizontal ladder carries the four-letter alphabet of the genes, encoded as bases made up of four nucleotide chemicals: adenine, cytosine, thymine, and guanine, usually denoted by the letters A, C, T, and G. The letters form base pairs that link together to form the rungs on the ladder of the DNA double helix. Genes are finite sequences of bases along each of the vertical backbones.

The other thread binding this story is applied mathematics. Mathematics is called the purest of the sciences.  Mathematics requires no experimentation or physical validation. Mathematics is not dependent upon a contemporary view of physics, chemistry, or astronomy. To mathematicians the classical life sciences and biology had seemed intractable; their processes seemed far too chaotic and loose to admit of mathematical analysis.

Molecular biology has changed all that.  The DNA double helix is such a regular, stable, and rich structure that it can be thought of as a mathematical object. Also, the recently discovered details of cell replication and gene synthesis are so uniform, even across different species, and so meticulous, that we can now apply mathematical concepts. The incredible recent genetic breakthroughs have demonstrated a universal regularity; the molecular biologists have opened the door for mathematicians.

This paper will present mathematical insights that systematize and simplify the way genes are produced from DNA. The mathematical concept we will use is a Vector Space -- a rigorous concept which has been developed over the course of many centuries. This paper will attempt to model the DNA double helix as such a mathematical Vector Space; if successful, there will be many direct results. One of them will be a classification of genes into two camps -- basis genes and composite genes.  Vector Space analysis will provide a way to generate composite genes without the usual DNA protein synthesis -- composite genes will be shown to be linear combinations of basis genes. Vector Spaces can potentially lead to alternate ways to generate genes and proteins; they could enable man-made genetic drugs and medicines.
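The construction itself is only introduced at this point, so the sketch below is merely one illustrative way to make the general idea tangible, not the model developed later in the paper: encode short sequences as numeric vectors and use matrix rank to ask which sequences are linearly independent ("basis"-like) and which are linear combinations of the others ("composite"-like). The one-hot encoding and the toy sequences are assumptions.

```python
# Illustrative sketch only: treat short sequences as numeric vectors and use
# matrix rank to ask which ones are linearly independent ("basis"-like) and
# which are linear combinations of the others ("composite"-like).  The
# one-hot encoding and the toy sequences are assumptions, not the paper's
# actual construction.
import numpy as np

ENCODING = {"A": [1, 0, 0, 0], "C": [0, 1, 0, 0],
            "G": [0, 0, 1, 0], "T": [0, 0, 0, 1]}

def to_vector(seq):
    return np.array([bit for base in seq for bit in ENCODING[base]], dtype=float)

basis = [to_vector("ACGT"), to_vector("TTGA")]
candidate = basis[0] + basis[1]          # a deliberate linear combination

print(np.linalg.matrix_rank(np.vstack(basis)))              # 2: independent
print(np.linalg.matrix_rank(np.vstack(basis + [candidate])))  # still 2: nothing new
```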

Partly because mathematics is pure and not dependent on physical science, its subjects tend to be abstract, obscure, and lifeless. Not here: the target of our mathematical analysis, DNA and gene generation, is life itself.  Every living thing, present or past, has DNA and genes -- so our subject is life universal. My fingers' tactile genes are being generated faster than I can type about the mechanism my finger DNA cells are using.  Your DNA has produced genes for your optic nerve faster than you can read about how it does this. This is a mathematician's delight; our mathematical target is the most universal of life forms… the mathematical analysis of the DNA double helix… I hope you’re getting seduced.

The physical characteristics of DNA and genes are indeed mystical and seductive.

Size

Each Homo sapiens has 500 billion cells.  Each cell has a nucleus containing 23 pairs of chromosomes that contain the DNA double helix.  Within each cell, the information along the male and female strands of the double helix is rich enough to produce a complete clone -- not just of the cell itself, but a clone of the complete person.  If we spliced a human's DNA together and stood it end to end, it would stand about as high as a six-story building.

Speed

Each cell generates a new gene about every four seconds.  In the time it takes you to read this page, your cells will have produced about 2 trillion new genes.
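For the curious, here is the arithmetic behind that figure, using the cell count from the previous section; the roughly 16-second reading time is an assumption that makes the stated numbers consistent.

```latex
\frac{5 \times 10^{11}\ \text{cells}}{4\ \text{s per gene}}
  \approx 1.25 \times 10^{11}\ \text{genes per second},
\qquad
1.25 \times 10^{11}\ \text{genes/s} \times 16\ \text{s}
  \approx 2 \times 10^{12}\ \text{genes}.
```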



Universality

Every living thing has DNA and genes.  Whales do. Microscopic life forms -- 8 million of which could form a metropolis on the head of a pin -- indeed have DNA and genes.  Chimpanzees have 99.9% of the same genetic makeup that we do; we share 75 percent of our genetic makeup with a mouse.  For that matter, the sea squirt, which Aristotle thought was a plant, has embryos that are strikingly similar to those of humans.

Symmetry

to be continued ……………..