Evolution of the Flagellum
The Climbing of "Mt. Improbable"
Sean D. Pitman, M.D.
© May 2006
Last Updated: August 2010
The Appearance of Design
Most modern scientists believe that all living things, with all of their various parts and systems of function, evolved through a process of random mutation and natural selection from a common ancestor life form over the course of hundreds of millions of years. Of course, Darwin's famous observations and documentation of various real-time changes in many different creatures, such as the fairly striking differences in finches in different regions, helped to popularize this concept (link). Since then, the interpretations of the geologic column and the fossil record, along with many modern examples of real-time evolution, such as the rapid evolution of antibiotic resistance in bacteria, only seem to confirm the theory of evolution as something "more than a theory".
Still, there are some who continue to question the creative potential of such a mindless process. Can the seemingly simplistic mechanism of random mutation and natural selection really give rise to the amazing complexity and diversity of all living things? For many, the wonders of the natural world and universe, especially when it comes to living things, are so awe inspiring that they intuitively appear as though they were deliberately set in place by an extraordinarily brilliant intelligence - even a God or at least a godlike intelligence. Consider an observation from Keiichi Namba (program director of a team of dedicated scientists working to detail the various steps in flagellar assembly):
"An enormous number of those macromolecules play each role just like purposefully designed machines and maintain the complex network activities."12
The counter argument is, of course, that though grand, the universe and all living things only have the "appearance of design" when, in reality, no deliberate foresight or planning went into their creation. How is such mindless non-directed creation accomplished? Through the creative powers of random mutation and natural selection.
The question is, of course, can a mindless non-directed process that has absolutely no goal or higher purpose in mind really produce the wonders that we see around us and even within our own selves? We are told by popular science that it is all about "baby steps" - the little changes that add up over time to produce fantastic variation in form and function. Clearly, if little changes could be selected in a positive way over time, they could add up to produce bigger and bigger changes until all the diversity and functional complexity that we see today is the end result - the result of billions of years of tiny modifications in many diverging and even converging family trees.
The famous British biologist, Richard Dawkins, describes this process as "Climbing Mt. Improbable". In his book Dawkins explains that although the mountain of complexity that now exists might seem quite improbable to achieve through blind chance, evolution is not really a process of purely random chance. Evolution uses random chance to create little steps where each little change is either good, bad, or neutral with regard to overall function. Then, through a process popularly known as "survival of the fittest", nature gives increased reproductive fitness to those creatures that have beneficial changes and takes away reproductive fitness from those creatures that have negative changes. Obviously then, the next generation is going to be more heavily populated by those creatures with the most beneficial changes. In this way, the beneficial changes add up over time in each generation so that the mountain of enormous complexity is scaled one baby step at a time.
This all sounds very good, and quite convincing actually, except perhaps for one little problem. Natural selection is limited in that it can only select, in a positive way, for changes that provide some improvement in function over what was there before. As it turns out, many mutational changes (i.e., changes in the genetic codes of DNA that dictate how a creature is formed) have no effect on the function of the organism. Such changes, or mutations, are therefore "neutral" with respect to functional selectability. There is even a "Neutral Theory of Evolution", proposed fairly recently by Motoo Kimura.
A neutral difference is like "spelling" the code for the same function in a different way. This different spelling still results in producing the same / equal / equivalent result - as I just did by using three different words that mean pretty much the same thing. Or, neutral differences may exist between equally non-meaningful sequences - like the difference between "quiziligook" and "quiziliguck". Both are equally meaningless when spoken in most situations - right? Therefore, neither has more meaningful or beneficial "fitness" in a given environment as compared with the other. Obviously then, selection between them would be equal or "neutral" with respect to function - i.e., completely random.
Beyond this, most mutations that do happen to affect function do so in a negative way. Natural selection actively works against such mutations to eliminate them from the gene pool over time. These mutational changes are not therefore "beneficial" either.
So, why might this be a problem for evolution? Well, at very low levels of functional complexity (i.e., functions that require a very short sequence of fairly specified genetic real estate to be realized) the ratio of potentially beneficial to non-beneficial sequences is quite high. So, the numbers of non-beneficial differences between one beneficial sequence and the next closest potentially beneficial sequence in sequence space are relatively few.
For example, consider the sequence: cat - bat - bad - bid - big - dig - dog. Here we just evolved from cat to dog where every single character change produced a meaningful word that was potentially beneficial in the right environment. It is easy to get between every potential 3-character sequence in the English language system because the ratio of meaningful to non-meaningful sequences in the "sequence space" of 3-character sequences is only about 1 in 18. However, this ratio decreases dramatically, exponentially in fact, with each increase in minimum sequence length. For example, in 7-character sequence space, the ratio is about 1 in 250,000 - and that is not even taking into account the "beneficial" nature of a particular sequence relative to a particular environment/situation. Still, meaningful 7-character sequences are generally very interconnected, like a web made up of thin interconnected roads going around the large pockets of non-meaningful/non-beneficial potential sequences.

However, the exponential decrease in the ratio is obvious and the implications are clear. For higher and higher level functions, requiring larger and larger fairly specified sequences to code for them, the ratio of meaningful to meaningless becomes so small so quickly that when more than a few dozen characters are needed the interconnected roadways and bridges that connect various island-clusters of beneficial sequences start to snap apart. At surprisingly low levels of functional complexity this process isolates the tiny islands of beneficial sequences from every other island to such an extent that there is simply no way to reach these tiny isolated islands except to traverse the gap of non-beneficial sequences through a process of purely random change over time.
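The word-ladder arithmetic can be sketched in a few lines of code. This is a toy illustration only: the tiny word list below and the rough figure of ~1,000 meaningful 3-letter English words are assumptions made for the sake of the example, not a real dictionary count.

```python
# Tiny sample dictionary - an assumption for illustration; a real word
# list contains roughly a thousand meaningful 3-letter English words.
WORDS = {"cat", "bat", "bad", "bid", "big", "dig", "dog"}

def one_letter_apart(a, b):
    """True if a and b are the same length and differ at exactly one position."""
    return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1

def is_valid_ladder(steps, words):
    """Every rung must be a known word, and each step a single-character change."""
    return (all(w in words for w in steps)
            and all(one_letter_apart(a, b) for a, b in zip(steps, steps[1:])))

# A one-letter-per-step ladder from cat to dog, in the spirit of the
# example above.
print(is_valid_ladder(["cat", "bat", "bad", "bid", "big", "dig", "dog"], WORDS))

# The "1 in 18" ratio: 26**3 possible 3-character strings versus an
# assumed ~1,000 meaningful words.
total_sequences = 26 ** 3  # 17,576
meaningful = 1000          # rough assumption
print(round(total_sequences / meaningful))  # about 18, i.e. roughly 1 in 18
```

The exact ratio depends, of course, on which word list one uses; the point is only the order of magnitude, and how quickly it worsens as sequence length grows.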
With every additional step up the ladder of functional complexity, this gap gets wider and wider, in an exponential manner, until it is simply uncrossable this side of trillions upon trillions of years of average time. Natural selection is simply blind when it comes to crossing such gaps. Without the guidance of natural selection, this crossing takes exponentially greater amounts of time since the non-beneficial junk sequences of sequence space must be sorted through randomly before a very rare beneficial sequence is discovered by sheer luck (see link).
Of course, some have suggested to me that a single fortuitous insertion mutation, composed of just the right sequence of multiple characters, could cross a sizable gap between one island function and another far away island. Certainly this is true, but the problem here is that not just any multi-character sequence or insertion will do. This sequence has to be just right to work for many types of high-level functions. The odds that such a specific sequence will actually come along are extremely remote this side of trillions of years of time when the gap reaches sizes of just a few dozen or so average non-beneficial character differences. Then, even if the needed sequence did happen to arrive in the genome, it must be inserted in just the right place for it to work in a beneficial manner for a particular evolving function. The vast majority of potential insertion positions would be detrimental or at best neutral with respect to overall function.

So then, getting it "right" is not a simple matter. It literally requires trillions upon trillions of years of time, on average, to cross relatively small non-beneficial gaps. That's the problem in a nutshell. How exactly is this problem overcome?
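The arithmetic behind this "fortuitous insertion" scenario can be made explicit. The numbers below are hypothetical round figures chosen purely for illustration (a 30-residue fully specified insert and a genome with roughly four million candidate sites), and the model is the deliberately simplified one used in this essay: exactly one sequence and one insertion site will do.

```python
# Toy odds for a single "just right" insertion: the insert must match one
# specific amino acid sequence AND land at one specific genome position.
# All figures are illustrative assumptions, not measured biological values.
ALPHABET = 20              # size of the amino acid alphabet
GAP_WIDTH = 30             # residues of specified sequence (assumed)
GENOME_SITES = 4_000_000   # candidate insertion positions (assumed)

p_sequence = ALPHABET ** -GAP_WIDTH   # chance of drawing the right sequence
p_position = 1 / GENOME_SITES         # chance of the right insertion site
p_both = p_sequence * p_position

print(f"p(sequence) = {p_sequence:.1e}")
print(f"p(both)     = {p_both:.1e}")
```

Note that doubling the gap width squares the first factor, which is the exponential behavior the argument appeals to.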
The Eukaryotic Flagellum
The eukaryotic flagellum is quite a bit different from the bacterial or prokaryotic flagellum. To emphasize the significant underlying mechanistic differences, the eukaryotic flagellum is also called a "cilium" or "undulipodium". The only thing that the bacterial, archaeal, and eukaryotic flagella really have in common is that they project a whip-like structure from the cell that moves in such a way that it produces useful motility.
A eukaryotic flagellum is a bundle of nine fused microtubule doublets surrounding two central single microtubules. This so-called "9+2" structure is characteristic of the core of the eukaryotic flagellum, called an axoneme. At the base of a eukaryotic flagellum is a basal body or kinetosome (about 500 nanometers long), which is the microtubule organizing center for flagellar microtubules. These basal bodies are structurally identical to centrioles. The flagellum is encased within the cell membrane so that the interior of the flagellum is accessible to the cell's cytoplasm. Each of the outer 9 doublet microtubules extends a pair of dynein arms (an inner and an outer arm) to the adjacent microtubule. These dynein arms are responsible for flagellar beating since they cause the microtubule doublets to slide against each other and the flagellum as a whole to bend. The dynein arms produce this force through ATP hydrolysis. The flagellar axoneme also contains radial spokes (polypeptide complexes extending from each of the outer 9 microtubule doublets towards the central pair, with the "head" of the spoke facing inwards). The radial spoke is thought to be involved in the regulation of flagellar motion, although its exact function and method of action are not yet understood.
Although the eukaryotic flagellum is obviously very complex, this particular essay specifically discusses the bacterial rotary flagellum and its supposed evolutionary origin.
Evolving Highly Complex Functions
Now, some have argued that the actual minimum number is less than 50 genes, since certain types of bacteria can build useful flagellar systems with a bit less than the 30 parts usually listed. The following table lists 21 protein parts shared by widely different types of bacteria, including Aquifex aeolicus, Bacillus subtilis, Escherichia coli, and Treponema pallidum.15
- Motor and stator: MotA, MotB, FliG (C-term)
- MS-ring and C-ring: FliF, FliG (N-term), FliM/N
- Export apparatus: FlhB, FliQ, FliR, FliP, FliI, FlhA
- Rod: FlgB, FlgC, FlgG, FliE
- Hook and Adapters: FlgE, FlgL, FlgK, FlgD
It seems then that these 21 structural genes are at least close to the required minimum for useful flagellar function. Combined with the other genes needed to assist in the building of the flagellar structure, the bare minimum still seems to be around 35 to 40 unique genes. Clearly then, the flagellar system of motility is very informationally complex. To achieve the motility function the flagellum requires a minimum of several thousand amino acid residues working together in a very specific or "specified" arrangement relative to each other.
Many attempts to explain the stepwise evolution of such an obviously complex system have been proposed. Most are very superficial, leaping over huge evolutionary gaps, involving large changes of multiple proteins, with a wave of the hand. However, there are some better attempts. Perhaps one of the best attempts to explain flagellar evolution is that proposed by Nicholas J. Matzke in his 2003 paper, "Evolution in (Brownian) space: a model for the origin of the bacterial flagellum." 1
At the time, Matzke was a geography grad student at the University of California, Santa Barbara who had obvious passions outside of geography. In this paper Matzke suggests that the starting point for flagellar evolution probably began with a type III secretion system (TTSS).
So, if anything, it seems like the TTSS system would have evolved from the flagellum (which does in fact contain TTSS system-like subparts, such as a basal body that secretes various non-flagellar proteins - including virulence factors), and not vice versa.
Part of the evidence for this comes from the fact that the TTSS system shows little homology with any other bacterial transport system (at least 4 major ones). Yet, evolution is supposed to build upon what already exists. Since the TTSS system is the most complex of the bunch, why didn't it evolve from one of these less complex systems and therefore maintain some degree of homology with at least one of them? This evidence suggests that the TTSS system did not exist, nor anything homologous to it, in the "pre-flagellar era". It must therefore have arisen from the fully formed flagellum via the removal of pre-existing parts - and not the other way around. In fact, several scientists have started promoting this idea in recent literature.2-7 For example, consider the following 2008 article published by Toft and Fares:
"Genome shrinkage is a common feature of most intracellular pathogens and symbionts. Reduction of genome sizes is among the best-characterized evolutionary ways of intracellular organisms to save and avoid maintaining expensive redundant biological processes. Endosymbiotic bacteria of insects are examples of biological economy taken to completion because their genomes are dramatically reduced. These bacteria are nonmotile, and their biochemical processes are intimately related to those of their host. Because of this relationship, many of the processes in these bacteria have been either lost or have suffered massive remodeling to adapt to the intracellular symbiotic lifestyle. An example of such changes is the flagellum structure that is essential for bacterial motility and infectivity. Our analysis indicates that genes responsible for flagellar assembly have been partially or totally lost in most intracellular symbionts of gamma-Proteobacteria. Comparative genomic analyses show that flagellar genes have been differentially lost in endosymbiotic bacteria of insects. Only proteins involved in protein export within the flagella assembly pathway (type III secretion system and the basal body) have been kept in most of the endosymbionts, whereas those involved in building the filament and hook of flagella have only in few instances been kept, indicating a change in the functional purpose of this pathway. In some endosymbionts, genes controlling protein-export switch and hook length have undergone functional divergence as shown through an analysis of their evolutionary dynamics. Based on our results, we suggest that genes of flagellum have diverged functionally as to specialize in the export of proteins from the bacterium to the host." 13
In other words, the TTSS system is the result of a degenerative process, not a creative process that produced something structurally new, in a qualitative sense, that wasn't already there to begin with.
Still, it is so very handy to start one's explanation of a very complex system by beginning in the middle - or so it might seem at first. Of the 27 or so types of protein parts utilized in the flagellar structure, 10 of them are homologues to proteins in the TTSS. One of these 10 is the "FliI" protein. FliI is an ATPase that is anchored to the cytoplasmic face of the inner membrane and probably supplies energy for the export or transport of secreted proteins, which are selectively captured from the cytoplasm for the purpose of transport. Then, there are the proteins that make up the inner membrane transport apparatus and probably make up the protein-conducting channel. These include FlhA, FliP, FliQ, FliR, and FlhB. The flagellar homologue to the MS-ring is made by FliF and the homologue to the C-ring is made by FliN and FliG. The last protein part, FliH, regulates the activity of the FliI ATPase.

It seems then that most of these 10 flagellar homologues are required for TTSS function. So, the assumption of an intact proto-TTSS system is a nice big head start for explaining flagellar evolution. The fact is that the TTSS system is highly complex in its own right, and this only adds to the notion that the TTSS system did not evolve from a system of lesser complexity, but actually arose from a system of much higher complexity (the fully formed flagellum) via a process of removing pre-established parts - not the addition of new parts. Obviously, it is much easier to take parts away and maintain lower-level sub-functions that are already there than to add new parts to lower-level functions to gain beneficial higher-level functions that are, as yet, unrealized.

Add to this the fact that some of the homologues between the flagellar system and the TTSS system are not all that homologous. The homologue to FliN in TTSS is only homologous to ~80 C-terminal residues of flagellar FliN (out of 137aa). There is very little FliG similarity, and TTSS FliF is missing the C- and N-terminal domains that are involved in forming the MS ring. What is left of FliF is about 90 out of over 550 amino acid residues. What this means is that the TTSS system cannot rotate. Evolving the ability to rotate would involve the addition of a very large number of specifically sequenced residues.
In short, the function of the TTSS system itself is very difficult to explain using mindless evolutionary mechanisms. I have yet to see a reasonable attempt to explain how a TTSS system could have evolved with neutral gaps small enough to be crossed by random mutations of any type.
Matzke and other evolutionists approach such problems by suggesting that some as-of-yet undiscovered homologue to the flagellar secretory apparatus may one day be found. Matzke explains:
If type III virulence systems are derived from flagella, what is the basis for hypothesizing a type III secretion system ancestral to flagella? The question would be resolved if nonflagellar homologs of the type III export apparatus were to be discovered in other bacterial phyla, performing functions that would be useful in a pre-eukaryote world. That such an observation has not yet been made is a valid point against the present model, but at the same time serves as a prediction: the model will be considerably strengthened if such a homolog is discovered. For the moment, it is easy enough to explain the lack of discovery of such a homolog on the basis of lack of data.1
So, for the moment, the evidence for the evolution of the very first step in flagellar synthesis is safely hidden behind a "lack of data"? Where is the "detailed" explanation of flagellar evolution in that? Well, Matzke and others envision what might have taken place to form the first proto-TTSS system. The origin of this proto-TTSS system begins with the homologue to FliF, a passive pore protein complex. FlhB, the protein complex that controls the type of proteins secreted through the pore, is somehow attached to FliF. FlhA, whose function is unknown, was also added so that together with FlhB, a general transporter pore could be turned into a substrate-specific transporter. Where FlhB or FlhA might have come from, or what other jobs they might have had, is not discussed, nor is it clear how their selective abilities would necessarily have been helpful, especially if the wrong proteins were selected. In any case, once the FlhB and FlhA are combined with FliF, some power is needed for active transport. In comes FliI to the rescue. The proposal is that the F1-αβ ATPase, a heterohexamer made up of alternating α-subunits (noncatalytic) and β-subunits (catalytic) found in many types of bacteria, evolved from a common ancestor of FliI - a homohexamer made up of catalytic subunits and the power source for type III export. FliI shares ~30% homology with the F1 subunit of F1F0-ATP synthetase. No detailed account of how this could have happened, mutation by mutation, is given. It is simply assumed to have happened. Perhaps, given enough faith in evolution as a creative force, no real detail is needed here?

Of course, given that FliI can be made, it is easy to get the FliI power source to attach to the FliF pore - right? Not so fast. FliI cannot attach to FliF directly. Another protein called "FliH" is required to get the FliI ATPase to stick to FliF. Where FliH came from, or how it might have miraculously evolved the ability to stick to both FliI and FliF in just the right way, is not explained. But it gets worse. Another protein complex, known as "FliJ", is required to interact with the FliI ATPase and FliH before any flagellar components can be exported. So, for active protein export to be achieved by TTSS, three protein parts needed to be arranged just so (FliI, FliH, and FliJ) - and this is just the imaginary version. The other parts of the secretory apparatus, FliOPQR, are simply not discussed in this "detailed" step-by-step model of flagellar evolution owing to a "lack of data."
On top of this, what about the argument that similarities between proteins of the F1F0-ATP synthetase and the flagellar type III export apparatus support the notion that they share a common early ancestor? In the very same breath Matzke adds, "Individually, the cited similarities are easily attributable to chance, but together they are at least suggestive."1 This sounds to me like some rather large gaps are at least potentially present in the proposed pathway already. At least these gaps are not discussed in any sort of detail by Matzke's model nor any other model that I am aware of. Those steps, which seem to require hundreds of fairly specific genetic differences, are simply passed over with a wave of the hand and the conclusion that,
"The key event in the origin of type III export was the association of a primitive F1F0-ATP synthetase with a proto-FlhA or FlhB inside the proto-FliF ring, converting it from a passive to active transporter. Since little is known about the details of the coupling of ATPase activity to protein export in Type III export, this step remains speculative."1
No kidding! And I thought this was supposed to be a "detailed" discussion of flagellar evolution? So far, it seems to be little more than fairly superficial speculation.
Although alluded to earlier in this essay, the term "irreducible complexity" was originally defined by Behe as:
"A single system which is composed of several interacting parts that contribute to the basic function, and where the removal of any one of the parts causes the system to effectively cease functioning".16
Behe chose the flagellar motility system as one of his examples of an irreducibly complex system. However, the following is an interesting reaction to Behe's concept of irreducible complexity - suggesting that the flagellar system really isn't irreducibly complex at all:
Mainstream scientists regard this argument as having been largely disproved in the light of fairly recent research. They point out that the basal body of the flagella has been found to be similar to the Type III secretion system (TTSS), a needle-like structure that pathogenic germs such as salmonella use to inject toxins into living eukaryote cells. The needle's base has many elements in common with the flagellum, but it is missing most of the proteins that make a flagellum work. Thus, this system seems to negate the claim that taking away any of the flagellum's parts would render it useless. This has caused [Kenneth] Miller to note that, "The parts of this supposedly irreducibly complex system actually have functions of their own." 17,18
What "mainstream scientists", like Kenneth Miller, don't seem to understand is that all systems of function are irreducibly complex regardless of whether or not a working subsystem can be found within the larger system. The flagellar motility system still requires at least 35-40 genes producing a structure with at least 21 different specifically arranged proteins each requiring a minimum of hundreds of specifically arranged amino acid residues in order for the function of flagellar motility to be realized at all - even a little bit. Just because one or more subsystems may be found within the overall requirements needed to build a flagellar motility system, such as a TTSS system, does not remove the fact that the flagellar system still has a minimum structural requirement that cannot be reduced beyond a high threshold point without complete loss of the flagellar motility function. System reduction may leave the TTSS system intact, since the TTSS system has a much lower minimum threshold structural requirement. However, having the TTSS function in place, does not mean that the flagellar function will also be in place.
Now, one might be able to build higher-level functional systems, systems that require greater and greater minimum structural requirements, with the use of pre-established smaller systems that are already available. However, this potentiality does not remove the fact that higher-level systems have greater minimum size and specificity requirements before they can be realized at all - even a little bit. All types of functions have their own minimum requirements. These minimum requirements are not all the same. And, it is this difference in minimum requirements that makes all the difference.
The real question is, can irreducibly complex systems be built up with the use of what is already in the gene pool? And, if so, is it equally likely to end up with functions at different levels of minimum size and specificity requirements?
It is my position that functional systems that require a minimum of only a few dozen amino acid residues in a fairly specific orientation can be evolved in relatively short order (just a few generations for a colony of a few billion bacteria). However, the likelihood that higher and higher level functions will be within reach in such a short amount of time decreases in an exponential manner with each step up this ladder of irreducible functional complexity.
This notion is borne out in the literature. There are a whole lot of examples of evolution "in action" when it comes to functions that require a minimum of only a few dozen residues or where the residue positions need not be very specified (antibiotic resistance, improved immune system specificity, phage infectivity, etc.). However, when it comes to functions that require a minimum of a few hundred fairly specified residues working together at the same time (as in single-protein enzymes like lactase, nylonase, etc.), the number of examples drops off dramatically and the number of bacterial gene pools that are capable of evolving functions at this level, even in a highly selective environment, also drops off exponentially.
When one gets to the level of functions that require just 1,000 fairly specified residues working together at the same time, there simply are no examples of evolution "in action" mentioned in the literature - none at all. All that there are, at this point, are stories about how evolutionary mechanisms of random mutation and function-based selection must have done the job. That's it, just stories based on nothing more than assumptions. There are no actual observations of evolution in action beyond this point - not one example. There are also no serious attempts to calculate the odds of evolution happening at such levels within the proposed few billion years of time that the evolution of life has supposedly taken place on this Earth.
The paper discussed here, by Matzke, is no exception. Matzke makes no attempt to calculate the odds of evolution crossing any of his proposed steps in the flagellar evolution pathway. He simply relies, as do other mainstream scientists, on the notion that sequence similarities could only be the result of an evolutionary relationship. Statistical calculations concerning the ability of random mutations and function-based selection to actually make it across these proposed steps just don't seem to be needed by mainstream scientists. Why calculate the odds when the story seems so good?
Perhaps, just perhaps, there is a little problem with these stories.
There seems to be a linearly expanding gap problem between what is and what might be. Each step up the ladder of functional complexity results in a linear expansion of the non-beneficial gap between what exists in a gene pool and the next closest potentially beneficial genetic sequence(s) in the vastness of "sequence space". Each linear expansion in gap distance, as defined by the number of residue changes that would be needed to achieve the new function, results in an exponential increase in the number of random mutation and selection steps that would be needed - on average. Of course, this results in an exponential increase in the average time required to find a new beneficial functional sequence at higher and higher levels of minimum functional complexity.
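The linear-gap-to-exponential-search relationship described above reduces to simple arithmetic. The sketch below uses this essay's deliberately simplified model (a 20-letter amino acid alphabet and a blind search over every possible sequence spanning the gap); it is not a biological simulation.

```python
# Linear growth in gap width multiplies the unguided search space by a
# constant factor (the alphabet size) - i.e., exponential growth overall.
ALPHABET = 20  # amino acid alphabet size

def search_space(gap_width):
    """Number of possible residue sequences spanning a gap of this width."""
    return ALPHABET ** gap_width

for gap in range(1, 6):
    growth = search_space(gap + 1) // search_space(gap)
    print(gap, search_space(gap), growth)  # growth factor is always 20
```

Each one-residue widening of the gap multiplies the space to be searched by 20, so the average number of random steps (and hence average time) grows exponentially while the gap itself grows only linearly.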
The "Little" Steps Up the Mountain
Matzke's summary of the evolutionary model for the origin of the flagellum, showing the six major stages and key intermediates. White components have identified or reasonably probable nonflagellar homologs; grey components have either suggested but unsupported homologs, or no specific identified homologs, although ancestral functions can be postulated. The model begins with a passive, somewhat general inner membrane pore (1a) that is converted to a more substrate-specific pore (1b) by binding of proto-FlhA and/or FlhB to FliF. Interaction of an F1F0-ATP synthetase with FlhA/B produces an active transporter, a primitive type III export apparatus (1c). Addition of a secretin which associates with the cytoplasmic ring converts this to a type III secretion system (2). A mutated secretion substrate becomes a secreted adhesin (or alternatively an adhesin is coopted by transposition of the secretion recognition sequence), and a later mutation lets it bind to the outer side of the secretin (3a). Oligomerization of the adhesin produces a pentameric ring, allowing more surface adhesins without blocking other secretion substrates (3b). Polymerization of this ring produces a tube, a primitive type III pilus (4a; in the diagram, a white axial structure is substituted for the individual pilin subunits; all further axial proteins are descended from this common pilin ancestor). Oligomerization of a pilin produces the cap, increasing assembly speed and efficiency (4b). A duplicate pilin that loses its outer domains becomes the proto-rod protein, extending down through the secretin and strengthening pilus attachment by association with the base (4c). Further duplications of the proto-rod, filament, and cap proteins, occurring before and after the origin of the flagellum (6) produce the rest of the axial proteins; these repeated subfunctionalization events are not shown here.
The protoflagellum (5a) is produced by cooption of TolQR homologs from a Tol-Pal-like system; perhaps a portion of a TolA homolog bound to FliF to produce proto-FliG. In order to improve rotation, the secretin loses its binding sites to the axial filament, becoming the proto-P-ring, and the role of outer membrane pore is taken over by secretin's lipoprotein chaperone ring, which becomes the proto-L-ring (5b). Perfection of the L-ring and addition of the rod cap FlgJ muramidase domain (which removes the necessity of finding a natural gap in the cell wall) results in 5c. Finally, binding of a mutant proto-FliN (probably a CheC receptor) to FliG couples the signal transduction system to the protoflagellum, producing a chemotactic flagellum (6); fusion of proto-FliN and CheC produces FliM. Each stage would obviously be followed by gradual coevolutionary optimization of component interactions. The origin of the flagellum is thus reduced to a series of mutationally plausible steps.1
Given the TTSS system as a starting point, regardless of the tenuousness of this hypothesis, the next steps in the evolution of the flagellum should be fairly straightforward - right? With just a few residue changes here and there, the pathway of improved beneficial function should be made up of neat, closely spaced steppingstones.
Consider that Matzke's proposed scenario is one of the most detailed descriptions that I have come across - as superficial as it is. Necessary parts just pop into existence and easily attach to each other in just the right way. No detailed discussion concerning the significant modifications that would be needed for such specific attachments to be realized to a beneficial degree is ever presented. Matzke's discussion is a gross underestimate of the complexity involved in moving from one beneficial state to the next along his proposed evolutionary pathway. ( Back to Top )
To go into a bit more detail, the next step, given the pre-existence of a TTSS-like system, is the addition of a filament. Matzke and many others argue that simple protein-based filaments are easy to make - pointing to the polymerization of hemoglobin in sickle cell patients as a result of a single point mutation (like changing a single letter in a paragraph and getting a brand new function). This sort of thinking understates several rather specific requirements needed to form a useful filament of any sort.
For instance, the parts of a random filament, like those that form sickled hemoglobin, are very likely to aggregate into clumps or long tangled strands before they are transported through any sort of pore to the outer surface of the cell. Obviously that wouldn't be helpful. Also, even if such filament monomers do make it to the outer surface without clumping up, they have to preferentially stick to the right place. That requires fairly specific binding features. What are the odds that such a random filament monomer will also have such binding features? These odds translate into an enormous amount of time - on average.
And, there are a lot of other potential problems for average filament monomers. What about degradation? What about transport to the channel and the selectivity of channel uptake? What about sticking to the inside of the channel and clogging up the pathway? What if the filament ended up forming a solid core instead of a hollow core? How would more filament parts get through to get added to the tip? What if the tip was not capped with a different type of protein that placed each new filament protein part in the right spot? What are the odds that just any filament stuck onto the export machinery is going to be "beneficial" in a given environment? - even as a "simple" anchoring filament?
So, not only do the more and more "special" filament parts have to stick to themselves as well as to the secretory apparatus in just the right way, they must form a filament where the distal tip is able to stick to something other than itself and its own host bacterial surface. On top of everything else, that might be just a bit tricky to achieve. What are the odds that a gene able to code for such specialized filament proteins will just happen to come along and be secreted in a specific way by an existing active transport pore? ( Back to Top )
The "Simple" P Pilus

In order to even begin answering this question, let's consider what it takes to make even the most simple useful bacterial "filament" - something like the P pilus.
The P pilus functions as an attachment anchor between bacterial cells and host cells. It is a thin hollow filament that thins near the tip. On this thinner tip is a protein that specifically binds to certain types of sugar molecules on certain types of cells (like kidney cells). Even though this pilus is about as simple as it gets in real life and even though its function seems quite mundane, it is coded for by around 10 or 11 genes - as many genes as code for the obviously complex type III secretory system. The thicker proximal portion is formed by PapA protein parts, the thinner distal portion by PapE parts, and the very tip by PapG (the specific "adhesin" that binds to sugars). There is also an adaptor protein, PapF, that binds PapG to PapE, and another, PapK, that binds PapE to PapA.14 That's a total of 5 different proteins that come together in a very specific order. How is this done? Well, it is done with a rather complicated interaction of "chaperone" proteins.

But first, the cell has to make a multiprotein export pathway called the sec pathway, which dumps cytoplasmic material into the periplasmic space. The trick for gram-negative bacteria "wanting" to grow a pilus is to get a filament to penetrate the outer membrane. This takes some fancy coordination. First, all the pilus subunits are preferentially exported, in an unfolded state, into the periplasm through the sec pathway, where they must fold. However, if left to themselves, they would form disorganized clumps. So, a chaperone protein, PapD, is required to prevent this clumping problem and to aid in proper folding conformation with the use of donor strand complementation (DSC). The filament parts, by themselves, are very unstable and never fold properly. And, PapD has no other known function.
Next, the pilus subunit-chaperone complex specifically interacts with a protein channel on the outer membrane known as PapC. This channel is large enough for the tip of the filament to go through, but not the proximal part. PapD, the chaperone, hands off the pilus subunit to PapC, which then aids in its attachment to the growing filament where each subunit contributes a strand to perfectly complete the fold of its neighbor, thereby stabilizing it.14,15
Of course, there are various numbers of structural subunits involved with P-pilus functionality. P-fimbriae (or P-pili) have the most structural subunits in E. coli bacteria (with 9 structural subunits). However, type 1 fimbriae are built up from 7 subunits, and the more recently characterized Yqi fimbriae display a generic genomic organisation of just 4 structural subunits (a major subunit, chaperone, usher, and adhesin).19 Note also that E. coli possess numerous cryptic fimbrial operons with varying numbers of structural subunits. Yet, "all fimbrial adhesins share common genetic organization, in that the adhesin regulatory genes precede the major subunit gene, which is followed by the periplasmic chaperone, outer membrane usher, and finally the adhesin genes. This gene cluster organization is seen in type 1 fimbriae (fim), S fimbriae (sfa), F1C fimbriae (foc) and the Dr-antigen recognizing fimbriae (dra). Organization of the yqi adhesin gene cluster differs by having the positions of the usher and chaperone inverted, which is also true for the pap adhesin gene cluster of the P fimbriae although the reason for this, be it functional or biological, is still unclear."19
In short, all functional bacterial pili or fimbriae share the same basic necessary building blocks and same basic necessary structural programming - which is rather complex even in the most simple known examples of functional fimbriae.
So, even something as relatively "simple" as building a pilus seems rather complicated in comparison to the proposed evolutionary step of filament evolution. It's just very difficult to make a "useful" filament - it would seem. But, let's say that some such filament does happen to evolve somehow. How is it going to evolve into a flagellum? A flagellum needs to be able to secrete proteins in order to be built. The problem is, no P-pilus has been shown to secrete proteins - perhaps because of the small channel size or the lack of an associated energy source for pumping proteins out. In any case, all such pili are very different from the flagellum in one very important respect. Such pili are built from the top down, where each new monomer that is added pushes the existing pilus up and out. Flagella, on the other hand, are built from the bottom up, where each new monomer is added to the tip as the tip grows outward on the existing flagellum (see the animation of flagellar assembly above).
The late Robert Macnab, a former professor of molecular biophysics and biochemistry at Yale University who also studied flagella, noted that the mechanism of flagellar assembly is, "a much more sophisticated process than any of us could have envisaged."8 He went on to note that, "We think it would not be possible for the system to work with any significantly lower complexity."9
An interesting non-motile type of bacteria, known as Shigella, has flagellar genes, but makes no flagella. Some Shigella strains have more missing genes than other strains, but in certain strains, the only gene missing is the FliD gene. This FliD gene codes for the vital filament cap protein. Without the FliD cap protein at the tip of the flagellar filament, the flagellin monomers (FliC) that form the filament fall away. Not only that, but without FliD, the FliC parts simply would not assemble properly (see animations by Keiichi Namba et al.12).

The FliD cap looks like a pentagon-shaped ring sitting atop the hollow flagellar filament. Each one of the 5-part FliD pentamer units has a leg-like projection that points downward and interacts tightly with the filament monomers. However, there is a slight mismatch. The cap has 5 legs, but the end of the filament has 5.5 subunits in its circumference. So, there is always a little crevice at the interface between the cap and the filament. The next subunit gets added to the growing filament at this site. As the new subunit is added to the open spot, the cap is rotated so that a new crevice opens up adjacent to the one that was just filled. So, as the cap spins round and round, at 10 rotations per second, new flagellin monomers (FliC) are added, one-at-a-time, 50 per second.8
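The assembly rate follows directly from the cap's geometry. As a purely illustrative back-of-the-envelope check (the rotation rate and leg count come from the text above; the script is just arithmetic, not a simulation of the real cap):

```python
# Illustrative arithmetic only: numbers are taken from the text
# (one FliC monomer added per leg-crevice per rotation).

cap_legs = 5                 # FliD pentamer: 5 leg-like projections
rotations_per_second = 10    # observed cap rotation rate

# One monomer is slotted in at each crevice as the cap rotates:
monomers_per_second = cap_legs * rotations_per_second
print(monomers_per_second)   # -> 50, matching the ~50 subunits/s figure
```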
What is most interesting about all of this is that the ends of the flagellin subunits are unfolded as they travel down the hollow filament tube. One of the reasons for this is that folded flagellin has a big kink in the middle that makes it too big to travel through the tube. By themselves, the flagellin subunits cannot fold properly. So, the FliD cap is required to both fold and place the flagellin monomers. In other words, it is a type of chaperone protein. In addition, the hollow area just below the cap is about twice the size of the rest of the tube and just large enough to allow for the folding of one monomer subunit. The spinning of the cap, combined with favorable protein-protein interactions, provides the energy for this folding process - since there is no ATP involved.8
In short, without this highly specialized cap, the flagellin units cannot self-assemble to form such an orderly filament at all. And, neither the cap protein nor the flagellin monomers have any other cellular function. Beyond this, how is it that the cap gets placed in the right position at the growing tip of the filament, and that no other cap monomers are sent down the tube once this is done? Again, a specific chaperone is required for cap assembly and prevention of untimely aggregation.
To counter this argument, the assertion is made that, since the FliL and FliK flagellar protein tubular units do not need a cap for proper assembly, the addition of a cap to the system was a late evolutionary modification for improved speed and efficiency.1 One potential problem is that FliL and FliK are only linking proteins. They link the hook-part of the flagellum (FlgE) to the rest of the flagellum (FliC) (see animation above). They do not form flagella by themselves. Even if they did, this would not explain how the flagellin (FliC) units, in particular, could have self-assembled without a cap, or how they could have evolved without co-evolution of the very specific FliD cap - involving a large number of highly specified residue changes for minimum selective advantage.
But what about the "cap first" hypothesis, in which the cap evolves first, because of its adhesive properties, and is improved upon by further addition of pilus proteins which extend the cap outward from the cell? Again, how long would it take to come up with a flagellar protein specific enough to interact with such a cap in such a complex manner? The explanations get no more detailed than this.
If such evolutionary steps were so easy to cross, they could be easily tested in the lab. Simply delete the flagellin FliC gene in a bacterium and see if its descendants will re-evolve the flagellum under the pre-established cap. As far as I am aware, no such experiments have ever been successful. As previously mentioned, the same thing is true of bacteria that do not have the FliD cap gene, like Shigella. These bacteria may have all the other flagellar genes, but have lost the cap gene - and they can't make a flagellum, nor have they evolved back the cap gene. Why not? ( Back to Top )
Motorizing the Flagellum
OK, let's say that somehow an early colony of bacteria was in fact able to evolve a proto-TTSS system and a proto-flagellar filament system where each system was independently functional in some beneficial manner. At this point, Matzke argues that it would be a very simple thing to stick these two systems together to gain flagellar motility.
In looking into this notion, let's do just a bit of review. Remember that the flagellar motor is indeed broken down into two basic units - the stator and the rotor. The stator is composed of motA and motB subunits (each comprised of approximately 300 residues). The rotor is composed of FliM (~330aa), FliN (~130aa), and FliG (~330aa). All 3 rotor components are involved in flagellar assembly. The C-ring formed by these components acts as a sort of measuring cup that determines the size of the hook filament. What happens is that approximately 120 hook monomers bind to FliM, FliN and FliG, (4 binding sites each). When all the binding sites are filled, all the monomers are released at once and a "hook" segment of a specific length is formed. After the hook monomers leave the C-ring, another protein enters and converts the C-ring from a hook-monomer-secretor to a flagellin-monomer-secretor. The specificity of the C-ring changes with regard to which monomers it accepts.
So, FliG is important in flagellar assembly in that the 200 N-terminal residues of FliG seem to be required. In fact, if one divides up the 331aa of FliG into segments of 10aa each, deletion mutations of segments 11, 13, 16, 17, 20, 21, and 27 result in a lack of adequate flagellum formation and obviously of the motility function. Also, those bacteria with deletions of segments 1, 3, 12, 14, 15, 22, 23, and 26 of FliG are completely "nonflagellate".10
This means that Matzke's assertion that FliG, as part of the proto-secretion complex, is "retained only in order to stabilize/support the coadapted secretion complex and the FliF ring, and [is] otherwise vestigial" is complete nonsense. FliG is vital to secretion and has nothing to do with FliF stabilization (FliF has been shown to be quite independently stable). It is just that FliF without FliG cannot form an adequate flagellum.
Of course, the FliG protein (with no significant homologous counterparts by the way) is also the subpart responsible for converting the proton-motive force into torque forces for the rotating motion of the flagellum. The ~100 C-terminal residues seem to be required for this function to be realized. In addition, specific mutations to segments 10, 18, 19, 24, 25, 28, 29 and 31 formed flagella, but were paralyzed.10
As far as the rotary function is concerned, FliM and FliN are responsible for switching the motion from one direction to the other - not for the actual creation of the torque forces. However, FliM and FliN are still necessary for flagellar assembly.
Now, let's talk about FliF (~550aa central MS-ring membrane pore complex) for a minute. FliF has no known homologues outside of TTSS systems (which are thought to have evolved from the flagellar system - not the other way around). Even given its proto-form existence, explaining how a proto-flagellum/filament could get stuck to it in a beneficial manner is quite a challenge. The assembly of even the simplest filaments is quite involved, as described above. Various chaperone proteins are involved in bringing specific monomers into place at just the right time and folding and attaching them together just so. The building of an apparently simple pilus is extremely complex. The building of a hollow flagellum where the flagellum is formed by adding monomers to the distal tip is extraordinarily complicated.
Given these few facts presented so far, I have just a few questions. Matzke suggests that FliG didn't need to evolve with FliF as part of the export apparatus. How is this explained if FliG is currently required for flagellar assembly? If FliG did not evolve with FliF, then wouldn't it need not only to bind strongly to FliF in a manner that overcomes the shear forces of the spinning FliG, but also in a way that aids in flagellar assembly? Not only does FliG have to bind to FliF, but it also has to have specificity in place for a certain type of filament monomer. I mean, without the flagellin specificity of FliG, the flagellum does not form. When the flagellum is first starting to form in real life, the MS-ring (FliF) and the C-ring (FliG N-term + FliN + FliM) must form first or the flagellum will not form. That seems just a bit hard to explain using evolutionary mechanisms.
Oh, but maybe it would be easier if FliG was already bound to FliF? - If FliG originally evolved with FliF? Then it would already have filament monomer specificity in place and the flagellar filament could already be in place - right? But, how would motA/B bind to FliG in a beneficial manner then? Quite a number of very specific residues have to be aligned just right in order for the proton-motive force of motA/B to be transferred into FliG torque power - and that is in addition to the simple preferential binding of motA/B to FliG + FliF - right?
In short, either way one looks at it there is more that is needed than simple FliF-FliG binding. What good would FliG specificity be for flagellin if it were not bound to FliF first? And, what good would FliG specificity be for motA/B proton-motive force if it were not bound to motA/B first? This specificity would most certainly involve quite a few additional residue position differences starting from the original "proto" forms. And, most likely, these required differences would not be sequentially beneficial in a way where natural selection could guide them along.
Beyond this, a selectable degree of non-covalent binding is not going to happen between FliG and FliF with just one or two correct residue positions in place out of the 46 fairly specific residue positions used to link up FliG with FliF in modern flagella. To overcome the buffeting effects of Brownian motion, the flagellum needs to spin very rapidly (~100-300 rotations/second for 3-4 seconds). This means that a lot of inertial and shear forces must be overcome to keep FliG connected to FliF. A significant number of the 46 attachment "bolt-like" residues would have to be in place, all at the same time, in order to overcome these shear forces to any selectable degree. In fact, deletion experiments suggest that only N-terminal segment 4 of FliG can sustain significant change without a complete loss of motility. Mutations in the first 3 N-terminal segments (~30aa) resulted in a complete loss of motility - obviously due to a lack of sufficient binding strength to FliF and/or a lack of ability to aid in the formation of the flagellum.10
However, it just so happens that the genes for FliF and FliG are located right next to each other in the genome. Certain deletion mutations between FliG and FliF result in a fusion protein, a covalently bound FliG/FliF protein that does in fact work fairly well. Clearly a covalent bond is much stronger than a non-covalent bond, so the need for dozens of non-covalent bonds to be in place is removed. Although the covalently bound fusion protein doesn't work as well as the non-covalently bound wild-type system, it works well enough to get the job done.
Because of this ability to covalently bind FliG to FliF, without the need to get dozens of sequences just right, some have told me that this makes it easy to get the two independently beneficial lower-level systems (i.e., the motor and the rotor) to bind together to give rise to the much higher-level system of flagellar motility. This simply isn't true because of the multifunctional need for FliG in both systems at the same time - as described above.
Deletion studies done with FliF mutations show that a "short C-terminus stretch" of 9 "core" amino acid residues is required for "flagellar assembly". Note that this assembly process is going on at a time when the motor is turned off and no FliG rotation is going on. The authors go on to state that, "Removal or substitution of up to 10 amino acids immediately upstream of the core region resulted in a paralyzed flagellum."11 That sounds quite specified. The authors said that removal or substitution of 10 additional residues resulted in flagellar paralysis. It seems then that flagellar rotation requires something structurally beyond what flagellar formation required. A total of around 19 fairly specified residues of the FliF protein need to be in place for both flagellar assembly and motility to be realized. ( Back to Top )
Trillions upon Trillions of Years
A non-beneficial gap of just a couple dozen specific residues required at a specific position in the genome may not sound like much at first glance, but such a gap would literally take trillions upon trillions of years of average time for a population of all the bacteria on Earth (~1e30 individuals) to cross (see the calculation in the appendix below). In fact, not a single evolutionary step proposed by Matzke or anyone else has ever been demonstrated to be "crossable" in any laboratory experiment - not one. Without the ability to test such stories in the laboratory, they are simply not falsifiable and therefore are, by definition, not supported by the scientific method. It may seem strange for many to even consider this, but such statements concerning the evolution of complex functions, on the order of flagellar system complexity, are not scientific at all - they aren't even theory. At the very best they are untested and perhaps untestable propositions. Simply put, these "stories" about flagellar evolution are just that - fairytale stories. And, when examined in closer detail, they don't even look good on paper.
It just seems a bit more complicated than Matzke and other evolutionary scientists seem to be letting on. Consider this most interesting conclusion of Lynn Margulis, also noted in an interesting review of Matzke's work by William Dembski:
"Like a sugary snack that temporarily satisfies our appetite but deprives us of more nutritious foods, neo-Darwinism sates intellectual curiosity with abstractions bereft of actual details -- whether metabolic, biochemical, ecological, or of natural history." (Acquiring Genomes, p. 103.)13
( Back to Top )
1. Nicholas Matzke, Evolution in (Brownian) space: a model for the origin of the bacterial flagellum, talkreason.org, 2003 ( http://www.talkreason.org/articles/flagellum.cfm )
2. Anand Sukhan, Tomoko Kubori, James Wilson, and Jorge E. Galán. 2001. Genetic Analysis of Assembly of the Salmonella enterica Serovar Typhimurium Type III Secretion-Associated Needle Complex. J. Bacteriology 183:
3. Macnab, R. M., 1999. The bacterial flagellum: reversible rotary propellor and type III export apparatus. J. Bacteriology 181 (23), 7149-7153.
4. He, S. Y., 1998. Type III protein secretion in plant and animal pathogenic bacteria. Annual Reviews in Phytopathology 36, 363-392.
5. Kim, J. F., 2001. Revisiting the chlamydial type III protein secretion system: clues to the origin of type III protein secretion. Trends Genet. 17 (2),
6. Plano, G. V., Day, J. B. and Ferracci, F., 2001. Type III export: new uses for an old pathway. Mol Microbiol. 40 (2), 284-293.
7. Nguyen, L., Paulsen, I. T., Tchieu, J., Hueck, C. J. and Saier, M. H., Jr., 2000. Phylogenetic analyses of the constituents of Type III protein secretion systems. J Mol Microbiol Biotechnol. 2 (2), 125-144.
8. Macnab, R. M., Science 290, p. 2087
9. Macnab, R. M., Bacteria create natural nanomachines, USA Today, 2005 ( http://www.USAtoday.com/weather/science/aaas/flagella121500.htm )
10. May Kihara, Gabriele U. Miller, and Robert M. Macnab, Deletion Analysis of the Flagellar Switch Protein FliG of Salmonella, J. Bacteriol. 2000 June; 182(11): 3022-3028. ( http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=94485 )
11. Bjorn Grunenfelder, Stefanie Gehrig, and Urs Jenal, Role of the Cytoplasmic C Terminus of the FliF Motor Protein in Flagellar Assembly and Rotation, Journal of Bacteriology, Mar. 2003, p. 1624-1633, Vol. 185, No. 5 ( http://www.pubmedcentral.nih.gov/picrender.fcgi?artid=148050&blobtype=pdf )
12. All animations presented here are the amazing work of Keiichi Namba et al. of the ERATO Protonic NanoMachine Project ( http://www.npn.jst.go.jp/index.html )
13. William Dembski, Biology in the Subjunctive Mood: A Response to Nicholas Matzke, personal website, 2003. ( http://www.designinference.com/documents/2003.11.Matzke_Response.htm )
14. Yvonne M. Lee, Patricia A. DiGiuseppe, Thomas J. Silhavy, and Scott J. Hultgren, P Pilus Assembly Motif Necessary for Activation of the CpxRA Pathway by PapE in Escherichia coli, Journal of Bacteriology, July 2004, p. 4326-4337, Vol. 186, No. 13 ( http://jb.asm.org/cgi/content/full/186/13/4326 )
15. Special thanks to Mike Gene for the excellent information provided on the topic of flagellar evolution on his website: ( http://www.idthink.net/ )
16. Behe, Michael (1996). Darwin's Black Box. New York: The Free Press. ISBN 0-684-83493-6
17. Wikipedia, Irreducible Complexity, last accessed 9/28/06 ( Link )
18. Miller, Kenneth R., The Flagellum Unspun: The Collapse of "Irreducible Complexity", with reply here (last accessed 9/28/06)
19. Signature-Tagged Mutagenesis in a Chicken Infection Model Leads to the Identification of a Novel Avian Pathogenic Escherichia coli Fimbrial Adhesin. PLoS ONE 4(11): e7796, 2009. doi:10.1371/journal.pone.0007796 ( Link )
( Back to Top )
Calculation of "Trillions upon Trillions of Years"
Functional Complexity Paper (MS Word File): Download Link
Take a population of bacteria the size of all the bacteria that currently exist on the entire Earth - about 1e30 bacteria. Let's say that this steady-state population produces a new generation every 20 minutes and has a mutation rate of 1e-8 per codon position - given a genome per bacterium of 10 million codons. How long would it take such a population to find a new beneficial function at the level of 1,000 fairly specified amino acid residues?
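For a sense of scale, the stated parameters can be plugged in directly. This sketch (assuming the figures given in the text: 1e30 bacteria, 20-minute generations, 1e-8 mutations per codon per generation, 1e7 codons per genome) just tallies the raw mutational throughput of such a population; it is not the full waiting-time calculation:

```python
# Parameters as stated in the text (illustrative only)
population = 1e30          # bacteria on Earth (text's estimate)
genome_codons = 1e7        # codons per genome (10 million)
mutation_rate = 1e-8       # mutations per codon per generation
generations_per_year = 365 * 24 * 3   # one generation every 20 minutes

# Total codon mutations produced by the whole population each generation:
mutations_per_generation = population * genome_codons * mutation_rate
print(f"{mutations_per_generation:.0e}")   # -> 1e+29

# And the total mutational throughput per year:
print(f"{mutations_per_generation * generations_per_year:.1e}")  # -> 2.6e+33
```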
First off, what are "fairly specified" amino acid residues? This is a measure of the sequence flexibility that can be tolerated by a functionally beneficial system. Examples include enzymes like lactase, nylonase, or penicillinase, or other types of functionally beneficial proteins like cytochrome c (CytoC), which helps make energy as part of the electron-transport chain in mitochondria. They also include systems that require multiple specifically arranged proteins working together at the same time - like rotary flagellar motility systems. Each of these types of functional protein-based systems has a certain degree of flexibility that can be tolerated without a complete loss of the function in question. However, this flexibility has a certain limit. Some functional systems are very flexible while others are very constrained. CytoC, in particular, is more constrained than average and therefore has a fairly high degree of sequence/structure "specificity".
Functional Sequence Complexity:
Some authors, like Durston et al., call such limits on sequence flexibility "functional sequence complexity" or "FSC".1 According to Durston et al., the FSC of a whole molecule is the total sum of the measured FSC for each site in the aligned sequences, in units called "Fits". The maximum value per residue location is 4.32 Fits/site (log2 of 20), and this value can be realized only if just one amino acid residue option (out of 20 total possible) can be tolerated at the site. The lowest possible value is zero, and this value is realized at a particular site if all 20 residue options can be tolerated at that location (i.e., complete flexibility). In other words, a random sequence would have a Fit value of zero. In the Durston study, the average of the Fit values per site (the "FSC density") for the proteins listed is around 2.2 (taken from Table 1).1
"For example, if we find that the Ribosomal S12 protein family has a Fit value of 359, we can use the equations presented thus far to predict that there are about 10^49 different 121-residue sequences that could fall into the Ribosomal S12 family of proteins, resulting in an evolutionary search target of approximately 10^-106 percent of the 121-residue sequence space."
In other words, a Fit value of 359 means that only 1 out of every 2^359 sequences has the needed functionality. To get the number of sequences with that type of function, multiply this fraction by the total size of 121aa sequence space, or 20^121: (1/2^359)(20^121) ≈ 10^49. Another way to say this is that the average value per site for this protein, or the "FSC density", is about 2.966 Fits per residue site. This means that only about 2.55 residue options out of 20 are available per site, on average.
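These figures can be reproduced with a few lines of arithmetic in log space (to avoid overflow). The Fit value of 359 and length of 121 residues are taken from the text; the script is only a sketch of the stated formula:

```python
import math

fits = 359        # total Fits for the Ribosomal S12 family (from the text)
length = 121      # residues

# Number of sequences predicted to have the function:
# (1 / 2^fits) * 20^length, computed as a base-10 logarithm
log10_count = length * math.log10(20) - fits * math.log10(2)
print(round(log10_count))           # -> 49, i.e. ~1e49 sequences

# FSC density (Fits per residue site):
density = fits / length
print(round(density, 3))            # -> 2.967

# Average residue options tolerated per site: 20 / 2^density
print(round(20 / 2 ** density, 2))  # -> 2.56, close to the text's ~2.55
```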
Other authors seem to agree with Durston's basic idea that functional complexity is based on the ratio of those sequences in sequence space that can produce a particular type of function to a useful degree. For example, Hazen et al. define functional complexity as follows:
n, the number of letters in the sequence.
Ex, the degree of function x of that sequence. In the case of the fire example cited above, Ex might represent the probability that a local fire department will understand and respond to the message (a value that might, in principle, be measured through statistical studies of the responses of many fire departments). Therefore, Ex is a measure (in this case from 0 to 1) of the effectiveness of the message in invoking a response.
M(Ex), the total number of different letter sequences that will achieve the desired function, in this case, the threshold degree of response, Ex. The functional information, I(Ex), for a system that achieves a degree of function ≥ Ex, for sequences of exactly n letters, is therefore
I(Ex) = -log2[ M(Ex) / C^n ] (C = number of possible characters per position)21
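Hazen's definition is easy to compute for toy cases. As a hypothetical illustration (the 4-letter alphabet, sequence length, and functional count here are invented for the example, not taken from Hazen's paper), suppose 16 of all possible 8-letter sequences achieve the threshold function:

```python
import math

C = 4          # characters available per position (hypothetical alphabet)
n = 8          # sequence length (hypothetical)
M_Ex = 16      # sequences achieving the threshold function (assumed)

# Hazen et al.: I(Ex) = -log2( M(Ex) / C^n )
I_Ex = -math.log2(M_Ex / C ** n)
print(I_Ex)    # -> 12.0 (bits of functional information)

# A unique target (M = 1) needs the maximum, n * log2(C) bits:
assert -math.log2(1 / C ** n) == 16.0
```

Shrinking the functional fraction M(Ex)/C^n raises I(Ex), which is the exponential-decrease behavior the next paragraph describes.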
What is also interesting is that Hazen et al. go on to note that, "In every system, the fraction of configurations, F(Ex), capable of achieving a specified degree of function will generally decrease with increasing Ex."21 And, according to their own formulas, this decrease is exponential with each linear increase in n - the number of "letters" or characters (in this case amino acid residues), at minimum, required by the system to achieve the beneficial function in question.
Hazen et al. also make a very interesting argument for Behe's "irreducible complexity" when it comes to the minimum structural threshold needed to produce certain specific types of functions. They write:
By using Avida's default set of 26 machine instructions, a randomly generated sequence with length of a magnitude of ~10^2 lines was found to be functional (i.e., was able to perform at least one logic or arithmetic operation at least once) with probability P ~ 10^-3. The functional fraction of a population decreases with decreasing sequence length until it reaches zero for populations with sequences of a length of four machine instructions or less. 21
In other words, the function of "logic or arithmetic operation" in Avida requires a minimum of at least four machine instructions. Any sequence with less than this minimum does not produce any beneficial functionality of this particular qualitative type. What this means is that for functional systems that have a greater minimum structural threshold requirement (i.e., a linearly greater "n" requirement), the ratio of beneficial vs. non-beneficial sequences at that level will be exponentially lower than it was for lower level systems with lesser minimum structural threshold requirements (according to Hazen's formula noted above).
Consider also (as illustrated in the above diagram) that viable sequences with a given level of functionality are separated in sequence space into "islands" that are not connected with each other. The gaps between these islands grow, in a linear manner, with each increase in the minimum functional complexity of the beneficial systems under consideration.
In this light, also consider the work of those like Yockey2-5 on estimates of CytoC ratios in sequence space. Yockey's estimate for the number of sequences with the CytoC function is around 1e65 for 100aa sequence space. This works out to be around the average FSC density of the sequences listed by Durston - about 2.2 Fits per site. For a 100aa sequence with an FSC density of 2.2, the Durston formula would produce (1/2^220)(20^100) = ~1e63; which is pretty close to the Yockey prediction. Direct experimentally-determined degrees of sequence flexibility, using cassette mutagenesis by those like Sauer, Olsen, Bowie, Axe and others, seem to confirm these rough general estimates for individual system ratios.6-9
So, what do such ratios say about sequence/structure space? What do they mean? These very small ratios strongly suggest that fairly specified systems like CytoC and other such systems are relatively rare in sequence space. Just to get some sort of idea, the entire Sahara Desert contains only about 1e30 grains of sand. And, there are only about 1e80 atoms in the whole universe! Protein sequence space quickly becomes much much larger than many universes.
What does this have to do with anything though? Well, consider the odds that anything within our gene pool of 1e30 bacteria (with the features noted above) will be within one residue difference of a particular protein-based system with an overall FSC value of 2.2.
An FSC density of 2.2 would produce 1e63 protein sequences with a size of 100aa for a ratio of proteins with the given function in question of about 1 in 1e67 (or 1e-67 out of a total sequence space size of about 1e130). For a 1000aa system minimum with the same FSC density, the total number of sequences with the function in question would be about 1e638 - - a very very large number. However, the total size of sequence space at this level is about 1e1301 (i.e., 20^1000). This produces an overall ratio of 1 in 1e663. So, the overall ratio of 1e-663 at the 1000aa level is a far far smaller number compared to the ratio of 1e-67 at the 100aa level - given the same degree of overall specificity.
A difference of almost 600 orders of magnitude is not really even comparable.
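The counts and ratios above are easiest to reproduce in log10 space, so that 20^1000 never has to be formed explicitly. A rough check, assuming the Durston-style estimate of (1/2^fits) x 20^n used above (function names are mine):

```python
import math

LOG10_20 = math.log10(20)  # ~1.301
LOG10_2 = math.log10(2)    # ~0.301

def log10_target_count(fits_per_site, n):
    """log10 of the number of sequences with the target function:
    (1/2^(fits_per_site * n)) * 20^n."""
    return n * LOG10_20 - fits_per_site * n * LOG10_2

def log10_target_ratio(fits_per_site, n):
    """log10 of the targets vs. total-sequence-space ratio."""
    return log10_target_count(fits_per_site, n) - n * LOG10_20

# 100aa at 2.2 Fits/site: ~1e64 targets, ratio ~1e-66
# (the text rounds these to ~1e63 and ~1e-67)
print(round(log10_target_count(2.2, 100)), round(log10_target_ratio(2.2, 100)))

# 1000aa at 2.2 Fits/site: ~1e639 targets, ratio ~1e-662
print(round(log10_target_count(2.2, 1000)), round(log10_target_ratio(2.2, 1000)))
```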
The Hamming Gap Distance:
So, what is the maximum possible gap size (Hamming Distance) between any particular starting point sequence in our gene pool and one of these 1e638 potentially beneficial sequences? A maximally distant sequence is one that has a different residue at every site compared to the starting point sequence (i.e., the maximum Hamming Distance). How many such sequences are there in 1000aa sequence space? Well, there are 19^1000 of them, for a total of ~1e1278. Given that the total number of target sequences is 1e638, it is possible that all of the target sequences could be maximally distant from a starting sequence.
The Likely Minimum Hamming Gap Distance:
Now, the really important question: What is the likely gap distance from a given starting sequence to the closest target sequence given that the actual locations of the 1e638 target sequences are not known? Well, it is quite unlikely that the starting point will be maximally distant from all targets. The odds that a single 1000aa sequence would be maximally distant are (19/20)^1000 = ~1e-22. So, the odds that all 1e638 target sequences would be maximally distant are (1e-22)^(1e638) - - an extraordinarily unlikely outcome. So, the odds that a starting sequence will be closer than the maximum possible distance of 1000 differences to at least one target in sequence space are essentially certain. This is looking promising for the evolutionary mechanism of random mutation and natural selection . . . or is it?
Well, consider the odds that a starting point sequence will be within 50 residue differences of a particular target sequence with an unknown location (i.e., will have at least 950 of 1000 sites the same as the target). The probability of exactly 950 matching residue positions out of 1000 (leaving a gap of 50aa differences) is given by the following formula for the binomial probability:
P(k out of N) = [N! / (k!(N - k)!)] p^k q^(N-k)
N = the number of opportunities for event x to occur;
k = the number of times that event x occurs or is stipulated to occur;
p = the probability that event x will occur on any particular occasion; and
q = the probability that event x will not occur on any particular occasion (i.e., q = 1 - p).
Plugging in the numbers:
P(950 out of 1000) = [1000! / (950! 50!)] (1/20)^950 (19/20)^50 = ~1e-1152
These are the odds of producing exactly 950 matches out of a 1000aa sequence.
Adding together the results for matches of 950 up to 1000 positions gives only a slight improvement in the odds of there being at least one match within 50aa differences (i.e., still about 1e-1152).
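These binomial figures can be verified without numeric overflow by working with log-probabilities via the lgamma function. A sketch (function names are mine):

```python
import math

def log10_binom_pmf(n, k, p):
    """log10 of C(n,k) * p^k * (1-p)^(n-k), via lgamma so that the
    enormous binomial coefficient and tiny powers never overflow."""
    log_comb = (math.lgamma(n + 1) - math.lgamma(k + 1)
                - math.lgamma(n - k + 1)) / math.log(10)
    return log_comb + k * math.log10(p) + (n - k) * math.log10(1 - p)

def log10_binom_tail(n, k_min, p):
    """log10 of P(at least k_min successes), summed in log space
    relative to the largest term to avoid underflow."""
    terms = [log10_binom_pmf(n, k, p) for k in range(k_min, n + 1)]
    m = max(terms)
    return m + math.log10(sum(10 ** (t - m) for t in terms))

# Matching an unknown 1000aa target at 950+ positions (p = 1/20 per site):
print(round(log10_binom_pmf(1000, 950, 1 / 20)))   # exactly 950: ~1e-1152
print(round(log10_binom_tail(1000, 950, 1 / 20)))  # at least 950: still ~1e-1152
```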
In other words, it would take about 1e1152 sequences of 1000aa for one of them to match the needed 950 residue positions and make a gap just 50aa differences wide between any one starting point and a particular target in sequence space at these ratios - on average. However, we have about 1e638 target sequences. Surely, given so many potential target sequences, at least one of them is likely to be within a Hamming distance of 50 - right? Well, the odds are indeed improved, but not significantly. Take the individual odds of success for finding a single target sequence, 1e-1152, and multiply those odds by the number of potential target sequences, 1e638, and the result is still only 1 chance in 1e514 - - i.e., still essentially impossible.
That's about as likely as finding a specific atom placed in an unknown location in the universe of about 1e80 atoms at least 6 times - - in a row! Most would consider such odds truly "impossible" - i.e., making a gap distance as small as 50aa differences in this case essentially impossible as well.
Ah, but our population is made up of 1e30 bacteria with 1e7 codons each. That's a total of 1e37 potentially different codons, which is equivalent to 1e34 sequences with a size of 1000aa each (i.e., 1e34 starting points). This does improve the odds, but not significantly: 1e514 / 1e34 = 1e480. It is an improvement since fewer sequences would be needed, on average, to achieve a gap distance of 50aa differences or less. But, this "improvement" hardly solves the problem. The odds are still essentially impossible.
The average or expected number of homologous residue positions (i.e., the mean) is 50. This produces an expected average Hamming gap distance of about 950 residue differences between a given 1000aa starting point sequence and a target sequence. Given these numbers and calculated odds, the suggestion that a gap distance of only 50aa differences might exist between any one of the 1e34 starting point sequences and any potentially beneficial "target" sequence seems extraordinarily unlikely - - to the point of practical impossibility.
How many functional systems are there?
The counter argument, of course, is that this very low ratio would only be representative of a single type of function. In reality, many potentially beneficial systems of different types would exist in the various levels of sequence/structure spaces with these minimum structural threshold requirements. So, the number of potentially beneficial sequences as a collective combination of all types should be significantly increased at each level. Why only consider CytoC sequences or lactase sequences? There could be trillions of unknown types of potentially beneficial sequences in sequence/structure space!
This is an interesting argument. While it is true that a fairly large number of unique types of beneficial functional systems may exist at the level of 1000 fairly specified residues, this number really doesn't affect the ratio to a significant degree relative to the problem at hand - even if one is very generous in imagining the overall number of all types of potentially beneficial sequences in sequence space at various levels of functional complexity.
In a paper published in 2000, Thirumalai and Klimov make the following relevant comments:
The minimum energy compact structures (MECSs), which have protein-like properties, require that the ground states have H residues surrounded by a large number of hydrophobic residues as is topologically allowed. . . There are implications of the spectacular finding that the number of MECSs, which have protein-like characteristics, is very small and does not grow significantly with the size of the polypeptide chain.
The number of possible sequences for a protein with N amino acids is 20^N which, for N = 100, is approximately 10^130. The number of folds in natural proteins, which are low free energy compact structures, is clearly far less than the number of possible sequences. . .
The number of protein structures is far less than the number of sequences. By imposing simple generic features of proteins (low energy and compaction) on all possible sequences we show that the structure space is sparse compared to the sequence space. Even though the sequence space grows exponentially with N (the number of amino acid residues [by 20^N]) we conjecture that the number of low energy compact structures only scales as lnN [The natural logarithm or the power to which e (2.718 . . . ) would have to be raised to reach N] . . . The number of sequences for which a given fold emerges as a native structure is further reduced by the dual requirements of stability and kinetic accessibility. . . We also suggest that the functional requirement may further reduce the number of sequences that are biologically competent. 10
So, if sequence space size grows as 20^N while the number of even theoretically useful protein systems only scales with the natural log of N, this differential rapidly produces an unimaginably huge discrepancy between potential target and non-target systems. For example, the sequence space size of 1000aa space is 20^1000 = ~1e1301. According to these authors, what is the number of potentially useful protein structures contained within this space? It is 20^ln(1000) = ~1e9.
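Both of these scaling claims are easy to confirm in log10 terms (a quick sketch):

```python
import math

n = 1000
# Sequence space: 20^1000 has log10 of 1000 * log10(20) -> ~1e1301
log10_space = n * math.log10(20)
# Conjectured useful structures: 20^ln(1000) -> ~1e9
log10_structures = math.log(n) * math.log10(20)
print(round(log10_space), round(log10_structures))  # -> 1301 9
```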
This calculated number is backed up by numerous other published estimates of the extreme rarity of viable (i.e., stable) amino acid residue sequences in protein sequence space. According to the literature, there seems to be general agreement that the likely number of stable protein "folds" is less than 10,000.
"The estimated number of folds ranges widely from 700 to ~8000 based on differences in assumptions and data sets (Orengo et al. 1994; Zhang and DeLisi 1998; Govindarajan et al. 1999; Wolf et al. 2000; Coulson and Moult 2002; X. Liu et al. 2004). The fact that there are already 898 soluble protein folds, according to the current release of the SCOP database (v1.69) (Murzin et al. 1995), indicates that the number will greatly exceed 1000." 14
And, according to Govindarajan S. et al.:
"Our results suggest that there are approximately 4,000 possible folds, some so unlikely that only approximately 2,000 folds exist among naturally-occurring proteins." 20
Now, the question is, how big are viable protein folds? Well, the vast majority of them are less than 300aa in size.
"Folding domains are typically 50-200 residues in length and utilize a specific sequence of side chains to encode tertiary structures enriched in secondary structure with hydrophobic cores that are shielded from solvent by a predominately hydrophilic surface." 15
So, with less than 10,000 viable protein folds 300aa in size or less, how many viable sequences are there per fold? Well, given an FSC of a viable fold of just 2.0, the estimate goes as follows:
2.0 x 300aa = a Fit value of 600. The number of stable/viable sequences per fold is then (1/2^600) x 20^300 = ~1e210. Multiplying 1e210 by 10,000 folds gives ~1e214 total stable/viable sequences in all of 300aa sequence space. The ratio of viable vs. non-viable is 1e214 / 20^300 = ~1e-177.
So, the formula of 20^ln(N) viable folds is actually an overestimate, if anything, of the total number of viable sequences in sequence space: 20^ln(300) = ~1e7, and 1e7 x 1e210 = 1e217 viable sequences per this formula - somewhat more than the ~1e214 estimated above. So, the estimate based on 20^ln(N) is a generous estimate of the total number of viable/stable sequences in sequence space - and this isn't even considering sequences that are both stable/viable and beneficial to a particular organism in a particular environment.
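The exponents in these fold estimates are easy to mis-multiply by hand; a quick log10 recheck of the per-fold count, total, and ratio under the same assumptions (10,000 folds, 300aa, 2.0 Fits per site):

```python
import math

LOG10_20 = math.log10(20)
LOG10_2 = math.log10(2)

n, fits_per_site, n_folds = 300, 2.0, 1e4

# Sequences per fold: 20^300 / 2^600, in log10
log10_per_fold = n * LOG10_20 - fits_per_site * n * LOG10_2
# Total viable sequences: per-fold count times the number of folds
log10_total = log10_per_fold + math.log10(n_folds)
# Viable vs. non-viable ratio: total over 20^300
log10_ratio = log10_total - n * LOG10_20

print(round(log10_per_fold), round(log10_total), round(log10_ratio))
# -> ~1e210 per fold, ~1e214 total, ratio ~1e-177
```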
Few structures, but many sequences?
Although, as an absolute number, many sequences do fold into the same or essentially the same 3D structure, relative to the size of sequence space this is a very tiny ratio. The reason is that there is an "experimentally observed exponential decline in the fraction of functional proteins with increasing numbers of mutations (Bloom et al. 2005)."11
"Our theory predicts that for large numbers of substitutions the probability that a protein retains its structure will decline exponentially with the number of substitutions, with the severity of this decline determined by properties of the structure. . . Our work unifies observations about the clustering of functional proteins in sequence space. . . " [emphasis added] 12
Bloom goes on to point out that,
"Experiments have demonstrated that proteins can be extremely tolerant to single substitutions; for example, 84% of single-residue mutants of T4 lysozyme and 65% of single-residue mutants of lac repressor were scored as functional. For multiple substitutions, the fraction of functional proteins decreases roughly exponentially with the number of substitutions, although the severity of this decline varies among proteins." 12
In short, most mutations that affect a region or island cluster of thermodynamically stable sequences in sequence space are destabilizing in such a way that each additional mutation has an exponentially destabilizing effect. Obviously, this means that the vast majority of sequences in sequence space would not produce viably stable proteins. It also suggests that as sequence space increases in size by 20^N, the ratio of viable vs. non-viable sequences, not just systems, decreases exponentially.
This is the reason why "simulations (Taverna and Goldstein 2002a) and experiments (Davidson et al. 1995; Keefe and Szostak 2001) clearly show that the vast majority of protein sequences do not stably fold into any structure (meaning the least stable folded protein is still far more stable than the typical random sequence)."11
As per the calculations used above, the number of stable/viable 1000aa sequences in sequence space is around 1e707. Given that the size of sequence space at this level is 20^1000, the ratio of viable vs. non-viable is ~1e-594. And, this isn't the worst of it. This number is "further reduced by the dual requirements of stability and kinetic accessibility and the number of sequences that are biologically competent."10 In short, the ratio of 1e-594 potential targets vs. non-targets is being generous for 1000aa sequence space.
But, for argument's sake, let's be overwhelmingly generous and say that the number of uniquely different potentially beneficial systems at the 1000aa threshold level, from the perspective of a given bacterial colony, isn't just 20^ln(1000) = ~1e9 (one billion), or even 10 billion, or a trillion, but 100 trillion (1e14) potentially beneficial target systems at the same level of functional complexity. Of course, since the degree of specificity is already given for this threshold level (as noted above), each of these systems is represented by 1e707 amino acid sequences (each fairly specified protein-based system being 1000aa in size). What would that do to the total number of potentially beneficial sequences in sequence space? Well, it would multiply it by 1e14 to make a total of about 1e721 instead of 1e707 beneficial sequences. Likewise, it would increase the ratio of beneficial vs. non-beneficial 1e14 fold to 1e-580 (from 1e-594).
Relative to the problem at hand, having a ratio of 1e-580 vs. a ratio of 1e-594 makes little difference. The odds of having a single starting point sequence within 50 residue differences of any potentially viable target sequence remain essentially nil at around 1e-431 (i.e., 1e-1152 x 1e721) .
This is the reason why viable/stable protein sequence islands are so isolated in sequence spaces and become exponentially more and more isolated with increasing minimum size and/or specificity requirements.
"The basic effect of protein folding captured by this model is that, as the chain folds, it is forced to have a clearly defined inside (core) and outside (surface) determined by the twofold identity of its residues. The hydrophobicity of small, single folding domain proteins is peaked around one-half so that roughly one-half the residues are forced into the core. Lower hydrophobicity results in nonfolding sequences, whereas higher hydrophobicity leads to aggregation . . .
The results of this model suggest that the sequence space of single folding domain proteins is split into mutually dissimilar, low frustration families folding to mutually dissimilar native structures. The principle by which this situation emerges is the design requirement of minimal frustration, which allows efficient folding of sequences into their functional (native) structures. . .
Sequence space as being populated by families, each folding to a particular coarse grained structure and each surrounded by a shell of increasingly frustrated sequences. . . This produces a frustration barrier, e.g., a region of frustrated sequences between each pair of minimally frustrated families. Any stepwise mutational path between one minimally frustrated sequence family and another must then visit a region of slow or nonfolding sequences. . . In the case of real proteins, the sequences in these high frustration regions are much less likely to meet physiological requirements on foldability. . . If the requirement is sufficient, the region between two families will be completely excluded, which cuts sequence space into separate fast-folding, stable parts. This provides a mechanism for partitioning protein sequence information into evolutionarily stable, biochemically useful (foldable) subsets." 16
So, now that we see that viable/stable/"non-frustrated" protein islands are indeed discretely separated, on all sides, by non-viable sequences, and we see that this separation increases, in a linear manner (a linear increase in Hamming distance) with each increase in the minimum size and/or specificity requirements, what happens at the 1000 fsaar level of functional complexity?
At the fairly specified 1000aa level of complexity, what does a gap of just 50aa residue differences mean? - given the above calculations that show that a Hamming gap distance of at least 50 is almost certain to exist between a given island of viable, not to mention beneficial, sequences and the next closest viable potentially beneficial island in sequence space? What would this gap distance do to evolutionary potential?
Well, even for much smaller proteins, with sizes of only 18aa and minimum Hamming gap distances of less than half a dozen or so, Cui et al. suggest that such distances are essentially beyond the realm of crossability by point mutations.
Evolutionary explorations by point mutations may be likened to diffusion. Their extent is limited on a fragmented mortality landscape, because sequences belonging to different networks (Table 1) are beyond reach. 17
A gap of at least 50 specific required residue differences from each one of a large population of 1e34 starting point sequences of 1000aa (or 1000-codons of DNA) means that each one of these 1e34 sequences is surrounded by at least 1e65 non-beneficial options (i.e., > 20^50).
Given these very generous assumptions, how long will it take to get these 50 needed character changes in at least one sequence in one bacterium in our huge 1e30 population of bacteria? - a population as large as all the bacteria on Earth?
Well, each one of our starting point sequences must search through a sequence space of at least 1e65/1e34 = ~1e31 sequences before success will be realized. With a mutation rate of 1e-8 per codon per generation each 1,000-codon sequence will get mutated once every 1e5 generations. With a generation time of 20 minutes, that is one mutational step every 2,000,000 minutes; which equals ~ 4 years. So, with one random walk/mutation step every 4 years, it would take 1e31 * 4 = 4e31 years for at least one individual in the entire population to achieve success - on average (i.e., trillions upon trillions of years).
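The waiting-time arithmetic in the paragraph above can be laid out explicitly. A sketch using the text's assumptions (1e65 options split across 1e34 starting points, a 1e-8 per-codon mutation rate, and 20-minute generations; a 365-day year is assumed):

```python
seqs_to_search = 1e65 / 1e34        # ~1e31 options per starting point
mut_per_seq_per_gen = 1000 * 1e-8   # 1e-5 for a 1,000-codon sequence
gens_per_step = 1 / mut_per_seq_per_gen         # 1e5 generations per mutation
minutes_per_step = gens_per_step * 20           # 2e6 minutes
years_per_step = minutes_per_step / (60 * 24 * 365)  # ~3.8 years
total_years = seqs_to_search * years_per_step
print(f"{total_years:.0e}")  # -> ~4e31 years, on average
```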
Larger Population on a Single Starting Point.
When presented with this problem some have suggested to me that if the entire population starts on a single starting point instead of being spread in different places in sequence space that the odds of traversing a Hamming distance of 50 would be improved. For example, a Hamming distance of 1 around a 1000aa island equals 19 times 1000 or 19,000 sequences within one character difference. If the population size were only 1000 identical sequences of 1000aa, (i.e., all starting on the same island), it would take about 100 generations for one of the sequences in the population to mutate. So, assuming no repeat mutations, it would take 19,000 x 100 = 1,900,000 generations to explore all of the sequences immediately adjacent to the starting point island (about 72 years). However, if the starting population were increased to 1 million, about 10 of these sequences would mutate each generation. So, assuming no repeat mutations, 19,000 / 10 = 1,900 generations would be needed (a little less than one month).
Obviously then, increasing the population size can have a dramatic effect on how fast a given Hamming distance can be searched. The question is, how large of a starting population on a given island would it take to search the space within a Hamming distance of 50 in a reasonable amount of time? - say 4 billion years?
Well, a Hamming distance of 50 contains about 1e65 sequences. The number of 1000aa sequences contained by all the bacteria on earth is only 1e34. So, if all of these sequences started at the same island, how long would it take to cover a Hamming distance of 50? Each generation 1e29 sequences would be explored. So, 1e65/1e29 = 1e36 generations . . . or about 4e31 years (i.e., tens of millions of trillions of trillions of years).
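The same bookkeeping covers both the small-population examples above and the full 1e34-sequence case (a sketch; the helper name and the 365-day year are mine):

```python
def years_to_search(n_variants, pop_sequences, mut_rate=1e-5, gen_minutes=20):
    """Years for pop_sequences identical 1000aa starting sequences to
    explore n_variants neighboring sequences, assuming no repeat
    mutations and a per-sequence mutation rate of mut_rate per generation."""
    explored_per_gen = pop_sequences * mut_rate
    generations = n_variants / explored_per_gen
    return generations * gen_minutes / (60 * 24 * 365)

print(round(years_to_search(19 * 1000, 1000)))      # 1,000 copies: ~72 years
print(round(years_to_search(19 * 1000, 1e6) * 365)) # 1 million: ~26 days
print(f"{years_to_search(1e65, 1e34):.0e}")         # full case: ~1e31-1e32 years
```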
Hardly solves the problem . . . especially considering that a more likely Hamming gap distance is going to be greater than 150 rather than just 50 at the level of 1000 fsaars.
Sequence space is more crowded?
Ah, but what about those arguments, often promoted in discussion groups like Talk.Origins, suggesting that sequence space is much more crowded with potential targets than this?
"Functional sequences are not so rare and isolated. Experiments show that roughly 1 in 10^11 of all random-sequence proteins have ATP-binding activity (Keefe and Szostak 2001), and theoretical work by H. P. Yockey (1992, 326-330) shows that at this density all functional sequences are connected by single amino acid changes." - Talk.Origins archive ( Link ) Other estimates for the density of functional proteins in sequence space range anywhere from 1 in 10^12 to 1 in 10^77. ( Link ).
Obviously, the ratio of targets vs. non-targets can be increased, quite dramatically, in sequence space by simply lowering the minimum size and/or specificity threshold under consideration - thereby lowering the "level of functional complexity". Some types of systems have relatively low sequence specificity or size requirements. A simple binding function - like a protein sequence binding to an ATP molecule, without any other specific binding or functional requirements, doesn't logically seem like it would require very much sequence specificity at all. And, it obviously doesn't by direct experimental demonstration. However, many other protein-based functional systems are not so loosely specified. A great deal more specific sequencing as well as size is necessary to achieve higher level functions. Such functions, obviously, would be much less common in sequence space and would therefore be much more widely spaced, on average.
More specifically, what is not usually noted is that the above experimentally-derived ratio estimates are based on proteins that have minimum size requirements of less than 100 fairly specified amino acid residues (fsaars). Obviously, the ratio of potential targets below the minimum structural threshold of 100 fsaars is going to be exponentially greater than the ratio above 1,000 fsaars. This is actually the key to understanding the oft-referenced ratio estimates noted above.
These ratios are only for very low levels of functional complexity that come in far lower than 1000 fsaars.
Functional starting points more likely to find higher level targets?
Then there is the argument that if you start with an intact functioning genome, the starting points are more likely to combine together to produce at least stable protein combinations and therefore are much more likely to hit upon functionally beneficial combinations as well - especially when non-homologous crossover and combination mutations as well as point mutations are considered. This is an interesting suggestion, but let's actually consider the odds of this scenario:
Consider a simple example, for illustration, involving 20aa sequence space. How many stable 20aa sequences are there? (20^ln(20))(1/2^44)(20^20) = ~4e16 out of ~1e26 sequences.
Now how many different combinations of smaller stable sequences are there that could produce a 20aa sequence of any kind? For example, how many different combinations of stable 10aa sequences are there? The total number of stable 10aa sequences is 20^ln(10) x 1/2^22 x 20^10 = ~2e9 different sequences.
So, the total number of possible combinations or permutations of 2e9 sequences, to include insertions as well as end-to-end combos and different length combos (like 3aa + 7aa sequences), is (2e9)^2 x 11 x 10 = ~4.4e20 sequences. Add to this the number of non-homologous crossover mutations of stable 20aa sequences (~4e16 of them) and essentially all of 20aa sequence space (~1e26) is covered - which is far greater than the number of stable 20aa sequences. In short, the number of possible combinations starting with viable sequences 20aa or shorter vastly outnumbers the total number of stable 20aa sequences. That means that the vast majority of possible combinations, even starting with viable sequences, that end up producing a 20aa sequence will not produce a viable 20aa sequence. The same thing is essentially true for 1000aa sequence space.
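These combination counts can be reproduced with the same log10 machinery (the 2.2 Fits-per-site density and the helper name are assumptions of this sketch; exact outputs differ slightly from the text's rounded figures):

```python
import math

LOG10_20 = math.log10(20)
LOG10_2 = math.log10(2)

def stable_count(n, fits_per_site=2.2):
    """Estimated stable n-aa sequences: 20^ln(n) folds times
    (20^n / 2^(fits_per_site*n)) sequences per fold."""
    log10 = (math.log(n) * LOG10_20       # 20^ln(n) folds
             + n * LOG10_20               # 20^n sequence space
             - fits_per_site * n * LOG10_2)
    return 10 ** log10

s20 = stable_count(20)  # ~5e16 stable 20aa sequences (text: ~4e16)
s10 = stable_count(10)  # ~2e9 stable 10aa sequences

# Pairs of stable 10aa sequences, with the 11 insertion points and
# ~10 length splits mentioned in the text:
combos = s10 ** 2 * 11 * 10  # ~6e20 (text: ~4.4e20), vs ~1e26 total space
print(f"{s20:.0e} {s10:.0e} {combos:.0e}")
```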
Is there an advantage to starting with viable sequences? Not really . . .
Consider a paper published by Blanco et al.18 This study involved two small, dissimilar proteins: the alpha-spectrin SH3 domain (SH3) and the B1 domain of streptococcal protein G (PG). The researchers tried creating hybrid proteins to gradually move through sequence space from the SH3 protein to PG via non-homologous crossover mutations. They found that the intermediate sequences did not fold into stable structures. They found that even when they added most of the residues from PG to the SH3 sequence, while maintaining the SH3 core residues, the protein did not fold into a stable shape. Only when the core SH3 residues were removed was the sequence able to fold into the PG shape. Furthermore, the SH3 structure was found to be nonfolding even when only two non-core residues were mutated. This implies that folding is a holistic feature, requiring the cooperation of the hydrophobic core residues and the hydrophilic non-core residues to specify the unique "low-frustration" structure of a given protein family. In addition, it appears that the core residues from different protein families can actually counteract each other, producing an unstable, non-folded protein when combined.
"The set of sequences analyzed here are hybrids of the sequences of SH3 and PG and represent a more or less uniform sampling of sequence identities between 100 and ~10% with each protein, but only those sequences very similar to the wild-type proteins have unique folds." 18
The implication is that the new daughter protein, in order to fold stably, must have precisely coordinated hydrophobic and hydrophilic residues that work together to produce a stable structure. This, in turn, means that not just any parents will do--only certain ones which contain just the right complementary sequences which, when combined (at the correct position in the sequence), give a unique stable daughter fold. Yet, Bogarad and Deem argue that non-homologous DNA "swapping" or crossovers are helpful in finding novel viable protein folds/families.
"We demonstrate that point mutation alone is incapable of evolving systems with substantially new protein folds. We demonstrate further that even the DNA shuffling approach is incapable of evolving substantially new protein folds. Our Monte Carlo simulations demonstrate that nonhomologous DNA 'swapping' of low-energy structures is a key step in searching protein space." 19
Yet, Cui et al. counter by noting:
"The power of [nonhomologous] recombination is in amplifying existing diversity, not in generating a high degree of diversity from a very small number of starting sequences."17
So, what are the potential and problems of crossover mutations? See the next section on "Point mutations vs. Non-Homologous Crossover Mutations."
Point Mutations vs. Non-Homologous Crossover Mutations
As an illustration of this effect, consider a paper by Cui et al. (one of whose figures is discussed below):
The end of the Cui paper shows that the benefit provided by nonhomologous recombinations decreases as the ruggedness (noted as "α" in the figure) of the fitness landscape increases; on a very rugged landscape, recombination provides only marginal benefit compared to a less rugged landscape. "Ruggedness" describes a situation where the edges of islands are sharply defined compared to more gently sloping islands in sequence space. As illustrated to the right, the slope of beneficial "islands" of clustered sequences in sequence space drops off exponentially. The authors discuss the implications of their Fig. 6 in the following excerpt:
Fig. 6 shows that exploration is more efficient when the evolutionary landscape is smooth (small α), but as ruggedness or the average selection gradient increases (larger α), exploration becomes sluggish. When α is large, populations are more concentrated in a relatively small number of low-mortality sequences. When the landscape is smooth, with the same total rate (0.1) of sequence transformation, point mutations plus crossovers visit more sequences and more structures than point mutations alone. When the landscape is rugged, the number of sequences explored by point mutations alone is comparable to that explored by point mutations plus crossovers. This is because point mutations are more effective in finding a low-mortality area from an already well populated spot nearby, whereas when the landscape is rugged many crossover offspring are likely to end up at high-mortality spots. Even so, Fig. 6B shows the remarkable result that structural innovation is still enhanced by crossovers at high α values. This result implies that when the average selection gradient is high, acting in concert with point mutations, crossovers can make more efficient use of their offspring sequences to achieve a higher structural diversity than a comparable number of sequences explored by point mutations alone. 17
So, while there is indeed a statistical advantage for multicharacter mutations (including crossover as well as cut-and-paste mutations) over point mutations when it comes to finding novel beneficial islands in sequence space, this advantage becomes less and less significant at higher and higher levels of functional complexity. Consider that the maximum size of the protein sequences examined in the Cui paper was only 18aa. Obviously, the density of beneficial vs. non-beneficial sequences and/or structures at this very low level is going to be exponentially greater than in 1000aa sequence space when considering systems that require at least a fair degree of specificity (as defined above). Also, while Cui et al. describe the difference between point mutations and crossovers as "significant", crossovers appear to be only about twice as good as point mutations at finding novel viable structures, even when the landscape is minimally rugged.
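The qualitative pattern described here can be sketched with a toy genetic-algorithm simulation. This is not Cui et al.'s actual HP-lattice model: the NK-style binary landscape, the population size, and every other parameter below are assumptions chosen purely for illustration. The idea is to count how many distinct sequences are visited with point mutations alone versus point mutations plus one-point crossover, as the ruggedness parameter K is increased.

```python
import random

random.seed(1)

# Toy parameters (arbitrary assumptions, not values from Cui et al.)
N, POP, GENS = 16, 200, 60  # sequence length, population size, generations

def make_landscape(K):
    """Build an NK-style fitness function: each site's contribution
    depends on itself plus K randomly chosen other sites, so larger K
    means a more rugged landscape (a stand-in for Cui et al.'s alpha)."""
    neighbors = [random.sample([j for j in range(N) if j != i], K)
                 for i in range(N)]
    tables = [{} for _ in range(N)]  # lazily filled random lookup tables
    def fitness(seq):
        total = 0.0
        for i in range(N):
            key = (seq[i],) + tuple(seq[j] for j in neighbors[i])
            if key not in tables[i]:
                tables[i][key] = random.random()
            total += tables[i][key]
        return total / N
    return fitness

def evolve(K, use_crossover):
    """Evolve a population; return the number of distinct sequences visited."""
    fit = make_landscape(K)
    pop = [tuple(random.randint(0, 1) for _ in range(N)) for _ in range(POP)]
    seen = set(pop)
    for _ in range(GENS):
        parents = sorted(pop, key=fit, reverse=True)[:POP // 2]  # select top half
        children = []
        while len(children) < POP:
            a = random.choice(parents)
            if use_crossover and random.random() < 0.5:
                b = random.choice(parents)
                cut = random.randrange(1, N)          # one-point crossover
                child = a[:cut] + b[cut:]
            else:
                i = random.randrange(N)               # single point mutation
                child = a[:i] + (1 - a[i],) + a[i + 1:]
            children.append(child)
            seen.add(child)
        pop = children
    return len(seen)

for K in (0, 8):  # smooth vs. rugged landscape
    print(K, evolve(K, False), evolve(K, True))
```

Running this with different K values lets one compare how much extra sequence-space exploration crossover buys on a smooth versus a rugged landscape; the exact counts depend on the arbitrary parameters above.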
And, as it turns out, the fitness landscape of sequence space gets linearly more and more "rugged" with each step up the ladder of functional complexity. So, the problem of generating novel viable protein-based systems remains just as stark as ever, regardless of whether one is considering point mutations or multicharacter mutations of any kind.
"When the landscape is rugged, the number of sequences explored by point mutations alone is comparable to that explored by point mutations plus [non-homologous] crossovers. This is because point mutations are more effective in finding a low-mortality area from an already well populated spot nearby, whereas when the landscape is rugged many crossover offspring are likely to end up at high-mortality spots." 17
So, it seems that rugged (functionally "quantized") landscapes are crippling for both point mutations and non-homologous crossover or "multicharacter" mutations, to essentially the same degree beyond a certain level of landscape "ruggedness". The reason is that a large step, like a non-homologous crossover, is far more likely to produce a non-folding, non-functional sequence than a stably folding, functional one. What I suspect is happening is that the requirements for stable folding become so stringent that the probabilities are simply too low for recombination to do any good. In other words, either one or more of the needed parental sequences are not present, or the probability is just too small that the correct two will recombine in precisely the right way to produce a functional daughter sequence. The fact that the folding requirements on proteins are indeed very stringent is well supported in the literature, as is the concept that with each increase in the minimum size and/or specificity of the systems in question, the rarity of viable sequences increases exponentially.
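The exponential rarity argument can be put as a back-of-envelope calculation. The assumption here, made purely for illustration, is that each additional residue of required size/specificity multiplies the viable fraction of sequence space by some constant factor r < 1; the value r = 0.5 is arbitrary, not a measured quantity.

```python
# Back-of-envelope sketch: if each additional required residue
# multiplies the viable fraction of sequence space by r < 1,
# rarity grows exponentially with length.  r = 0.5 is an
# arbitrary illustrative assumption, not a measured value.
def viable_fraction(length, r=0.5):
    """Fraction of sequence space assumed viable at a given length."""
    return r ** length

for aa in (18, 100, 1000):
    print(aa, viable_fraction(aa))
# At 1000aa the fraction is astronomically small (about 9e-302 for
# r = 0.5), so a random long jump, such as a non-homologous crossover
# product, essentially never lands on a viable sequence.
```

Whatever the true value of r, the qualitative point stands: a large random step pays the full exponential penalty, which is why crossover loses its edge as the required length and specificity grow.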
Kirk K Durston, David KY Chiu, David L Abel and Jack T Trevors, "Measuring the functional sequence complexity of proteins", Theoretical Biology and Medical Modelling 2007, 4:47
Yockey, H.P. Information Theory and Molecular Biology. Cambridge University Press, 1992. pp. 255, 257.
Yockey, H.P., On the information content of cytochrome C, Journal of Theoretical Biology, 67 (1977), pp. 345-376.
Yockey, H. P. "A Calculation of the Probability of Spontaneous Biogenesis by Information Theory", Journal of Theoretical Biology (1978) 67, 377-398.
Yockey, H. P. "Self Organization Origin of Life Scenarios and Information Theory", Journal of Theoretical Biology (1981) 91, 13-31.
Bowie, J. U., & Sauer, R. T. (1989) "Identifying Determinants of Folding and Activity for a Protein of Unknown Structure", Proceedings of the National Academy of Sciences USA 86, 2152-2156.
Bowie, J. U., Reidhaar-Olson, J. F., Lim, W. A., & Sauer, R. T. (1990) "Deciphering the Message in Protein Sequences: Tolerance to Amino Acid Substitution", Science 247, 1306-1310.
Reidhaar-Olson, J. F., & Sauer, R. T. (1990) "Functionally Acceptable Substitutions in Two α-Helical Regions of λ Repressor", Proteins: Structure, Function, and Genetics 7, 306-316.
Thirumalai, D.; Klimov, D. K., Emergence of stable and fast folding protein structures, STOCHASTIC DYNAMICS AND PATTERN FORMATION IN BIOLOGICAL AND COMPLEX SYSTEMS: The APCTP Conference. AIP Conference Proceedings, Volume 501, pp. 95-111 (2000). ( Link )
Jesse D. Bloom, Alpan Raval, and Claus O. Wilke, Thermodynamics of Neutral Protein Evolution, Genetics. 2007 January; 175(1): 255-266. ( Link )
Jesse D. Bloom, Jonathan J. Silberg, Claus O. Wilke, D. Allan Drummond, Christoph Adami, and Frances H. Arnold, Thermodynamic prediction of protein neutrality, Proc Natl Acad Sci U S A. 2005 January 18; 102(3): 606-611. ( Link )
Christina Toft and Mario A. Fares, The Evolution of the Flagellar Assembly Pathway in Endosymbiotic Bacterial Genomes, Molecular Biology and Evolution 2008 25(9):2069-2076 ( Link )
Amit Oberai, Yungok Ihm, Sanguk Kim, and James U. Bowie, A limited universe of membrane protein families and folds, Protein Sci. 2006 July; 15(7): 1723-1734. ( Link )
J.S. Richardson, The anatomy and taxonomy of protein structure. Adv. Protein Chem. 34 (1981), pp. 167-339. ( Link )
Erik David Nelson and Jose Nelson Onuchic, Proposed mechanism for stability of proteins to evolutionary mutations, Proc Natl Acad Sci U S A, Vol. 95, pp. 10682-10686, September 1998 ( Link )
Yan Cui, Wing Hung Wong, Erich Bornberg-Bauer, Hue Sun Chan, Recombinatoric exploration of novel folded structures: A heteropolymer-based model of protein evolutionary landscapes, PNAS, Vol. 99, Issue 2, 809-814, January 22, 2002. ( Link )
Francisco J. Blanco, Isabelle Angrand and Luis Serrano, Exploring the conformational properties of the sequence space between two proteins with different folds: an experimental study, Journal of Molecular Biology, Volume 285, Issue 2, 15 January 1999, Pages 741-753 ( Link )
Bogarad and Deem, A hierarchical approach to protein molecular evolution, PNAS Vol. 96, Issue 6, 2591-2595, March 16, 1999. ( Link )
Govindarajan S, Recabarren R, Goldstein RA., Estimating the total number of protein folds, Proteins. 1999 Jun 1;35(4):408-14. ( Link )
Robert M. Hazen, Patrick L. Griffin, James M. Carothers, and Jack W. Szostak, Functional information and the emergence of biocomplexity, 8574-8581| PNAS | May 15, 2007 | vol. 104 | suppl. 1 ( Link )