Relevant extracts from my posts of March 1999


9-Mar-1999

I'm not a theist (at most a pantheist), but an evolutionist
even more consistent than the Darwinists, because I do not accept
such a discontinuity in evolution as the magical appearance of
consciousness some billions of years after the 'big bang'.

It is quite normal that material in agreement with one's own
prejudices seems to be facts whereas the rest seems to be opinion!

It is my, Johnson's and many others' right to find it disquieting (or
even absurd) that "the mind is merely what the brain does" and
that "our thoughts and theories are products of mindless forces".

It is not difficult to make computer games or AI programs where
chance also influences the result. It may even happen that the
result is one not expected by the programmers, but I do not believe
in results which are in principle different from what the
programmers could have thought possible.

In any case, such algorithms are evidence for a guided evolution
(there are engineers and programmers) rather than for a purely
random one.

J>" To illustrate the point with an analogy: It is hard enough to
J>" earn one million dollars by winning the grand prize in a lottery,
J>" but it is no easier to achieve that feat by winning a $100 prize
J>" 10,000 times.


9-Mar-1999

J>" As we have seen, Darwinian evolution is by definition unguided
J>" and purposeless, and such evolution cannot in any meaningful
J>" sense be theistic.

> It is neither totally unguided, as natural selection is indeed
> guided by the environmental pressures with which an organism
> is faced, nor purposeless, as its function is to make populations
> more fit within their environments. It certainly is not designed
> to result in human beings, or to move toward "higher" life forms.
> If he had said, "intelligently guided" and "apparently purposeless,"
> then he would have been more accurate.

Nothing is 'guided', 'unguided', 'random' or 'determined' in every
respect. One should always choose the most consistent interpretation
of a text. So the meanings of 'unguided' and 'purposeless' as used
by Johnson are clear, at least to me.

You think Johnson would have been more accurate if he had said
"apparently purposeless". This shows that you do not understand
what Johnson wants to say: according to neo-Darwinism the
apparent purposefulness of nature (which cannot be denied) is
based on an (ontological) purposelessness.

The evidence in the earth can inform us that there has been a
continuous evolution of life, but this evidence certainly cannot
inform us whether reductionist causal laws are enough to
explain the actual biodiversity on earth! And the very basis of
at least neo-Darwinism is the rejection of any teleological
principle.


9-Mar-1999

If for a first organism with the power of reproduction to appear,
only twenty different conditions, each with a probability of 0.001,
were necessary, then the probability of such an organism appearing
would be only 10^-60. Even if 19 of the 20 conditions were fulfilled,
the organism could not replicate and the evolution of life could not
start. In reality certainly more conditions with lower probabilities
are necessary for such a self-replicating system with the additional
capacity to undergo further improvement by mutation and selection.

Probabilities must be multiplied!
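The multiplication rule for independent conditions can be checked with a short Python sketch. The figures (20 conditions at 0.001 each, and 60 at 0.1 each, as used later in these posts) are the author's illustrative assumptions, not measured values.

```python
import math

def joint_probability(probabilities):
    """Probability that all independent conditions are satisfied at once."""
    result = 1.0
    for p in probabilities:
        result *= p
    return result

# 20 independent conditions, each with probability 0.001:
print(joint_probability([0.001] * 20))   # ~1e-60

# 60 building blocks, each correctly bonded with probability 0.1,
# give the same overall figure:
print(joint_probability([0.1] * 60))     # ~1e-60
```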

Within the neo-Darwinian framework one cannot take it for granted
that the "first organism" "certainly would have been vastly
overpowered and driven to extinction by its more advanced
children who were born after successive mutation and selection."


10-Mar-1999

Certainly at least 20 (independent) conditions with a probability
of at most 0.001 are necessary for a self-replicating system to
appear. If fewer were needed, it should be possible to show how such a
system could appear and work or even to produce one in the
laboratory.

I agree with you that it is useless to calculate such probabilities exactly!
However, it is possible to estimate upper limits.
It is my opinion that
everybody who deals with this problem in a logically correct and
unprejudiced way must recognize that reductionist causal laws
cannot explain the appearance of such a self-replicating organism.


10-Mar-1999

If reductionist causal laws cannot explain life, neo-Darwinism is
refuted and it is necessary to look for new entities or principles
which can explain life. I'm sure that neo-Darwinism will seem to
future generations a completely absurd theory, because it denies
the most obvious.

Whereas Johnson suggests God for what cannot be explained
by causal laws, I suggest final laws of nature and immaterial
unities undergoing evolution which I call psychons or souls.


10-Mar-1999

But I think that you overestimate science and scientists.
Representative of modern science is Galilei. At a time when
the Copernican theory was already spreading he usurped
this theory and fought the theories of Johannes Kepler.
Kepler had been the first to surpass substantially the
astronomy of Aristarchus by smashing the whole epicycle
theory and by introducing modern physical laws.

Kepler's explanation of life, which was quite similar to mine,
was also ridiculed and fought.

For a self-replicating system at least 20 molecules which are at least
as complex as nucleotides or amino acids are necessary. According
to neo-Darwinism the movements of molecules depend on random
thermal collisions (apart from chemical and physical laws).

Now I assume that 20 molecules are enough for a self-replicating
system to appear, if every molecule has the right position in space.

A further simplification is needed. I assume that all 20 molecules
are in a cube which is subdivided into 1000 mini-cubes, and for the
right position nothing more is required than the center of gravity of
the molecule being located within the right mini-cube.

In this simplified case the probability for the self-replicating system
to appear is 10^-60. Common sense is enough to show that for a
self-replicating system to appear in nature, the probability is even
much much lower.
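The mini-cube probability (1/1000 per molecule, multiplied over all 20 molecules) is far too small to observe directly, but the same argument can be demonstrated by Monte Carlo simulation at a reduced scale. The molecule and cell counts below are arbitrary small stand-ins for the 20 molecules and 1000 mini-cubes in the text.

```python
import random

def all_in_place(n_molecules, n_cells, rng):
    # Success only if every molecule lands in its designated cell.
    return all(rng.randrange(n_cells) == 0 for _ in range(n_molecules))

rng = random.Random(42)
n_molecules, n_cells, n_trials = 3, 10, 200_000
hits = sum(all_in_place(n_molecules, n_cells, rng) for _ in range(n_trials))
estimate = hits / n_trials

print(estimate)                        # empirical estimate, near 0.001
print((1 / n_cells) ** n_molecules)    # theoretical value: 0.001
```

Scaling the same product up to 20 molecules and 1000 cells gives (1/1000)^20 = 10^-60, the figure in the text.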

It is possible to calculate upper limits of such probabilities, because
we can recognize conditions which must be fulfilled in any case.


10-Mar-1999

That's actually news to me: the existence of "replicating molecules
around 32 molecules long". The only possibility I see is the following:
there are RNA enzymes 32 bases long which replicate by base
pairing. Are you sure that such molecules actually have been
discovered, molecules replicating independently? However, even if
they actually exist, they are not enough to start evolution.

Do you know the results of Miller's experiment? A mixture of simple
organic molecules. This result is almost irrelevant to the explanation
of life and evolution.


11-Mar-1999

What is a self-replicating protein? To assume that every
collision between molecules corresponds to a new arrangement
of a sequence of 32 amino acids seems absurd to me. Incorrect
chemical bonds between amino acids are possible. Bonds with
other molecules cannot be excluded. Where could a soup with
such a high proportion of amino acids have existed?


12-Mar-1999

The concepts 'causality' and 'finality' are very old philosophical
concepts. Maybe until the time of Descartes (1596-1650) they had
equal rights in philosophy and science. There is, however, no
a priori reason why 'causality' should be more scientific than 'finality'.

Johnson's concept 'naturalistic' is in some respect almost the same
as my concept 'reductionist', but there is also a (maybe only linguistic)
difference: according to my usage of 'natural' there is nothing
supernatural. Final laws or souls are totally natural entities.

In this context it may be interesting to look at the history of
'naturalism'. A certainly questionable and maybe subjective
simplification is the assumption that there was an evolution from
animism to polytheism, to monotheism with God outside the world,
to monotheism with God inside the world, to pantheism and finally
to atheism. The difference between atheism and pantheism is not
big, because in pantheism 'God' is only a synonym for 'world' and
'nature', or means a special aspect of the world.

The basis of modern science was built in the 17th century. One of
the first consistent naturalists was Baruch Spinoza (1632-1677),
who explained the world in a panpsychist and panmaterialist way:
space or matter is one aspect of the world (or of God or of nature),
and thinking or consciousness a second. Johannes Kepler (1571-
1630) had explained the world in a quite similar way, based on a
monotheism with God inside the world.

Both Kepler and Spinoza were fought and ridiculed especially by
theologians but also by scientists. The alternative was the
philosophy of Descartes: on the one hand was the material world
and on the other human souls and God. Animals were considered pure
machines without consciousness. The current scientific world view
is based on the philosophy of Descartes. The big inconsistency of
Cartesianism (animals as pure machines, humans having souls)
was removed by removing the concept 'soul' (and 'God').

So why do you consider panpsychism as something supernatural?
One main reason for its defeat was that it was a naturalistic
explanation of the world not in agreement with theology.

According to my usage of the word 'reductionist', Descartes'
philosophy is reductionist with the exception of the concepts
'human soul' (and 'God'), whereas all explanations involving final or
teleological principles, or based on vitalism or panpsychism are not.

Many enzymes work at defined places in a cell. If we create an
enlarged model, where enzymes are like little balls, then the volume
of the whole cell is about 1000 cubic metres. Imagine this
situation concretely: a little ball must come very near to a
substrate, and the substrate recognition even depends on the
correct alignment of the little ball. In addition to that,
enzymes often have to pass cell
membranes in order to reach their destination. What is the moving
force of enzymes? It cannot be electromagnetic attraction or
repulsion. So the moving force must primarily depend on random
thermal motions (as Brownian movements do).

You will certainly object that we do not know the chemistry of
enzymes well enough to draw that conclusion: there may always
be the needed chemical forces responsible for the 'apparently'
very purposeful motions of enzymes. This implies that the information
for these motions to desired destinations is somehow stored in
the amino acid sequence of an enzyme, in addition to the
information for folding, substrate specificity and so on, because even
similar enzymes can have very different destinations. A mutation
could change a transcription factor in such a way that the protein
would search for its usual substrate in the wrong chromosome.

The present-day scientific ignorance is no better evidence for
reductionism than for panpsychism! But is it really a necessary
ignorance? Ignorance is often the result of false premises.

I'm convinced that physical laws as described by classical physics
or by QM cannot be responsible for the fact that living organisms
evolved and survive. The often cited 'complex dynamic systems',
such as the appearance of ordered vortices, waves and similar
things, do not affect evolution much more than the appearance of
solar systems does. And the appearance of crystals (carefully
studied by Kepler) is rather evidence for panpsychism than for
reductionism.

One must not confuse logical reasoning with empirical facts.
Calculations of probability must be based on clear and sound
assumptions, but the calculations themselves must not be
influenced by empirical facts. From the fact that evolution has
occurred we cannot conclude that it can be explained on the basis
of the generally accepted metaphysical principles of current science.


14-Mar-1999

My use of 'reductionism' is: to reduce all phenomena of nature to
a purely material basis.

I am a friend of a rationalist epistemology. But I am aware that
the theories by which we explain the world cannot be explained
themselves in the same way. The rationalist epistemology is
sometimes used to explain the simple (gravitation) by something
complicated (material gravitons).

> I think it's a bit simplistic to say that "the current scientific
> world-view" (whatever that is) is based on Cartesian ideas.
> A lot of water has gone under the bridge since Descartes.

But the way was pointed at that time for our current materialist
reductionism and against panpsychism. Leibniz was the last
well-known panpsychist, but the many inconsistencies of his
'monadology' (a compromise between panpsychism and Christian
theology) were rather harmful.

> The burden of proof isn't on my shoulders: after all, you're the one
> making claims about the certainty of your probability estimates.
> I'm just asking you how you can meaningfully make such estimates,
> given that you and no one else yet understands very much about
> the mechanisms and processes involved.

That's a reasoning based on authority and dogmatism. The fact
that we do not understand very much about the mechanisms and
processes involved, is unconsciously taken as evidence for God,
neo-Darwinism or any other world view we believe in.

When Pythagoras said the earth is not flat, the burden of proof
seemed to be on his shoulders alone. Despite a lot of evidence,
most people continued to believe in a flat earth for many centuries.
Kepler provided much empirical evidence for his three laws,
but not even Galilei accepted it!

Let us assume that there are two stages with the same probability
of appearing. One stage could be part of a self-replicating system,
whereas the other could not. Within the reductionist causal
framework there is really no reason at all to assume that the first
stage is conserved rather than the second, only because it could
be part of a self-replicating system in the future.

That's exactly one of the reasons why I have introduced final laws
of nature: steps in the right direction can have a higher probability
than steps in a wrong direction. There is some kind of 'causal
effect' from the future.
You know, in relativity theory the present,
past and future are given in some respect 'at the same time'.
That events of different times are correlated in a non-causal way,
is in principle similar to actions at a distance, where events of
different places (e.g. motions of the earth and of the sun) are
correlated without mediating material causes such as gravitons.

Waves and vortices are incomparably less complex than simple
living cells. They really can be understood on the basis of
simple physical principles. They appear under certain conditions
and disappear with these conditions.

Do you understand at least in principle the appearance of very
elaborate snow crystals? I understand the emergence of the
highly ordered planetary movements (described by Kepler's laws)
from two simple hypotheses (law of inertia, reciprocal attraction),
but I do not understand the formation of crystals. If a sound
(and therefore simple and elegant) explanation existed, this
explanation should be more widespread.

For me it is not enough that a problem is declared to have
been resolved. The fact that computer simulations of crystal
formation are possible does not explain very much.


14-Mar-1999

Now it's me who is afraid that the explanation of the purposeful
movements of enzymes by an entirely mechanical "traffic control"
misses the point. How do you explain such a "transport system"?
Are there some kind of currents in the cell or even some kind of
taxis which transport the enzymes to their needed destinations?

When I wrote "the information for these motions to desired
destinations is somehow stored in the amino acid sequence
of an enzyme" I did not mean some kind of "bar code" on the
enzyme, which is interpreted by a cellular transport system being
responsible for the motions of the enzyme. It seems to me even
much more difficult to explain how such a "traffic control" could
transport all the enzymes to their very different destinations
depending on such a "bar code".

That enzymes may consist of several parts or domains, and that
one such domain can be responsible for the destination of the
whole enzyme, is obvious. There is no clear dividing line between
simple proteins and protein complexes.

It is actually a fact that "all theories that impute knowledge or
purpose to subcellular or cellular structures have been discarded"
by mainstream science. But my suspicion is that reductionist
(materialist) prejudices have been the main reason.

Normally the first who detects a new mechanism has the
possibility to provide a philosophical interpretation. I do not
agree with Monod's interpretation, because it is not based on
facts. The same facts can be interpreted in very different ways.

According to Monod's interpretation, every system we have
examined carefully enough can be called mechanical.

In my model, little proteins have diameters of a few millimetres,
e.g. myoglobin with 153 amino acids about 4.5 mm x 3.5 mm x 2.5 mm.
The major groove of DNA is about 2 mm wide. The whole human
DNA (both chromosome sets) is about 2000 km long. The major
groove is even longer. If the recognition by transcription factors
depends on direct contact, then a transcription factor must
come very near (maybe 1 mm or 2 mm) to its destination and
should even have the correct alignment. A normal living cell
with a diameter of about 10 m would consist of 10^12 different
cubes of 1 mm side length.

There is an essential difference between panpsychism and vitalism.
Vitalism starts with dead matter and introduces some kind of 'vital
forces' or souls. In panpsychism, however, there is no dead matter,
because even elementary particles have two aspects, a 'materialist'
and a 'vitalist'.

Just as a human soul influences the behaviour of a human body,
some kind of primitive soul is responsible for the astonishingly
complex behaviour of photons.

The immune system has turned out to be more complex than assumed
by the "clonal selection theory". To kill or inactivate those
cells whose antibodies react to "self" is not so easy.

There was once the following problem:

How can the immune system distinguish between "self"
and "foreign"?

It was 'resolved' in this way:

Cells or antibodies reacting to "self" are inactivated or
killed by the immune system whereas those reacting to
"foreign" are not.

Once I read somewhere (or dreamt that I had read) that if we add
antigens to one of two glasses containing the same fresh blood,
corresponding antibodies also appear in the glass without antigens.
If this is true, it would be very strong evidence for the psychon theory.

> So in general, there are no subcellular or biochemical systems
> that embody purpose in their present-day operation. Reductionism
> has been triumphant for many years, and is embodied in the
> once-controversial slogan "anything organisms can do, cells can
> do; anything cells can do, molecules can do".

I go even further. I assume a continuity from elementary particles
to human souls. The mind-body problem, one of the oldest
philosophical problems, has never been resolved by materialist
reductionism; it has only been declared a pseudo-problem.

The appearance of consciousness some billions of years after the
big bang is also an open question. In addition to that, it is impossible
to localize the memories in the brain. Theories have seriously
been proposed where one memory is somehow stored in the whole
brain (in a holistic way). That seems to me rather evidence that
such memories are not stored in the brain.


14-Mar-1999

> You are also overly fond of your own hypotheses, a usually-fatal
> affliction in science.

I don't think that this is necessarily a usually-fatal affliction.
Many great scientific successes would have been impossible if not
at least the scientists themselves had been fond of their
own hypotheses.


14-Mar-1999

It is necessary to have a concrete imagination of the proportions
between cells, enzymes, molecules and so on. Therefore I have
introduced the enlarged model where 1 mm corresponds to 1 nm.
The 'diameters' of enzymes are then in the order of a few millimetres
and the 'diameter' of a water molecule is about 0.3 mm (there is
room for 33.3 water molecules in 1 cubic millimetre).
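The 33.3-molecules-per-cubic-millimetre figure and the 10^12-cube cell follow from textbook values, and can be cross-checked in a few lines of Python. The constants (Avogadro's number, water's molar volume) are standard; the 10-micrometre cell diameter is the value used elsewhere in these posts.

```python
AVOGADRO = 6.022e23        # molecules per mole
MOLAR_VOLUME_CM3 = 18.0    # cm^3 of liquid water per mole

# Real volume per water molecule, in nm^3 (1 cm^3 = 1e21 nm^3).
# In the 1 nm -> 1 mm model, one nm^3 becomes one model mm^3.
volume_nm3 = MOLAR_VOLUME_CM3 * 1e21 / AVOGADRO
molecules_per_nm3 = 1.0 / volume_nm3
print(molecules_per_nm3)       # ~33.5 water molecules per model mm^3

# A 10-micrometre cell becomes a 10 m object in the model:
cell_diameter_nm = 10_000
print(cell_diameter_nm ** 3)   # 10^12 one-millimetre model cubes
```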

> Sometimes, but sometimes the enzymes are tethered to membranes
> or other structures (eg some of the respiratory enzymes, signal
> transducing G-proteins etc). As Gavin Tabor pointed out, in aqueous
> solution there are around 10^14 interactions/sec between molecules,
> and the volume involved is nanoliters, which is minuscule compared to
> the free diffusion paths of these molecules, so it is not particularly
> difficult for a substrate to meet its enzyme in the right orientation
> just by randomly bouncing around in the cell.

One nanolitre gives a cube with a side length of 0.1 mm, and in
our model this corresponds to 100 m !!! You even claim that such
lengths are minuscule compared to the free diffusion paths of the
enzymes. Are you confusing the kinetic theory of gases with
diffusion in aqueous solutions ...

It seems that you do not know the theory of Brownian motion.
Einstein calculated in a 1905 paper that a particle with a
diameter of 0.001 mm (the size of a bacterium) undergoes an
average displacement of 0.0008 mm in a second and of 0.006 mm
(less than the length of a normal cell) in a minute (at a
temperature of 17 C). The average displacements per unit time
of enzymes are certainly longer because they are much smaller.
The bigger the particle, the slower its random thermal motion.
The reason is simple: random collisions with the many surrounding
molecules can cancel each other out, and the remaining change in
momentum does not increase proportionally to the mass of the
moving particle.
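Einstein's figures as quoted above can be reproduced from the Stokes-Einstein relation. The sketch below assumes water's viscosity near 17 C (eta ~ 1.35e-3 Pa*s) and treats the particle as a sphere, so it is an order-of-magnitude check, not the original derivation.

```python
import math

K_B = 1.38e-23    # Boltzmann constant, J/K
ETA = 1.35e-3     # viscosity of water near 17 C, Pa*s (assumption)

def rms_displacement(diameter_m, t_s, temp_k=290.0):
    # Stokes-Einstein diffusion coefficient for a sphere of the
    # given diameter, then 1D r.m.s. displacement sqrt(2*D*t).
    d_coeff = K_B * temp_k / (3 * math.pi * ETA * diameter_m)
    return math.sqrt(2 * d_coeff * t_s)

# Bacterium-sized particle (1 micrometre) over one second:
print(rms_displacement(1e-6, 1.0) * 1e6)    # ~0.8 micrometres

# 10 nm enzyme over one second:
print(rms_displacement(10e-9, 1.0) * 1e6)   # roughly 8 micrometres
```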

In our model, a normal living cell with a diameter of about
10 m consists of 10^12 different cubes of 1 mm side length. So it's
rather difficult for a substrate to meet its enzyme in the right
orientation "just by randomly bouncing around in the cell".
Don't forget, most of the 10^14 random collisions, primarily with
water molecules, have no effect at all! Every square millimetre
of the enzyme surface corresponds to about 10 water molecules.

At most 5 percent of the hundreds of different mitochondrial
proteins are coded by mitochondrial DNA. The proteins even have
to pass a double membrane in order to reach their destination.
According to the very convincing endosymbiont theory, at least
most of these proteins (or their ancestors) were once coded by
the mitochondrial DNA itself. How do you explain the fact that
the proteins could still find their destinations within the
mitochondrion even after the transfer of their genes to the
nucleus?

> What apparent purposeful motion of proteins?

For instance motions of transcription factors or of ribosomal
proteins. There are many other examples: look for instance at
the many enzymes involved in DNA replication. If there were
only random thermal motions, only a very small percentage of
the enzymes would work (by chance).

>> The voyage of transcription factors to their destiny can be compared
>> with the voyages of migratory birds and other migratory animals.

> Not really, you're missing the scale of the cell and thermal motions.
> A transcription factor can cross the cell in a nanosecond, the
> observed rates of transcription factor initiation are entirely
> compatible with the enzymes just randomly bouncing around in the cell.

If this is true and transcription factors generally reach their
destinations some micrometres away in a nanosecond, then
(materialist) reductionism is dead !!!

>> You ask me, how can I "be certain that 'at least 20' independent
>> and mutually improbable conditions all have to be satisfied, in one
>> particular sequence". I have the same right to ask you, how can you
>> be certain that not at least 20 different conditions with low probability
>> must be at the same time satisfied for a replicating system to appear.

> Well, that's a nice way to avoid the question. But from model self
> replicators, and our current knowledge of chemistry, we can say that
> the steps (however many they are) are not independent and at least
> some are not particularly improbable.

I don't speak of steps but of conditions. If in an experiment
the necessary amino acids are put together in an optimal
concentration, then we have a condition whose probability is
certainly less than 10^-100.

> What is a prediction of panpsychism in chemistry that can be tested?

Unfortunately panpsychism predicts what you think can be
explained by reductionist causal laws.

I ask you: can you imagine anything which would convince you
that neo-Darwinism is not generally correct?

If it were possible to produce large amounts of rare proteins
such as luminous proteins by biotechnological means, this would
be strong evidence against the psychon theory.

In the same way that you cannot predict from an evolution theory
what kinds of animals have appeared, you cannot predict from
the psychon theory what kinds of behaviour molecules have
developed during evolution.


18-Mar-1999

[ Theory of Brownian motion ]

I don't understand: according to Einstein's calculation a bacterium-sized
particle travels 0.8 and not 800 micrometers in a second. The
mean path is proportional to the square root of both the absolute
temperature and the time, and inversely proportional to the square
root of the diameter (of a spherical particle). The mean path of a
spherical enzyme with a diameter of 10 nanometers is roughly
10 micrometers in a second and roughly 10 nanometers in a
microsecond. For a mean path of 1 nanometer (about three times
the length of a water molecule) ten nanoseconds are needed!
The mean path of a little protein such as trypsin in 10 nanoseconds
is about 2 nanometers.

Why is a 100-fold increase in time needed for a 10-fold increase
in the mean path? Because the movements are purely random and
the particle will come back to the starting point over and over
again (given infinite time).
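The square-root law invoked here can be seen directly in a simulated one-dimensional random walk; the step counts and walker counts below are arbitrary illustration values.

```python
import random

def rms_net_displacement(steps, walkers, rng):
    # Root-mean-square net displacement over many 1D random walks
    # of unit steps: expected value is sqrt(steps).
    total = 0.0
    for _ in range(walkers):
        pos = sum(rng.choice((-1, 1)) for _ in range(steps))
        total += pos * pos
    return (total / walkers) ** 0.5

rng = random.Random(1)
short = rms_net_displacement(100, 500, rng)      # expect ~sqrt(100) = 10
long = rms_net_displacement(10_000, 500, rng)    # expect ~sqrt(10000) = 100
print(short, long, long / short)   # 100-fold time -> ~10-fold mean path
```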

"The 'diameters' of enzymes are [in our model] in the order of a
few millimetres and the 'diameter' of a water molecule is about
0.3 mm (there is room for 33.3 water molecules in 1 cubic
millimetre)."

Please imagine very concretely the many water molecules
(0.3 millimetres) colliding with the enzyme (with a diameter of
a few millimetres). Think about the effect of such a collision on
a protein whose mass is thousands of times bigger than the
mass of the water molecule.

> According to you, enzyme reactions can't take place at all, even in a test
> tube. However, they take place in test tubes at rates that are entirely
> consistent with substrates and enzymes meeting randomly in a diffusion
> limited way (with the exception of enzymes with lipid soluble substrates).
> This is all elementary enzyme kinetics.

On the contrary, according to your reductionist Darwinism enzyme
reactions can't take place at all. The main principle of this modern
world view is the hypothesis that biology can be reduced to chemistry
and chemistry to physics. But if we explain chemistry by physics we
should do it in a careful and consistent way. It makes no sense to
put forward some formulas which are in agreement with the experiments
and to simply assume or claim that they are consistent with physics.

How probable is it that hundreds of mitochondrial genes received,
during or after their transfer into the nucleus, by random mutations
(blind chance) exactly those specific sequences which direct their
corresponding enzymes back to the mitochondria?


21-Mar-1999

Simple replicating proteins are not enough to start evolution,
because they cannot continuously evolve to proteins which
are coded by RNA or DNA. Take for instance a folded protein
with an amino acid length of 32. Can you imagine some
mechanism by which the sequence information is transferred
to a 96 (or 64 in a genetic precursor code) nucleotide long
RNA or DNA molecule?

How big is the probability that DNA or RNA sequences which
correspond to the already evolved proteins could have appeared
by chance? How probable is it that proteins, being nothing
more than dead matter, could invent the genetic code?

According to the psychon theory, enzymes are like primitive
animals. Proteins and RNA enzymes should at first have evolved
independently. Self-replicating proteins at some point learned
to use short RNA templates in order to accelerate
the production of new proteins. This technique was
continuously improved not only by pure chance but also by
final laws of nature (one could call it God).

The emergence of the genetic code can be compared with
the emergence of human languages, and the emergence of
the highly complex living cell with the emergence of modern
cities. All species and evolutionary innovations were designed
in a similar way as houses, cars or ships have been designed.
But at the same time they have evolved by chance in a similar
way as houses, cars and ships have evolved.

If it is true that living cells using the modern genetic code
appeared very early on the earth, then it is highly improbable that
this very complex code evolved for the first time on earth. Such an
information transfer from 'cosmic ancestors' to the earth is
possible, because an essential part of the information of
living systems is stored in the immaterial psychons.

We cannot reproduce the behaviour of enzymes of the early
earth because their behaviour depends on psychons which have
further evolved. So amino acid sequences which would have folded
millions or billions of years ago, today do not fold any more.
In the same way, sequences which fold today did not fold on the
early earth.

With this example I intend only to show how few conditions give
such a low probability of 10^-60. If for a self-replicating system
only 60 building blocks were needed, and the probability of the
correct chemical bond with the others were 10% for each of the
60 blocks, then the final probability of the system would also
come to 10^-60.

>>> My understanding is that the smallest self-replicating protein
>>> has 32 base units in it. If there are say 10 different amino
>>> acid choices for each block, that implies there are 10^32
>>> such proteins which can form - ignoring any issues of
>>> chemistry. In a liquid molecules interact at a rate of around
>>> 10^14 interactions/sec : lets say new arrangements are formed
>>> at that rate. In 1 mole of amino acids that means there are
>>> 10^22 32-element proteins, and in 1 second, there are
>>> thus 10^36 proteins being tried. Thus in 1 second, 10^4
>>> of these self-replicating proteins will be formed.
>>> Sounds like pretty good odds to me.

>> Are you telling a joke?

> Do I look like I'm joking? I'm providing you with an example
> of how to go about estimating the likelihood of generating the
> simplest self-replicating entity given a flask of amino acids.
> This seems much more useful for the discussion than the probability
> of 20 marbles forming a pretty pattern in space (which is what
> you have worked out).

If you have not been joking, then you have been making
catastrophic errors!

I assume that the 10^14 interactions/sec are meant in the
following way: two neighbouring molecules collide with a
frequency of 10^14 Hertz.

You assume that at this rate new arrangements of 32-element
proteins, and only of 32-element proteins, are formed! But why
only sequences with a length of 32 amino acids?

The hypothesis that correct 32-element proteins are formed in
one step represents a condition whose probability is almost
zero both in nature and in the laboratory!

But even if we take this hypothesis for granted, and the correct
sequence could be found by random trials, the folding of the
protein would be impossible because 10^-14 seconds are far
from being enough.

So your calculation is further based on the hypothesis that
once the correct sequence is found, it remains unchanged for
at least microseconds or maybe even milliseconds so that the
protein can fold. But already within one microsecond, every
other amino acid has been part of 10^8 different 32-element
sequences.
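The reshuffling figure follows directly from the quoted rate; a small sketch with the assumed time scales:

```python
# One reshuffle per 10^-14 s (the quoted rate) versus an assumed
# folding window of one microsecond.
reshuffle_period = 1e-14  # s, quoted interaction interval
folding_window = 1e-6     # s, assumed minimum time needed for folding
rearrangements = folding_window / reshuffle_period
print(rearrangements)  # ~1e8 different arrangements per amino acid
```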

This hypothesis represents a condition whose probability is
virtually zero!

That not every thermal collision between two molecules results
in a new chemical bond is self-evident. And by far the most
collisions occur between amino acids and water molecules.
Such 32-element sequences can only form by continuously
adding new amino acids. If sequences only grew and never
decayed, then after a short period there would be no more
spare amino acids and the whole process would stop long
before the correct sequence could have appeared.

If the sequences decay easily, a sequence of 32 elements cannot
be reached (this is what occurs naturally). So one must also
include in the probability calculation the probability that a
sequence of 32 correctly chained amino acids can appear at all.
If we assume that the probability of correctly adding a further
amino acid is 0.5 (in reality the probability is even lower),
then the probability that a correct 32-element chain can form
is below 10^-9, and not 10^0 as you assume.
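The chain-growth probability can be verified directly (0.5 per step is the illustrative assumption from above):

```python
# Probability that 32 successive additions are all "correct",
# assuming an (optimistic) 0.5 success probability per step.
p_step = 0.5
p_chain = p_step ** 32
print(p_chain)           # ~2.33e-10, i.e. below 10^-9
print(p_chain < 1e-9)    # True
```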

If peptides are synthesized in vitro, special methods are
needed in order to prevent undesired chemical bonds.

And don't forget: Miller-Urey-style experiments result
predominantly in tar. Only the simple amino acids glycine and
alanine appear in substantial quantities. Most amino acids
do not appear in measurable quantities.

In addition, there is the problem that such experiments yield
not only the needed forms of the amino acids but also their
mirror-image isomers.

Your calculations are representative of all neo-Darwinist
calculations which have 'shown' that evolution is possible
within materialist reductionism. You all seriously misunderstand
what is meant by a probability calculation!!!

>> What is a self-replicating protein?

> One capable of reproducing itself from spare amino acids.

I agree. But the Lee peptide is not capable of reproducing
itself from spare amino acids! To call a simple peptide ligase
a 'self-replicating protein', even if it is able to ligate two
halves of its own amino acid sequence, is not only misleading but
maybe even dishonest!

The Lee peptide is made of 32 building blocks. There are 20
different building blocks, and it is normal to attribute the
probability 20^-32 to the special sequence of the Lee protein.

20^-32 = 2.33 * 10^-42 = 0.000'000'000'000'000'000'000'...

The self-ligating Lee protein is made of 2 building blocks and
there are only 2 building blocks. So instead of

20^-32 = 2.33 * 10^-42

we have a probability of only

2^-2 = 0.25 !!!
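The contrast between the two framings can be computed directly:

```python
# Probability of the full 32-element sequence (20 choices per position)
# versus the "self-ligation" framing (2 halves, 2 building blocks).
p_full_sequence = 20.0 ** -32
p_ligation = 2.0 ** -2
print(f"{p_full_sequence:.2e}")  # 2.33e-42
print(p_ligation)                # 0.25
```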


23-Mar-1999

... And it is not possible to impress me by scientific papers
which presuppose what should be explained.

It is clear that every obstacle to forming a correct chain of
amino acids can be removed by some technique or by some special
assumption. But these techniques and assumptions represent
conditions to which we must also ascribe a probability.

"Mineral catalized or assisted peptide formation" is not available
everywhere. Each of 20 amino acids must then be available in
high proportions exactly in places, where "mineral catalized or
assisted peptide formation" is possible.

>> I assume that the 10^14 interactions/sec are meant in the
>> following way: two neighbouring molecules collide with a
>> frequency of 10^14 Hertz.

> No. A given molecule will interact with a new molecule
> every 10^-14 s. If you like, the 'pack of cards' of each
> molecule is being reshuffled every 10^-14 secs.

The velocity of molecules in air is around 10^2 to 10^3 m/s.
This gives a distance of 10^-12 m to 10^-11 m every 10^-14 s.
(Einstein's formula for Brownian motion gives similar results
in water, but there an x-fold increase in distance requires an
x^2-fold increase in time.)

The length of a simple molecule is 10^-9 or 10^-10 m, which is
around two orders of magnitude bigger than the distance a
molecule moves in 10^-14 s. How is it possible that a "given
molecule will interact with a new molecule every 10^-14 s",
if the given molecule moves only a small fraction of the
length of a molecule in that time?
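The order-of-magnitude comparison can be sketched as follows (figures taken from the text):

```python
# Distance travelled by a gas molecule in 10^-14 s, compared with
# the length scale of a simple molecule.
speed = 1e3            # m/s, upper end for molecules in air
interval = 1e-14       # s, the claimed reshuffling interval
distance = speed * interval   # 1e-11 m travelled per interval
molecule_size = 1e-9   # m, length of a simple molecule
print(molecule_size / distance)  # ~100: it moves ~1/100 of its own length
```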

The existence of real self-replicating (the Lee protein is
only self-ligating) proteins would be strong evidence for the
psychon theory (in the same way as prions and inteins are
strong evidence for this theory).

There are proteins with very different behaviours.

Even if we take for granted that a 32-element protein creates
correct amino acid sequences of the needed length, a big
problem remains. There are 20^32 different sequences.

So general chemical and physical laws must lead to:

  1. The sequence of 32 amino acids folds correctly.
  2. The folded protein recognizes many different amino acids
    and is able to chain them in the correct way.
  3. Always the same sequence is created.
  4. This sequence corresponds by chance (no base pairing is
    possible) to the amino acid sequence of the protein itself.

I guarantee you, that's completely absurd.


© No rights reserved, 2000