Design in the Human Eye
The computer chip, which runs a computer, is a little wafer of silicon that has a marvelously intricate interconnection of parts, all a fraction of an inch in size. It has been designed and created. I do not know any evolutionists who claim that any particular computer chip was formed by a series of accidents of fire, water, gravity, sparks, and so on. So there is no doubt even among the most doctrinaire evolutionists that almost any silicon computer chip you can imagine was designed and created by a higher intelligence.
On the other hand, the human retina is far more complex than any computer chip. Yet an evolutionist who is looking at the human retina will say, "Now let's see: what combination of wind, fire, water, sparks, reducing atmosphere, etc., caused this to happen?" I think this points out the curious double standard concerning the subject of origins that is present among otherwise good scientists. I think the computer-retina analogy is very useful for vividly demonstrating this double standard.
The retina lining the back of the eye is a very thin "membrane," even thinner than Saran Wrap. Compare this with a computer chip. The May 1985 issue of High Technology pictured silicon computer chips. The actual chip is about 7 millimeters across and contains about 100,000 transistors. The retina contains photoreceptors that may be compared to transistors. However, a photoreceptor is actually a very efficient, high-gain amplifier and is much more complex than a transistor. In the fovea, the retina has 200,000 of these photoreceptors for every square millimeter.
Phenomenal! So here you have high technology that does not even come near the retina in complexity.
The retinal rods and cones are composed of various layers. The human rods have a dynamic range of about ten billion to one. In other words, when fine-tuned for high-gain amplification (as when you are out on a dark night and there is only starlight), your photoreceptors can pick up a single photon. Phenomenal sensitivity! Of course, the retina performs a number of processing tricks on that signal just to make sure it is not picking up noise, so you don't see static; it wants at least six receptors in the same area to pick up the same signal before it "believes" the signal is real and sends it to the brain. In bright daylight the retina bleaches out and the volume control turns way down for, again, admirable performance.
Modern photographic film has a dynamic range of only about 1,000-to-one. The retina is a visual system that handles nearly all ranges of light intensity without even changing film or developer. Not bad for a structure engineered several thousand years ago!
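The gap between those two dynamic ranges is easier to appreciate in photographic "stops" (doublings of light). A minimal sketch of that arithmetic, using only the two round figures quoted above:

```python
import math

# Dynamic range figures quoted in the article (approximate, order of magnitude).
retina_range = 1e10  # ~10 billion to one (rod-mediated vision)
film_range = 1e3     # ~1,000 to one (photographic film of the era)

# Photographers measure dynamic range in "stops": each stop is a doubling.
retina_stops = math.log2(retina_range)
film_stops = math.log2(film_range)

print(f"Retina: ~{retina_stops:.0f} stops")  # ~33 stops
print(f"Film:   ~{film_stops:.0f} stops")    # ~10 stops
```

In other words, the retina covers roughly three times as many stops as film of that era, all without "changing film or developer."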
Is the eye poorly designed?
Some evolutionists have said, "My goodness, look at that mistake. The retina is inside out and should be turned around the other way, since the light should hit the photoreceptors first." There is a very good reason the mammalian retina is the way it is. The photoreceptors (the rods and cones) have a very high rate of metabolism, and they have to be in touch with the nutrient supply. Those photoreceptors (in mammals) replace themselves probably every seven days if they are young and healthy. This is a very good protective mechanism. If you have ever looked at the sun, you have probably burned out a number of rods and cones, and they usually regenerate rapidly.
Because all the retinal neurons, ganglia, and other hardware are packed in with separations of less than a wavelength of light, the retina is totally transparent. You can look at a retina, and even though it has all this "hardware" and is much more complex than a silicon computer chip, it is totally transparent. Light goes right through it.
What is going on inside the retina?
It has been estimated by a number of computer scientists who are trying to simulate the visual system with computer models that ten billion calculations occur every second before the image ever goes back to the brain. Here is a quotation from John Stevens, Ph.D., an associate professor of physiology and biomedical engineering (Byte, April 1985): "To simulate 10 milliseconds of the complete processing of even a single nerve cell from the retina would require the solution of about 500 simultaneous non-linear differential equations one hundred times and would take at least several minutes of processing time on a Cray super-computer. Keeping in mind that there are 10 million or more such cells interacting with each other in complex ways, it would take a minimum of a hundred years of Cray time to simulate what takes place in your eye many times every second." You have to keep in mind that this particular engineering feat was done several thousand years ago. You might even say that it is a little old-fashioned. It uses neurons that are a million times slower than the little wires inside a computer chip (the conducting traces). So you are starting out with hardware that is already a million times slower than anything you have in a silicon chip. However, it is put together in such a highly organized and sophisticated way that even the retina of a lowly animal marvelously outperforms our most advanced computers! And it keeps repairing itself!
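Stevens's estimate can be sanity-checked with back-of-envelope arithmetic. The figures below are my own rounded assumptions: I take his "several minutes" per cell as about three, and his "10 million or more" cells as exactly ten million, ignoring the cell-to-cell interactions that would only make the total larger:

```python
# Back-of-envelope check of Stevens's Cray-time estimate.
# Assumptions (mine, not his exact figures): "several minutes" ~ 3,
# "10 million or more" cells = 10 million, interactions ignored.
minutes_per_cell = 3           # Cray minutes to simulate 10 ms of one cell
cells = 10_000_000             # retinal nerve cells

total_minutes = minutes_per_cell * cells
total_years = total_minutes / (60 * 24 * 365)  # minutes in a year

print(f"~{total_years:.0f} years of Cray time")  # roughly 57 years
```

Even with these deliberately conservative inputs, the answer lands in the same order of magnitude as his "minimum of a hundred years."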
The author I have just quoted has been very interested in simulating visual systems with a computer chip. He dreams of someday building a silicon chip that actually does what the retina does. Even though that is not yet possible, he conjectures about what it would be like. Going through a list of specifications, he estimates that it would weigh 44 to 110 pounds. Typically, the little silicon chip that runs a computer is a fraction of an inch in size and wafer-thin. This "dream chip" would have to weigh on the order of 100 pounds to do what the retina does! For comparison, the mammalian retina weighs less than a gram. The "dream chip" would also have to occupy ten thousand cubic inches of space.
The retina occupies only 0.0003 cubic inches of space. The chip's power consumption would be about 300 watts; the retina consumes only about 0.001 watts. He goes through other calculations as well. The resolution of this "dream chip" would be about 2,000 by 2,000 pixels, whereas the retina has a resolution of about 10,000 by 10,000. The chip would have about a million gates (transistors that act like one-way valves), whereas the retina has about 25 billion equivalent gates. The circuit layout of the chip would be two-dimensional, whereas the retina's is three-dimensional. I could go on and on.
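Putting the quoted specifications side by side makes the comparison stark. The figures below are the ones from the text; the ratio arithmetic is mine (taking the "dream chip" at 100 pounds and the retina at 1 gram):

```python
# "Dream chip" vs. retina, using the figures quoted in the text.
# Ratios > 1 mean the chip needs more; ratios < 1 mean the retina has more.
GRAMS_PER_POUND = 453.6

specs = {
    # quantity: (dream chip, retina)
    "weight (grams)":     (100 * GRAMS_PER_POUND, 1.0),  # ~100 lb vs. <1 g
    "volume (cubic in)":  (10_000, 0.0003),
    "power (watts)":      (300, 0.001),
    "gates":              (1e6, 25e9),                   # retina has far more
    "pixels per side":    (2_000, 10_000),               # retina resolves finer
}

for quantity, (chip, retina) in specs.items():
    print(f"{quantity:18s} chip/retina ratio: {chip / retina:,.4g}")
```

The volume ratio alone is over thirty million to one, and on the two measures where bigger is better (gates and resolution), the retina, not the chip, comes out ahead.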
Jared Diamond (Discover, June 1985) criticizes the mammalian eye as badly designed (or not designed). The only "evidence" presented is, in his opinion, that the retina is "inside out." Inside the eye, the light first has to go through the retinal hardware (that does all the marvelous ten billion calculations per second) and then finally hits the photoreceptors, the rods and the cones. The rods and cones pick up the light and send the signal back for all the processing to occur. This critic is saying that if God knew what He was doing, He would have put the retina the other way around because any idiot knows that light should hit the photoreceptors first without having to go through all the hardware.
The problem with that criticism is that there is actually no significant scattering or absorption of light in passing through the retinal hardware. Because the hardware is packed in so tightly (with separations of less than a wavelength of light), it is transparent. Looking through the biological hardware is like looking through window glass.
Second, if the photoreceptors were "sitting out here in the breeze" (as in Dr. Diamond's design), they would have very little means of nutrient supply. It's like taking the front lines and cutting off the supply lines. You have an enormous metabolic activity going on in those photoreceptors; they are constantly replacing themselves. Otherwise, we would all be blinded by bright lights. So, it makes a whole lot more sense to have the photoreceptors in contact with the choroid, a very rich blood supply, supplying all the needed nutrients. Incidentally, the choroid may even be a bit over-engineered. Some have said that you could get by with a lot less choroidal blood flow. However, if you designed the eye the way Dr. Diamond says "any sensible engineer would have designed it," the photoreceptors would not last very long. The first time someone took a picture of you with a flashbulb at close range, that would be it for your central retina; without enough nutrients, it might take months to regenerate the foveal cones. You would have to avoid reading or driving a car for months.
Another advantage of his design, according to Dr. Diamond, is better resolution (i.e., a sharper image). You would see better because the light wouldn't have to go through all that "wiring" to get to the photoreceptors. However, the human eye is already diffraction-limited in resolution. You cannot improve on the resolution that the eye has, given the constraints of the size of the pupil and the size of the globe under usual daylight conditions.
If you had a larger eye, you could improve the resolution. But that would be impractical for other reasons. Enlarging pupillary size could help resolution. However, it would reduce depth of focus markedly and bombard the rods and cones with too much light under sunny conditions. So given the size of the eye, changing the retinal design could not improve "sharpness." In other words, the separation of the foveal cones is about the same as a wavelength of light. When you have a 2 1/2 millimeter pupil in the front of the eye (as on a normal bright day), that distance is such that even if you improved the optical quality of the eye, you would have wasted your effort. It is the diffraction property of light that is the limiting factor here. The eye's design appears as if it were optimized around the visible light spectrum.
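The diffraction argument can be checked with the standard Rayleigh criterion from optics. The 2.5-millimeter daylight pupil is from the text; the 550-nanometer wavelength (mid-visible green) and the roughly 17-millimeter effective focal length of the eye are typical values I am assuming for illustration:

```python
# Rayleigh diffraction limit for the eye (standard optics formula).
# Pupil diameter is from the text; wavelength and focal length are
# assumed typical values, not figures from the article.
wavelength = 550e-9      # m, mid-visible (green) light
pupil = 2.5e-3           # m, daylight pupil diameter
focal_length = 17e-3     # m, approximate effective focal length of the eye

theta = 1.22 * wavelength / pupil   # smallest resolvable angle, radians
spot = theta * focal_length         # corresponding spacing on the retina, m

print(f"angular limit: {theta * 1e3:.3f} milliradians")  # ~0.268 mrad
print(f"retinal spot:  {spot * 1e6:.1f} microns")        # ~4.6 microns
```

Under these assumptions, the smallest detail diffraction allows onto the retina is a spot a few microns across, on the same scale as the foveal cone spacing, which is the point: at this pupil size, a finer photoreceptor mosaic could not recover any additional detail.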
If one is going to improve on image sharpness (without making the eye larger), he would have to start by using shorter wavelengths of light to achieve higher resolution. To do that, he would have to shift the visible spectrum. The photoreceptors would have to detect light at and beyond the violet and blue end of the spectrum. To do that, of course, he would have to alter the eye's filtering to let in more ultraviolet light, which is more phototoxic. This would lead to more rapid destruction of the photoreceptors, cataract formation, and accelerated free-radical formation. So, I think I will take the design that we have, instead of any "modern" revision.
Dr. Joseph L. Calkins, M.D.
202 Butler Avenue
Lancaster, PA 17601
Dr. Joseph L. Calkins is the Assistant Professor of Ophthalmology at Johns Hopkins University in Baltimore, Maryland. His article originally appeared in the March 1986 BSN.
Copyright © 1992 Bible Science Newsletter. Creation Moments, Inc. PO Box 839 Foley, MN 56329