From the archives: Why the retina is cool

I’m taking a page out of Natalie’s book and giving myself a week off, where I re-post posts from my old blog every day. This doesn’t mean I won’t post other stuff (see previous post), but I’m giving myself permission to do only that if other things get in the way. Today, I’m re-posting another really old sciencey post, after last week’s really old sciencey post got such a good response. Also: long time readers, what posts from my old blog did you really enjoy, and think readers of this blog should read?

For this first Science Sunday, I’ve picked a weird goal: persuading you, my readers, that the human retina is one of the coolest things known to science. When I learned about it in my Neurobiology I course late last fall, it blew my fucking mind. Thus, I’ve decided to spread the gospel of the human retina, as odd and difficult as that task seems.

I guess it starts with a post from my old blog which, in retrospect, looks depressingly ignorant. Listen up, though: you could learn from my mistakes. I may as well reproduce the whole thing:

In my readings in the neuroscience literature, I’ve yet to find any account of how information is sent from the eyes to the brain–what form the nerve signals take. Visual images are complex, so it would follow that the system has to be pretty complex, and it would be a real interesting topic. Knowing how the nerve impulses go would tell us something about the brain: it would tell us that it’s set up to process information in that particular format. It seems to me like that would be a huge step in finding out how the brain works.

Since this doesn’t get talked about at all, as far as I’ve seen, one of two things must be true: the research has been done and has been ignored (in which case a lot of scientists are being stupid), or the research hasn’t been done. I don’t think scientists would ignore something as big as this, so I can only guess the research hasn’t been done. On the other hand, it would seem in principle simple to do: put a sheep eyeball with intact optic nerve into an apparatus with chemicals able to keep it functioning in the short term, a small video screen, and sensor able to give a very precise account of what’s happening at the end of the optic nerve. I can only assume, then, that the technical aspects of building such a device would be too difficult, and therefore no such device has been built. If someone did figure out how to build the thing, though, the hard part of the research would be over, they’d be able to gather some amazing data, publish it, and be a real contender for the Nobel.

All of the above is definite half-assed speculation territory for me. I’d love to have someone who really knows what they’re talking about shed some light on this one. But if the above is correct, might it suggest we should start giving scientists engineering training in the interest of being able to develop better experimental apparatuses? Or perhaps, at least, having scientists work closer with engineers?

I got a couple comments saying the stuff I was looking for had already been done, but it wasn’t quite what I had been hoping for. Now that I’ve taken some real neuroscience courses, however, I can answer all my own questions.

First, it wasn’t scientists who were being stupid, ignoring the research I was interested in. It was philosophers and popular science writers. In retrospect, I don’t blame them. The fine details of how nerve signals are transmitted aren’t all that sexy, in and of themselves. Much better to write a book on high-level concepts we don’t really have pinned down.

Second, you know that thing about the technological difficulties of building a device to tell you how nerve signaling works? Turns out, key work was done by a pair of guys named David H. Hubel and Torsten N. Wiesel, who did electronics work in WWII, and used those skills to make really fine electrodes to poke the nervous system with. Hooray for being interdisciplinary and working with technologists. Oh yeah, and they got the Nobel prize in 1981.

Now to the actual science. Nerve signaling works like this: nerve cells generally have an electrical charge difference across their membranes. It’s a matter of different ion concentrations, and can be altered by changing those ion concentrations. Contrary to what some popular portrayals may make you think, movement of free electrons as little lightning bolts has basically nothing to do with the nervous system. Neurons signal each other by releasing neurotransmitters, which bind to proteins on the surface of the target cell and either directly or indirectly alter the cell’s permeability to ions, allowing the concentrations to change. Decrease the charge difference across the membrane, and you’re moving towards having the cell fire a signal by setting off a chain reaction of ion movement along its length. Conversely, increase the charge difference and the neuron becomes less likely to fire. These processes are called “excitation” and “inhibition,” respectively.
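If you like seeing an idea as code, here’s a toy sketch of that excitation/inhibition logic. The numbers are made up for illustration (real membrane potentials are more like −70 mV resting and −55 mV threshold, but the dynamics here are drastically simplified):

```python
# Toy model (illustrative numbers, not real physiology): a neuron sits at a
# resting potential; excitatory inputs (+) push it toward a firing threshold,
# inhibitory inputs (-) push it away.
REST = -70.0       # resting membrane potential, mV
THRESHOLD = -55.0  # firing threshold, mV

def respond(inputs, v=REST):
    """Accumulate inputs in order; report whether the cell fires."""
    for delta in inputs:
        v += delta
        if v >= THRESHOLD:
            return "fire"
    return "quiet"

print(respond([5.0, 5.0, 5.0, 5.0]))    # pure excitation -> "fire"
print(respond([5.0, -5.0, 5.0, -5.0]))  # inhibition cancels it -> "quiet"
```

The point isn’t the arithmetic; it’s that inhibition and excitation are the same currency with opposite signs, which is what makes the retinal wiring below possible.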

What does that have to do with the retina? Well, your eyes’ light-sensitive cells are hooked up to neurons, and these neurons use patterns of inhibition and excitation to pull off a cool trick: a patch of photoreceptors will generally have neurons running to adjacent patches, and these connections have an inhibitory effect. Thus, if a receptor isn’t getting much light, but its neighbors are, its signal will be even weaker than if nobody was getting much light. Similarly, if a receptor is getting a lot of light when its neighbors aren’t, its signal will be even stronger than it would be if everyone was getting a lot of light, since when everyone is getting a lot of light there’s some inhibition coming from the neighbors. This underlies such phenomena as the checkerboard illusion and grid illusions.
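This “lateral inhibition” trick is easy to demo in one dimension. Here’s a sketch with made-up weights (the fraction `k` and the light values are arbitrary): each cell outputs its own light level minus a fraction of its neighbors’ average.

```python
# 1-D lateral inhibition sketch (illustrative weights): each cell's output is
# its own light level minus a fraction k of its two neighbors' average.
def lateral_inhibition(light, k=0.4):
    out = []
    for i, center in enumerate(light):
        left = light[i - 1] if i > 0 else center
        right = light[i + 1] if i < len(light) - 1 else center
        out.append(center - k * (left + right) / 2)
    return out

# A step edge: dim on the left, bright on the right.
step = [10, 10, 10, 50, 50, 50]
print(lateral_inhibition(step))
# The cell just inside the dim side comes out darker than its dim neighbors,
# and the cell just inside the bright side comes out brighter than its bright
# neighbors -- the edge gets exaggerated, as in the illusions above.
```

That overshoot on either side of the boundary is exactly what makes a uniform gray square look lighter or darker depending on its surroundings.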

But being able to appreciate these illusions isn’t of much practical benefit. So why would the retina do this? Simple: edge detection. Being especially sensitive to contrast lets you tell where the edges of objects are, and dividing up our visual field into distinct objects is kind of important. Most of the time, there is some contrast to be found at the edges of objects. (Fun fact: on the off chance the contrast is lacking, the edge of the object will seem to magically vanish. Good for camouflage, though I’ve only ever encountered it by accident.)

If you understand this trick, it’s a short step to understanding tricks for more complicated processing. For example, the retinal wiring above gives each output neuron a big circle–little circle response pattern, or target-bullseye if you prefer: light in the small center excites it, light in the surrounding ring inhibits it. But string several of these target-bullseye systems together in a straight line, and you’ve got a system for detecting lines–specifically, lines in that particular orientation. We know the brain has such systems. We also know the brain has cells sensitive to overall direction of movement. I don’t know whether anyone knows the wiring for those, but in general, timing information is easy enough to handle: to compare things happening at slightly different times, put them through pathways of various lengths until you’ve got them matching up. We know the auditory system does this.
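The “string bullseyes along a line” idea can be sketched in a few lines too. This is a toy with made-up weights, not real receptive-field measurements: each unit likes light at its center and dislikes light in the ring around it, and summing units placed along one row gives a detector that prefers a horizontal bar over a vertical one.

```python
# Toy center-surround unit (illustrative weights): excited by light at its
# center, inhibited by the average light in the eight surrounding cells.
def center_surround(image, r, c, k=0.5):
    center = image[r][c]
    ring = [image[r + dr][c + dc]
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)]
    return center - k * sum(ring) / len(ring)

def horizontal_line_detector(image, row, cols):
    # Chain several center-surround units along one row and sum them.
    return sum(center_surround(image, row, c) for c in cols)

# 5x5 images (1 = light, 0 = dark): a horizontal bar vs. a vertical bar.
horiz = [[0]*5, [0]*5, [0, 1, 1, 1, 0], [0]*5, [0]*5]
vert = [[0, 0, 1, 0, 0] for _ in range(5)]

h = horizontal_line_detector(horiz, 2, [1, 2, 3])
v = horizontal_line_detector(vert, 2, [1, 2, 3])
print(h > v)  # True: the detector prefers its own orientation
```

Swap which units you chain together and you get a detector for a different orientation, which is roughly the intuition behind the orientation-selective cells Hubel and Wiesel found.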

This all sounds fairly trivial. But when we trace the stream of visual information to the temporal lobe, we find neurons with very strange response patterns, responding to certain well-defined shapes but not others. How does this happen? Very plausibly, by more complicated versions of the circuitry for detecting contrast and edges. We don’t really know, since the complexity of the behavior in these cells makes them hard to figure out, but that’s a very, very good guess.

This paves the way for understanding things like reading, face recognition, and getting horny when you see a hot chick. Really important features of our mental life. And oddly, it all starts in the humble retina.

  • Martin

    Isn’t there a bunch of other preprocessing that goes on in the retina… Direction detection, motion direction, basic shape recognition?

    So the signals that reach the brain are not about patches of light but of objects and their (changing) relationship to each other.

    • http://www.facebook.com/chris.hallquist Chris Hallquist

No, that stuff happens in the early visual cortex. It runs on the same general principles, though.