So this implies that for a good coding system, its input-output function, this

function here, should be determined by the distribution of natural inputs.

So here's a classic study in which this idea was tested directly.

In the early 1980s, Simon Laughlin went out into the fields with a camera and measured the typical contrasts, that is, deviations in the light level divided by the mean light level, that would be experienced in the natural world, for example, by a fly. So, that's this distribution here.

If the response does indeed follow the distribution of natural inputs, then the response curve, here, should look like the cumulative probability determined by integrating p of c. And in fact, that's a very good match to what he actually observed in the response properties of the fly's large monopolar cells, the neurons that integrate signals from the fly's photoreceptors. Now, a study like this poses a challenge.
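This matching of the response curve to the cumulative probability of contrasts is often called histogram equalization. Here is a minimal numerical sketch of the idea, using a made-up Gaussian contrast distribution rather than Laughlin's measured one:

```python
import numpy as np

# Hypothetical contrast distribution p(c): a Gaussian, standing in
# for Laughlin's measured distribution of natural contrasts.
c = np.linspace(-3.0, 3.0, 601)
dc = c[1] - c[0]
p = np.exp(-c**2 / 2.0)
p /= p.sum() * dc            # normalize so that sum(p) * dc = 1

# The matched response curve is the cumulative probability:
# r(c) = integral of p(c') dc' from -infinity up to c.
r = np.cumsum(p) * dc

# Contrasts drawn from p(c) then produce responses that are spread
# uniformly over the response range, using every level equally often.
samples = np.random.default_rng(0).normal(0.0, 1.0, 100_000)
responses = np.interp(samples, c, r)
```

With this choice of response curve, all response levels are used equally often, which is the sense in which the code is matched to the input statistics.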

While it makes sense that our sensory systems would, over evolution or development, set up response codes that are adjusted to natural input statistics, it seems that much more work is needed to handle the problems posed by the huge natural variation that stimuli take on as one moves from indoors to outdoors, or even moves one's eyes around a room: the contrast distribution varies widely. Might sensory systems instead adjust themselves on much shorter timescales to take these statistical variations into account? So let's take a patch of the image and look at the variations in contrast in that image.
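To make the patch-to-patch variation concrete, here is a small sketch on a synthetic image (the two-halves construction and the contrast levels are made up for illustration; a real analysis would use natural images):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "image": left half has low-contrast fluctuations about the
# mean light level, right half has high-contrast fluctuations.
mean_level = 100.0
img = np.empty((64, 128))
img[:, :64] = mean_level * (1 + 0.05 * rng.standard_normal((64, 64)))
img[:, 64:] = mean_level * (1 + 0.30 * rng.standard_normal((64, 64)))

def local_contrast_width(patch):
    """Std of contrast c = (I - mean(I)) / mean(I) within a patch."""
    m = patch.mean()
    return ((patch - m) / m).std()

# Width of the local contrast distribution in a patch from each half.
narrow = local_contrast_width(img[:32, :32])
broad = local_contrast_width(img[:32, 96:])
```

The two patches have very different contrast-distribution widths, which is exactly the situation a locally adapting code has to cope with.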

Here, for example, the contrast distribution might be narrow like this, whereas over here it might be much broader. What our code should do is take the widths of these distributions into account in setting up a local input-output curve that accommodates the currently measured statistics of the input. So that's the question that we tested here, in the H1 neuron. In this experiment, we took a white-noise

input, of the type that you used in the problem sets, so some s of t.

It looks like that. And we multiplied it by some slowly time-varying envelope; call that sigma of t. And that's what you see here: a 90-second-long chunk of stimulus.
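A stimulus of this form can be sketched as follows. The parameters here (time step, envelope shape, trial count, and a 10-second chunk instead of 90) are illustrative, not those of the actual experiment:

```python
import numpy as np

dt = 0.001                      # 1 ms time step (assumed)
t = np.arange(0, 10.0, dt)      # a 10 s chunk for illustration

# Slowly varying envelope sigma(t), identical on every trial.
sigma = 1.0 + 0.5 * np.sin(2 * np.pi * 0.1 * t)

rng = np.random.default_rng(2)
n_trials = 5
trials = []
for _ in range(n_trials):
    s = rng.standard_normal(t.size)   # fresh white noise each trial
    trials.append(sigma * s)          # stimulus = envelope * white noise
trials = np.array(trials)
```

Because sigma(t) repeats while s(t) does not, the local standard deviation of the stimulus is the same at a given time point on every trial, even though the detailed waveform differs.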

We repeated the same sigma of t in every trial, but we changed the specific white-noise stimulus. That allowed us to pick out spikes that occurred at different time points throughout this presentation of sigma of t, where in every trial the cell would have seen a different specific stimulus, and to calculate the input-output function described by those spikes in those different windows of time. Now, when one analyzes spikes

across these different windows and pulls out their input-output function using the methods that we talked about in week two, one finds that, for example, here in this window one gets a very broad input-output curve, whereas when the stimulus is varying very little, one finds a very sharp input-output curve. Now, it turns out that if one normalizes

the stimulus by its standard deviation, or by this envelope sigma of t, all of

these curves collapse onto the same curve.

What that says is that the code has the freedom to stretch its input axis such that it accommodates these variations in the overall scale of the stimulus.

And it's able to do that in real time as this envelope is varying.

This has been seen in several other systems, including the retina and the auditory system. But here's an example from rat barrel

cortex. This is the somatosensory cortex of the rat, in particular the part that encodes the vibrations of the whiskers. In extracellular in vivo recordings of responses to whisker motion, the whiskers were stimulated with a velocity signal, again some s of t, that looked like this. So this is a slightly simpler experiment.

The standard deviation was varied between two different values.

Now one can pull out spikes that are generated in these two epochs of the presentation, the high-variance case and the low-variance case, and one can compute input-output curves for spikes that occurred under these two different conditions.
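A sketch of how those input-output curves can be computed from the recorded spikes, in the spirit of the week-two methods: P(spike | s) is estimated per stimulus bin, which is equivalent to the ratio P(s | spike) P(spike) / P(s). The spike generator below is a stand-in for the recorded cortical responses, not the actual data:

```python
import numpy as np

rng = np.random.default_rng(4)

def io_curve(s, spikes, n_bins=25):
    """P(spike | s): fraction of time points with a spike, per stimulus
    bin. Equivalent to the ratio P(s | spike) * P(spike) / P(s)."""
    bins = np.linspace(s.min(), s.max(), n_bins + 1)
    counts, _ = np.histogram(s, bins)
    spike_counts, _ = np.histogram(s[spikes], bins)
    centers = 0.5 * (bins[:-1] + bins[1:])
    return centers, spike_counts / np.maximum(counts, 1)

curves = {}
for sigma in (0.5, 2.0):   # low- and high-variance epochs of the stimulus
    s = sigma * rng.standard_normal(100_000)
    # Toy spiking rule standing in for the recorded neuron: spike
    # probability is a sigmoid of s / sigma (an adapted neuron).
    spikes = rng.random(s.size) < 1.0 / (1.0 + np.exp(-2.0 * s / sigma))
    curves[sigma] = io_curve(s, spikes)
```

Plotting the two curves against the raw stimulus would show a broad curve in the high-variance epoch and a sharp one in the low-variance epoch, just as in the fly experiment.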