0:46

And you see that this building of a structure at large scale is like the whole system being more than the sum of its parts.

So the emergence of a macroscopic property out of a simple microscopic dynamics is what people call a manifestation of complexity.

So cellular automata exhibit a lot of complex behavior by forming structured patterns at a bigger scale.

So today, I would like to illustrate another aspect of complexity through two rules.

And the first one is the game of life, which was proposed by John Conway around 1970.

So it's a universe where you have organisms that can be dead or alive: 0 means dead and 1 means alive.

And a dead organism can come back to life, provided it has exactly three living neighbors, its "parents". So it's a bit of a surprising rule, but that's the way we get some interesting behavior.

And a living organism, or living cell, will die if it is surrounded by fewer than two, or more than three, living neighbors, meaning that these individuals don't like isolation and they don't like overpopulation.

So we use a square lattice with eight neighbors, that's the Moore neighborhood, and only two states, zero and one.

And you can observe in this picture the time evolution of an initial random configuration.
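These rules can be sketched in a few lines of NumPy. This is a minimal illustration, not the code used in the course; the grid size, the random seed, and the periodic boundary conditions are my own arbitrary choices:

```python
import numpy as np

def life_step(grid):
    """One time step of the game of life on a periodic square lattice."""
    # count the eight Moore neighbours of every cell at once
    neighbours = sum(np.roll(np.roll(grid, i, axis=0), j, axis=1)
                     for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
    # birth: a dead cell with exactly three live neighbours comes back to life;
    # survival: a live cell with two or three live neighbours stays alive
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(np.uint8)

# time evolution of an initial random configuration
rng = np.random.default_rng(0)
grid = rng.integers(0, 2, size=(50, 50), dtype=np.uint8)
for _ in range(100):
    grid = life_step(grid)
```

Running this and displaying the grid at each step reproduces the kind of evolution shown in the picture.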

3:02

Let's run it one more time.

You will see, at the top of the screen, this structure that is actually moving.

So those are called gliders, and they are actually organized systems of five cells, which you see here in detail: this is just a glider in isolation.

So at time t0, you have this configuration of living cells. You can apply the rule as I just described on the previous slide.

And you see that at the next time step you get this configuration, which then turns into this one, then this one, and finally, after the fourth step, you get the exact same configuration as initially, but shifted within the space: it has moved diagonally by one cell.

So it takes four iterations to move by one cell in space.
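One can check this four-step displacement directly in code. The sketch below restates the same update rule so that it is self-contained; the grid size and the glider placement are arbitrary choices of mine:

```python
import numpy as np

def life_step(grid):
    # standard game-of-life update on a periodic lattice
    n = sum(np.roll(np.roll(grid, i, axis=0), j, axis=1)
            for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(np.uint8)

# the five-cell glider
glider = [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]
g0 = np.zeros((20, 20), dtype=np.uint8)
for r, c in glider:
    g0[5 + r, 5 + c] = 1

g = g0.copy()
for _ in range(4):
    g = life_step(g)

# after four iterations the pattern is identical, shifted one cell diagonally
shifted = np.roll(np.roll(g0, 1, axis=0), 1, axis=1)
assert np.array_equal(g, shifted)
```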

So what's interesting here is that in the rule we discussed on the previous slides, there's nothing like movement.

But here you can produce an arrangement of cells which, by its specific structure, has a new potential: the potentiality to move.

So this new object, which is a combination of elementary cells, has developed a new capability, which is movement.

And that's very interesting in terms of understanding complexity, because you see that by putting together simple things that can do simple things, I can create a new object that can do more complicated things, with new functionalities that are beyond what we put into the system.

And of course, we could be tempted to think that that's the secret of life: to develop new functionality out of simple components. But we will of course not go that far.

But we can still explore the game of life, and discover that, in addition to gliders, there are other objects you can build.

And if you search the Internet, you can find an example of what's called a glider gun: it's this structure of cells that you can see here, which keeps producing gliders.

So if you run this, you will see that every few iterations a new glider is produced, and so on, and each of them of course keeps propagating into space.

But you can go one step further.

You can also find a configuration which is actually a system that produces guns for gliders, and so on and so on.

So we can build all kinds of structures of increasing complexity in terms of the number of cells and their spatial arrangement, and their functionality keeps increasing.

5:55

And actually, what you discover with this game of life is what we call a universal computer. It means that you can always find an initial condition of the game of life which reproduces any electronic circuit.

So basically, you could build a computer out of an initial condition of your game of life, and it will simulate logic gates such as the AND gate or the XOR gate.

And out of these you can do any calculation that an ordinary computer can do. Of course, it's much less efficient, but still, there's no limit to the computation you can do with the game of life.

And if you browse the web, you will also find a very nice example where the game of life is used to compute the prime numbers. So it looks a bit surprising, but it's a very interesting illustration of the universality behind all this.

But anyway, what we want to illustrate here is the fact that out of very simple components, we can build more complex objects that do more than the sum of the little elements that we used: the emergence of new functionalities, new properties.

And I'd like to illustrate another aspect, which I think has some importance in scientific methodology, with a model called Langton's ant.

And so this ant is just a fictitious animal: a dot on a lattice that will move across a set of cells.

And it has a very simple rule of motion, which is illustrated in this picture.

So, the ant is represented as an arrow showing the direction in which it moves.

And when it reaches a cell which, as here, is gray, the rule is that the ant turns left, so it will go up; but it also changes the color of the cell it just crossed.

Okay.

Now if it enters a cell which is white, then it turns right and also toggles the color of the cell.

And that's all.

So, with this you can do a simulation and see how your ant will evolve.
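Such a simulation can be sketched in a few lines of Python. This is my own minimal encoding of directions and colors, not the course's implementation:

```python
# directions in clockwise order: up, right, down, left
DIRS = [(0, 1), (1, 0), (0, -1), (-1, 0)]

def run_ant(steps):
    """Run Langton's ant for the given number of steps on an all-white plane."""
    black = set()       # the colored cells; every other cell is white
    x, y, d = 0, 0, 0   # the ant starts at the origin, facing up
    for _ in range(steps):
        if (x, y) in black:
            d = (d - 1) % 4        # colored cell: turn left...
            black.remove((x, y))   # ...and make the cell white again
        else:
            d = (d + 1) % 4        # white cell: turn right...
            black.add((x, y))      # ...and color the cell
        x, y = x + DIRS[d][0], y + DIRS[d][1]
    return (x, y), d, black
```

Running it long enough reproduces what the animation shows: a chaotic phase, followed after roughly 10,000 steps by the highway (a motion that is reported to repeat with a period of 104 steps).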

So, suppose I take an initial space where all the cells are white except one, which is this one.

So, it could be any one, but just for the example, this one.

And we start with the ant coming here.

So, it's entering a white cell, so it should turn right and color the cell.

Now, it's entering a colored cell, so it should turn in the other direction, which is this, and change the color of the cell.

And it goes on like this.

So, we can play a little bit.

Okay?

9:06

So, you see, the yellow dot is the ant, and the blue cells are the colored ones.

And you see that it's a rather chaotic motion.

And you could think, well, okay, not interesting, until suddenly, at some point, you get a very special movement, which is called a highway.

So, there's a highly organized movement of the ant which makes it move in a straight line. Of course, in a somewhat complicated way, like a sewing machine, I would say, going in a zig-zag.

But still, it's a way to go to infinity.

And, again, this is a deterministic rule. So, if I repeat it, I will get exactly the same trajectory.

9:48

So, that's what you observe in our animation: just chaotic movement. But suddenly, at this time, which is around iteration 10,000, the highway starts.

So, the ant starts building this structure which takes it far away.

Okay?

So, that's maybe not expected from the rule that I gave you. So, a question we can ask is: could I prevent the ant from going to infinity? Could I prevent this ant from building this highway?

And, of course, the idea would be to put some of these colored cells along its way, so as to force the ant to stay in a limited region.

So, mathematicians thought about this problem, and they actually demonstrated that it is impossible to prevent the ant from going to infinity: its trajectory is always unbounded.

And the demonstration is just illustrated in this picture.

So, since you have square lattice,

12:30

So, now you can also say, well, I can put more than one ant and

see how they interact.

And the only thing you have to do is deal with the case where you have two, three, or four ants on the same cell, coming from different directions.

So then, it's just a matter of counting how many ants you have to decide whether to change the color.

If you have an odd number of ants on the same cell, you change the color; if it's an even number, you don't, because the toggles cancel out.
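A possible implementation of this multi-ant rule is sketched below, under my own conventions (the transcript leaves the exact update order open; here all ants turn first, then cells occupied by an odd number of ants flip color, then all ants move):

```python
from collections import Counter

DIRS = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # up, right, down, left (clockwise)

def multi_ant_step(black, ants):
    """One synchronous step for several Langton ants sharing the lattice."""
    # 1) every ant turns according to the color of the cell it occupies:
    #    left on a colored cell, right on a white one
    ants = [((x, y), (d - 1) % 4 if (x, y) in black else (d + 1) % 4)
            for (x, y), d in ants]
    # 2) each ant toggles its cell once, so the color flips only where an
    #    odd number of ants sit (an even number of toggles cancels out)
    for pos, n in Counter(p for p, _ in ants).items():
        if n % 2:
            black ^= {pos}
    # 3) every ant moves one cell forward
    return black, [((x + DIRS[d][0], y + DIRS[d][1]), d) for (x, y), d in ants]
```

With a single ant, this reduces exactly to the rule of the previous slides.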

So, let me also illustrate this on an animation.

You see that things go much faster.

You also start building highways.

But what's very interesting, I don't know if you noticed, is that this highway was actually built by one ant, and then another ant could move extremely fast along this path, catch up with the first ant, and destroy the movement.

So, let me try to replay it so you can pay attention to what happens: the highway is built, and suddenly there's an ant that runs into it and destroys the movement.

So, what you observe in this simple simulation is that with many ants you create highways much earlier. So, it's a kind of cooperative dynamics.

But it's also a destructive dynamics, because some of these highways can be destroyed.

14:01

So, another thing I would like to illustrate: we saw that a single ant has to go to infinity.

Okay?

What happens when you have, let's say, two ants?

So, this is an example where you see, in yellow, the trajectories of the two ants. It's a bit intricate, but you can make out the two trajectories in the image.

So, it starts to get more and more complicated, but suddenly it goes back to the initial state.

So, let me play it again. You see that the trajectory seems to be chaotic, and at some point the ants just undo what they were doing, and we go back to the initial state.

That means that if I kept running the simulation, I would get an oscillating system.

Okay?

So then, if I have an oscillating system, I can definitely prove that with two ants, the ants don't have to go to infinity: they can stay in a finite region.

So, what is the interesting conclusion we can get from this model? I think it has some impact on the way you think about science.

So, first, it's a system where you know very well the laws that govern it, because it has been artificially created by us. We basically built this rule according to our convenience.

And so, it means that we know the fundamental laws of nature, okay, for this system.

But still, it doesn't allow us to predict the detailed aspects of the dynamics. For instance, we are totally unable to predict, theoretically, at what time the first highway will appear.

We can observe the system, of course, but we have no mathematics that tells us when it will happen.

So it means that even though you know perfectly well the microscopic rules of a system, you may be totally unable to predict its macroscopic behavior.

The only way to know what will happen is to observe the system.

So, for the ant, we just have to sit in front of a computer and wait until the first highway appears, and then we say, oh, that's the time at which it happened, okay?

So, of course, it would be very bad news if it were exactly the same for all systems; think of the weather forecast. We are very happy to be able to compute the forecast for tomorrow in less time than waiting until tomorrow.

But if the situation were the same as for the ant, it would mean that if I wanted to know tomorrow's weather, I would just have to wait until tomorrow and observe.

But here you see that some problems cannot be computed faster than they can be observed; they are just too complicated.

And that's an example of a dynamical system where, so far, either we have not been smart enough to be able to predict it, or there's simply no way to predict it; it's still, I would say, an open question.

Yet there are a few things that we could guess or know about our ant: the fact that it has to go to infinity, or that some oscillatory motion is possible.

But if you think of how we could get this information, it's only from the symmetries of the rule.

So basically, we can find which very fundamental laws are associated with these microscopic rules. And from those we can derive some very general results, about time reversibility or the fact that the single ant has to go to infinity.

But you don't know the details; you don't know when things will happen.

So again, we see that it's often the symmetries, the conservation laws, or some particularity of the rule that let you make predictions. But maybe you cannot predict everything you want.

And I'd just like to come back to these examples that I showed you in our first module on cellular automata; you've seen these patterns.

And the question is: are those patterns complex? Are they so complex that we cannot predict what they will be until we actually observe them?

And in that case the answer is no. Actually, I can find an algorithm which computes the result faster than the automaton itself.

Actually, if you run your cellular automaton, which is an n by

19:04

It's because you can observe that, if you look at your rule at some specific times, namely when the number of iterations is a power of 2, you see that you exactly reproduce your initial motif, up to some translations.

So, this parity rule does only one thing at power-of-2 iteration times: it takes the initial configuration, moves it left, right, down, and up by a given amount, and then superposes these shifted images.

So that's why, when the number of iterations is exactly a power of 2, the image is very clear.

Now if it's not a power of 2, then you just have to decompose your number of iterations into a sum of powers of 2, which is a logarithmic operation. And then you can just superpose everything, and you know exactly the result.
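Assuming the parity rule is the four-neighbor XOR rule (each cell becomes the sum modulo 2 of its north, south, east, and west neighbors), this shortcut is easy to verify numerically. In the sketch below, the grid size, the seed, and the periodic boundaries are my own choices; the key point is that the rule is linear over GF(2), so applying it 2^i times amounts to a four-term XOR of translations by 2^i:

```python
import numpy as np

def parity_step(grid):
    # each cell becomes the sum modulo 2 (XOR) of its four nearest neighbours
    return (np.roll(grid, 1, 0) ^ np.roll(grid, -1, 0)
            ^ np.roll(grid, 1, 1) ^ np.roll(grid, -1, 1))

def fast_evolve(grid, t):
    """Evolve the parity rule t steps in O(log t) grid operations.

    Decompose t into a sum of powers of 2; for each set bit i, applying
    the rule 2^i times amounts to XOR-ing four copies of the grid
    shifted by 2^i cells in the four directions.
    """
    shift = 1
    while t:
        if t & 1:
            grid = (np.roll(grid, shift, 0) ^ np.roll(grid, -shift, 0)
                    ^ np.roll(grid, shift, 1) ^ np.roll(grid, -shift, 1))
        t >>= 1
        shift <<= 1
    return grid

# check the shortcut against the step-by-step simulation
rng = np.random.default_rng(1)
g0 = rng.integers(0, 2, size=(64, 64), dtype=np.uint8)
g = g0.copy()
for _ in range(13):
    g = parity_step(g)
assert np.array_equal(g, fast_evolve(g0, 13))
```

`fast_evolve` performs one four-shift XOR per binary digit of t, which is the logarithmic operation just mentioned.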

So, that's an example where you don't have to wait for the simulation to finish to know the result: you have mathematics that allows you to go faster.

And of course, that's nice if we want to make predictions.