0:00

We have a model of a robot, and we know how the robot can get position information. In this case we used the wheel encoders, but there are other ways; we talked about compasses and accelerometers. But the robot also needs to know what the world around it looks like, and for that you need sensors. We are not going to spend much time modeling different kinds of sensors and working out the difference between an infrared and an ultrasonic range sensor. Instead, we're going to come up with an abstraction that captures what a lot of different sensing modalities can do, and it's going to be based on what's called the range sensor skirt. This is the standard sensor suite on a lot of robots. It's basically a collection of sensors, gathered around the robot, that measures distances in different directions. So infrared skirts, ultrasound, and LIDAR, which are laser scanners, are all examples of these range sensors. They're going to show up a lot.

Now, there are other standard external sensors, of course: vision, or tactile sensors, where we have bumpers or other ways of physically interacting with the world, or "GPS". I'm putting that in quotation marks because there are other ways of faking GPS; for instance, in my lab I'm using a motion capture system to pretend that I have GPS. But what we're going to do, mainly, is assume that we have this kind of setup, where a skirt around the robot can measure distances to things in the environment. In fact, here is the Khepera, or rather a simulation of the Khepera, and the Khepera in this case has a number of infrared sensors. You see the cones, blue and red cones, and then you have red rectangles. The red rectangles are obstacles, and what we're going to be able to do is measure the direction and distance to these obstacles. So this is the type of information we're going to get out of these range-sensor skirts. Over here on the right you see two pictures of the sensing modalities that we had on the self-driving car that was developed at Georgia Tech: laser scanners, radar, and vision. The point is that the skirt doesn't always have to be uniform, or even homogeneous across the sensors. Here we have a skirt that is heterogeneous across different sensing modalities. But roughly you have the same kind of abstraction for a car like this as for the Khepera, a little mobile differential-drive robot.

Okay, so that's fine, but we don't actually want to worry about particular sensors. We need to come up with an abstraction of this sensor skirt that makes sense, that we can reason about when we design our controllers. So what we're going to do is perform what's called a disk abstraction. Here's the robot, sitting in the middle, and around it are sensors. In fact, if you look at this picture, here are little infrared sensors, and here are ultrasonic sensors. Scattered around this robot is a skirt of range sensors. These typically have an effective range, and we're going to abstract that and say there is a disk around the robot, of a certain radius, where the robot can see what's going on. So this is the pinkish disk around the robot, and it can detect obstacles that are around it. The two red symbols there are the obstacles. What we can do is figure out how far away the two obstacles are.
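As a rough sketch of what this disk abstraction gives us (a hypothetical simulation helper, not code from the course): given the robot's pose and the true obstacle positions, the abstracted sensor reports, for each obstacle inside the disk, a distance and a bearing measured relative to the robot's own heading.

```python
import math

def disk_sensor(pose, obstacles, radius):
    """Disk abstraction of a range-sensor skirt.

    pose      -- (x, y, phi): robot position and heading (radians)
    obstacles -- list of (ox, oy) obstacle positions in the world frame
    radius    -- effective sensing range of the skirt

    Returns a list of (d, phi_rel): the distance and bearing to each
    obstacle inside the disk, with the bearing measured relative to
    the robot's own x axis, i.e. its heading.
    """
    x, y, phi = pose
    readings = []
    for ox, oy in obstacles:
        d = math.hypot(ox - x, oy - y)
        if d <= radius:
            # Bearing in the world frame, rotated into the robot frame.
            phi_rel = math.atan2(oy - y, ox - x) - phi
            # Normalize to (-pi, pi].
            phi_rel = math.atan2(math.sin(phi_rel), math.cos(phi_rel))
            readings.append((d, phi_rel))
    return readings
```

For example, a robot at the origin facing along the x axis with a 3-unit disk sees an obstacle at (2, 0) as distance 2 at bearing 0, while an obstacle at (5, 5) is outside the disk and is simply not reported.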

So, d1 is the distance to obstacle one, which is this guy, and this is obstacle two. And phi 1 is the angle to that obstacle; similarly, d2 is the distance to obstacle two and phi 2 is the angle to obstacle two. One thing to keep in mind, though, is that the robot has its own coordinate system, in the sense that if this is the x axis of the robot right now, then phi 1 is measured relative to the robot's x axis, so relative to the robot's heading. We need to take that into account if we want to know globally where the obstacles are.

So let's do that. If we know our own pose, so we know x, y, and phi, then consider the measured headings to the obstacles. This is phi 1, which we're measuring relative to our orientation. Let's say that our orientation is this, right, so here is phi, and here is phi 2, say. Then of course the actual direction to obstacle two is going to be phi 2 plus phi. So what we could do is take this into account and compute the global positions of these obstacles, if we know where the robot is. For instance, the global position of obstacle one, (x1, y1), is the position of the robot plus the distance to that obstacle times the cosine and sine of this phi 1 plus phi term. So we actually know globally where the obstacles are if we know where the robot actually is. This is an assumption we're going to make: we're going to assume that we know x, y, and phi, and as a corollary to that, we're going to assume that we know the positions of obstacles around us in the environment. So that's the abstraction that we're going to be designing our controllers around.

And I just want to show you an example of this. This is known as the rendezvous problem in multi-agent robotics, where you have lots of robots that are supposed to meet at a common location, but they're not allowed to talk; they're not allowed to agree on where this would be by chatting. Instead, they have to move in such a way that they end up meeting at the same location. One way of doing this is to assume you have a range sensor disk around you, and then, when you see other robots in that disk, instead of thinking of them as obstacles, we think of them as buddies. So what we're going to do is have each robot aim toward the center of gravity of all its neighbors, so everyone that is in that disk. And because of the disk abstraction we just talked about, we can actually compute where the center of gravity of our neighbors is. So here's an example of what this looks like. All the robots shrink down to meet at the same point, without any communication, simply by taking the disk around them, looking at where their neighbors are in that disk, which we now know how to compute, and then computing the center of gravity of those neighbors and aiming towards said center of gravity.

Okay, now we have a robot model, we have a model for figuring out where the robot is, and we have a model for how we know where obstacles and things in the environment are. Now we can use these things to actually start designing controllers, so that's what we're going to do next. I do want to point out, though, that this model, the wheel encoder, and the disk abstraction are but one example of what you can do and how you should make these kinds of abstractions. For different kinds of robots, different types of models and abstractions may be appropriate.
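The two computations in this lecture, recovering global obstacle positions from skirt readings and the rendezvous behavior, can be sketched in a few lines of Python. This is a minimal illustration under assumed names (to_global and rendezvous_step are hypothetical helpers, not the course's actual implementation).

```python
import math

def to_global(pose, reading):
    """Convert one skirt reading (d, phi_rel) into a global position.

    pose is (x, y, phi), the robot's pose, which we assume is known.
    The measured bearing phi_rel is relative to the robot's heading,
    so the global direction to the obstacle is phi_rel + phi, giving
    x1 = x + d*cos(phi_rel + phi), y1 = y + d*sin(phi_rel + phi).
    """
    x, y, phi = pose
    d, phi_rel = reading
    return (x + d * math.cos(phi_rel + phi),
            y + d * math.sin(phi_rel + phi))

def rendezvous_step(poses, radius, step=0.05):
    """One synchronous step of the rendezvous behavior.

    Each robot finds the neighbors inside its sensing disk, computes
    their center of gravity, and moves a small step toward it.
    No communication is used, only the disk abstraction.
    """
    new_poses = []
    for i, (x, y, phi) in enumerate(poses):
        neighbors = [(px, py) for j, (px, py, _) in enumerate(poses)
                     if j != i and math.hypot(px - x, py - y) <= radius]
        if not neighbors:
            new_poses.append((x, y, phi))
            continue
        cx = sum(px for px, _ in neighbors) / len(neighbors)
        cy = sum(py for _, py in neighbors) / len(neighbors)
        # Aim toward the center of gravity; cap the step so the
        # robot never overshoots it.
        dist = math.hypot(cx - x, cy - y)
        heading = math.atan2(cy - y, cx - x)
        s = min(step, dist)
        new_poses.append((x + s * math.cos(heading),
                          y + s * math.sin(heading),
                          heading))
    return new_poses
```

Iterating rendezvous_step on a group of robots that can all see each other drives them toward a common meeting point, which is exactly the shrinking behavior described above.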
