Commit 1fef0757 authored by Jeffrey Cheng

Updated Navigate the Maze section

parent 461da9df
{
"Wheel 1": {
"dictName": "Wheel 1",
"name": "Wheel",
"customName": "right wheel",
"type": "",
"x": 260,
"y": 15,
"z": 0,
"rx": 0,
"ry": 0,
"rz": -1,
"a": 1.57
},
"Wheel 2": {
"dictName": "Wheel 2",
"name": "Wheel",
"customName": "left wheel",
"type": "",
"x": -260,
"y": 15,
"z": 0,
"rx": 0,
"ry": 0,
"rz": -1,
"a": 1.57
},
"camera1": {
"dictName": "camera1",
"name": "Camera",
"customName": "camera",
"type": "add",
"x": 0,
"y": 90,
"z": -370,
"rx": 0,
"ry": 1,
"rz": 0,
"a": 1.57
},
"colsensor": {
"dictName": "colsensor",
"name": "Colour sensor",
"customName": "color",
"type": "add",
"x": 0,
"y": 0,
"z": -370,
"rx": 0,
"ry": 0,
"rz": 1,
"a": 4.71
},
"gps": {
"dictName": "gps",
"name": "GPS",
"customName": "gps",
"type": "add",
"x": 0,
"y": 335,
"z": 0,
"rx": 0,
"ry": 1,
"rz": 0,
"a": 1.57
},
"Distance Sensor 1": {
"dictName": "Distance Sensor 1",
"name": "Distance Sensor",
"customName": "leftDist",
"type": "",
"x": -370,
"y": 335,
"z": 0,
"rx": 0,
"ry": 1,
"rz": 0,
"a": 3.14
},
"Distance Sensor 2": {
"dictName": "Distance Sensor 2",
"name": "Distance Sensor",
"customName": "frontDist",
"type": "",
"x": 0,
"y": 335,
"z": -370,
"rx": 0,
"ry": 1,
"rz": 0,
"a": 1.57
},
"Distance Sensor 3": {
"dictName": "Distance Sensor 3",
"name": "Distance Sensor",
"customName": "rightDist",
"type": "",
"x": 370,
"y": 335,
"z": 0,
"rx": 0,
"ry": 1,
"rz": 0,
"a": 0
}
}
@@ -4,32 +4,22 @@ linkTitle: "Navigate the maze"
weight: 8
description: >
In this seminar we cover how to generate a first robot controller to navigate around a maze-like environment.
toc_hide: true
---
{{% pageinfo %}}
This page is under construction.
{{% /pageinfo %}}
## Introducing Your Robot
In this challenge, the robot is a customizable Epuck. You can design your robot [here](https://robot.erebus.rcj.cloud/), adding the optimal sensors at the optimal positions. From there, you can export your robot as a JSON file, which you can then load into the robot window in Webots.
{{< figure src="real_robot.thumbnail.jpg" class="center">}}
## Field
For the sample program, we will be working in a field similar to the one shown below. If you are unfamiliar with how to find the various fields, visit the "Getting Started" page in the "Tutorials" section.
{{< figure src="field1.png" width="400" class="center">}}
## Tasks/Activities
After working on this seminar, work through the following tasks (of increasing difficulty):
@@ -41,57 +31,55 @@ After working on this seminar, work through the following tasks (of increasing difficulty):
Remember, for help/advice, or if you want to share ideas, head to [discord](https://discord.com/invite/6FJxZxk).
## Exemplar Code
Below we step through the example code developed step by step, explaining what is going on. Try and build this up by yourself first, referring back to this if you need. The full uninterrupted program can be found here: <a href="sample.cpp" download>C++</a> | <a href="sample.py" download>Python</a>
---
If you are new to Python, please don't worry, we will try and explain as we go along. One thing to watch out for is that the spacing/tabbing really matters, so be careful with that. There are also many useful Python 'cheat' sheets online, such as [this one](https://perso.limsi.fr/pointal/_media/python:cours:mementopython3-english.pdf), to help you.
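For example, the body of an `if` statement is defined entirely by its indentation (the names here are illustrative, not from the sample code):

```python
if distance < 80:        # note the colon starting a block
    turn_left()          # indented: runs only when the condition is true
set_speed(max_velocity)  # back at the left margin: runs every time
```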
## Step-by-Step Introduction of the Code
First we initialise the robot and sensors. We import/include the necessary classes and set the time step and the maximum velocity of the robot. We also create a wheel_left and a wheel_right object. (More details on the API can be found [here](https://cyberbotics.com/doc/reference/nodes-and-api-functions).) Lastly, we initialize an array to store the left and right motor speeds - we start by initializing these to the max speed.
{{< code-toggle file="moving1">}}
{{</code-toggle>}}
In the next step of the program, we initialize our distance sensors, camera, color sensor, emitter, and GPS. Remember that you can customize your own robot [here](https://robot.erebus.rcj.cloud/). Here is the layout for our sample robot:
1. Three distance sensors facing left, forward, and right
2. One forward facing camera for victim and hazard detection
3. One downward facing color sensor to detect black pits
4. One GPS for the robot's position when reporting victims and hazards
5. An emitter for reporting victims and hazards (not in diagram, on robot by default)
Here is the JSON file for the sample robot: <a href="SampleRobot.json" download>Sample Robot JSON</a>
{{< figure src="robot.png" class="center">}}
{{< code-toggle file="moving2">}}
{{</code-toggle>}}
Next, we set the wheel 'position' to infinity. This allows the wheels to turn continuously, rather than being limited to turning a set amount.
{{< code-toggle file="moving3">}}
{{</code-toggle>}}
We set up some functions which we can then call to set the movement of the robot. In each of these we set the speed of the left and right wheel using the speeds list. We also define a delay function for prolonged movements, and a getColor function to retrieve the brightness level seen by the color sensor (a grayscale value from 0 (black) to 255 (white)).
{{< code-toggle file="moving4">}}
{{</code-toggle>}}
We also create several functions to help detect victims and hazards. The first function accepts a camera image and returns true if it detects a victim or hazard. It searches for contours (contiguous shapes within the image) whose area and width to height ratio fit within a certain range. Note that this function is by no means the optimal solution: it is prone to misdetections, can miss victims, and cannot differentiate between different letters and hazards - it is meant as an introduction to victim detection for you to improve upon.
After retrieving the image from the camera sensor, this function uses OpenCV to scan for victims. OpenCV for C++ can be difficult to install, and OpenCV in and of itself can be difficult to learn. Thus, if you feel a different method of victim detection will suffice, by all means comment out this function and implement your own solution (a rough non-OpenCV sketch is shown after the code below). Do note, however, that the best solutions essentially require OpenCV or some other image processing library, and that the internet has many useful resources for learning OpenCV.
{{< code-toggle file="moving5">}}
{{</code-toggle>}}
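If you want to avoid OpenCV entirely, a very rough alternative is to count dark pixels directly in the camera image. The sketch below is only an illustration of that idea - the thresholds (`80`, `50`, `400`) are made-up values you would need to tune yourself:

```python
def checkVicSimple():
    img = cam.getImage()
    w, h = cam.getWidth(), cam.getHeight()
    dark = 0
    for x in range(w):
        for y in range(h):
            # imageGetGray returns the grayscale value (0-255) of one pixel
            if cam.imageGetGray(img, w, x, y) < 80:
                dark += 1
    # Treat a mid-sized cluster of dark pixels as a possible victim/hazard
    return 50 < dark < 400
```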
{{< figure src="distance_sensor_lookup_table.png" class="center">}}
As you can see 80 corresponds to around xxx, which is a reasonable choice give the size of the worlds we are considering.
To receive points for a victim or hazard detection, you must report its location and type to the supervisor. The following function accepts a character for the victim/hazard type and reports it to the supervisor.
{{< code-toggle file="moving6">}}
{{</code-toggle>}}
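To make the 9-byte message layout concrete, here is a small standalone illustration (the coordinates are made up; in the real controller the GPS provides them):

```python
import struct

posX, posZ = int(0.52 * 100), int(-1.03 * 100)    # metres converted to centimetres
message = struct.pack("i i c", posX, posZ, b"T")  # 4 + 4 + 1 = 9 bytes
print(len(message))                               # 9
print(struct.unpack("i i c", message))            # (52, -103, b'T')
```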
Here we have the main body of the program. First we set up a while loop that runs whilst the game is running (i.e. for the 8 minutes). It first checks for walls on the left, right, and front sides, and turns away from any detected walls. It then uses the color sensor to check for a black pit, spinning away if one is detected. Lastly, it checks for a victim in view of the front camera and reports anything it sees. Since the sample victim detection function is unable to differentiate between letters, it always reports 'T' for a thermal victim as a guess.
{{< code-toggle file="moving5">}}
{{< code-toggle file="moving7">}}
{{</code-toggle>}}
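For reference, here is a minimal sketch of how such a main loop could be assembled from the pieces above. The thresholds (`0.05` for the distance sensors and `60` for the color sensor) are illustrative guesses, not values from the sample program - check your sensors' lookup tables and tune them for your own robot:

```python
WALL_THRESHOLD = 0.05   # assumed distance reading for "wall close by" - tune this
DARKNESS = 60           # grayscale value below which the floor counts as a pit

while robot.step(timeStep) != -1:
    speeds[0] = max_velocity
    speeds[1] = max_velocity

    # Turn away from walls seen by the side sensors
    if leftDist.getValue() < WALL_THRESHOLD:
        turn_right()
    elif rightDist.getValue() < WALL_THRESHOLD:
        turn_left()
    # A wall straight ahead is the highest priority: spin away from it
    if frontDist.getValue() < WALL_THRESHOLD:
        spin()

    # Spin away from black pits detected by the color sensor
    if getColor() < DARKNESS:
        spin()
        delay(600)  # keep turning briefly so we clear the pit edge

    # Report anything the camera flags; the sample detector cannot tell
    # letters apart, so it always guesses 'T' (thermal victim)
    if checkVic(cam.getImage()):
        report('T')

    wheel_left.setVelocity(speeds[0])   # Send the chosen speeds to the wheels
    wheel_right.setVelocity(speeds[1])
```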
That about concludes the sample program. Again, you can find the uninterrupted full programs here: <a href="sample.cpp" download>C++</a> | <a href="sample.py" download>Python</a>. Note that this sample program is by no means an optimal solution; it is just meant to introduce you to working in the Webots environment in Rescue Maze. Good luck programming!
#include <webots/Robot.hpp>
#include <webots/Motor.hpp>
#include <webots/DistanceSensor.hpp>
#include <webots/Camera.hpp>
#include <webots/Emitter.hpp>
#include <webots/GPS.hpp>
#include <opencv2/opencv.hpp>
#include <vector>
#include <cstring> // for memcpy, used in the report function
using namespace webots;
...
using namespace cv;
using namespace std;
//Inside main:
Robot *robot = new Robot();
int timeStep = (int)robot->getBasicTimeStep(); // Set the time step for the simulation
float max_velocity = 6.28; // Set a maximum velocity constant
Motor *wheel_left = robot->getMotor("left wheel motor"); // Create an object to control the left wheel
Motor *wheel_right = robot->getMotor("right wheel motor"); // Create an object to control the right wheel
DistanceSensor *leftDist, *frontDist, *rightDist; // Objects for left, front, and right distance sensor
Camera *cam; // Create an object to control the camera
Camera *colorSensor; // Color sensor is a 1 pixel camera
Emitter *emitter; // Used to send messages to supervisor (report victims/hazards)
GPS *gps; // Used to get the robot's position when reporting victims/hazards
// [left wheel speed, right wheel speed]
float speeds[2] = { max_velocity, max_velocity };
from controller import Robot, Motor, DistanceSensor, Camera, Emitter, GPS
import struct
import numpy as np
import cv2 as cv
timeStep = 32 # Set the time step for the simulation
max_velocity = 6.28 # Set a maximum velocity constant
robot = Robot()
wheel_left = robot.getDevice("left wheel motor") # Create an object to control the left wheel
wheel_right = robot.getDevice("right wheel motor") # Create an object to control the right wheel
#[left wheel speed, right wheel speed]
speeds = [max_velocity,max_velocity]
//Inside of main
leftDist = robot->getDistanceSensor("leftDist"); // Grab robot's left distance sensor
leftDist->enable(timeStep); // Enable the distance sensor
frontDist = robot->getDistanceSensor("frontDist");
frontDist->enable(timeStep);
rightDist = robot->getDistanceSensor("rightDist");
rightDist->enable(timeStep);
cam = robot->getCamera("camera");
cam->enable(timeStep);
colorSensor = robot->getCamera("color");
colorSensor->enable(timeStep);
emitter = robot->getEmitter("emitter"); // Emitter doesn't need enable
gps = robot->getGPS("gps");
gps->enable(timeStep);
frontSensors.append(robot.getDistanceSensor("ps7"))
frontSensors[0].enable(timeStep)
frontSensors.append(robot.getDistanceSensor("ps0"))
frontSensors[1].enable(timeStep)
rightSensors.append(robot.getDistanceSensor("ps1"))
rightSensors[0].enable(timeStep)
rightSensors.append(robot.getDistanceSensor("ps2"))
rightSensors[1].enable(timeStep)
leftSensors.append(robot.getDistanceSensor("ps5"))
leftSensors[0].enable(timeStep)
leftSensors.append(robot.getDistanceSensor("ps6"))
leftSensors[1].enable(timeStep)
#Create objects for all robot sensors
leftDist = robot.getDevice("leftDist") # Get robot's left distance sensor
leftDist.enable(timeStep) # Enable left distance sensor
frontDist = robot.getDevice("frontDist")
frontDist.enable(timeStep)
rightDist = robot.getDevice("rightDist")
rightDist.enable(timeStep)
cam = robot.getDevice("camera")
cam.enable(timeStep)
colorSensor = robot.getDevice("color")
colorSensor.enable(timeStep)
emitter = robot.getDevice("emitter") # Emitter doesn't need enable
gps = robot.getDevice("gps")
gps.enable(timeStep)
//Inside main
wheel_left->setPosition(INFINITY);
wheel_right->setPosition(INFINITY);
wheel_left.setPosition(float("inf"))
wheel_right.setPosition(float("inf"))
void turn_right() {
    // set left wheel speed
    speeds[0] = 0.6 * max_velocity;
    // set right wheel speed
    speeds[1] = -0.2 * max_velocity;
}

void turn_left() {
    // set left wheel speed
    speeds[0] = -0.2 * max_velocity;
    // set right wheel speed
    speeds[1] = 0.6 * max_velocity;
}

void spin() {
    // set left wheel speed
    speeds[0] = 0.6 * max_velocity;
    // set right wheel speed
    speeds[1] = -0.6 * max_velocity;
}

void delay(int ms) {
    float initTime = robot->getTime(); // Store starting time (in seconds)
    while (robot->step(timeStep) != -1) {
        if ((robot->getTime() - initTime) * 1000.0 > ms) { // If the time elapsed (converted to ms) exceeds the value passed in
            return;
        }
    }
}

int getColor() {
    const unsigned char* image = colorSensor->getImage(); // Grab the color sensor camera's image
    return colorSensor->imageGetGray(image, colorSensor->getWidth(), 0, 0); // Return grayness of the single pixel (0-255)
}
@@ -15,3 +15,13 @@ def spin():
    speeds[0] = 0.6 * max_velocity
    # set right wheel speed
    speeds[1] = -0.6 * max_velocity

def delay(ms):
    initTime = robot.getTime()  # Store starting time (in seconds)
    while robot.step(timeStep) != -1:
        if (robot.getTime() - initTime) * 1000.0 > ms:  # If the time elapsed (converted to ms) exceeds the value passed in
            break

def getColor():
    img = colorSensor.getImage()  # Grab the color sensor camera's image
    return colorSensor.imageGetGray(img, colorSensor.getWidth(), 0, 0)  # Return grayness of the single pixel (0-255)
bool checkVic(void *img) {
    Rect boundRect;
    float contArea, ratio;
    vector<vector<Point>> contours;
    Mat frame(cam->getHeight(), cam->getWidth(), CV_8UC4, img); // Create a frame using the camera image
    cvtColor(frame, frame, COLOR_BGR2GRAY); // Grayscale image
    threshold(frame, frame, 80, 255, THRESH_BINARY_INV); // Inverse threshold image (0-80 -> white; 80-255 -> black)
    findContours(frame, contours, RETR_TREE, CHAIN_APPROX_SIMPLE); // Find all shapes within thresholded image
    for (size_t i = 0; i < contours.size(); i++) {
        boundRect = boundingRect(contours[i]); // Bounding rectangle around the shape, used for the width to height ratio
        contArea = fabs(contourArea(Mat(contours[i]))); // Area covered by the shape
        ratio = (float)boundRect.width / boundRect.height; // Calculate width to height ratio
        // if the contour area and width to height ratio are within certain ranges
        if (contArea > 300 && contArea < 1000 && ratio > 0.65 && ratio < 0.95)
            return true;
    }
    return false;
}
def checkVic(img):
    img = np.frombuffer(img, np.uint8).reshape((cam.getHeight(), cam.getWidth(), 4))  # Convert img to a 4-channel numpy array (for OpenCV)
    img = cv.cvtColor(img, cv.COLOR_BGR2GRAY)  # Grayscale image
    ret, thresh = cv.threshold(img, 80, 255, cv.THRESH_BINARY_INV)  # Inverse threshold image (0-80 -> white; 80-255 -> black)
    contours, hierarchy = cv.findContours(thresh, cv.RETR_TREE, cv.CHAIN_APPROX_SIMPLE)  # Find all shapes within thresholded image
    for cnt in contours:
        x, y, w, h = cv.boundingRect(cnt)  # Find the width and height of the contour
        contArea = cv.contourArea(cnt)  # Area covered by the shape
        ratio = w / h  # Calculate width to height ratio of the contour
        # if the contour area and width to height ratio are within certain ranges
        if contArea > 300 and contArea < 1000 and ratio > 0.65 and ratio < 0.95:
            return True
    return False
void report(char victimType) {
    // Character array to be sent to the supervisor to report victim/hazard
    // First four bytes store robot's x coordinate
    // Second four bytes store robot's z coordinate
    // Last byte stores type of victim
    // Victims: H, S, U, T
    // Hazards: F, P, C, O
    wheel_left->setVelocity(0); // Stop for 1.3 seconds before reporting
    wheel_right->setVelocity(0);
    delay(1300);
    char message[9];
    int posX = gps->getValues()[0] * 100, posZ = gps->getValues()[2] * 100; // Convert from m to cm
    memcpy(&message[0], &posX, 4);
    memcpy(&message[4], &posZ, 4);
    message[8] = victimType;
    emitter->send(message, 9);
    robot->step(timeStep);
}
def report(victimType):
    # Struct package to be sent to supervisor to report victim/hazard
    # First four bytes store robot's x coordinate
    # Second four bytes store robot's z coordinate
    # Last byte stores type of victim
    # Victims: H, S, U, T
    # Hazards: F, P, C, O
    wheel_left.setVelocity(0)  # Stop for 1.3 seconds before reporting
    wheel_right.setVelocity(0)
    delay(1300)
    victimType = bytes(victimType, "utf-8")  # Convert victimType to a byte string for struct.pack
    posX = int(gps.getValues()[0] * 100)  # Convert from m to cm
    posZ = int(gps.getValues()[2] * 100)
    message = struct.pack("i i c", posX, posZ, victimType)
    emitter.send(message)