1. Gathering data and training
Gathering face data
To gather face data, use get_face_data.py. The script requires 4 parameters to run:
- Video path (.mp4)
- Image ID (int) (used for naming - make sure it isn't lower than the ID of any other image in the target directory, to avoid accidental overwrites)
- Target user directory name
- Rotation (optional - provide None, counter-clockwise or clockwise)
An example command would be:
> python get_face_data.py /project/src/face_videos/user-name/user.mp4 1 user-name-dir None
NOTE: if the script appears to hang after you enter the command, the video may need rotating; try different rotations until images start to appear.
Good videos are typically 10-15 seconds and cover most of the frontal face region of the user.
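The actual argument handling inside get_face_data.py is not shown here; as a minimal sketch, the four parameters described above could be parsed like this (argument names are illustrative assumptions, not the script's real ones):

```python
import argparse

def parse_args(argv):
    """Parse the four get_face_data.py parameters (illustrative names)."""
    parser = argparse.ArgumentParser(description="Extract face images from a video")
    parser.add_argument("video", help="path to the input .mp4 video")
    parser.add_argument("image_id", type=int,
                        help="starting image number; must not be lower than any "
                             "existing image ID in the target directory")
    parser.add_argument("target_dir", help="target user directory name")
    parser.add_argument("rotation", nargs="?", default="None",
                        choices=["None", "counter-clockwise", "clockwise"],
                        help="optional rotation to apply to each frame")
    return parser.parse_args(argv)

args = parse_args(["/project/src/face_videos/user-name/user.mp4", "1",
                   "user-name-dir", "None"])
print(args.image_id, args.target_dir)
```

Making the rotation a defaulted positional (`nargs="?"`) matches the example command, where `None` is passed explicitly but may be omitted.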
To train the LBPH algorithm with your gathered face data, simply run the following, ensuring that your file structure is correct (shown in Package Overview):
> python src/train_faces.py
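As background on what the training step computes: LBPH builds histograms of local binary pattern codes over image regions. The 8-bit code for a single pixel can be illustrated in pure Python (this is an explanatory sketch, not the project's or OpenCV's actual implementation):

```python
def lbp_code(patch):
    """Compute the 8-bit LBP code for the centre pixel of a 3x3 patch.

    Each neighbour (clockwise from top-left) contributes a 1 bit if its
    value is >= the centre value; LBPH then histograms these codes over
    regions of the face image and compares histograms at prediction time.
    """
    center = patch[1][1]
    # Clockwise neighbour order: TL, T, TR, R, BR, B, BL, L
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for value in neighbours:
        code = (code << 1) | (1 if value >= center else 0)
    return code

print(lbp_code([[6, 2, 7], [9, 5, 8], [1, 3, 4]]))  # → 177 (0b10110001)
```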
2. Running the application
Before running the application, you need to source the ROS environment in your console. Within your catkin_ws/devel/ dir:
> source setup.zsh or > source setup.bash (this must be executed in all current and future consoles)
In order to run the application, nodes must be executed in a specific order. Run the following commands in separate consoles:
> 1. roslaunch launch/simulated_world.launch world:=example_map --- (simulation only)
> 2. roslaunch turtlebot_bringup minimal.launch --- (real-life only)
> 3. roslaunch launch/navigation.launch map:=example_map save_costmaps_state:=False --- (WAIT for "Odom Received" before executing 4)
> 4. roslaunch turtlebot_rviz_launchers view_navigation.launch
You will then be presented with an Rviz map. Use the '2D Pose Estimate' button to estimate the true position of the TurtleBot; this reduces navigation inaccuracy.
> 5. python src/run.py (ensure that some locations are provided within locations.xls)
3. Running the face recognition independently
This is related to face_detect_and_recognise.py (non-ROS node).
The current script is adapted for Kinect 1 use. For a Kinect to work, you will need to install libfreenect (the Kinect driver).
Then you can simply run:
> python face_detect_and_recognise.py
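OpenCV's LBPH recognizer returns a (label, confidence) pair from `predict`, where the confidence is a distance: lower means a closer match. A script like this typically thresholds that value to decide between a known user and "unknown"; here is a hedged sketch of that decision logic (the threshold value and label-to-name mapping are illustrative assumptions, not taken from the project):

```python
def identify(label, confidence, names, threshold=70.0):
    """Map an LBPH (label, confidence) prediction to a user name.

    LBPH confidence is a distance, so lower is better. The threshold
    here is an illustrative assumption; tune it for your own dataset.
    """
    if confidence < threshold:
        return names.get(label, "unknown")
    return "unknown"

names = {0: "alice", 1: "bob"}  # hypothetical label-to-name mapping
print(identify(0, 42.5, names))  # close match → "alice"
print(identify(1, 95.0, names))  # weak match  → "unknown"
```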
4. Acquiring map coordinates through Rviz
In order to navigate to a new location, the locations.xls must include X & Y coordinates.
To get coordinates for the desired map run:
> roslaunch launch/navigation.launch map:=desired_map --- (WAIT for "Odom Received" before executing the next command)
followed by:
> roslaunch turtlebot_rviz_launchers view_navigation.launch
You will then be presented with an Rviz screen displaying the map you wish to get coordinates for.
- Select 'Publish Point'
- In the bottom left corner, you will see the X and Y coordinates; use these values in locations.xls.
(this is a rather convoluted way to acquire coordinates, which future work will look to improve)
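The exact column layout of locations.xls is not documented in this section; as a sketch, a named X/Y pair read off the Rviz screen could be recorded like this (pure Python, with a dict standing in for the spreadsheet; the column names are illustrative assumptions, and the real script would read or write the .xls with a spreadsheet library):

```python
def add_location(locations, name, x, y):
    """Record a named (x, y) map coordinate, as a row for locations.xls.

    `locations` is a plain dict standing in for the spreadsheet contents;
    the "X"/"Y" column names are illustrative assumptions.
    """
    locations[name] = {"X": x, "Y": y}
    return locations

locations = {}
add_location(locations, "kitchen", 1.25, -0.5)
print(locations["kitchen"])  # → {'X': 1.25, 'Y': -0.5}
```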