Commit 40331335 authored by Francisco Romero

use end points of skeleton to order midline when the nose and tail are not given

parent 8c952bb7
# Fish midline extraction from idtracker.ai videos

This repository includes scripts to extract the posture angles (midline) of the animals tracked with [idtracker.ai](idtracker.ai).
The pipeline only requires the *video_object.npy* and the *blobs_collection.npy* files generated after tracking a video with [idtracker.ai](idtracker.ai) (see the data folder).
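
For reference, a minimal sketch of how these files can be loaded (paths are placeholders; the scripts in this repository load *blobs_collection.npy* exactly like this, and we assume the same call works for *video_object.npy*):

```python
import numpy as np

# Placeholder paths: point them to the files of your tracking session
video = np.load('session_folder/video_object.npy', encoding='latin1').item()
list_of_blobs = np.load('session_folder/blobs_collection.npy', encoding='latin1').item()
# Note: recent numpy versions may additionally require allow_pickle=True in np.load
```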

The general pipeline is as follows (see the GIFs below).

The head and tail of the animal are extracted per blob from the end points of the skeleton (blue and red points in the top-right panel). In the general case, the head and the tail are indistinguishable from the skeleton or from the blob of pixels alone. Hence, we assign the end point with the lowest Y coordinate to be the head, and the other one to be the tail (see the sketch below). If the points of the skeleton are not ordered from head to tail, the interpolation becomes noisy, in particular for U-shaped blobs. We invite users to write their own head and tail detector for the animal species of interest.
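
A minimal sketch of this heuristic, assuming the two skeleton end points are given as (x, y) pixel coordinates (the function name and example values below are illustrative, not part of the repository):

```python
import numpy as np


def assign_head_tail(end_points):
    """Assign head/tail labels to two skeleton end points.

    `end_points` is assumed to be an iterable of two (x, y) pixel
    coordinates. The point with the lowest Y coordinate is taken as the
    head and the other one as the tail. Replace this heuristic with a
    species-specific detector if it does not hold for your animals.
    """
    end_points = np.asarray(end_points)
    head_index = np.argmin(end_points[:, 1])  # lowest Y coordinate
    head = tuple(int(v) for v in end_points[head_index])
    tail = tuple(int(v) for v in end_points[1 - head_index])
    return head, tail


# Example with two end points found on a pruned skeleton
head, tail = assign_head_tail([(34, 120), (60, 45)])
print(head, tail)  # (60, 45) (34, 120)
```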

For blobs that look like fish (the head has a lower curvature than the tail), we coded the class *FishContour*, which detects the nose and the tail from the first and second maxima of the curvature of the contour (see the top-right panel of the first GIF below). In particular, the nose is used to order the points of the skeleton from the nose to the tail before the interpolation. This way the midline and the angles are always computed with respect to the nose of the animal, and the interpolation does not fail when the blob is U-shaped.

![GIF_1](fishmidline/scripts/midline_nose.gif)
![GIF_2](fishmidline/scripts/midline.gif)
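
One simple way to order the skeleton points starting from the nose is a greedy nearest-neighbour walk; the sketch below only illustrates the idea (the ordering actually used by the pipeline is implemented in `order_midline` and may differ):

```python
import numpy as np


def order_points_from(start, points):
    """Greedy nearest-neighbour ordering of 2D points, starting at `start`.

    Sketch only: repeatedly append the remaining point that is closest
    to the last ordered point. `points` is a list of (x, y) tuples.
    """
    remaining = [tuple(p) for p in points]
    ordered = []
    last = tuple(start)
    while remaining:
        nearest = min(remaining,
                      key=lambda p: np.hypot(p[0] - last[0], p[1] - last[1]))
        ordered.append(nearest)
        remaining.remove(nearest)
        last = nearest
    return ordered


# Example: three skeleton points ordered from a nose located at (0, 0)
print(order_points_from((0, 0), [(3, 3), (1, 1), (2, 2)]))
# [(1, 1), (2, 2), (3, 3)]
```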
## Requirements
* numpy
* scipy
* skimage
* mahotas (for pruning the skeleton and detecting its end points)
* matplotlib
* tqdm
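
They can typically be installed with pip (note that *skimage* and *mahotas* are distributed on PyPI as `scikit-image` and `mahotas`):

```
pip install numpy scipy scikit-image mahotas matplotlib tqdm
```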
@@ -26,5 +29,9 @@ When the nose is not given the points of the skeleton are not ordered and the interpolation algorithm is noisy.
## TODO
* Optimize
* Eigenvalues
* Eigenshapes
## Contributors
Francisco Romero-Ferrero ([email protected])
import numpy as np
from skimage.morphology import skeletonize
from scipy import interpolate
from fishmidline.fish_contour import FishContour
@@ -15,15 +15,15 @@ def get_binary_image(height, width, pxs):
def get_nose_head_tail_fish(cnt):
    f_cnt = FishContour(cnt)
    nose, tail, _, _ = f_cnt.find_nose_and_orientation()
    return nose, tail


def distance(p1, p2):
    return np.sqrt((p2[0] - p1[0])**2 + (p2[1] - p1[1])**2)


def order_midline(midline_x, midline_y, nose, tail):
    points = list(zip(midline_x, midline_y))
    midline = []
    nose = tuple(nose)
@@ -90,17 +90,18 @@ def get_midline_angles(video, blob, plot_flag=False,
    # Skeletonize the segmented image
    skeleton = skeletonize(binary_image.astype(bool))
    # Prune the skeleton and get its end points
    skeleton, end_points = pruning(skeleton, 3)
    # Extract the midline points from the skeleton
    midline_y, midline_x = np.where(skeleton == 1)
    if use_nose:
        # Detect nose and tail from the curvature of the contour
        nose, tail = get_nose_head_tail_fish(blob.contour)
    else:
        # Use the end points of the skeleton as nose and tail
        nose = (end_points[1][0], end_points[0][0])
        tail = (end_points[1][1], end_points[0][1])
    # Order the midline points from nose to tail
    midline_x, midline_y = order_midline(midline_x, midline_y,
                                         nose, tail)
    # Interpolate and compute equidistant points in the midline
    midline_eq_x, midline_eq_y, midline_interp_x, midline_interp_y = \
        compute_equidistant_points(midline_x, midline_y,
@@ -67,6 +67,7 @@ def endPoints(skel):
    ep = ep1 + ep2 + ep3 + ep4 + ep5 + ep6 + ep7 + ep8
    return ep


def pruning(skeleton, size):
    '''Iteratively remove the end points "size"
    times from the skeleton
@@ -74,5 +75,5 @@ def pruning(skeleton, size):
    for i in range(0, size):
        endpoints = endPoints(skeleton)
        endpoints = np.logical_not(endpoints)
        skeleton = np.logical_and(skeleton, endpoints)
    # Return the pruned skeleton together with the (rows, cols) of the
    # end points detected in the last pruning iteration
    return skeleton, np.where(np.logical_not(endpoints))
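
The `(rows, cols)` tuple returned by `np.where` explains the indexing used in `get_midline_angles`, where the nose and tail are built as `(end_points[1][i], end_points[0][i])`, i.e. `(x, y)`. A toy illustration (not part of the module):

```python
import numpy as np

# np.where on a boolean mask returns (row_indices, col_indices), i.e. (y, x)
end_point_mask = np.zeros((5, 5), dtype=bool)
end_point_mask[1, 2] = True  # first end point at row 1, column 2
end_point_mask[4, 0] = True  # second end point at row 4, column 0

end_points = np.where(end_point_mask)  # (array([1, 4]), array([2, 0]))

# Same (x, y) convention as in get_midline_angles
nose = (end_points[1][0], end_points[0][0])  # (2, 1)
tail = (end_points[1][1], end_points[0][1])  # (0, 4)
```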
@@ -27,6 +27,8 @@ def plotter(frame_number, binary_image, skeleton, nose, tail, contour,
    ax_arr[0, 1].set_title('(Zoom)')
    # Plot skeleton with the detected nose and tail
    ax_arr[0, 2].imshow(skeleton, cmap='gray')
    ax_arr[0, 2].plot(nose[0], nose[1], 'ro', markersize=3)
    ax_arr[0, 2].plot(tail[0], tail[1], 'bo', markersize=3)
    ax_arr[0, 2].set_xlim(x_lim)
    ax_arr[0, 2].set_ylim(y_lim)
    ax_arr[0, 2].invert_yaxis()
    ax_arr[0, 2].set_title('Skeleton from\nbinary image')
@@ -81,6 +83,14 @@ if __name__ == '__main__':
                        animals with fish-like shapes, e.g. two curvature points)",
                        type=int,
                        default=0)
    parser.add_argument("-i", "--identity",
                        help="individual identity to plot midline steps",
                        type=int,
                        default=1)
    parser.add_argument("-f", "--frame_number",
                        help="frame number to plot midline steps",
                        type=int,
                        default=None)
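    # Example invocation (the script name is hypothetical; -i/--identity and
    # -f/--frame_number are the options added above, --session_path is read below):
    #   python plot_midline.py --session_path <session_folder> -i 2 -f 100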
    args = parser.parse_args()
    session_path = args.session_path
@@ -93,15 +103,16 @@ if __name__ == '__main__':
print("Loading list_of_blobs from {}".format(blobs_path))
list_of_blobs = np.load(blobs_path, encoding='latin1').item()
FOCAL_IDENTITY = 1
# frame_number = np.random.randint(0, video.number_of_frames)
frame_number = 45#234
if args.frame_number is None:
frame_number = np.random.randint(0, video.number_of_frames)
else:
frame_number = args.frame_number
blobs_in_frame = list_of_blobs.blobs_in_video[frame_number]
identities = [blob.identity for blob in blobs_in_frame]
print(identities)
if FOCAL_IDENTITY in identities:
if args.identity in identities:
blob = [blob for blob in blobs_in_frame
if blob.identity == FOCAL_IDENTITY][0]
if blob.identity == args.identity][0]
        binary_image, skeleton, nose, tail, contour, \
            midline_xy, midline_interp_xy, midline_eq_xy, _ = \
            get_midline_angles(video, blob, plot_flag=True,
@@ -113,8 +124,4 @@ if __name__ == '__main__':
    else:
        print("No identity in this frame")
    plt.show()