Object Tracking using detection/segmentation/pose estimation YOLOv8 models

Object Tracking (YOLOv8)

This application performs multi-object tracking of detected people, with optional segmentation and pose-estimation tasks, using YOLOv8 models.
It takes an *.mp4 video as input, preprocesses it according to the selected model parameters, runs inference, and outputs the resulting video in *.avi format.
It also stores the model output in *.json format.
The Simple Online and Realtime Tracking (SORT) algorithm was chosen to perform object tracking.

Multi object tracking example


Pose estimation example


Instance segmentation example

NOTE: the videos for the above examples were taken from pexels.com

CLI options

Through the CLI interface the user can select the model type, which in turn implies the specific task:

  • Detection only
  • Detection and segmentation
  • Detection and pose estimation

It's possible to choose the model size (see the details below) as well as the output visualization options:

  • Show the processed video frame-by-frame during the inference process
  • Show the output video after the input is completely processed
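Since the "-seg" and "-pose" model variants carry the extra heads, the task can be derived from the chosen model's file name. A hypothetical sketch of that mapping (the function and enum names here are illustrative, not taken from this repository):

```cpp
#include <string>

enum class Task { Detection, Segmentation, PoseEstimation };

// Hypothetical helper: infer the task from the model file name,
// e.g. "YOLOv8n-seg.onnx" -> Segmentation, "YOLOv8l-pose.onnx" -> PoseEstimation,
// anything without a suffix -> plain Detection.
Task taskFromModelName(const std::string& name) {
    if (name.find("-seg") != std::string::npos) return Task::Segmentation;
    if (name.find("-pose") != std::string::npos) return Task::PoseEstimation;
    return Task::Detection;
}
```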

Environment

This application has been built on:

Installation

  1. Clone this repo
    cd ~
    git clone https://github.com/cr0mwell/YOLOv8_object_tracking.git --recurse-submodules
  2. Follow the instructions in the script to convert the original YOLOv8 model by Ultralytics and save it into the ~/YOLOv8_object_tracking/models/yolo folder.
    Keep the following model name convention: YOLOv8<model_size>[-seg|-pose].onnx (e.g. YOLOv8n-seg.onnx, YOLOv8l-pose.onnx)
  3. As an optional step, place your own video into the ~/YOLOv8_object_tracking/media directory and rename it to input.mp4.
  4. Make a build directory in the top level directory: cd ~/YOLOv8_object_tracking && mkdir build && cd build
  5. Compile: cmake .. && make
  6. Run the binary and enter the runtime options at the command prompt: ./object_tracking
  7. The processed video can be found in ~/YOLOv8_object_tracking/media/output.avi
  8. The ~/YOLOv8_object_tracking/json/session_dump.json file contains all the model output results.
  9. Optionally run the unit tests: ./test (although the test coverage leaves much to be desired ;-)
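The model-name convention from step 2 can be expressed as a small regex check. This is just a sketch (the function name is hypothetical), assuming the standard Ultralytics size suffixes n/s/m/l/x:

```cpp
#include <regex>
#include <string>

// Validates a file name against the convention
// YOLOv8<model_size>[-seg|-pose].onnx, where <model_size> is one of
// the Ultralytics size suffixes: n, s, m, l, or x.
bool matchesModelNameConvention(const std::string& name) {
    static const std::regex pattern(R"(YOLOv8[nsmlx](-(seg|pose))?\.onnx)");
    return std::regex_match(name, pattern);
}
```

A misnamed file (wrong case, missing suffix, or a different extension) would fail this check, which is a quick way to catch the naming mistakes step 2 warns about.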

License

This project is licensed under the Apache 2.0 License - see the LICENSE file for details.
