Project Details: ROS Object Tracking

This project implements a ROS2 (Humble) object detection and tracking system within a single package (sim_cam_pkg). The pipeline simulates a camera feed from a video file, performs object detection with a YOLOv11n model (via the OpenCV DNN module), tracks objects across frames with an OpenCV Kalman filter, and visualizes the results in rqt_image_view.

To simulate a continuous camera feed from a single video file, a simple loop detection logic is incorporated. Each time the video loops, the tracker IDs are reset to ensure distinct tracking for each pass.
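The loop-and-reset idea can be sketched as follows. This is a minimal illustration, not the package's actual code: the class and attribute names (LoopingSource, next_track_id, loop_count) are assumptions, and a plain list stands in for the cv2.VideoCapture the real node would use.

```python
# Hypothetical sketch of the loop-detection / ID-reset logic.
# A list of frames stands in for cv2.VideoCapture; all names are illustrative.

class LoopingSource:
    """Wraps a frame source; on end-of-video, rewinds and resets track IDs."""

    def __init__(self, frames):
        self._frames = list(frames)
        self._pos = 0
        self.loop_count = 0
        self.next_track_id = 0  # tracker IDs restart on every pass

    def new_id(self):
        """Hand out the next track ID for the current pass."""
        tid = self.next_track_id
        self.next_track_id += 1
        return tid

    def read(self):
        # End of video: rewind, count the loop, and reset the ID counter
        # so each pass through the video gets a fresh set of track IDs.
        if self._pos >= len(self._frames):
            self._pos = 0
            self.loop_count += 1
            self.next_track_id = 0
        frame = self._frames[self._pos]
        self._pos += 1
        return frame
```

Resetting IDs at the loop boundary keeps tracks from one pass from being confused with the "same" objects reappearing at the start of the next pass.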

System Architecture & Pipeline


Demonstration

ROS Tracking Demo Full

Key Features:

  • Object Detection: Utilizes YOLOv11 for accurate detection.
  • Object Tracking: Employs OpenCV’s Kalman filter for smooth tracking.
  • Simulated Environment: Works with video files as a simulated camera source.
  • Continuous Loop Simulation: Handles video looping and tracker ID resets.
  • Visualization: Integrates with rqt_image_view.
  • Docker Support: A Dockerfile is provided; the base ROS image ships with OpenCV 4.5.

Key Technical Choices:

  • Object Detection: YOLOv11n (COCO pre-trained .onnx model) with OpenCV DNN for optimized CPU inference. Achieved ~10-12 FPS on a 13th-gen Intel Core i5 CPU.
  • Tracking Algorithm: Kalman Filter implemented in C++ with a simple greedy approach for track association.
  • Build System: ament_cmake to support both C++ and Python nodes.
  • Custom ROS Messages: Defined for structured inter-node communication (e.g., DetectionArray, TrackedObjectArray).
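The greedy association step can be illustrated in Python (the actual node is C++), pairing each track's predicted box with its best-overlapping unmatched detection. Everything here is a sketch under assumed names; the real package may score or threshold matches differently.

```python
# Illustrative greedy IoU association, as a stand-in for the C++ node's
# track-to-detection matching. Boxes are (x1, y1, x2, y2) tuples.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def greedy_associate(tracks, detections, min_iou=0.3):
    """Pair predicted track boxes with detections, best IoU first.

    Each track and detection is used at most once; pairs below min_iou
    stay unmatched (unmatched detections would spawn new tracks, and
    unmatched tracks would age out).
    """
    pairs = sorted(
        ((iou(t, d), ti, di)
         for ti, t in enumerate(tracks)
         for di, d in enumerate(detections)),
        reverse=True,
    )
    matches, used_t, used_d = [], set(), set()
    for score, ti, di in pairs:
        if score < min_iou:
            break  # remaining pairs overlap too little to match
        if ti in used_t or di in used_d:
            continue  # greedy: first (highest-IoU) claim wins
        matches.append((ti, di))
        used_t.add(ti)
        used_d.add(di)
    return matches
```

Greedy matching is O(T·D log(T·D)) and simpler than an optimal assignment (e.g. Hungarian), which is usually an acceptable trade-off at the small track counts a single camera feed produces.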

View on GitHub »
