Robotic Sensing with YDLIDAR OS30A and Mixtile Blade 3


Boost your robot’s navigation with YDLIDAR OS30A and Mixtile Blade 3: Capture 3D point clouds and detect objects using ROS1 and YOLO.


Things used in this project

Hardware components


  • Mixtile Blade 3 ×1
  • Blade 3 Case ×1
  • YDLIDAR OS30A 3D Depth Camera ×1
  • Wireless USB Adapter ×1
  • USB C Hub ×1
  • SSD, PCIe Gen4 ×4 NVMe 1.4, M.2 ×1


Story


Introduction

In the world of robotics, effective navigation is crucial for the successful deployment of autonomous systems. However, relying solely on onboard sensors can limit a robot’s ability to perceive its environment accurately, especially in complex or dynamic settings. To address this, we can enhance a robot’s navigational capabilities by integrating external sensors that provide a more comprehensive understanding of its surroundings.

This series of articles will explore how to achieve this by leveraging a YDLIDAR 3D depth camera, external to the robot, combined with a Mixtile Blade 3 single-board computer running ROS1. The objective is to gather 3D point cloud data and use YOLO (You Only Look Once) for object detection. This setup will allow us to build a more robust sensing system that enhances the robot’s ability to navigate and interact with its environment effectively.

In this first article, we will walk through the process of setting up the YDLIDAR 3D depth camera with the Mixtile Blade 3 and running ROS1 to capture and process the 3D point cloud data. Additionally, we will integrate YOLO for real-time object detection. This foundation will pave the way for more advanced navigation and perception capabilities in subsequent parts of the series.

Hardware Overview: Mixtile Blade 3

The Mixtile Blade 3 is a high-performance single-board computer designed to meet the demanding needs of edge computing applications, including robotics. Powered by the Octa-Core Rockchip RK3588, the Blade 3 delivers robust processing capabilities in a compact Pico-ITX 2.5-inch form factor.

Key features include:

  • Octa-Core Rockchip RK3588: Ensures powerful performance for complex computations and real-time processing.
  • Stackable via Low-latency 4x PCIe Gen3: Offers the flexibility to expand and scale your hardware setup easily.
  • Rich Interface: Provides a wide range of connectivity options, making it versatile for various peripheral integrations.
  • Versatile Edge Computing Unit: Ideal for tasks requiring intensive data processing and quick response times, making it a perfect fit for advanced robotics projects.


To enhance storage capacity and speed, I will add a 500GB SSD (PCIe Gen4 ×4 NVMe 1.4, M.2) using the Mixtile Blade 3 Case. This case is designed specifically for the Mixtile Blade 3, featuring a built-in breakout board that adapts the U.2 port to an M.2 Key-M connector, enabling the installation of an M.2 NVMe SSD.
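
The Docker configuration later in this article stores data under /data on this SSD. As a minimal sketch, assuming the drive enumerates as /dev/nvme0n1 (verify with lsblk first, since mkfs erases the drive):

# Identify the NVMe drive; the /dev/nvme0n1 name below is an assumption
lsblk
# Create a filesystem, mount it at /data, and make the mount persistent
sudo mkfs.ext4 /dev/nvme0n1
sudo mkdir -p /data
sudo mount /dev/nvme0n1 /data
echo '/dev/nvme0n1 /data ext4 defaults 0 2' | sudo tee -a /etc/fstab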

Hardware Overview: YDLIDAR OS30A 3D Depth Camera

The YDLIDAR OS30A 3D Depth Camera is a sophisticated sensor designed for advanced robotic applications that require accurate depth perception and obstacle detection. Utilizing binocular structured light 3D imaging technology, this camera captures detailed depth information, enabling robots to effectively sense and navigate their environment.

Key features include:

  • Binocular Structured Light 3D Imaging: Provides high-resolution depth images, allowing for precise modeling of the environment.
  • High-Resolution Output: Delivers 1280 x 920 high-resolution depth images, crucial for detailed environment mapping and object detection.
  • Dedicated Depth Computing Chip: Optimized for robot obstacle avoidance, ensuring real-time processing and accuracy in dynamic environments.
  • Compact and Easy to Integrate: The camera’s small form factor and USB2.0 standard output interface make it easy to integrate into various robotic systems, providing flexibility in design and application.
  • Adaptability to Complex Environments: Engineered to perform in diverse lighting conditions, including all-black environments, strong or weak indoor light, backlight, and semi-outdoor settings. This versatility makes it ideal for a wide range of applications, from indoor navigation to semi-outdoor exploration.

The YDLIDAR OS30A is an excellent choice for enhancing a robot’s environmental awareness. When combined with powerful processing hardware like the Mixtile Blade 3, it enables the collection and processing of detailed 3D point clouds, which can be used for tasks such as obstacle avoidance, mapping, and object detection with YOLO. This camera is essential for developing robots that can effectively navigate and interact with their surroundings, making it a critical component of our enhanced robotic sensing setup.

The hardware utilized in this project

Installing Docker on the Mixtile Blade 3


To effectively manage and deploy applications in a containerized environment on the Mixtile Blade 3, we will install Docker, a platform that automates the deployment of applications inside lightweight, portable containers. The following steps outline the installation process for Docker on your Mixtile Blade 3.

Step 1: Set Up Docker’s GPG Key and Repository

First, we need to configure the GPG key and add Docker’s official repository to the list of package sources.

sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

Next, add the Docker repository to your system’s package sources:

echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

Step 2: Install Docker

With the repository added, you can now install Docker and its associated components. Before that, ensure some dependencies are installed:

sudo apt install libip4tc2=1.8.7-1ubuntu5 libxtables12=1.8.7-1ubuntu5

Now, proceed to install Docker:

sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

Step 3: Configure Docker Storage

By default, Docker stores its data in `/var/lib/docker`. For better management and to avoid potential storage issues, we’ll move Docker’s data directory to a different location. In this case, we’ll use `/data/docker` (on the SSD).

  • Stop the Docker service:
sudo service docker stop
  • Move Docker’s data:
sudo mkdir -p /data/docker
sudo cp -a /var/lib/docker/. /data/docker
  • Update Docker’s configuration to point to the new data directory:
sudo touch /etc/docker/daemon.json
sudo nano /etc/docker/daemon.json

Add the following content to the `daemon.json` file:

{
  "data-root": "/data/docker"
}
  • Remove the old Docker data directory:

sudo rm -rf /var/lib/docker

  • Restart the Docker service:

sudo service docker start

Step 4: Manage Docker as a Non-Root User

To run Docker commands without using `sudo`, you need to add your user to the `docker` group:

sudo groupadd docker
sudo usermod -aG docker $USER
newgrp docker

Step 5: Verify Docker Installation

Finally, verify that Docker is installed correctly by running a test container:

docker run hello-world

This command will download and run a small test container, confirming that Docker is up and running on your Mixtile Blade 3.

With Docker successfully installed and configured, you’re now ready to deploy and manage applications in a containerized environment, which is especially useful for running components like ROS1, object detection with YOLO, and other services on your robotics platform.

Building the Project: ROS SDK and YOLO Object Detection


With Docker installed and configured on the Mixtile Blade 3, the next step is to build the Docker images that will run the ROS SDK for the YDLIDAR OS30A 3D Depth Camera and a node for YOLO-based object detection. This setup allows us to efficiently manage and deploy these components in a containerized environment, ensuring consistency and ease of use.

Step 1: Clone the Project Repository

Start by cloning the project repository, which contains all the necessary Dockerfiles and configurations.

git clone --recursive git@github.com:andrei-ace/docker_ros_ydlidar_os30a.git
cd docker_ros_ydlidar_os30a/ros

Step 2: Build the Docker Images

The project includes a `Makefile` that simplifies the process of building the Docker images. These images will include everything needed to run the ROS environment, the YDLIDAR OS30A SDK, and the YOLO object detection node.

To build the images, simply run:

make build

This command will execute the build target in the Makefile, which consists of the following steps:

  • ROS Core Image: Builds the base image for ROS Noetic on Ubuntu Focal, providing the core ROS functionality.
  • ROS Base Image: Extends the core image with additional tools and libraries required for ROS-based development.
  • Robot Image: Adds robot-specific packages and configurations.
  • Desktop Image: Includes desktop tools and GUI-based applications, useful for development and debugging.
  • Robot Dog 3D Depth Camera Image: Builds the custom image that includes the ROS SDK for the YDLIDAR OS30A and the YOLOv8-based object detection node.

The custom image for the robot is tagged as andreiciobanu1984/robots:robot-dog-3d-depth-camera. This image is built by combining multiple contexts, including the ROS SDK for the camera from eYs3D and the YOLO detector node, which was inspired by mats-robotics/yolov5_ros and updated to use YOLOv8 with NCNN.
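
For illustration, a multi-context build of this image might look like the sketch below (the target directory and the yolo-detector context name are assumptions; the Makefile encapsulates the actual commands):

# --build-context requires BuildKit; each context maps a name to a local directory
docker build \
  --tag=andreiciobanu1984/robots:robot-dog-3d-depth-camera \
  --build-context eys3d-ros=../eys3d_ros \
  --build-context yolo-detector=../yolov8_ros \
  robot-dog-3d-depth-camera/.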

Step 3: Clean Up (Optional)

If you need to remove the Docker images for any reason, you can use the `clean` target in the Makefile:

make clean

This command will delete all the Docker images built during the make build process.

The ROS Noetic images used are based on the official Docker images provided by the ROS team at osrf/docker_images. These images are widely used in the ROS community and provide a solid foundation for building ROS-based applications.

The ROS SDK for the YDLIDAR OS30A camera is sourced from the eYs3D ROS repository, which provides the necessary drivers and tools for integrating the camera into your ROS environment.

The YOLO object detection node was customized from the original implementation found in mats-robotics/yolov5_ros, with updates to support YOLOv8 using the NCNN framework, offering improved accuracy and performance for object detection tasks.
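
For reference, switching from Torch to NCNN with the ultralytics package is a small change; a minimal sketch (the weights file and test image are placeholders, not the node's actual code):

from ultralytics import YOLO

# One-time export of the PyTorch weights; writes a 'yolov8n_ncnn_model' directory
model = YOLO("yolov8n.pt")
model.export(format="ncnn")

# The NCNN model loads and runs through the same API as the Torch one
ncnn_model = YOLO("yolov8n_ncnn_model")
results = ncnn_model("test.jpg")
for box in results[0].boxes:
    print(box.cls, box.conf, box.xyxy)  # class id, confidence, bounding box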

This setup ensures that your robotics project is equipped with the latest tools and technologies, allowing for precise sensing and robust object detection capabilities.

Step 4: Run the Docker Image

To run the image, first allow local Docker containers to access the X server, then start the container and launch the ROS nodes:

xhost +local:docker

docker run -it --rm --privileged -v /tmp/.X11-unix:/tmp/.X11-unix:ro \
  -e DISPLAY=$DISPLAY --net=host andreiciobanu1984/robots:robot-dog-3d-depth-camera \
  /bin/bash -c 'source /robot/devel/setup.bash; roslaunch robot_dog robot.launch'
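
Once the launch file is up, the camera's topics can be inspected with `rostopic list`. A minimal rospy sketch for consuming the point cloud (the topic name below is an assumption; substitute whatever the driver actually advertises):

#!/usr/bin/env python3
import rospy
from sensor_msgs.msg import PointCloud2
import sensor_msgs.point_cloud2 as pc2

def on_cloud(msg):
    # Count valid (non-NaN) XYZ points in the incoming cloud
    points = pc2.read_points(msg, field_names=("x", "y", "z"), skip_nans=True)
    rospy.loginfo("cloud: %d valid points", sum(1 for _ in points))

rospy.init_node("cloud_listener")
# "/camera/depth/points" is a placeholder topic name; check `rostopic list`
rospy.Subscriber("/camera/depth/points", PointCloud2, on_cloud)
rospy.spin()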

Benchmarks and the Choice of YOLOv8 NCNN for Object Detection


When developing an advanced robotic sensing platform, choosing the right object detection algorithm is critical to achieving real-time performance. For this project, I opted for YOLOv8 using the NCNN framework due to its superior speed and efficiency on edge devices like the Mixtile Blade 3. Below, we present the benchmarks that guided this decision and the rationale behind choosing YOLOv8 with NCNN.

Building and Running the Docker Images

First, we build the ros:noetic-eys3d-ros Docker image, which includes the necessary drivers and libraries to interface with the YDLIDAR OS30A 3D Depth Camera:

docker build --tag=ros:noetic-eys3d-ros --build-context eys3d-ros=../eys3d_ros eys3d-ros/.
xhost +local:docker

Next, we test the camera and the object detection performance using different versions of YOLO; a sketch of the timing harness follows this list:

  • Launching the YDLIDAR camera:
docker run -it --rm --privileged -v /tmp/.X11-unix:/tmp/.X11-unix:ro -e DISPLAY=$DISPLAY --net=host andreiciobanu1984/robots:robot-dog-3d-depth-camera /bin/bash -c 'source /robot/devel/setup.bash; roslaunch dm_preview BMVM0S30A.launch'
  • Testing YOLOv5:
docker run -it --rm --privileged -v /tmp/.X11-unix:/tmp/.X11-unix:ro -e DISPLAY=$DISPLAY --net=host andreiciobanu1984/robots:robot-dog-3d-depth-camera /bin/bash -c 'python3 /robot/src/robot_dog/src/test_v5.py'
  • Testing YOLOv8 with Torch:
docker run -it --rm --privileged -v /tmp/.X11-unix:/tmp/.X11-unix:ro -e DISPLAY=$DISPLAY --net=host andreiciobanu1984/robots:robot-dog-3d-depth-camera /bin/bash -c 'python3 /robot/src/robot_dog/src/test_v8.py'
  • Testing YOLOv8 with NCNN:
docker run -it --rm --privileged -v /tmp/.X11-unix:/tmp/.X11-unix:ro -e DISPLAY=$DISPLAY --net=host andreiciobanu1984/robots:robot-dog-3d-depth-camera /bin/bash -c 'python3 /robot/src/robot_dog/src/test_v8_ncnn.py'
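
The test scripts live in the repository; conceptually, each timing harness boils down to something like this sketch (not the repo's exact code; the model path and test image are placeholders):

from ultralytics import YOLO

# Load the variant under test, e.g. the NCNN export produced by model.export(format="ncnn")
model = YOLO("yolov8n_ncnn_model")

# Warm up and time a batch of runs; ultralytics records per-stage times on each result
for _ in range(10):
    results = model("test.jpg", verbose=False)

speed = results[0].speed  # milliseconds for 'preprocess', 'inference', 'postprocess'
print("preprocess %.1f ms | inference %.1f ms | postprocess %.1f ms" % (
    speed["preprocess"], speed["inference"], speed["postprocess"]))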

Benchmark Results

From the benchmarks captured (as seen in the images provided), the performance metrics were recorded as follows:

YOLOv5:

  • Inference time: Approximately 1147.8ms to 1201.2ms per image (at 480×640 resolution)
  • Preprocessing time: Between 5.4ms and 12.5ms per image
  • Postprocessing time: Approximately 2.6ms to 3.9ms per image

YOLOv8 with Torch:

  • Similar results to YOLOv5, with some improvements in preprocessing but overall comparable inference times.

YOLOv8 with NCNN:

  • Inference time: Significantly reduced to around 175.9ms per image (at 640×640 resolution)
  • Preprocessing time: Between 8.7ms and 13.1ms per image
  • Postprocessing time: Slightly reduced, with overall faster processing.

Why YOLOv8 NCNN?

The primary reason for choosing YOLOv8 with NCNN over the Torch implementation or previous versions like YOLOv5 is the drastic improvement in inference speed. On edge devices like the Mixtile Blade 3, which rely on efficient use of computational resources, NCNN provides a much faster alternative for real-time object detection. This is critical for applications where quick decision-making is essential, such as in autonomous navigation and obstacle avoidance.

Moreover, NCNN’s lightweight nature allows it to run efficiently on ARM-based processors, making it an ideal fit for the Mixtile Blade 3’s architecture. The benchmarks clearly show that YOLOv8 with NCNN outperforms other configurations in both speed and efficiency, which directly translates into better performance for real-time robotic applications.

In conclusion, the decision to use YOLOv8 with NCNN in this project was based on its superior speed and efficiency, making it the best choice for enhancing the robot’s perception capabilities without compromising on performance.

Benchmarking YOLO using the Mixtile Blade 3 and YDLIDAR OS30A depth camera

Impact of IR Intensity on Object Detection and Depth Sensing


In this project, we observed that adjusting the IR intensity setting of the YDLIDAR OS30A 3D Depth Camera significantly affects both object detection and depth sensing capabilities. Here’s a summary of our findings and how to optimize the settings for your specific use case.

Observations:

IR Intensity Set to 0:

  • Impact on Object Detection: With the IR intensity set to 0, the object detection using YOLOv8 performed significantly better. This setting minimizes interference from the camera’s IR sensors, leading to clearer images and more accurate object detection.
  • Impact on Depth Sensing: Disabling IR intensity also disables the camera’s 3D depth sensing capabilities. This means that while object detection accuracy improves, the camera will not provide depth data, which might be critical depending on the application.

IR Intensity Set to 3 (Default):

  • Impact on Object Detection: At the default IR intensity of 3, the 3D depth sensing works well, but it introduces noise that negatively impacts the performance of object detection. The camera’s IR emissions create reflections and artifacts in the captured images, leading to less accurate detections.
  • Impact on Depth Sensing: Depth sensing is fully operational, providing 3D point clouds that can be useful for tasks like obstacle avoidance and environment mapping.

Potential Solutions

To overcome these limitations, a few strategies can be considered:

  • Alternate Between Modes: One approach is to alternate between modes, switching between high IR intensity for depth sensing and low or zero IR intensity for object detection. By running these modes in sequence, the robot can gather depth data and then switch to a setting optimized for object detection (see the sketch after this list).
  • Fine-Tune YOLO Weights: Another solution is to fine-tune the YOLO model weights specifically for the environment and the specific characteristics of the YDLIDAR OS30A camera. This could improve the model’s ability to detect objects accurately, even with the IR intensity set at levels that enable depth sensing.
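
As a minimal sketch of the mode-switching idea, assuming the camera exposes ir_intensity through dynamic_reconfigure under the node name shown by rqt_reconfigure (here /camera_BMVM0530A1_node, as used below):

#!/usr/bin/env python3
import rospy
from dynamic_reconfigure.client import Client

rospy.init_node("ir_mode_switcher")
# Node name taken from rqt_reconfigure; adjust if your driver advertises a different one
client = Client("/camera_BMVM0530A1_node", timeout=5)

depth_mode = True
rate = rospy.Rate(0.5)  # toggle every 2 seconds; tune for your application
while not rospy.is_shutdown():
    # ir_intensity 3 enables depth sensing; 0 gives cleaner images for YOLO
    client.update_configuration({"ir_intensity": 3 if depth_mode else 0})
    depth_mode = not depth_mode
    rate.sleep()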

These solutions will be explored in more detail in the next article, where I will focus on refining the object detection capabilities to accurately detect the robot and its surroundings under varying conditions. By fine-tuning the YOLO weights and possibly integrating a mode-switching strategy, we aim to optimize both object detection and depth sensing simultaneously.

Adjusting the IR Intensity

You can adjust the IR intensity on the fly using the rqt_reconfigure tool:

rosrun rqt_reconfigure rqt_reconfigure

In the rqt_reconfigure interface, navigate to the /camera_BMVM0530A1_node settings and modify the ir_intensity parameter. Set it to 0 for better object detection or leave it at 3 for depth sensing.
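
If you prefer the command line, the same parameter can be set with dynparam (node and parameter names as above):

rosrun dynamic_reconfigure dynparam set /camera_BMVM0530A1_node ir_intensity 0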

Comparing IR Intensity Settings: YDLIDAR OS30A with YOLO – IR_Intensity 0 vs 3 Performance Test

Conclusion

In this first part of the series, we set up the YDLIDAR OS30A 3D depth camera with the Mixtile Blade 3, installed Docker, built the ROS1 images, and benchmarked YOLOv5 against YOLOv8 with Torch and NCNN, with YOLOv8 NCNN proving the clear choice for real-time detection on this hardware. We also saw how the camera's IR intensity setting trades depth sensing against detection accuracy. In the next article, I will fine-tune the YOLO weights and explore a mode-switching strategy to optimize both at once.

Code

docker_ros_ydlidar_os30a: https://github.com/andrei-ace/docker_ros_ydlidar_os30a

Credits


Andrei Ciobanu

Tech Enthusiast & Engineer, Based in Timișoara, Romania