[Figure: aerial RPAS shot of a forested riverbank, with faint human figures visible along the bank / Photo credit: Michal Aibin]

For people lost in the backcountry, a quick rescue can be essential to survival.

But search and rescue is a complex operation that often relies on extensive manpower and costly helicopter flights. Remotely piloted aircraft systems (RPAS) mounted with detection systems are not only less expensive but can fly closer to the ground and reduce the risk to searchers in poor weather conditions.

Dr. Michal Aibin, a faculty researcher in BCIT’s School of Computing and Academic Studies, is applying his expertise in computer-network optimization and RPAS technology to improve the real-time performance of RPAS object-detection systems that locate people on the ground.

“You Only Look Once”

Working with industry partners Spexi and InDro Robotics and peers at Northeastern University, Dr. Aibin is integrating image-processing artificial intelligence based on an algorithm called YOLO ("You Only Look Once"). YOLOv4 performs very well when detecting objects on a high-end graphics processing unit (GPU). However, high-end GPUs are not generally present on an RPAS, so to detect humans, the live feed has to be transmitted to other devices with high-end GPUs. The research team’s approach uses the central processing unit (CPU) present in the RPAS and could thus eliminate the reliance on high-end GPUs.
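To make the CPU-only idea concrete, here is a minimal sketch of what YOLOv4 person detection on a CPU can look like, assuming OpenCV's DNN module and standard COCO-trained Darknet weights. The file paths, the 0.5 threshold, and the detect_people helper are illustrative placeholders, not the team's actual pipeline.

```python
import cv2
import numpy as np

# Load a COCO-trained YOLOv4 network with OpenCV's DNN module.
# File names are placeholders; substitute the real config/weights paths.
net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")

# Force inference onto the CPU -- no high-end GPU required.
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)

PERSON_CLASS_ID = 0  # "person" is class 0 in the COCO label set

def detect_people(frame, conf_threshold=0.5):
    """Return [x, y, w, h] boxes for people detected in a single frame."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())

    boxes = []
    for output in outputs:
        for det in output:  # det = [cx, cy, bw, bh, objectness, class scores...]
            scores = det[5:]
            class_id = int(np.argmax(scores))
            confidence = det[4] * scores[class_id]
            if class_id == PERSON_CLASS_ID and confidence > conf_threshold:
                cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
                boxes.append([int(cx - bw / 2), int(cy - bh / 2),
                              int(bw), int(bh)])
    return boxes
```

Keeping inference on the onboard CPU means the raw video never has to leave the aircraft, which is the point of the team's approach; the trade-off is a lower frame rate, addressed below.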

Since searches must be done quickly, real-time video processing is vital. A system is considered real-time if a person can be detected within one second. Although YOLOv4 is very good at zeroing in on humans in all environmental conditions, it has a low frames-per-second processing rate on a CPU. To compensate, Dr. Aibin’s team created a frame-skipping algorithm that significantly reduces the number of frames YOLOv4 must process. Different frame-skipping settings can be selected depending on how obscured the search area is. The live video feed collected by the RPAS is passed through this algorithm first, and the remaining frames are then fed into YOLOv4, which draws bounding boxes and labels in the video when humans are detected (see figure). The GPS in the RPAS can be used to notify rescue operations of sightings.
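The sketch below shows one simple way a frame-skipping wrapper can work, reusing the hypothetical detect_people helper from the earlier sketch. The team's actual skip settings aren't published, so the fixed skip interval here is an assumption; in practice it would be tuned to the terrain, as the article notes.

```python
import cv2

def search_feed(video_source, skip=5, conf_threshold=0.5):
    """Run detection on every `skip`-th frame of the live RPAS feed."""
    cap = cv2.VideoCapture(video_source)
    frame_index = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        frame_index += 1
        if frame_index % skip != 0:
            continue  # skipped frames never reach the detector
        # detect_people is the hypothetical CPU detector sketched above.
        for (x, y, w, h) in detect_people(frame, conf_threshold):
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
            cv2.putText(frame, "person", (x, y - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)
        cv2.imshow("RPAS feed", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```

A skip of 5 cuts the detector's workload by 80 percent; the lower the visibility of the search area, the smaller the skip would need to be so that a briefly visible person is not missed between processed frames.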

A second part of the research project involves best-path planning for search and rescue using RPAS. The video Optimization of Drone-Based Search and Rescue explains the focus of the research.

YOLOv4 Object Detection Configuration

[Figure: four-panel diagram of YOLOv4 detecting a person on a beach beside a forest: the input image divided into a grid, candidate bounding boxes, a shaded class-probability map, and the final detection marked with a red box]
Each cell in the grid calculates a confidence score and the probability that an object exists in the cell / Image credit: Michal Aibin
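To make the caption concrete, here is a toy numpy example (all values invented) of how a per-cell confidence and a per-cell class probability combine into the score YOLO thresholds; non-maximum suppression would then collapse the surviving cells' boxes into the single final detection shown in the figure.

```python
import numpy as np

# Toy 4x4 grid standing in for YOLO's detection grid (values illustrative).
# `confidence`: how sure each cell is that it contains an object;
# `p_person`: the conditional probability that the object is a person.
confidence = np.array([[0.1, 0.2, 0.1, 0.0],
                       [0.1, 0.9, 0.8, 0.1],
                       [0.0, 0.3, 0.2, 0.1],
                       [0.0, 0.1, 0.1, 0.0]])
p_person   = np.array([[0.2, 0.3,  0.2, 0.1],
                       [0.1, 0.95, 0.9, 0.2],
                       [0.1, 0.4,  0.3, 0.1],
                       [0.1, 0.2,  0.1, 0.1]])

# The score YOLO thresholds is the product of the two.
score = confidence * p_person

# Cells above the threshold survive as candidate detections.
print(np.argwhere(score > 0.5))  # -> [[1 1] [1 2]]: the cells covering the person
```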

You can enjoy the spectacular scenery captured during the team’s research RPAS flights by visiting Michal’s personal video site, Remotely Piloted Aircraft Systems (RPAS) Videography.