WildCap
Autonomous Non-Invasive Monitoring of Animal Behavior and Motion
Funding Source: Cyber Valley
Open Source Code: https://github.com/robot-perception-group
Motivation: Inferring animal behavior, e.g., whether an animal is standing, grazing, running or interacting with other animals and with its environment, is a fundamental requirement for addressing the most pressing ecological problems today. In addition, estimating the 3D pose and shape of animals in real time could directly address several other problems, such as disease diagnosis and health profiling, and could enable very high-resolution behavior inference. Doing both in the wild, without any markers or sensors on the animal, is an extremely challenging problem. State-of-the-art methods for animal behavior, pose and shape estimation either require sensors or markers on the animals (e.g., GPS collars and IMU tags) or rely on camera traps fixed in the animal's environment. Not only do these methods endanger the animals through tranquilization and physical interference, but they are also difficult to scale to larger numbers of animals, vast environments and longer time periods. In WildCap, we are developing autonomous methods for estimating the behavior, pose and shape of endangered wild animals that address these issues and require no physical interference with the animals. Our novel approach is to develop a team of intelligent, autonomous, vision-based aerial robots that detect, track and follow wild animals and perform behavior, pose and shape estimation.
Goals and Objectives:
Methodology: Aerial robots with long flight endurance and sufficient payload capacity are critical for continuous, long-distance tracking of animals in the wild. To this end, we are developing novel systems, in particular lighter-than-air vehicles, to address these requirements. Furthermore, we are developing formation control strategies for such vehicles that maximize the visual coverage of the animals and the accuracy of their state estimates (a simplified sketch of such a formation objective is given below). Finally, we are leveraging learning-in-simulation methods to develop algorithms for animal behavior, pose and shape estimation.
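To illustrate the kind of formation objective involved, the following is a minimal sketch, not the project's actual controller: it assumes N vehicles observing a single animal and scores a formation by (a) deviation from a desired standoff distance and (b) uneven angular spacing of the viewpoints around the animal, a common proxy for visual coverage and fused state-estimation accuracy. The function name, standoff value and cost weights are illustrative assumptions.

```python
# Hypothetical formation objective for N aerial vehicles observing one animal.
# Not the project's actual method; a minimal sketch of the idea only.
import numpy as np

def formation_cost(robot_xy, animal_xy, standoff=10.0):
    """robot_xy: (N, 2) robot positions; animal_xy: (2,) animal position."""
    rel = robot_xy - animal_xy                      # vectors animal -> robots
    dists = np.linalg.norm(rel, axis=1)
    angles = np.sort(np.arctan2(rel[:, 1], rel[:, 0]))

    # Penalize deviation from the desired observation distance.
    range_cost = np.sum((dists - standoff) ** 2)

    # Penalize uneven angular gaps between consecutive viewing directions
    # (wrapped over 2*pi), i.e., reward spreading the vehicles around the animal.
    gaps = np.diff(np.append(angles, angles[0] + 2 * np.pi))
    spread_cost = np.sum((gaps - 2 * np.pi / len(robot_xy)) ** 2)

    return range_cost + spread_cost

# Toy usage: three robots roughly surrounding an animal at the origin.
robots = np.array([[10.0, 0.0], [-5.0, 8.0], [-5.0, -9.0]])
print(formation_cost(robots, np.array([0.0, 0.0])))
```

A formation controller could then drive the vehicles along the negative gradient of such a cost, or feed it to a planner, while respecting vehicle dynamics and sensing constraints.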
Publications:
[1] Price, E., Khandelwal, P. C., Rubenstein, D. I., and Ahmad, A. (2023). A Framework for Fast, Large-scale, Semi-Automatic Inference of Animal Behavior from Monocular Videos. bioRxiv 2023.07.31.551177; doi: https://doi.org/10.1101/2023.07.31.551177