NYU - FRST

Description

Urgent search-and-rescue operations in unfamiliar environments can be unexpectedly dangerous for first responders, such as a team of police officers confronting an active shooter or firefighters in a burning apartment. Prompt and accurate monitoring of first responders in these scenarios is needed for them to accomplish their missions safely, which calls for an intelligent 3D tracking system that helps responders localize one another. Most existing solutions require pre-installed and carefully calibrated physical sensor infrastructure, which can be costly, time-consuming, or infeasible to deploy.

To address these challenges, we propose an infrastructure-independent visual localization system that combines several advanced hardware and software modules to achieve both fast response time and cost efficiency. Our system takes advantage of multi-modal images, including RGB, thermal, and radar, to map the environment and provide localization services under a variety of challenging conditions. We envision a two-phase process. Phase one happens before first responders enter the scene; its goal is to rapidly capture a map of the unknown 3D environment of interest and, when necessary, to deploy signal beacons that create a local communication network and improve the robustness and accuracy of subsequent localization. This can be done by our teleoperated or fully autonomous mobile robots; for spaces that are difficult for mobile robots to enter, this step can also be accomplished or augmented by first responders equipped with wearable multi-modal cameras. These cameras are the main sensors used in Phase two, where the 3D positions and orientations of first responders are localized and tracked within the previously created 3D map.

Our software follows a hierarchical localization strategy, combining visual place recognition and visual localization algorithms developed and owned by our team. Our early experiments show that the system can achieve real-time, accurate localization with an average error of less than 1 meter in a large hospital-like indoor space.
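
To make the hierarchical localization strategy concrete, the sketch below shows the typical two-step structure of such a pipeline: a global-descriptor lookup (visual place recognition) narrows the prebuilt map to a few candidate keyframes, after which local feature matching and robust PnP recover the camera's 6-DoF pose. This is a minimal illustration only; the ORB features, cosine-similarity retrieval, keyframe data layout, and use of OpenCV's solvePnPRansac are assumptions made for the example, not our production implementation or the exact algorithms referenced above.

# Illustrative sketch of a hierarchical localization pipeline (assumed design,
# not the team's actual code): place recognition narrows the search space,
# then 2D-3D matching and robust PnP estimate the 6-DoF camera pose.
import cv2
import numpy as np

def retrieve_candidates(query_desc, map_descs, top_k=5):
    """Visual place recognition: rank map keyframes by global-descriptor similarity."""
    sims = map_descs @ query_desc / (
        np.linalg.norm(map_descs, axis=1) * np.linalg.norm(query_desc) + 1e-8)
    return np.argsort(-sims)[:top_k]

def localize(query_img, query_global_desc, keyframes, K):
    """Estimate the 6-DoF pose of query_img against a prebuilt 3D map.

    keyframes: list of dicts (assumed layout) with
        'global_desc'  - global image descriptor of the keyframe
        'descriptors'  - local ORB descriptors of the keyframe
        'points3d'     - 3D map point associated with each local feature
    K: 3x3 camera intrinsic matrix.
    """
    orb = cv2.ORB_create(2000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    # Step 1: place recognition shortlists a few candidate keyframes.
    map_descs = np.stack([kf["global_desc"] for kf in keyframes])
    candidates = retrieve_candidates(query_global_desc, map_descs)

    # Step 2: local feature matching yields 2D-3D correspondences.
    kps, desc = orb.detectAndCompute(query_img, None)
    if desc is None:
        return None
    pts2d, pts3d = [], []
    for idx in candidates:
        kf = keyframes[idx]
        for m in matcher.match(desc, kf["descriptors"]):
            pts2d.append(kps[m.queryIdx].pt)
            pts3d.append(kf["points3d"][m.trainIdx])
    if len(pts2d) < 6:
        return None  # too few correspondences for a reliable pose

    # Step 3: robust PnP recovers the camera position and orientation.
    ok, rvec, tvec, _ = cv2.solvePnPRansac(
        np.asarray(pts3d, dtype=np.float32),
        np.asarray(pts2d, dtype=np.float32),
        K, None, reprojectionError=4.0)
    return (rvec, tvec) if ok else None

In an actual deployment, the global descriptors would come from a learned place-recognition model and the 3D map points from the Phase-one reconstruction; those components are outside the scope of this sketch.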