Upcoming Events
Comparison of Model Complexity, Representative Capabilities, and Performance for Self-Supervised Multi-Sensor Wildfire and Smoke Segmentation and Tracking: Initial Results and a Path Forward
Abstract: Earth-observing instruments from NASA and NOAA have long provided comprehensive observations of wildfires and the smoke plumes they produce. An increasing number of other governmental and commercial entities are also launching orbital platforms to help improve wildland fire and smoke observational capabilities. It is crucial that we maximize the usability of the data from this ever-growing set of instruments and use each of them as part of larger wildland fire and smoke observation, measurement, tracking, and forecasting systems. At present, JPL’s Segmentation, Instance Tracking, and data Fusion Using multi-SEnsor imagery (SIT-FUSE) framework utilizes self-supervised deep learning (DL) to segment and track instances of objects like wildfires and smoke plumes across single- and multi-sensor scenes of geolocated radiance data from NASA’s orbital and suborbital instruments with minimal human intervention, in low- and no-label environments. This allows us to create a sensor web of pre-existing and historic instruments and to add new instruments to that web as their data become available.
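For readers unfamiliar with the general pattern, the sketch below illustrates one way self-supervised segmentation in a no-label setting can work: an encoder maps radiance bands to per-pixel embeddings, which are then clustered into candidate segment labels without any manual annotation. This is a minimal illustration under stated assumptions only; the encoder design, the KMeans clustering step, and all names (PixelEncoder, segment_scene, the band count) are hypothetical, not SIT-FUSE’s actual implementation.

```python
# Illustrative sketch: self-supervised per-pixel encodings clustered into
# segmentation labels, in the spirit of the pipeline the abstract describes.
# All module/parameter names are hypothetical, not SIT-FUSE's actual code.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class PixelEncoder(nn.Module):
    """Small convolutional encoder mapping radiance bands to per-pixel embeddings."""
    def __init__(self, in_bands: int, embed_dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_bands, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, embed_dim, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)  # (B, embed_dim, H, W)

def segment_scene(encoder: nn.Module, scene: torch.Tensor, n_segments: int = 5):
    """Cluster per-pixel embeddings into candidate segment labels (no labels needed)."""
    encoder.eval()
    with torch.no_grad():
        emb = encoder(scene.unsqueeze(0))  # (1, D, H, W)
    _, d, h, w = emb.shape
    flat = emb.squeeze(0).permute(1, 2, 0).reshape(-1, d).numpy()  # (H*W, D)
    labels = KMeans(n_clusters=n_segments, n_init=10).fit_predict(flat)
    return labels.reshape(h, w)

# Example on synthetic "geolocated radiance" data (8 notional bands, 64x64 pixels).
scene = torch.randn(8, 64, 64)
masks = segment_scene(PixelEncoder(in_bands=8), scene)
print(masks.shape)  # (64, 64) cluster labels, e.g. fire / smoke / background candidates
```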
As we transition this technology toward operational use, it is crucial that we build a framework that can take advantage of new architectures as the fast-paced field of deep learning evolves. Building a “system-of-systems” deep learning framework, in which per-instrument encodings are generated, fused where appropriate, and leveraged by downstream components, requires systematically analyzing and selecting encoders that provide ample information and sufficiently diverse representations while keeping the resources needed for training and inference as low as possible. Here we present an analysis of the representative capabilities and performance of encoders of varying complexity, in the context of self-supervised segmentation and tracking of wildfires and smoke plumes using measurements from NASA, NOAA, KMA, and Planet Labs instruments as input.
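As a rough illustration of the “system-of-systems” layout described above, the hypothetical sketch below wires one encoder per instrument into a fusion step and a downstream segmentation head, and counts parameters as one crude axis of the encoder-complexity comparison. The instrument names, band counts, and API (make_encoder, FusedSegmenter) are illustrative assumptions, not the framework’s actual design.

```python
# Hypothetical sketch of a per-instrument-encoder + fusion + downstream-head
# layout, with a parameter count as one crude complexity measure. Names,
# band counts, and widths are illustrative assumptions.
import torch
import torch.nn as nn

def make_encoder(in_bands: int, width: int, embed_dim: int) -> nn.Module:
    """Per-instrument encoder; `width` is the complexity knob being compared."""
    return nn.Sequential(
        nn.Conv2d(in_bands, width, 3, padding=1), nn.ReLU(),
        nn.Conv2d(width, embed_dim, 3, padding=1),
    )

class FusedSegmenter(nn.Module):
    """Fuses per-instrument encodings and maps them to segmentation logits."""
    def __init__(self, encoders: dict[str, nn.Module], embed_dim: int, n_classes: int):
        super().__init__()
        self.encoders = nn.ModuleDict(encoders)
        self.head = nn.Conv2d(embed_dim * len(encoders), n_classes, 1)

    def forward(self, scenes: dict[str, torch.Tensor]) -> torch.Tensor:
        # Encode each instrument's scene, concatenate along channels, then segment.
        fused = torch.cat([self.encoders[k](scenes[k]) for k in self.encoders], dim=1)
        return self.head(fused)

def n_params(m: nn.Module) -> int:
    return sum(p.numel() for p in m.parameters())

# Compare a lighter vs. heavier encoder pairing for two notional instruments.
for width in (16, 64):
    model = FusedSegmenter(
        {"inst_a": make_encoder(5, width, 8), "inst_b": make_encoder(16, width, 8)},
        embed_dim=8, n_classes=3,
    )
    out = model({"inst_a": torch.randn(1, 5, 32, 32),
                 "inst_b": torch.randn(1, 16, 32, 32)})
    print(width, n_params(model), tuple(out.shape))
```

In practice, sweeping such a complexity knob while measuring representation quality and inference cost is the kind of systematic encoder comparison the abstract describes.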
Speaker Biography
Dr. LaHaye is a data scientist at NASA’s Jet Propulsion Laboratory in Pasadena, California. As a member of the Machine Learning and Instrument Autonomy Group, he supports active and future missions, as well as application-driven research that aims to aid researchers and decision-makers. Currently, Dr. LaHaye leads an effort to leverage self-supervised models and multi-sensor datasets to segment and track object instances (wildfire fronts, smoke plumes, harmful algal blooms, palm oil farms) in low- and no-label environments. This effort serves on-the-ground research and decision-making applications as well as next-generation onboard retrieval capabilities. He also works on data-driven retrieval techniques for 3D cloud tomography and on data-driven hydrological retrieval and validation techniques for Sentinel-6 and SWOT. Previously, Dr. LaHaye split his time between research and the development, generalization, and maintenance of pieces of the operational science data systems for missions like MISR, Jason-3, and SWOT, as well as airborne programs like AirMSPI and HyTES.
Harnessing the power of interdisciplinary expertise to solve modern environmental problems
You can find recordings of past webinars on our YouTube channel.
We invite you to register and participate in the webinar.