Generative Diffusion for Reconstructing Compressive LiDAR Imaging of Forested Landscapes

Researcher(s)

  • Patrick Young, Physics, University of Maine

Faculty Mentor(s)

  • Gonzalo Arce, Electrical and Computer Engineering, University of Delaware

Abstract

LiDAR imaging systems currently measure forested landscapes at varying levels of precision and scale: low-altitude systems, such as those mounted on planes and drones, achieve high resolution but are extremely localized and cannot cover much ground. Satellite LiDAR systems, on the other hand, allow global-scale sampling, but at the cost of sparse line coverage: satellite samples span the globe but are low-resolution and sparse. This project introduces Generative Diffusion Models (GDMs) to reconstruct compressive satellite LiDAR imaging of forested landscapes. GDMs work by gradually corrupting images with Gaussian noise in the forward process and then learning, step by step, to estimate that noise and reverse the process; the output of a GDM has the same dimensions as its input image. Guided Diffusion, the GDM used in this project, takes three channels of information from an image dataset to learn a distribution. For training, the three channels were features extracted from HyperHeight Data Cubes by percentile: canopy height (95th percentile), the 50th-percentile height, and digital terrain (2nd percentile). Training data came from NASA's G-LiHT missions, which provide low-altitude, high-resolution LiDAR images. Applying these models to simulated sparse LiDAR samples for reconstruction shows the promise of GDMs for attaining higher-resolution imaging from more compressive satellite LiDAR data.
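The forward (noising) process described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the project's actual Guided Diffusion code: it assumes a standard linear beta schedule and shows how a three-channel image (here, stacked canopy-height, 50th-percentile, and terrain layers) is blended with Gaussian noise while keeping the same shape, which is exactly the noise a trained network learns to estimate in reverse.

```python
import numpy as np

def make_alpha_bars(num_steps=1000, beta_start=1e-4, beta_end=0.02):
    """Cumulative products of (1 - beta_t) for an assumed linear noise schedule."""
    betas = np.linspace(beta_start, beta_end, num_steps)
    return np.cumprod(1.0 - betas)

def forward_noise(x0, t, alpha_bars, rng):
    """Sample x_t ~ q(x_t | x_0): blend the clean image with Gaussian noise.

    In training, the network receives (x_t, t) and is asked to predict eps;
    reversing these steps one by one recovers an image from pure noise.
    """
    eps = rng.standard_normal(x0.shape)
    a_bar = alpha_bars[t]
    xt = np.sqrt(a_bar) * x0 + np.sqrt(1.0 - a_bar) * eps
    return xt, eps

# A toy 3-channel "image" standing in for the percentile features
rng = np.random.default_rng(0)
x0 = rng.standard_normal((3, 64, 64))
alpha_bars = make_alpha_bars()
xt, eps = forward_noise(x0, 999, alpha_bars, rng)
```

Note that `xt` has the same `(3, 64, 64)` shape as `x0` at every step, and at the final step `alpha_bars` is nearly zero, so `xt` is almost pure Gaussian noise; the reverse process works its way back from that state.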