From 2D to 3D: Enhancing Photogrammetry with Machine Learning

Researcher(s)

  • Matthew O'Donnell, Electrical Engineering, Rowan University

Faculty Mentor(s)

  • Ken Barner, Electrical Engineering, University of Delaware

Abstract

Photogrammetry, the process of generating a 3D model from 2D images, is increasingly used across many domains, especially Virtual and Augmented Reality. The process, however, is slow and computationally expensive, motivating techniques to accelerate it. This research investigates the potential of machine learning, particularly neural networks, to do so. The primary goal is to develop a machine learning model that can predict 3D point clouds from a 2D image, dramatically accelerating the photogrammetry pipeline. Prior studies have indicated that this approach is feasible, but applying neural networks specifically to aerial photogrammetry data remains relatively unexplored. To investigate the feasibility of predicting new point clouds, the work begins with a foundational machine learning task: segmentation. This research uses data acquired with a DJI Mavic drone on the University of Delaware campus and leverages the 3D-MiniNet architecture to classify and segment point clouds. Preliminary results on test data show meaningful progress in combining machine learning with photogrammetry. Further research using real-world data is still needed to substantiate these findings.
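
To make the segmentation step concrete, the sketch below illustrates the general projection-based approach that 3D-MiniNet belongs to: an unordered point cloud is projected onto a 2D range image, and a 2D convolutional network then predicts a class per pixel. This is a minimal, hypothetical illustration, not the published 3D-MiniNet code; the image resolution, field-of-view values, class count, and the toy `TinySegNet` network are all illustrative assumptions.

```python
# Hypothetical sketch of projection-based point cloud segmentation,
# in the spirit of 3D-MiniNet. Not the actual research code.
import numpy as np
import torch
import torch.nn as nn

def spherical_projection(points, height=64, width=512):
    """Project an (N, 3) point cloud onto an H x W single-channel range image."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1) + 1e-8            # range per point
    yaw = np.arctan2(y, x)                               # azimuth angle
    pitch = np.arcsin(np.clip(z / r, -1.0, 1.0))         # elevation angle
    # Map angles to pixel coordinates (sensor FOV values are assumptions).
    u = ((yaw / np.pi + 1.0) / 2.0 * width).astype(int) % width
    fov_up, fov_down = np.radians(15.0), np.radians(-25.0)
    v = (fov_up - pitch) / (fov_up - fov_down) * height
    v = np.clip(v, 0, height - 1).astype(int)
    image = np.zeros((1, height, width), dtype=np.float32)
    image[0, v, u] = r                                   # fill the range channel
    return image

class TinySegNet(nn.Module):
    """Toy CNN standing in for the real segmentation backbone."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, num_classes, 1),               # per-pixel class logits
        )

    def forward(self, x):
        return self.net(x)

if __name__ == "__main__":
    # Random points stand in for a drone-derived point cloud.
    points = np.random.randn(2048, 3).astype(np.float32)
    range_image = torch.from_numpy(spherical_projection(points)).unsqueeze(0)
    model = TinySegNet()
    logits = model(range_image)                          # (1, num_classes, H, W)
    labels = logits.argmax(dim=1)                        # predicted class per pixel
    print(labels.shape)                                  # torch.Size([1, 64, 512])
```

The projection step is what lets fast 2D convolutions operate on inherently 3D data; per-pixel predictions can then be mapped back onto the original points that landed in each pixel.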