
Scene analysis and reconstruction from incomplete spatial data
Due to the rapid progress of sensor technology, processing large amounts of incomplete 3D spatial data is becoming a critical issue. In environment perception tasks (e.g., remote surveillance, navigation of autonomous vehicles, medical diagnostics), reliable decisions must be made even when object shapes are only partially visible, for example because they are occluded or because only one side of them faces the sensor. Further challenges in data analysis are caused by the measurement noise of the sensors, their sensitivity to external lighting and weather conditions, and their limited spatial or temporal resolution.
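To make the one-sided visibility problem concrete, the following toy sketch (all names hypothetical, simplified to a 2D setting) keeps only the nearest measurement per viewing direction around a sensor, a coarse depth buffer that mimics how a range sensor observes just the front surface of each object:

```python
import math

def visible_points(points, sensor=(0.0, 0.0), n_bins=36):
    """Keep only the nearest point in each angular bin around a 2D sensor:
    a coarse depth buffer that mimics self-occlusion / one-sided visibility."""
    nearest = {}
    for x, y in points:
        dx, dy = x - sensor[0], y - sensor[1]
        r = math.hypot(dx, dy)                       # range to the point
        angle = math.atan2(dy, dx) + math.pi         # map to [0, 2*pi]
        b = int(angle / (2 * math.pi) * n_bins) % n_bins
        if b not in nearest or r < nearest[b][0]:    # closer point wins the bin
            nearest[b] = (r, (x, y))
    return [p for _, p in nearest.values()]

# Two concentric "walls" around the sensor: the outer one is fully occluded,
# so only points of the inner wall (radius 1) should survive.
angles = [i * 0.1 for i in range(63)]
inner = [(math.cos(a), math.sin(a)) for a in angles]
outer = [(2 * math.cos(a), 2 * math.sin(a)) for a in angles]
vis = visible_points(inner + outer)
```

In this toy scene the returned points all lie on the inner circle, illustrating why a single scan never captures the full geometry of a cluttered environment.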
Spatial data is also often used for synthesizing detailed virtual 3D environment models. When incomplete 3D measurements come from multiple data sources, further crucial issues arise: assembling and fusing data from different types of sensors, captured at possibly different times, as well as virtually completing the missing scene segments in a realistic manner.
The present research project addresses several key challenges of the real-world problems detailed above by proposing new methods based on tools of computer vision, machine learning, 3D modeling, model generation, and automated model completion. The work focuses on three main tasks: 1) new methods are developed to recognize objects and dynamic events from partial spatial measurements; 2) new change detection and data fusion methods are proposed for cases where accurate measurement registration is not possible; 3) machine learning based solutions are developed to provide realistic virtual augmentation of incomplete 3D point cloud models obtained by 3D scanning procedures.
