Self-Supervised Method and System for Removing Smeared Points

 

VALUE PROPOSITION

Computer vision is roughly composed of three phases: image acquisition, image processing, and image analysis/understanding. During image acquisition, depth sensors gather digital information about the observed scene, and the raw data usually require post-processing to transform them into a usable form. Algorithms then infer low-level information about the scene content, such as edges, segments, and object classes, supporting three-dimensional scene mapping and object recognition. This new technology improves the imaging process by removing smeared-point errors in depth maps and point clouds using a self-supervised machine learning model. Evaluation data demonstrate performance better than pre-existing heuristic or supervised models.

 

DESCRIPTION OF TECHNOLOGY

This technology is an algorithm and method that corrects smeared-point depth errors in depth maps and point clouds, using self-supervised machine learning to automatically identify and remove smeared points. Smeared-point removal is performed by two components of the model: a point annotator and a point classifier. To train the point annotator, multiple images of a scene containing objects are captured. Each point in each scene is then evaluated with three tests, and the combined evidence automatically labels the point as smeared or unsmeared. These automatically generated labels are then used to train the neural network that serves as the point classifier. Once trained, the point classifier can examine a single image and determine, via the neural network, whether each point is smeared or valid. A rough illustration of how such a pipeline could be organized appears below.
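The following sketch is only a minimal illustration of this kind of two-stage pipeline, assuming a NumPy/PyTorch setting. The consistency test (cross_view_variance_test), the labeling rule, and the small convolutional classifier (PointClassifier) are illustrative placeholders invented for this example; they are not the specific tests or network architecture of the patent-pending method.

```python
import numpy as np
import torch
import torch.nn as nn

def self_label_points(depth_maps, tests):
    """Self-annotate each pixel as smeared (1) or valid (0) by combining the
    votes of several consistency tests run across views of the same scene.
    `tests` is a list of callables: (depth_maps) -> boolean array (True = smeared)."""
    votes = np.stack([t(depth_maps) for t in tests], axis=0)
    # Here a point is labeled smeared only if every test flags it;
    # the actual labeling rule of the patented method may differ.
    return votes.all(axis=0).astype(np.float32)

def cross_view_variance_test(depth_maps, threshold=0.05):
    """Hypothetical test: flag a point if its depth varies strongly across
    views of a static scene (placeholder for the actual tests)."""
    return np.var(depth_maps, axis=0) > threshold

class PointClassifier(nn.Module):
    """Small per-pixel classifier operating on a single depth map.
    The architecture is illustrative only."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),  # per-pixel smeared-point logit
        )

    def forward(self, depth):
        return self.net(depth)

def train_step(model, optimizer, depth, labels):
    """One self-supervised training step: the automatically generated labels
    supervise the classifier; no manual annotation is involved."""
    optimizer.zero_grad()
    logits = model(depth)
    loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()

def remove_smeared_points(model, depth):
    """Inference on a single depth map: zero out pixels predicted as smeared."""
    with torch.no_grad():
        smeared = torch.sigmoid(model(depth)) > 0.5
    return depth.masked_fill(smeared, 0.0)

if __name__ == "__main__":
    # Toy example with random data standing in for multi-view depth captures.
    views = np.random.rand(4, 1, 64, 64).astype(np.float32)   # 4 views of one scene
    labels_np = self_label_points(views, [cross_view_variance_test])
    depth = torch.from_numpy(views[0]).unsqueeze(0)            # (1, 1, 64, 64)
    labels = torch.from_numpy(labels_np).unsqueeze(0)          # (1, 1, 64, 64)
    model = PointClassifier()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(5):
        train_step(model, opt, depth, labels)
    cleaned = remove_smeared_points(model, depth)
```

In this sketch, the multi-view consistency labels stand in for the self-annotation step, so that after training the classifier needs only a single depth image to flag and remove smeared points.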

 

BENEFITS

  • Uses a self-supervised algorithm rather than a supervised algorithm
  • Performs comparably to manually annotated supervised systems
  • Does not require complex optical sensing models
  • Capable of real-time implementation

 

APPLICATIONS

  • Agriculture and Forestry
  • Ground- or Air-Based Search and Rescue
  • Augmented Reality
  • Military Drones
  • Underwater navigation
  • Autonomous Service Vehicles
  • Medical Imaging (tracking of patient motion)
  • Manufacturing Control Systems
  • Mining

 

IP STATUS

US Patent Pending

LICENSING RIGHTS AVAILABLE

Full licensing rights available

Inventors: Daniel Morris, Miaowei Wang

 

Tech ID: TEC2023-0060

 

 

 

For more information about this technology,

contact Raymond DeVito, Ph.D., CLP, at devitora@msu.edu or +1-517-884-1658

 

 
