MAP CONSTRUCTION

Mapping is used to detect the boxes on the path via the camera input.

First, the camera must be calibrated. The calibration results are stored in a file with the YAML extension. The video stream is then fed to the mapping application.

The mapping application relies on several libraries. At its core is our main mapping algorithm, ORB-SLAM3, which works with OpenCV 3 and above; in our project it is used with OpenCV 4. The Pangolin library is used to display the mapping image. Our programming language is C++11.

ORB-SLAM3 examines the frames from the video stream one by one and estimates the camera's location in real time. After the visual location is found, it is refined with the sensor data. We are working with the Raspberry Pi's integrated camera, a monocular camera with low image quality, so its accuracy may be somewhat low. The working logic of the algorithm is listed below; a minimal usage sketch follows the list.

  1. Feature detection and matching.

  2. Construction of the initial map.

  3. Local mapping and local keyframe optimization.

  4. Loop detection and loop correction.

  5. Re-localization.
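Before going through these steps one by one, here is a minimal sketch of how frames could be fed to the library, assuming the ORB_SLAM3::System interface used in the library's own example programs; "ORBvoc.txt" and "camera.yaml" are placeholder paths for the vocabulary and the calibration/settings file mentioned above.

```cpp
// Minimal sketch: stream frames from the Pi camera into ORB-SLAM3.
#include <opencv2/opencv.hpp>
#include <System.h>  // ORB-SLAM3

int main() {
    // Vocabulary file, settings YAML, sensor type, and viewer flag.
    ORB_SLAM3::System SLAM("ORBvoc.txt", "camera.yaml",
                           ORB_SLAM3::System::MONOCULAR, /*bUseViewer=*/true);

    cv::VideoCapture cap(0);  // Raspberry Pi camera stream
    cv::Mat frame;
    while (cap.read(frame)) {
        double timestamp = cv::getTickCount() / cv::getTickFrequency();
        // Each frame is tracked against the growing map; the return value
        // (ignored here) is the estimated camera pose for this frame.
        SLAM.TrackMonocular(frame, timestamp);
    }
    SLAM.Shutdown();
    return 0;
}
```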

 

ORB-SLAM performs feature detection and matching using ORB (Oriented FAST and Rotated BRIEF) features. First, it identifies feature points in the images from the camera and matches them across frames to estimate motion and structure.
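As an illustration of this step, here is a minimal OpenCV sketch of ORB detection and matching between two frames; ORB-SLAM3 uses its own, more elaborate matcher internally, and the image file names are placeholders.

```cpp
// Sketch: detect ORB features in two frames and match them.
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::Mat img1 = cv::imread("frame1.png", cv::IMREAD_GRAYSCALE);
    cv::Mat img2 = cv::imread("frame2.png", cv::IMREAD_GRAYSCALE);

    cv::Ptr<cv::ORB> orb = cv::ORB::create(1000);  // up to 1000 features
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat desc1, desc2;
    orb->detectAndCompute(img1, cv::noArray(), kp1, desc1);
    orb->detectAndCompute(img2, cv::noArray(), kp2, desc2);

    // Hamming distance suits ORB's binary descriptors;
    // cross-checking filters out weak, one-sided matches.
    cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
    std::vector<cv::DMatch> matches;
    matcher.match(desc1, desc2, matches);

    cv::Mat vis;
    cv::drawMatches(img1, kp1, img2, kp2, matches, vis);
    cv::imshow("ORB matches", vis);
    cv::waitKey(0);
    return 0;
}
```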

In the initial mapping phase, it calculates the initial 3D structure and the movement of the camera using the epipolar geometry between the first two frames.
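A sketch of this two-view initialization using OpenCV's essential-matrix routines is below, assuming matched points from the previous step and a calibrated camera matrix K from the YAML file. Note that ORB-SLAM3's actual initializer is more involved (it also evaluates a homography model for planar scenes).

```cpp
// Sketch: estimate the relative camera pose between the first two frames.
#include <opencv2/opencv.hpp>
#include <vector>

cv::Mat estimateInitialPose(const std::vector<cv::Point2f>& pts1,
                            const std::vector<cv::Point2f>& pts2,
                            const cv::Mat& K) {
    cv::Mat inlierMask;
    // Essential matrix from epipolar geometry; RANSAC rejects bad matches.
    cv::Mat E = cv::findEssentialMat(pts1, pts2, K, cv::RANSAC,
                                     0.999, 1.0, inlierMask);
    cv::Mat R, t;
    // Decompose E into rotation R and unit-scale translation t.
    cv::recoverPose(E, pts1, pts2, K, R, t, inlierMask);

    cv::Mat T = cv::Mat::eye(4, 4, R.type());
    R.copyTo(T(cv::Rect(0, 0, 3, 3)));
    t.copyTo(T(cv::Rect(3, 0, 1, 3)));
    return T;  // 4x4 pose of frame 2 relative to frame 1 (scale unknown)
}
```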

During the local mapping phase, ORB-SLAM adds new keyframes and map points to the local map. It also optimizes the map and the camera movement with local bundle adjustment.

Loop closing allows ORB-SLAM to detect when it returns to previously visited areas and to correct the accumulated errors in the map. This provides low drift and high scale accuracy.

Re-localization helps ORB-SLAM estimate its position and orientation relative to the previously learned map when it loses tracking or starts at an initially unknown location.


Calibration

In mapping, there is a scene defined in a coordinate frame, and we would like to know where each point lies in that frame. When we record this scene, what we have is images of the scene in which points are measured as pixels. So, to go from images to a full metric reconstruction, we need two things. The first is the position and orientation of the camera with respect to the world, also called the external (extrinsic) parameters. The second is how the camera projects points in the scene onto its image plane, also called the internal (intrinsic) parameters. Determining these parameters of the camera is called camera calibration. In principle, a single picture of an object of known geometry is enough to fully calibrate the camera.
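To make the two parameter sets concrete, the sketch below projects a world point to a pixel with cv::projectPoints, which applies exactly this chain: extrinsics (R, t) move the point into the camera frame, then intrinsics (K, distortion) map it onto the image plane. All numeric values are illustrative assumptions.

```cpp
// Sketch: project a 3D world point to a pixel using given camera parameters.
#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

int main() {
    std::vector<cv::Point3f> world = {{0.1f, 0.2f, 2.0f}};  // point in meters
    cv::Mat rvec = cv::Mat::zeros(3, 1, CV_64F);  // extrinsic rotation (Rodrigues)
    cv::Mat tvec = cv::Mat::zeros(3, 1, CV_64F);  // extrinsic translation
    cv::Mat K = (cv::Mat_<double>(3, 3) << 500, 0, 320,
                                           0, 500, 240,
                                           0,   0,   1);  // intrinsic matrix
    cv::Mat dist = cv::Mat::zeros(5, 1, CV_64F);          // distortion coefficients

    std::vector<cv::Point2f> pixels;
    cv::projectPoints(world, rvec, tvec, K, dist, pixels);
    std::printf("pixel: (%.1f, %.1f)\n", pixels[0].x, pixels[0].y);
    return 0;
}
```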

Another purpose of the camera calibration is to remove distortion. Distortion refers to the systematic error that occurs when the camera lens fails to project the image accurately onto the camera sensor or film. It can lead to inaccuracies in measurements and errors in calculations of distances, areas, and volumes.
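Once the camera matrix and distortion coefficients are known, removing distortion from a frame is a single OpenCV call. A minimal sketch (the wrapper function name is ours):

```cpp
// Sketch: undo lens distortion using the calibrated parameters.
#include <opencv2/opencv.hpp>

cv::Mat undistortFrame(const cv::Mat& frame, const cv::Mat& K, const cv::Mat& dist) {
    cv::Mat undistorted;
    // Remaps each pixel so the image follows the ideal pinhole model.
    cv::undistort(frame, undistorted, K, dist);
    return undistorted;
}
```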

There are several tools that enable users to perform camera calibration more accurately. Two of these are OpenCV and Kalibr, and we will make use of both in our project. In the calibration process, a known object (e.g., a chessboard) is introduced to the camera, and parameters such as the grid dimensions and square sizes are given. After that, the calibration process is run while the chessboard is shown to the camera from different angles. When calibration finishes, the camera matrix and distortion coefficients are computed and saved for future use, as sketched below.
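A minimal OpenCV calibration sketch along these lines follows; the board dimensions, square size, and file paths are assumptions for illustration, and Kalibr would follow a similar flow with its own tooling.

```cpp
// Sketch: calibrate from chessboard images and save the results to YAML.
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    const cv::Size boardSize(9, 6);    // inner corners of the chessboard (assumed)
    const float squareSize = 0.025f;   // square edge length in meters (assumed)

    // The known 3D corner grid on the Z = 0 plane, reused for every view.
    std::vector<cv::Point3f> objectCorners;
    for (int r = 0; r < boardSize.height; ++r)
        for (int c = 0; c < boardSize.width; ++c)
            objectCorners.emplace_back(c * squareSize, r * squareSize, 0.0f);

    std::vector<std::vector<cv::Point3f>> objectPoints;
    std::vector<std::vector<cv::Point2f>> imagePoints;
    cv::Size imageSize;

    // Placeholder image list; in practice these frames come from the Pi camera.
    std::vector<cv::String> files;
    cv::glob("calib/*.png", files);
    for (const auto& f : files) {
        cv::Mat img = cv::imread(f, cv::IMREAD_GRAYSCALE);
        imageSize = img.size();
        std::vector<cv::Point2f> corners;
        if (cv::findChessboardCorners(img, boardSize, corners)) {
            // Refine corner locations to sub-pixel accuracy.
            cv::cornerSubPix(img, corners, cv::Size(11, 11), cv::Size(-1, -1),
                             cv::TermCriteria(cv::TermCriteria::EPS +
                                              cv::TermCriteria::COUNT, 30, 0.001));
            imagePoints.push_back(corners);
            objectPoints.push_back(objectCorners);
        }
    }

    cv::Mat K, distCoeffs;
    std::vector<cv::Mat> rvecs, tvecs;
    cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                        K, distCoeffs, rvecs, tvecs);

    // Save the camera matrix and distortion coefficients for the mapping step.
    cv::FileStorage fs("camera.yaml", cv::FileStorage::WRITE);
    fs << "camera_matrix" << K << "distortion_coefficients" << distCoeffs;
    return 0;
}
```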

[Figure: hand-eye robot-world calibration]