# Project Tree

Get3D Mapper is the main user interface. A software project is organized as a tree: a project contains multiple blocks, different reconstruction frameworks can be created under each block as required, and multiple types of products can be submitted under each reconstruction framework.

<figure><img src="https://saas.bk-cdn.com/t/8731847e-01c8-438b-9575-3484e4c58acf/u/69b2a8d0-cbbf-402c-adb4-9fad53a9f967/1715390916351/image.png" alt=""><figcaption><p>Project Tree</p></figcaption></figure>

## 1 Data Manage

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FEgEDQk8lGH2il7RneQgG%2F%E5%9B%BE%E7%89%87%2067.png?alt=media&#x26;token=b9981d2b-ae89-4f16-8dc8-ea7042bd5b9f" alt=""><figcaption><p>Data Manage</p></figcaption></figure>

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FdEact2uzj0XV3aHgaofp%2FUser%20Manual-Online_01.png?alt=media&#x26;token=0dd9f47f-c19e-4371-ba6b-2300a96efd97" alt=""><figcaption><p>Data Manage Overview</p></figcaption></figure>

### 1.1 Import Data

1. **Add Images**

Get3D Mapper supports importing aerial and ground photos. You can organize your data according to the **file specification** and import what you need for the actual scene.

<figure><img src="https://saas.bk-cdn.com/t/8731847e-01c8-438b-9575-3484e4c58acf/u/69b2a8d0-cbbf-402c-adb4-9fad53a9f967/1715647906816/image.png" alt=""><figcaption><p>File Specification</p></figcaption></figure>

After importing the data, the images need to be positioned to recover their pose. Get3D Mapper supports positioning methods such as EXIF (Exchangeable Image File Format) data and pose files.

<figure><img src="https://saas.bk-cdn.com/t/8731847e-01c8-438b-9575-3484e4c58acf/u/69b2a8d0-cbbf-402c-adb4-9fad53a9f967/1716276632693/%E6%B7%BB%E5%8A%A0%E5%BD%B1%E5%83%8F%E7%95%8C%E9%9D%A2.png" alt=""><figcaption><p>Import Photos and Pose Setting</p></figcaption></figure>

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FGjnC99gvu1fL4ZipAEB5%2FUser%20Manual-Online_01(1).png?alt=media&#x26;token=e12cc588-7e13-44e2-9401-6c0fcb1527e3" alt=""><figcaption><p>Import Photos and Pose Setting Interface Overview</p></figcaption></figure>

**EXIF Positioning**

<mark style="color:yellow;background-color:yellow;">**1) Add photos**</mark>

Select the root directory to import all photos under that folder.

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2F632QKCSpT2A6oSm8LAdE%2F%E5%9B%BE%E7%89%87%2091.png?alt=media&#x26;token=e055c026-0e8e-48eb-af80-de4de36e4116" alt=""><figcaption><p>Import Photos</p></figcaption></figure>

<mark style="color:yellow;background-color:yellow;">**2) Read Exif location**</mark>

Check the Exif information column, select the photo group marked "exist", right-click to open the context menu, and click the **Read Exif** button.

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FXm0IVY3dVyTa73MOv3zP%2F%E5%9B%BE%E7%89%87%2092.png?alt=media&#x26;token=7f5fb92c-eda7-4db5-b562-e8d0c7b2c22a" alt=""><figcaption><p>Read Exif</p></figcaption></figure>

<mark style="color:yellow;background-color:yellow;">**3) Query**</mark>

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FJ5TUWYGaoC2aacyM11Pv%2F%E5%9B%BE%E7%89%87%2093.png?alt=media&#x26;token=6107a231-8700-4394-96a3-8c29f3252054" alt=""><figcaption><p>Query</p></figcaption></figure>

<mark style="color:yellow;background-color:yellow;">**4) Progress bar**</mark>&#x20;

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FpKTPUUtzeZCg6S4GQqMU%2F%E5%9B%BE%E7%89%87%2094.png?alt=media&#x26;token=2fdab663-c966-46fa-9081-14d45ec1b1d0" alt=""><figcaption><p>Progress Bar</p></figcaption></figure>

<mark style="color:yellow;background-color:yellow;">**5) Positioning completion**</mark>

After positioning completes, "0/2250" changes to "2250/2250", i.e., every photo in the photo group now has corresponding positioning information.

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FXiBMsIW5yfIhbOrGYDh3%2F%E5%9B%BE%E7%89%87%2095.png?alt=media&#x26;token=b9dc0256-0cca-415e-bec4-1baef26bc202" alt=""><figcaption><p>After Positioning</p></figcaption></figure>
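EXIF stores GPS coordinates as degree-minute-second values plus a hemisphere reference (N/S/E/W). As a rough illustration of the conversion involved in EXIF positioning, here is a minimal sketch (not Get3D Mapper's actual code) of turning a DMS triple into signed decimal degrees:

```python
def dms_to_decimal(degrees: float, minutes: float, seconds: float, ref: str) -> float:
    """Convert an EXIF degree/minute/second triple plus hemisphere
    reference ('N'/'S'/'E'/'W') to signed decimal degrees."""
    decimal = degrees + minutes / 60.0 + seconds / 3600.0
    # Southern and western hemispheres are negative by convention.
    return -decimal if ref in ("S", "W") else decimal

# Example: 30 deg 15' 36" N corresponds to 30.26 decimal degrees.
print(dms_to_decimal(30, 15, 36, "N"))
```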

**Pose File Positioning**

<mark style="color:yellow;background-color:yellow;">**1) Add photos**</mark>

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FdiA9vBFK2t1bwWdB2taJ%2F%E5%9B%BE%E7%89%87%2096.png?alt=media&#x26;token=8b1961e8-0f47-44fc-9ec2-608a0b50ccd1" alt=""><figcaption><p>Import Photos</p></figcaption></figure>

<mark style="color:yellow;background-color:yellow;">**2) Automatic matching of photo groups and pose files**</mark>

* a) Automatic Matching

Pose files are stored as shown in the figure below. When the pose file name contains a keyword such as POS, GPS, RTK, or PPK, the photo group is automatically matched to the corresponding pose file after the photos are imported.

Tip: If the pose file name does not contain one of these keywords (POS, GPS, RTK, PPK), you can set a keyword manually: select the photo group, right-click to open the context menu, and search automatically by keyword.

* b) Manual Correspondence

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2Fy4sCRUuZcNXow0oAoWbQ%2F%E5%9B%BE%E7%89%87%2097.png?alt=media&#x26;token=0c00e086-a892-4faa-9c86-94ed0ded1149" alt=""><figcaption><p>Select Pose File</p></figcaption></figure>

<mark style="color:yellow;background-color:yellow;">**3) Selection of the Pose file**</mark>

Batch-select multiple photo groups, then right-click and choose **Select Pose File** from the context menu.

<mark style="color:yellow;background-color:yellow;">**4) Define the pose format template**</mark>

Set the spatial reference of the pose file, the separator, and the meaning of each column ("photo name, longitude, latitude, elevation" or "photo name, east coordinate, north coordinate, elevation").

<figure><img src="https://saas.bk-cdn.com/t/8731847e-01c8-438b-9575-3484e4c58acf/u/69b2a8d0-cbbf-402c-adb4-9fad53a9f967/1716278832234/%E8%AE%BE%E7%BD%AE%E7%BB%8F%E7%BA%AC%E5%BA%A6%E6%A0%87%E9%A2%98%E5%B9%B6%E5%BA%94%E7%94%A8%E5%88%B0%E7%85%A7%E7%89%87%E7%BB%84.png" alt=""><figcaption><p>Define Template for Pose File</p></figcaption></figure>
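For reference, a hypothetical pose file in the "photo name, east coordinate, north coordinate, elevation" layout with a comma separator could look like this (photo names and coordinate values are illustrative only):

```
DJI_0001.JPG,500123.452,3378456.118,152.301
DJI_0002.JPG,500131.907,3378461.532,152.284
DJI_0003.JPG,500140.366,3378466.951,152.310
```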

<mark style="color:yellow;background-color:yellow;">**5) Apply to photo groups**</mark>

After defining the template, click **Apply to Group** and select the target photo group.

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FuOpLMjyAkobni1Ic5f3M%2F%E5%9B%BE%E7%89%87%2099.png?alt=media&#x26;token=cb4dc592-9965-48de-b931-2f80826c6525" alt=""><figcaption></figcaption></figure>

<mark style="color:yellow;background-color:yellow;">**6) Positioning completion**</mark>

After positioning completes, "0/2250" changes to "2250/2250", i.e., every photo in the photo group now has corresponding positioning information.

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FxsOj0jN4tRse0QaB5kmW%2F%E5%9B%BE%E7%89%87%20100.png?alt=media&#x26;token=a64496ac-7d98-4397-984f-3b2689d51e8b" alt=""><figcaption><p>Apply Successfully</p></figcaption></figure>

2. **Add Video**

Get3D Mapper supports adding videos, extracting the corresponding video frames, and adding them to the block images. In the Data Manage interface, click the **Add Video** button, set the input and output file paths, set the start and end points of the extraction as well as the interval time, and the frames are then extracted from the imported video directly.

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FPyWpyVUCJY07sVGi1oe1%2FUser%20Manual-Online_01(3).png?alt=media&#x26;token=a44eb37d-8228-4ca2-a563-d8e542dd38e8" alt=""><figcaption><p>Add Video Interface Overview</p></figcaption></figure>

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2Fh2EUpoF3wtq6hu2qFLpF%2F%E5%9B%BE%E7%89%87%20101.png?alt=media&#x26;token=e5cca5df-d782-4c39-b36a-f6820024a785" alt=""><figcaption><p>Select Video File</p></figcaption></figure>

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FUIJOH9QbcGG4GZnq7YSP%2F%E5%9B%BE%E7%89%87%20102.png?alt=media&#x26;token=72b2750c-c6e7-4089-8b31-f8c9ade61208" alt=""><figcaption><p>Set Video Time</p></figcaption></figure>

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FXBuQTyUE0A2x8xp3nOw8%2F%E5%9B%BE%E7%89%87%20103.png?alt=media&#x26;token=00362d9e-fb6d-4ac2-8fb8-62bdeddcdaf3" alt=""><figcaption><p>Importing</p></figcaption></figure>

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FgaytBGAZ8eWz2HYgwBhn%2F%E5%9B%BE%E7%89%87%20104.png?alt=media&#x26;token=3b635b2d-e1e6-46a7-ad0d-c293af79f663" alt=""><figcaption><p>After Importing</p></figcaption></figure>
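Conceptually, the start point, end point, and interval determine the timestamps at which frames are sampled. The sketch below is an assumption about that arithmetic (the function name is invented; this is not the software's implementation):

```python
def frame_timestamps(start_s: float, end_s: float, interval_s: float) -> list[float]:
    """Return the video timestamps (in seconds) at which frames are
    extracted, from start to end inclusive, stepped by the interval."""
    if interval_s <= 0 or end_s < start_s:
        raise ValueError("need interval > 0 and end >= start")
    times = []
    t = start_s
    while t <= end_s + 1e-9:  # small epsilon for float stepping
        times.append(round(t, 3))
        t += interval_s
    return times

# A 10 s clip sampled every 2 s yields 6 frames.
print(frame_timestamps(0.0, 10.0, 2.0))  # [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]
```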

3. **Add Mobile Scan**

Get3D Mapper supports adding laser point clouds and trajectory data collected by mobile laser scanning equipment. The software supports modeling from the laser point cloud alone, or from the laser point cloud fused with photos.

Vehicle-mounted, airborne, backpack, and handheld laser point clouds are supported. Supported laser point cloud formats: \*.las, \*.ptx, \*.pts, \*.e57. Supported trajectory formats: \*.txt, \*.csv.

<mark style="color:yellow;background-color:yellow;">**1) Open the Add Mobile Point Cloud interface**</mark>

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2F1EmdrNYy5jcRaTH5rOu0%2F%E5%9B%BE%E7%89%87%20102.png?alt=media&#x26;token=b76e1c7d-f5fd-4c33-a771-b516a03aa6dd" alt=""><figcaption><p>Add Mobile Scan</p></figcaption></figure>

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FFLj1nxaXRv7SWoOLmagB%2F%E5%9B%BE%E7%89%87%20106.png?alt=media&#x26;token=38a57b8e-5e83-443e-a501-0242955d01a1" alt=""><figcaption><p>Add Mobile Scan Interface</p></figcaption></figure>

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2F4e1dYBDAmThYadPD7B2y%2FUser%20Manual-Online_01(4).png?alt=media&#x26;token=1d33c51b-f4e3-4cc3-8529-27167b789ac2" alt=""><figcaption><p>Add Mobile Scan Overview</p></figcaption></figure>

<mark style="color:yellow;background-color:yellow;">**2) Setting up the laser point cloud spatial reference**</mark>

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2F8MkLa7FdPMku5dJ7M7Bg%2F%E5%9B%BE%E7%89%87%20107.png?alt=media&#x26;token=aff077c1-a860-419a-a90c-3781252d6705" alt=""><figcaption><p>Select Coordinate System for Point Cloud</p></figcaption></figure>

<mark style="color:yellow;background-color:yellow;">**3) Selection of mobile laser point cloud data**</mark>

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FIMCcmLjbd7AyRkttlORS%2F%E5%9B%BE%E7%89%87%20108.png?alt=media&#x26;token=93bcd0d3-2ef2-41bb-ba28-897e6ab48f41" alt=""><figcaption><p>Select Point Cloud File</p></figcaption></figure>

<mark style="color:yellow;background-color:yellow;">**4) Importing trajectory lines**</mark>

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2Fk1wlobSQC92Vpwljua8j%2F%E5%9B%BE%E7%89%87%20106.png?alt=media&#x26;token=96ceaded-7ca0-41dc-a1c7-f50dddbcf06e" alt=""><figcaption><p>Select Trajectory File</p></figcaption></figure>

<mark style="color:yellow;background-color:yellow;">**5) Define the trajectory line**</mark>

Set the spatial reference of the trajectory, the separator, and the meaning of each column ("GPS time, longitude, latitude, elevation" or "GPS time, east coordinate, north coordinate, elevation").

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2F9vZUuaoSpUfykPAcvGN9%2F%E5%9B%BE%E7%89%87%20107.png?alt=media&#x26;token=914c714e-c3c7-4fb2-9ffb-5a2ca4c3ac21" alt=""><figcaption><p>Define Trajectory Fields</p></figcaption></figure>
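The template essentially tells the importer which column means what. The following sketch (the function and column names are illustrative, not part of Get3D Mapper) shows how a delimiter plus a column template turns one trajectory line into labeled values:

```python
def parse_trajectory_line(line: str, columns: list[str], sep: str = ",") -> dict[str, float]:
    """Split one trajectory-file line on the chosen separator and label
    each field according to the user-defined column template."""
    fields = line.strip().split(sep)
    if len(fields) != len(columns):
        raise ValueError(f"expected {len(columns)} fields, got {len(fields)}")
    return {name: float(value) for name, value in zip(columns, fields)}

# Template: "GPS time, east coordinate, north coordinate, elevation"
record = parse_trajectory_line(
    "361200.25,500123.452,3378456.118,152.301",
    ["gps_time", "east", "north", "elevation"],
)
print(record["east"])  # 500123.452
```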

<mark style="color:yellow;background-color:yellow;">**6) Query**</mark>

Choose whether the data need to be checked for matching.

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2F5xZXK5XPjrR2wrbJeYHc%2F%E5%9B%BE%E7%89%87%20111.png?alt=media&#x26;token=7315b2a8-e55c-4237-8189-e3cd5e01723e" alt=""><figcaption><p>Query</p></figcaption></figure>

<mark style="color:yellow;background-color:yellow;">**7) Check the matching results**</mark>

The laser point cloud check verifies that the laser point positions match the trajectory positions and that the laser point cloud timestamps match the trajectory timestamps, and then displays the matching status.

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FT1X3kO0Ud1OzrrcB421t%2F%E5%9B%BE%E7%89%87%20112.png?alt=media&#x26;token=f020e519-2f8b-4819-89f7-5b701e98649f" alt=""><figcaption></figcaption></figure>

<mark style="color:yellow;background-color:yellow;">**8) Data Manage interface display**</mark>

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FViamUy33x0oafl7JSGAy%2F%E5%9B%BE%E7%89%87%20110.png?alt=media&#x26;token=a0e46434-f281-4079-945d-1ad879b21f3b" alt=""><figcaption><p>After Importing Point Cloud</p></figcaption></figure>

4. **Add Static Scan**

Get3D Mapper supports adding laser point clouds and station center data collected by stationary laser scanning equipment. The software supports modeling from the laser point cloud alone, or from the laser point cloud fused with photos. The following formats are supported: \*.las, \*.ptx, \*.pts, and \*.e57. The station center can be entered manually or imported from a \*.txt file.

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FJ7HnXaaUyhmIz5AsIixV%2F%E5%9B%BE%E7%89%87%20111.png?alt=media&#x26;token=54db207c-780d-43d9-9fea-af83e2fbf021" alt=""><figcaption><p>Add Static Scan</p></figcaption></figure>

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FI0btZq3ri5VvpP8QIndF%2F%E5%9B%BE%E7%89%87%20115.png?alt=media&#x26;token=9ef243e4-ba92-4f62-b1b9-b895538a8617" alt=""><figcaption><p>Add Static Scan Interface</p></figcaption></figure>

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FS8Uowfddl2CEFi4AYhaB%2FUser%20Manual-Online_01(5).png?alt=media&#x26;token=57f3776c-c1f2-4422-a789-f73b2e04d1c3" alt=""><figcaption><p>Add Static Scan Overview</p></figcaption></figure>

<mark style="color:yellow;background-color:yellow;">**1) Click Add Static Scan**</mark>

<mark style="color:yellow;background-color:yellow;">**2) Select the point cloud coordinate system**</mark>

<mark style="color:yellow;background-color:yellow;">**3) Add Point Cloud**</mark>

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2F0UrCt6gHoVkXqlORQheV%2F%E5%9B%BE%E7%89%87%20116.png?alt=media&#x26;token=618b0cd5-2205-4892-8a6e-a2129833978d" alt=""><figcaption><p>Select Point Cloud File</p></figcaption></figure>

<mark style="color:yellow;background-color:yellow;">**4) Add Station Center:**</mark>

There are two ways to add station center coordinates:

One is to enter the station center coordinates manually on the right side of the point cloud list.

The other is to click **Add Station Center** and import a station center coordinates file. A sample file is shown in the figure below: each line contains the point cloud name, x, y, and z, separated by spaces:

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2Fx9T4RNFbwe3Ro5GgPu2H%2Fimage.png?alt=media&#x26;token=388aef60-a1f2-48ff-b58f-2141078fa974" alt=""><figcaption><p>Station Center File</p></figcaption></figure>
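For reference, a hypothetical station center file in this layout could look like the following (names and coordinates are illustrative only):

```
station_01 500210.334 3378520.771 148.902
station_02 500234.518 3378541.203 149.115
station_03 500255.976 3378562.644 149.087
```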

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2Fd4wOwpDnoJpTGxkvk9yW%2F%E5%9B%BE%E7%89%87%20118.png?alt=media&#x26;token=7be04178-1959-4e16-9ed5-bbac3fc51baf" alt=""><figcaption><p>Select Station Center File</p></figcaption></figure>

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FSI8dbGIoS2BVvUmay1pC%2F%E5%9B%BE%E7%89%87%20119.png?alt=media&#x26;token=ee509c6b-e2da-46bb-9872-057be6c0e7ce" alt=""><figcaption><p>Station Center Setting</p></figcaption></figure>

If no station center coordinates are available, the software can calculate them automatically; uncheck **Define Station Center**.

### 1.2 Import Reference

1. **Add Control Points**

Add control points in the **Data Manage** view to serve as a reference for splitting blocks or for checking the integrity of imported photo data.

<mark style="color:yellow;background-color:yellow;">**1) Open Add Control Point**</mark>

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FIjDFC7Tu9J8uMzVc5JjO%2F%E5%9B%BE%E7%89%87%20120.png?alt=media&#x26;token=e8f4d422-4622-4e65-ba69-5dbbac914146" alt=""><figcaption><p>Add Control Point</p></figcaption></figure>

<mark style="color:yellow;background-color:yellow;">**2) Selection of control point file**</mark>

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FWMtT20oK1Ugb62HKuJhs%2F%E5%9B%BE%E7%89%87%20113.png?alt=media&#x26;token=38e8b172-673f-43d4-9f73-4a6e53239d04" alt=""><figcaption></figcaption></figure>

<mark style="color:yellow;background-color:yellow;">**3) Define the control point file format**</mark>

Set the spatial reference and delimiter. Define the control point file columns as "point name, longitude, latitude, elevation" or "point name, east coordinate, north coordinate, elevation".

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2F8oGKcr59l1YA3Sxe0lBL%2F%E5%9B%BE%E7%89%87%20122.png?alt=media&#x26;token=50d3c4fe-3e97-4995-826a-712b90951018" alt=""><figcaption><p>Define Control Point Fields</p></figcaption></figure>

The following is an introduction to the interface functions:

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FTLCwTIIMaGdlIeg624fe%2FUser%20Manual-Online_01(6).png?alt=media&#x26;token=68bd4f07-f054-42c1-89bf-56ff835223e8" alt=""><figcaption><p>Add Control Point Interface Overview</p></figcaption></figure>

<mark style="color:yellow;background-color:yellow;">**4) Successfully importing control points**</mark>

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2Fo0gstb8xp1z0jd2k8rRA%2F%E5%9B%BE%E7%89%87%20115.png?alt=media&#x26;token=af339a69-0392-4298-9bce-079b52d51207" alt=""><figcaption><p>After Importing Control Points</p></figcaption></figure>

<mark style="color:yellow;background-color:yellow;">**5) Clear control points**</mark>

The added control points can be cleared by clicking **Clear Control Points**.

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2Fq4t2urdbo4Mv3BabFisj%2F%E5%9B%BE%E7%89%87%20116.png?alt=media&#x26;token=a5db7d7f-48e1-4d3c-b435-3a725345b78e" alt=""><figcaption><p>Clear Control Points</p></figcaption></figure>

2. **Add KML**

* **KML (Keyhole Markup Language), originally developed by Keyhole Corporation, is an XML-based encoding specification for describing and storing geographic information such as points, lines, images, polygons, and models.**

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2F6RzdJkS82INWcPFAm5OA%2F%E5%9B%BE%E7%89%87%20117.png?alt=media&#x26;token=1f361db3-924b-4c58-b151-d42af8425557" alt=""><figcaption><p>Add KML</p></figcaption></figure>
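For reference, a minimal KML file describing a range line could look like the following (coordinates are illustrative; note that KML lists coordinates in longitude,latitude[,altitude] order):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>Survey range line</name>
    <LineString>
      <coordinates>
        113.9401,22.5320,0 113.9452,22.5320,0
        113.9452,22.5361,0 113.9401,22.5361,0
        113.9401,22.5320,0
      </coordinates>
    </LineString>
  </Placemark>
</kml>
```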

<mark style="color:yellow;background-color:yellow;">**a. Add the KML file**</mark>

Add a range line to the Data Manage view as a reference for the data integrity of the imported photos.

The range line reference can be imported after oblique photos or other multi-source data have been imported in the Data Manage interface.

<mark style="color:yellow;background-color:yellow;">**b. Select the background KML file**</mark>

<figure><img src="https://saas.bk-cdn.com/t/8731847e-01c8-438b-9575-3484e4c58acf/u/69b2a8d0-cbbf-402c-adb4-9fad53a9f967/1716279098545/%E8%AE%BE%E7%BD%AE%E6%96%87%E4%BB%B6%E6%89%80%E5%9C%A8%E8%B7%AF%E5%BE%84%E5%B9%B6%E9%80%89%E6%8B%A9.png" alt=""><figcaption></figcaption></figure>

<figure><img src="https://saas.bk-cdn.com/t/8731847e-01c8-438b-9575-3484e4c58acf/u/69b2a8d0-cbbf-402c-adb4-9fad53a9f967/1716279161933/%E6%88%90%E5%8A%9F%E6%B7%BB%E5%8A%A0%E8%8C%83%E5%9B%B4%E7%BA%BF.png" alt=""><figcaption><p>After Adding KML</p></figcaption></figure>

<mark style="color:yellow;background-color:yellow;">**c. Delete KML line**</mark>

The imported range line data can be cleared by clicking **Clear KML**.

<figure><img src="https://saas.bk-cdn.com/t/8731847e-01c8-438b-9575-3484e4c58acf/u/69b2a8d0-cbbf-402c-adb4-9fad53a9f967/1716279122939/%E7%82%B9%E5%87%BB%E6%B8%85%E9%99%A4%E8%8C%83%E5%9B%B4%E7%BA%BF%E6%8C%89%E9%92%AE%E4%B8%80%E9%94%AE%E6%B8%85%E7%A9%BA%E5%AF%BC%E5%85%A5%E7%9A%84%E8%8C%83%E5%9B%B4%E7%BA%BF.png" alt=""><figcaption><p>Delete KML File</p></figcaption></figure>

### 1.3 Edit

After adding photos and pose data, each photo in the Data Manage view is displayed at its corresponding position (x, y, z), and editing operations such as selecting, truncating, framing, and deleting can be performed.

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FaRJaj9SeK3K7CawPChKC%2FUser%20Manual-Online_01(8)(1).png?alt=media&#x26;token=84882db8-f9a6-4510-9883-932a024a1fbe" alt=""><figcaption><p>Edit Function Overview</p></figcaption></figure>

### 1.4 View

In the photo visualization interface of Data Manage, the photo group list view lets you rename, block, and perform other operations on the positioned photo group data.

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FOCqi6OUngmu9NIe9Ut7s%2FUser%20Manual-Online_01(8)(2).png?alt=media&#x26;token=7b13a25c-b4d3-4aa9-9700-b7c3404f11fb" alt=""><figcaption><p>View Function Overview</p></figcaption></figure>

## 2 Aerial Triangulation

Aerial Triangulation consists of two parts: Relative Orientation and Absolute Orientation.

* ***Relative Orientation:*** restoring or determining the relative relationship between image pairs at the time of photography, i.e., solving for the relative orientation elements of stereo image pairs. By observing the positions of the same object in images taken from different viewpoints, the internal and external parameters of the camera are determined. Relative orientation restores the relationship between the ray bundles of two neighboring images so that corresponding rays intersect.
* ***Absolute Orientation:*** linking the image coordinate system to the ground coordinate system through measured ground control points. The relatively oriented 3D geometric model is translated, rotated, and scaled to fit it into the ground photogrammetric coordinate system, improving the absolute accuracy of AT through the control points.
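Mathematically, absolute orientation applies a seven-parameter spatial similarity transform (scale $\lambda$, a rotation matrix $\mathbf{R}$ built from three angles, and a translation vector $\mathbf{T}$) to the relatively oriented model coordinates $\mathbf{X}_m$ to obtain ground coordinates $\mathbf{X}_g$:

```latex
\mathbf{X}_g = \lambda \, \mathbf{R} \, \mathbf{X}_m + \mathbf{T}
```

The seven parameters (one scale, three rotation angles, three translations) are estimated by least squares from the measured ground control points.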

### 2.1 Aerial Triangulation Settings

After the photos are imported into the software, the first step is relative orientation to recover the internal and external parameters of each photo. The second step is absolute orientation to improve the absolute accuracy of the Aerial Triangulation.

**Aerial Triangulation parameters setting:**

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2F56GdJgXuB92SZ93IG3Vt%2FUser%20Manual-Online_01(10).png?alt=media&#x26;token=0f9b7c5b-23c0-440b-b20d-458abbf3d4d6" alt=""><figcaption><p>AT Basic Settings Overview</p></figcaption></figure>

The preset options, which contain more settings, are shown below:

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FHL330E9dYfBMhxBNWQTF%2FUser%20Manual-Online_01(11).png?alt=media&#x26;token=9f295c76-d773-45d4-8036-86c4db5c1f14" alt=""><figcaption><p>AT Advanced Settings Overview</p></figcaption></figure>

For a detailed description of the AT Setting, please see the **Aerial Triangulation Setting Instruction** document.

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FFWlz7o2P2kDeZTGbRl7P%2F%E5%9B%BE%E7%89%87%20136.png?alt=media&#x26;token=e889afd4-38ca-4b85-8e07-016d2494f9ba" alt=""><figcaption><p>AT Setting Tips</p></figcaption></figure>

### 2.2 View Task Details

Check the detailed progress of the AT, including the time consumed by each sub-task and the total time.

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FCmkuMtla8vgY1JVHGsZS%2F%E5%9B%BE%E7%89%87%203.png?alt=media&#x26;token=df036d90-e128-43bd-bc63-f0591d7e7dc3" alt=""><figcaption></figcaption></figure>

### **2.3 Aerial Triangulation 3D View**

The content displayed in the Aerial Triangulation 3D view includes information such as photo pose, tie points, control points, laser point cloud, and laser trajectory lines.

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FRddEnJm9ZQtGl1YNbeFQ%2FUser%20Manual-Online_01(12).png?alt=media&#x26;token=fac9695b-49c1-4194-a853-e0d09976bd58" alt=""><figcaption><p>AT 3D View Overview</p></figcaption></figure>

### **2.4 AT Report**

For projects that have completed Aerial Triangulation, an Aerial Triangulation report can be generated by clicking on **View AT Report** in the block view.

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2Fd5eEUSLXCvphVM8BFHTm%2F%E5%9B%BE%E7%89%87%20158.png?alt=media&#x26;token=2ca6b810-b928-42ee-9080-f15814a46e2d" alt=""><figcaption><p>View AT Report</p></figcaption></figure>

For projects with large image data, the process of generating this report takes some time.

The AT report mainly includes the following parts: **project overview, camera calibration, photo information, tie point information,** and **control points**.

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FvfXK5Hq5vbTQATilhUlV%2F%E5%9B%BE%E7%89%87%20156.png?alt=media&#x26;token=a967d119-4ffd-4068-bb7c-0af7175003d2" alt=""><figcaption><p>AT Report Overview</p></figcaption></figure>

**Project overview:** contains the project in which the Aerial Triangulation is located, the running time, the number of photos, the number of control points, the number of photos entering the network, the ratio of photos in the network, the number of tie points, the average resolution of the tie points, and the re-projection error of the tie points.

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FhTH9unHbDE67kkS9bBnJ%2F%E5%9B%BE%E7%89%87%20229.png?alt=media&#x26;token=23e7bcc8-c3ad-442b-979c-dcf472a22891" alt=""><figcaption><p>Block Overview</p></figcaption></figure>

**Camera calibration:** A detailed list of information on the internal and external parameters and distortion coefficients of the camera set, which are used to assess the calibration accuracy of the camera.

**Photo information**: a schematic of the offset of each camera position from the original POS data, a schematic of the re-projection error for each photo, the number of tie points observed per photo, and the scene coverage, together with the corresponding tie points and re-projection error for each photo.

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2F6uwYjYYKl7gjHpsO20Fz%2F%E5%9B%BE%E7%89%872.png?alt=media&#x26;token=95917787-84b4-49a4-b799-a0f220216624" alt=""><figcaption><p>Photo Information</p></figcaption></figure>

**Tie point information**: Observation count of tie points, re-projection error of tie points.

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FRy8UnOINvC6bNC3yhSYd%2F%E5%9B%BE%E7%89%873.png?alt=media&#x26;token=0b9584aa-d09a-4746-a648-213a92149258" alt=""><figcaption><p>Tie Points Information</p></figcaption></figure>

**Control points**: coordinates and error information of each control point.

### 2.5 AT Indicators

#### 2.5.1 Rate of Calibrated Photos

The rate of calibrated photos is an important indicator in AT, but not the sole criterion for evaluating AT quality; the results should be assessed against the actual data. Common reasons for photos failing to calibrate include:

① Water Surface Photo Loss

Blue points in the left image show photos that weren’t calibrated. In the right image, the red box highlights the missing photos.

<div><figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2F42yTGwaL7WdCSKVA1vfs%2F%E5%9B%BE%E7%89%87%204.png?alt=media&#x26;token=411f26e7-948a-4390-80ae-7ae6dcc5df1e" alt=""><figcaption></figcaption></figure> <figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FYQQR6ewIqvmv730gEibF%2F%E5%9B%BE%E7%89%87%205.png?alt=media&#x26;token=4f1d96cf-4fe4-4e56-8e76-e3e1fc15bdd7" alt=""><figcaption></figcaption></figure></div>

② Mountainous Area Photo Loss

Blue points in the left image mark photos at the edge of the flight area, where the aircraft flies close to the ground and the terrain is forested. These factors can prevent some photos from being calibrated; this is normal and does not affect modeling within the survey area.

<div><figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2F9fp6ZkQWHO3Ypnuhhpsl%2F%E5%9B%BE%E7%89%87%206.png?alt=media&#x26;token=8cfdbb60-77c7-42d6-bfc1-1a3268e6a1ef" alt=""><figcaption></figcaption></figure> <figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2Fdhffp3sq8ahFk4wGajxq%2F%E5%9B%BE%E7%89%87%207.png?alt=media&#x26;token=d1b0f919-ebf8-4c2c-a477-b51ece14b513" alt=""><figcaption></figcaption></figure></div>

#### 2.5.2 Control Point Accuracy

The absolute orientation in AT uses control point information to adjust camera position and orientation, improving the absolute accuracy of the AT. Check the absolute orientation result: if the X/Y/Z errors are under 0.01 m, absolute accuracy is high.

#### 2.5.3 Checkpoint Accuracy

Checkpoints assess the accuracy of the AT results. By comparing checkpoint coordinates with AT results, the positional error can be calculated, providing a quantitative evaluation of the AT accuracy. The accuracy requirements of the checkpoints are based on the absolute accuracy requirements of the actual model. For example, if the required planar accuracy is better than 0.1m and the elevation accuracy is better than 0.15m, the error of the checkpoints (X/Y/Z) should meet these requirements.
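The comparison amounts to an RMSE of checkpoint residuals. A quick sketch of the check outside the software, with hypothetical checkpoint coordinates:

```python
import math

# Hypothetical checkpoints: (surveyed X/Y/Z, AT-derived X/Y/Z), in metres.
checkpoints = [
    ((500100.12, 3300200.45, 52.30), (500100.15, 3300200.41, 52.38)),
    ((500250.78, 3300410.02, 48.91), (500250.74, 3300410.07, 48.80)),
]

def rmse(errors):
    return math.sqrt(sum(e * e for e in errors) / len(errors))

dx = [at[0] - gt[0] for gt, at in checkpoints]
dy = [at[1] - gt[1] for gt, at in checkpoints]
dz = [at[2] - gt[2] for gt, at in checkpoints]

planar = rmse([math.hypot(x, y) for x, y in zip(dx, dy)])
vertical = rmse(dz)
# Compare against the project tolerances, e.g. 0.10 m planar / 0.15 m vertical.
print(f"planar RMSE {planar:.3f} m, vertical RMSE {vertical:.3f} m")
```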

## 3 Reconstruction

Create a **New 3D Reconstruction** or **New 2D Reconstruction** spatial framework based on the Aerial Triangulation result (setting the spatial reference, region of interest, tile division, memory estimate, and other options), under which one or more products can be submitted.

<figure><img src="https://saas.bk-cdn.com/t/8731847e-01c8-438b-9575-3484e4c58acf/u/69b2a8d0-cbbf-402c-adb4-9fad53a9f967/1716280307501/%E9%87%8D%E5%BB%BA%E5%8F%82%E6%95%B0%E8%AE%BE%E7%BD%AE%E4%B8%BB%E7%95%8C%E9%9D%A2.png" alt=""><figcaption><p>Reconstruction Spatial Framework</p></figcaption></figure>

### 3.1 Reconstruction Setting

The 2D reconstruction settings are the same as those for 3D reconstruction.

**(1) Spatial Reference System**

The spatial reference system is set to the ENU coordinate system (Local East-North-Up system) by default. Users can change it to an existing or custom-defined coordinate system. Using a consistent spatial reference ensures seamless alignment of tile boundaries in subsequent reconstruction processes.

**Note:** The reconstruction coordinate system supports only projected coordinate systems, not geographic (latitude/longitude) coordinate systems.
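For intuition about the default ENU frame: Get3D Mapper handles the conversion internally, but ENU coordinates can be derived from geodetic positions via ECEF with the standard WGS84 formulas. A minimal sketch (hypothetical project origin):

```python
import math

# WGS84 ellipsoid constants
A = 6378137.0            # semi-major axis, metres
E2 = 6.69437999014e-3    # first eccentricity squared

def geodetic_to_ecef(lat, lon, h):
    """Geodetic (degrees, metres) -> Earth-Centred Earth-Fixed XYZ."""
    lat, lon = math.radians(lat), math.radians(lon)
    n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1 - E2) + h) * math.sin(lat)
    return x, y, z

def ecef_to_enu(p, origin):
    """Express ECEF point p in an East-North-Up frame at a geodetic origin."""
    lat0, lon0, h0 = origin
    ox, oy, oz = geodetic_to_ecef(lat0, lon0, h0)
    dx, dy, dz = p[0] - ox, p[1] - oy, p[2] - oz
    lat, lon = math.radians(lat0), math.radians(lon0)
    e = -math.sin(lon) * dx + math.cos(lon) * dy
    n = (-math.sin(lat) * math.cos(lon) * dx
         - math.sin(lat) * math.sin(lon) * dy
         + math.cos(lat) * dz)
    u = (math.cos(lat) * math.cos(lon) * dx
         + math.cos(lat) * math.sin(lon) * dy
         + math.sin(lat) * dz)
    return e, n, u

origin = (31.0, 121.0, 10.0)               # hypothetical project origin
p = geodetic_to_ecef(31.001, 121.0, 10.0)  # ~111 m north of the origin
e, n, u = ecef_to_enu(p, origin)
print(f"E={e:.1f} m, N={n:.1f} m, U={u:.1f} m")
```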

**(2) Region of Interest**

There are multiple ways to set the region of interest: quickly setting the range, editing the region in the view, or importing KML boundary lines.

<mark style="color:yellow;background-color:yellow;">**Quickly set the range:**</mark>

**Adjust to Tie Points**: Fits the region of interest to the bounding box containing all tie points.

**Adjust to Camera**: Fits the region of interest to the bounding box containing all cameras.

<mark style="color:yellow;background-color:yellow;">**Edit the region in the view:**</mark>

Select **Edit Region**, then drag the faces of the bounding box to define the area.

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2Fas2GKLJJqFe1If5gfxIm%2F%E5%9B%BE%E7%89%87%20145.png?alt=media&#x26;token=c33710c1-c1fe-4491-a269-a6bd3feba052" alt=""><figcaption><p>Edit Region</p></figcaption></figure>

<mark style="color:yellow;background-color:yellow;">**Importing KML boundary lines:**</mark>

Select **Import KML**, and import the pre-drawn KML boundary line.

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FOhUfkJaMYKPjWZh3Izdf%2F%E5%9B%BE%E7%89%87%20146.png?alt=media&#x26;token=d6cd4203-3492-4b09-884d-b78da4739be9" alt=""><figcaption><p>Import KML</p></figcaption></figure>
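The boundary is an ordinary KML polygon drawn in advance (for example in Google Earth). A minimal hypothetical example:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>survey-boundary</name>
    <Polygon>
      <outerBoundaryIs>
        <LinearRing>
          <!-- lon,lat[,alt] triples; the ring must close on its first vertex -->
          <coordinates>
            121.000,31.000,0 121.010,31.000,0
            121.010,31.008,0 121.000,31.008,0
            121.000,31.000,0
          </coordinates>
        </LinearRing>
      </outerBoundaryIs>
    </Polygon>
  </Placemark>
</kml>
```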

**(3) Tiling**

<figure><img src="https://saas.bk-cdn.com/t/8731847e-01c8-438b-9575-3484e4c58acf/u/69b2a8d0-cbbf-402c-adb4-9fad53a9f967/1716280336803/%E7%93%A6%E7%89%87%E5%88%92%E5%88%86%E6%A8%A1%E5%BC%8F.png" alt=""><figcaption><p>Tilling Mode</p></figcaption></figure>

* Split Mode: Planar grid division, 3D grid division, no division.
* Tile Size: Set the tile edge length. A suitable default is suggested (based on the AT resolution/GSD), and users can customize it.
* Tile Overlap: Set the overlap width between neighbouring tiles; overlapping the tiles avoids gaps appearing between them during tiled reconstruction.
* Origin XYZ: Set the coordinates of the origin of the tiles.
* Preview in Real Time: After modifying the parameters, display the updated result in the 3D view in real time.
* Discard Empty Tiles: Tiles without tie points are not reconstructed.
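The parameters above fully determine the tile layout. A minimal sketch of how a planar grid with overlap can be enumerated (illustrative only; the actual tile naming and layout in Get3D Mapper may differ):

```python
import math

def plan_tiles(bounds, origin, tile_size, overlap):
    """Enumerate planar-grid tiles covering an X/Y bounding box.

    Tiles are anchored to a grid starting at `origin` with edge length
    `tile_size`, then expanded by `overlap` on every side so that
    neighbouring tiles share a strip and no gaps appear between them.
    """
    (xmin, ymin), (xmax, ymax) = bounds
    ox, oy = origin
    i0 = math.floor((xmin - ox) / tile_size)
    i1 = math.floor((xmax - ox) / tile_size)
    j0 = math.floor((ymin - oy) / tile_size)
    j1 = math.floor((ymax - oy) / tile_size)
    tiles = []
    for i in range(i0, i1 + 1):
        for j in range(j0, j1 + 1):
            x0 = ox + i * tile_size - overlap
            y0 = oy + j * tile_size - overlap
            tiles.append((f"tile_{i}_{j}", (x0, y0),
                          (x0 + tile_size + 2 * overlap,
                           y0 + tile_size + 2 * overlap)))
    return tiles

# A 250 m x 120 m region with 100 m tiles and 5 m overlap.
tiles = plan_tiles(((0.0, 0.0), (250.0, 120.0)), (0.0, 0.0), 100.0, 5.0)
print(len(tiles))  # 3 columns x 2 rows = 6 tiles
```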

<figure><img src="https://saas.bk-cdn.com/t/8731847e-01c8-438b-9575-3484e4c58acf/u/69b2a8d0-cbbf-402c-adb4-9fad53a9f967/1715761618750/image.png" alt=""><figcaption><p>Planar Grid Division</p></figcaption></figure>

<figure><img src="https://saas.bk-cdn.com/t/8731847e-01c8-438b-9575-3484e4c58acf/u/69b2a8d0-cbbf-402c-adb4-9fad53a9f967/1715761625265/image.png" alt=""><figcaption><p>3D Grid Division</p></figcaption></figure>

**(4) Save and Load**&#x20;

Save: Save the reconstruction spatial framework settings, including the spatial reference, tile origin, tile size, and tile overlap.

Load: Load saved reconstruction spatial framework settings.

**(5) Memory Estimation**&#x20;

Estimates the peak memory required to reconstruct a single tile. The memory needed depends on the tile size (the larger the tile, the more memory required), so choose the tile division according to your computer's memory; if the estimated memory exceeds the available memory, tile reconstruction may fail.

For example, with 64 GB of computer memory, you can adjust the tile edge length until the estimated memory per tile is about 30 GB, which gives the largest tiles that can be reconstructed safely.
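As a back-of-the-envelope illustration of picking a tile edge from a memory budget — the quadratic-in-edge scaling and all numbers below are assumptions for illustration, not the software's internal model:

```python
# Assumption: at a fixed GSD, a tile's peak memory grows roughly with its
# footprint area, so halving the tile edge quarters the memory estimate.

def max_tile_edge(budget_gb, ref_edge_m, ref_mem_gb):
    """Largest tile edge whose estimated memory stays within budget,
    scaling quadratically from a reference (edge, memory) estimate."""
    return ref_edge_m * (budget_gb / ref_mem_gb) ** 0.5

# 64 GB machine: leave headroom and budget ~30 GB per tile. Suppose the
# software estimates 48 GB for a 200 m tile (hypothetical numbers).
edge = max_tile_edge(budget_gb=30.0, ref_edge_m=200.0, ref_mem_gb=48.0)
print(f"tile edge ~ {edge:.0f} m")
```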

**(6) Copy Setting**&#x20;

Copy the reconstruction parameters of another reconstruction in the same project, including the spatial reference, tile origin, tile size, and so on.

### 3.2 More Settings

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FR3tzZ6XYg5LdzXahy8QK%2F%E5%9B%BE%E7%89%87%20219.png?alt=media&#x26;token=d70cf950-fbad-4e59-8edf-7820f2d52b41" alt=""><figcaption><p>More Reconstruction Settings</p></figcaption></figure>

The **More Reconstruction Settings** interface allows you to set **Point Cloud Generation**, **Mesh Generation**, **Texture Fill**, and **2D Reconstruction Hole-Fill Mode** for the reconstruction step. Preset reconstruction modes are available for different modeling scenarios:

* **General**: The default mode is a balance between model quality and processing efficiency and is suitable for most reconstruction scenarios.
* **High Quality**: High quality mode generates more structural details, consumes more memory and time (about 4 times longer than general mode) to process the same data, and is suitable for fine modelling of small scenes.
* **High Efficiency**: Prioritizes processing speed at the cost of model quality; it requires less memory and storage and finishes quickly, making it suitable for rapid modeling of large scenes where geometric accuracy requirements are modest.
* **Define**: Users can choose suitable reconstruction parameters according to their own needs.

<mark style="color:yellow;background-color:yellow;">**① Point Cloud Generation**</mark>

Controls the dense image matching settings: photo selection strategy, matching strategy, and matching accuracy.

<mark style="color:yellow;background-color:yellow;">**② Mesh Generation**</mark>

**Topology Construction:**

* 3D mesh: suitable for most model reconstruction cases.
* 2D mesh: suitable for downward looking image modeling.

**Mesh Simplification:**

* Light: low simplification strength; denser triangle mesh, more geometric detail (both valid and invalid), larger data volume.
* Normal: moderate simplification strength, balancing data size against geometric detail.
* Strong: strong simplification; sparser triangle mesh, most geometric detail (valid and invalid) filtered out, smaller data size.

**Mesh Optimization:**

If selected, the 3D model contains more details.

**Seam Optimization:**

Optimizes the geometry topology across tile boundaries, effectively avoiding gaps between tiles.

<mark style="color:yellow;background-color:yellow;">**③ Texture Fill**</mark>

* Automatic Interpolation: Automatically fills texture gaps using surrounding textures.
* AI Repair: Utilizes AI algorithms to intelligently restore missing textures.
* Solid Color: Fills missing areas with a uniform color.

<mark style="color:yellow;background-color:yellow;">**④ 2D Reconstruction Hole-Fill Mode**</mark>

DSM/DOM hole-fill mode: adaptive fill or full fill.

### 3.3 Reconstruction 3D View

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FfzUzq0INGFwd6uilhVtC%2F%E5%9B%BE%E7%89%87%20151.png?alt=media&#x26;token=5b1b50cf-d41a-4c56-95e6-16177c8d9e7a" alt=""><figcaption><p>Reconstruction 3D View</p></figcaption></figure>

* Layers show/hide: Show/hide region of interest, tile, camera, tie point, point cloud, control point.
* Home: Reset the 3D view.
* Camera +: Increase the size of the camera icons in the 3D view.
* Camera -: Decrease the size of the camera icons in the 3D view.
* Edit Region: Manually adjust the reconstruction range in the 3D view: select the tool, then define the region of interest (the modeling area) by dragging and stretching the bounding box.
* Import KML: Import a KML boundary to constrain the reconstruction range; the output 2D and 3D products are then clipped to the boundary shape.
* Export Tile Boundary KML: For the selected tile, export tile boundary as KML file.

## 4 Product

### 4.1 3D Reconstruction--Product

Get3D Mapper can output 3D model products in OSGB and OBJ formats, 3D point cloud products in LAS, TXT, ASC and E57 formats, DOM products in TIFF and JPG formats, and DSM products in TIFF and ASC formats.

Several different types of products can be submitted under the same reconstruction framework.

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FFJYmdPcF9DUz7ny4NmB9%2F%E5%9B%BE%E7%89%87%20221.png?alt=media&#x26;token=c19248e4-fe89-4f93-9f23-0fa0c076f426" alt=""><figcaption><p>Product Interface</p></figcaption></figure>

<mark style="color:yellow;background-color:yellow;">**① 3D model product**</mark>

After the Aerial Triangulation processing is completed and the reconstruction settings are finished, you can submit the 3D model products in OSGB and OBJ formats for generation.

Select the output format, texture source, texture quality, maximum texture size, and the way to adjust texture.

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FGdB5Gd0xQ8IOgDTPdNQD%2FUser%20Manual-Online_01(14).png?alt=media&#x26;token=212af5d5-d31d-41d2-aac5-34319e82f758" alt=""><figcaption><p>3D Model Settings</p></figcaption></figure>

* **OBJ**: OBJ is a 3D model file format for model editing and browsing. The \*.obj file contains the 3D model's vertices, normals, texture coordinates, and faces (triangles or polygons formed from the vertices). The \*.mtl file defines the properties of the materials used in the model, such as colour, gloss, and texture mapping. The \*.jpg files are the texture maps.
* **OSGB**: OSGB is a binary format for 3D model data, suitable for applications such as model browsing and real-time rendering. A key feature of the OSGB format is its support for multiple Levels of Detail (LOD), which allows the system to dynamically load and display the model at a level of fineness appropriate to the observer's distance.
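For reference, a minimal hand-written example of the OBJ/MTL structure described above (hypothetical file and material names) — a single textured triangle:

```
# model.obj
mtllib model.mtl
# vertices
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
# texture coordinates
vt 0.0 0.0
vt 1.0 0.0
vt 0.0 1.0
# normal
vn 0.0 0.0 1.0
usemtl ground
# face: vertex/texcoord/normal index triples
f 1/1/1 2/2/1 3/3/1

# model.mtl
newmtl ground
# diffuse colour and texture map
Kd 1.0 1.0 1.0
map_Kd texture_0.jpg
```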

<figure><img src="https://saas.bk-cdn.com/t/8731847e-01c8-438b-9575-3484e4c58acf/u/69b2a8d0-cbbf-402c-adb4-9fad53a9f967/1715844582432/image.png" alt=""><figcaption><p>Structure of Model Formats</p></figcaption></figure>

**Use Multiple Origins:**

When submitting a product, import a multi-origin planning file; the tiles are then automatically exported grouped by origin.

**Select Tiles:**

Select the tiles of interest to be generated into a 3D model or point cloud.

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FpCw6eEI1v939YLFHmL2s%2F%E5%9B%BE%E7%89%87%20223.png?alt=media&#x26;token=28dab986-3e1e-4b0e-bdaf-dc4b856ecaa5" alt=""><figcaption><p>Select Tiles</p></figcaption></figure>

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2Fy0MAWYXFsml95znFWX9X%2FUser%20Manual-Online_01(16)(1).png?alt=media&#x26;token=1980f15c-ad8f-494b-b520-c5fe34d08320" alt=""><figcaption><p>Tile Selection Overview</p></figcaption></figure>

<mark style="color:yellow;background-color:yellow;">**② 3D point cloud product**</mark>

After generating a 3D model, the model is sampled to produce a 3D point cloud product.

The software interface allows you to set the point cloud product output format, point cloud source, and resolution.
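Sampling a mesh into a point cloud typically means picking triangles with probability proportional to their area and drawing barycentric coordinates within each. An illustrative sketch of that idea (the product's actual sampler also carries colour from the texture and honours the requested resolution):

```python
import random

def sample_mesh(vertices, faces, n_points, seed=0):
    """Uniformly sample n_points on a triangle mesh: choose faces
    area-weighted, then sample uniform barycentric coordinates."""
    rng = random.Random(seed)

    def sub(a, b):
        return tuple(x - y for x, y in zip(a, b))

    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    def area(f):
        a, b, c = (vertices[i] for i in f)
        cx = cross(sub(b, a), sub(c, a))
        return 0.5 * sum(x * x for x in cx) ** 0.5

    areas = [area(f) for f in faces]
    points = []
    for _ in range(n_points):
        f = rng.choices(faces, weights=areas)[0]
        a, b, c = (vertices[i] for i in f)
        u, v = rng.random(), rng.random()
        if u + v > 1:               # fold the sample back into the triangle
            u, v = 1 - u, 1 - v
        points.append(tuple(a[k] + u * (b[k] - a[k]) + v * (c[k] - a[k])
                            for k in range(3)))
    return points

# Two triangles forming a unit square in the z=0 plane.
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
faces = [(0, 1, 2), (0, 2, 3)]
cloud = sample_mesh(verts, faces, 1000)
print(len(cloud))  # 1000 points
```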

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FS9YiiflR5CjU4bEY17Lo%2FUser%20Manual-Online_01(16)(2).png?alt=media&#x26;token=a9687e78-25cd-4d44-b16c-9ef1d66244de" alt=""><figcaption><p>Point Cloud Settings</p></figcaption></figure>

<mark style="color:yellow;background-color:yellow;">**③ 2D product**</mark>

After generating the 3D model, the model is sampled to generate DOM and DSM products.

The software interface allows you to set the sampling distance, image dimension, DOM product format, and DSM product format.

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FrW6eCRNP2PGYfMrIUtjo%2FUser%20Manual-Online_01(18).png?alt=media&#x26;token=4dd391b1-f432-4819-8daf-5c69e6373707" alt=""><figcaption><p>2D Product Settings</p></figcaption></figure>

**Select Grids:**

Select the grids, which correspond to the grid divisions set up for the 3D reconstruction spatial frame.

The selection method is the same as for selecting tiles. The view distinguishes areas that have not been reconstructed, have been reconstructed, and have been selected.

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FAF9TrmCBRKLubAiHWh5p%2F%E5%9B%BE%E7%89%87%20224.png?alt=media&#x26;token=d4fc824d-1145-413e-9589-364652d15b12" alt=""><figcaption><p>Select Grids</p></figcaption></figure>

### 4.2 2D Reconstruction--Product

The software interface allows you to set the resolution, compression type, DOM product format, and DSM product format.

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FNNteC9VhoHMAAe7zV3tM%2F%E5%9B%BE%E7%89%87%2048.png?alt=media&#x26;token=0e6435b8-048b-438a-9573-7c4eba5bc4d3" alt=""><figcaption><p>2D Product Interface</p></figcaption></figure>

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FoPeVEupBWUzyzbvBP7i5%2FUser%20Manual-Online_01(19).png?alt=media&#x26;token=51a334c8-d81b-46b1-85f1-3091eb1efb0f" alt=""><figcaption><p>Products of 2D Reconstruction</p></figcaption></figure>

**Select Grids:**

Select the grids, which correspond to the grid divisions set up for the 2D reconstruction spatial frame.

The selection method is the same as for selecting tiles. The view distinguishes areas that have not been reconstructed, have been reconstructed, and have been selected.

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FAF9TrmCBRKLubAiHWh5p%2F%E5%9B%BE%E7%89%87%20224.png?alt=media&#x26;token=d4fc824d-1145-413e-9589-364652d15b12" alt=""><figcaption><p>Select Grids</p></figcaption></figure>

### 4.3 Product Report

In the product interface, you can click **View Product Report**.

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FjWm8hSmh3QSvZw7GEfuy%2F%E5%9B%BE%E7%89%87%20160.png?alt=media&#x26;token=63aa68ed-9a2c-4340-be24-e8ddf106fd24" alt=""><figcaption><p>View Reconstruction Report</p></figcaption></figure>

The product report contains **Project Overview Information** and **Tile Information**.

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FoCtUxltF5SzQUwJTUTOR%2F%E5%9B%BE%E7%89%87%20157.png?alt=media&#x26;token=0ed11760-5306-4028-b8da-e8bd56f4a121" alt=""><figcaption><p>Product Report Overview</p></figcaption></figure>

The product overview contains information about the coordinate system and origin of the modeled tiles, the size of the modeling area, and the time spent on the modeling.

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FZr94Gj0c5yJvcnIotE7U%2F%E5%9B%BE%E7%89%87%20233.png?alt=media&#x26;token=1e251fb4-fccc-4e54-b962-a4fb219f09ca" alt=""><figcaption><p>Product Overview</p></figcaption></figure>

The tile information includes the runtime and status of each tile.

<figure><img src="https://2468521665-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXWiwWKjbM3enZNnTnsmE%2Fuploads%2FTlhREjPBMbKK0j5w5ViV%2F%E5%9B%BE%E7%89%87%20234.png?alt=media&#x26;token=d9e61074-0f78-4261-b2f5-18e53d7d873a" alt=""><figcaption><p>Tile Information</p></figcaption></figure>
