I-Scan | GitHub Repository | Software Version 1.0 (Release V1.0)

Introduction

In the evolving landscape of 3D digitization, the widespread adoption of advanced scanning technologies is frequently impeded by two primary factors: the prohibitive cost associated with the integration of an excessive number of components in commercial systems, and a pervasive lack of transparency regarding their control mechanisms. While open-source initiatives, such as the HSKAnner from Karlsruhe, demonstrate the potential for community-driven development, they often rely on fixed multi-camera arrays, which, despite their open nature, can still contribute to elevated costs and architectural rigidity.
HSKAnner 3D Scanner from Karlsruhe

This project introduces I-Scan, a novel 3D scanner designed to address these limitations by prioritizing universality, cost-efficiency, and unparalleled modularity. I-Scan is engineered for broad sensor compatibility, supporting a diverse range of imaging devices, including legacy USB and web cameras (which are currently implemented), and is extensible to integrate various other measurement units such as Lidar or general Time-of-Flight (ToF) sensors, or indeed any sensor where precise spatial positioning is advantageous.

A core design principle is its adaptable architecture, where modules possess spatial awareness and are reconfigurable to suit specific use-case requirements, thereby obviating the need for a singular, fixed setup. The integration of movable modules along the Z-axis, coupled with servo-controlled adjustable camera angles, facilitates comprehensive image acquisition across varying object heights and perspectives, enhancing data capture flexibility.
First sketch of movable modules

The operational backbone of I-Scan is a robust Python-based application. This software orchestrates critical functions, including the import and configuration of cameras via JSON files (supporting COM and HTTP interfaces), precise calculations for Z-axis module movement, and servo alignment for camera orientation, all managed through REST APIs. Furthermore, the application provides capabilities for defining complex scan workflows, visualization of scanner settings, rigorous input validation (mathematical and JSON syntax), automated dependency management, and comprehensive debug output. This holistic approach positions I-Scan as a highly adaptable, cost-effective, and transparent solution, poised to democratize access to advanced 3D digitization capabilities.

3D Scanner derivative


The Concept

The conceptual foundation of this 3D scanner is a modular, highly adaptable structure that integrates movable and stationary modules for precise, customizable, and efficient object digitization. Central to the concept is the dynamic interaction between modules, each with spatial awareness and distinct degrees of freedom, which enables the system to overcome the limitations of conventional fixed-array scanners.
Movable modules traverse the Z-axis with high positional accuracy, guided by user-defined or algorithmically determined center points in 3D space. At each increment, these modules reorient their sensors (e.g., cameras) so their optical axes converge on the current target center. This is achieved through coordinated actuation of stepper motors (linear displacement) and servo motors (angular adjustment), all managed via a REST API. The mathematical logic ensures that, regardless of Z-axis position, the sensor maintains optimal focus and perspective.
Fixed modules are strategically positioned and, while stationary, can dynamically target new center points, mirroring the adaptive behavior of movable modules. This combination enables flexible and efficient scan paths, accommodating a wide variety of object geometries and sizes. This modularity enhances coverage and resolution and allows individualized scan trajectories tailored to the object's morphology.
All control operations from positioning to sensor orientation are abstracted through a unified REST API, ensuring integration, extensibility, and remote operability.
The mathematical framework enables each module to compute the necessary transformations for precise alignment with dynamically assigned center points. This approach makes the scanner a versatile platform for advanced 3D digitization in research and industrial applications.
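To make this abstraction concrete, the sketch below combines a stepper move with a recomputed servo angle through the project's API client (its functions are documented under "Module | Software & Parts"). The base URL, step count, speed, and angle convention are illustrative assumptions; the full angle calculation is derived in the following chapters.

```python
import math
from api_client import ApiClient  # API client functions are documented below

BASE_URL = "http://192.168.137.7"  # illustrative module address

def aim_module_at(center_x, center_y, module_y, steps_up):
    """Sketch: raise a module along the Z-axis, then point its sensor at the target center."""
    # Reposition the module (number of steps, direction 1 = up, speed 50).
    ApiClient.move_stepper(steps_up, 1, 50, BASE_URL)
    # Re-aim the sensor toward the target center; the servo accepts angles from 0 to 90 degrees.
    angle = round(math.degrees(math.atan2(center_x, center_y - module_y)))
    ApiClient.set_servo_angle(max(0, min(90, angle)), BASE_URL)
```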

Primary Goal of the Concept

By vertically displacing along the Z-axis and dynamically adjusting sensor angles, the movable modules generate significantly more perspectives than rigid fixed array setups (where all sensors are locked in pre-defined positions and orientations, offering no adaptability during operation).
This eliminates "blind spots" and enables gapless digitization of complex geometries, overcoming the physical constraints of static camera positions.

Such systems fundamentally limit perspective coverage to their initial hardware configuration, causing unavoidable blind spots on non-convex surfaces.
(Non-convex surfaces exhibit cavities, undercuts, or reentrant angles where direct "line of sight" is obstructed – e.g., gear teeth, hollow sculptures, or organic structures like tree roots).
Prototype 1 Animation
Prototype 3D Printed

Secondary Goal of the Concept

Through its modular architecture, the system achieves exceptional flexibility, enabling seamless integration and adaptation of diverse sensors without requiring full hardware reconfiguration. Future implementations will leverage sensor fusion pipelines to optimize 3D data acquisition and processing, for instance by combining cameras with LiDAR or ToF sensors. This extensibility inherently supports advanced techniques like structured light scanning, where projected patterns and multi-angle triangulation reconstruct complex surface geometries with sub-millimeter accuracy.
Structured Light Scanning Example

Calculation of Measurement Angle

Right-Angled Triangles

In this chapter, we show how to calculate the angle α in a right-angled triangle when one side is variable.
For our example:

  • Side A: Zdist
  • Side B: DistanceToCenter (150 cm as defined in the JSON configuration)
Angle Calculation Diagram

In a right-angled triangle, the tangent of an angle is defined as the ratio of the opposite side to the adjacent side.
Since α is the angle opposite Side A and Side B is the adjacent side, it follows:

tan(α) = Zdist / DistanceToCenter

To calculate α, use the arctangent (inverse tangent) function:

α = arctan(Zdist / DistanceToCenter)

Example:
With Zdist = 150 cm and DistanceToCenter = 150 cm:

α = arctan(150 / 150) = arctan(1) = 45°

This method allows you to substitute any value for Zdist to calculate the corresponding angle α in a right-angled triangle.
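The same calculation as a minimal Python sketch (variable names mirror the ones above; the default distance is the 150 cm from the JSON configuration):

```python
import math

def measurement_angle(z_dist, distance_to_center=150.0):
    """Return the angle alpha in degrees for a given Zdist and DistanceToCenter."""
    return math.degrees(math.atan(z_dist / distance_to_center))

print(measurement_angle(150.0))  # ≈ 45.0
```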


Geometric Angle Calculation

  • Step size:
    step_size = SCAN_DISTANCE / (NUMBER_OF_MEASUREMENTS - 1)
  • For each measurement point:
    • y_position = i * step_size
    • dx = TARGET_CENTER_X - SCANNER_MODULE_X
    • dy = TARGET_CENTER_Y - y_position
    • Angle calculation:
      angle_rad = atan2(dx, dy)
      angle_deg = angle_rad * 180 / π
    • Hypotenuse:
      hypotenuse = sqrt(dx² + dy²)
    • Store:
      angles.append({ ... })
For each measurement point:
Calculate y-position, dx, dy, angle (deg), hypotenuse and store in the result array.
Source code: calculations.py
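The steps above can be read as the following minimal Python sketch. The constant values are placeholders, and the actual implementation lives in calculations.py:

```python
import math

# Illustrative constants; the real values come from the scan configuration.
SCAN_DISTANCE = 150.0          # total Z travel in cm
NUMBER_OF_MEASUREMENTS = 5     # number of measurement points
TARGET_CENTER_X = 150.0        # target center X in cm
TARGET_CENTER_Y = 75.0         # target center Y in cm
SCANNER_MODULE_X = 0.0         # module X position in cm

def calculate_angles():
    """Compute angle and hypotenuse for every measurement point."""
    step_size = SCAN_DISTANCE / (NUMBER_OF_MEASUREMENTS - 1)
    angles = []
    for i in range(NUMBER_OF_MEASUREMENTS):
        y_position = i * step_size
        dx = TARGET_CENTER_X - SCANNER_MODULE_X
        dy = TARGET_CENTER_Y - y_position
        angle_rad = math.atan2(dx, dy)      # argument order as in the steps above
        angle_deg = math.degrees(angle_rad)
        hypotenuse = math.hypot(dx, dy)     # sqrt(dx² + dy²)
        angles.append({
            "index": i,
            "y_position": y_position,
            "angle_deg": angle_deg,
            "hypotenuse": hypotenuse,
        })
    return angles
```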

Servo Interpolation with Physical Correction

So that each servo can be installed regardless of its individual angular range, the servo parameters are configured accordingly. Specifying the servo's installation angle lets the system align it precisely: the servo's cone of rotation is combined with the installation angle and then matched against the theoretically calculated angle to the object.

This process yields the exact angle at which the servo must be controlled to achieve the desired orientation as previously determined by the geometric calculations.

For each measurement point, the algorithm calculates the geometric angle, determines the target angle in the coordinate system, checks if the target is within the servo’s physical range, and maps this to the actual servo angle. All relevant values are stored for further control and visualization.
Servo Cone Detail (0-90°)
  • Calculate the geometric angle for the point:
    geometric_angle = calculate_geometric_angle(y_pos)
  • Determine target angle in the coordinate system:
    target_coord_angle = atan2(dy, dx) (in degrees)
  • Check reachability:
    is_reachable = (COORD_MIN_ANGLE ≤ target_coord_angle ≤ COORD_MAX_ANGLE)
  • Physical servo angle:
    • If reachable:
      physical_servo_angle = linear mapping from coordinate angle to min°–max°
    • Otherwise:
      physical_servo_angle = min° or max° (nearest limit)
  • Servo coordinate angle:
    servo_coordinate_angle = geometric_angle - SERVO_NEUTRAL_ANGLE
For a single measurement point:
Calculate target angle, check reachability, map to servo angle, calculate visualization angle, and store all values in the result object.
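A minimal Python sketch of this mapping is shown below. The coordinate range, servo limits, and neutral angle are placeholder values, since the real ones come from the servo configuration:

```python
import math

# Placeholder servo/installation parameters (actual values come from the configuration).
COORD_MIN_ANGLE = -45.0     # smallest reachable angle in the coordinate system (degrees)
COORD_MAX_ANGLE = 45.0      # largest reachable angle in the coordinate system (degrees)
SERVO_MIN_ANGLE = 0.0       # physical servo limit (degrees)
SERVO_MAX_ANGLE = 90.0      # physical servo limit (degrees)
SERVO_NEUTRAL_ANGLE = 45.0  # installation (neutral) angle of the servo

def map_to_servo_angle(dx, dy, geometric_angle):
    """Map a target direction to a physical servo angle, clamping to the servo's range."""
    target_coord_angle = math.degrees(math.atan2(dy, dx))
    is_reachable = COORD_MIN_ANGLE <= target_coord_angle <= COORD_MAX_ANGLE
    if is_reachable:
        # Linear mapping from the coordinate range to the physical servo range.
        fraction = (target_coord_angle - COORD_MIN_ANGLE) / (COORD_MAX_ANGLE - COORD_MIN_ANGLE)
        physical_servo_angle = SERVO_MIN_ANGLE + fraction * (SERVO_MAX_ANGLE - SERVO_MIN_ANGLE)
    else:
        # Out of range: clamp to the nearest physical limit.
        physical_servo_angle = SERVO_MIN_ANGLE if target_coord_angle < COORD_MIN_ANGLE else SERVO_MAX_ANGLE
    servo_coordinate_angle = geometric_angle - SERVO_NEUTRAL_ANGLE
    return {
        "target_coord_angle": target_coord_angle,
        "is_reachable": is_reachable,
        "physical_servo_angle": physical_servo_angle,
        "servo_coordinate_angle": servo_coordinate_angle,
    }
```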

Different configurations


Create Scan Workflow

The user interface provides a powerful way to create and manage scan workflows.
Each step of the 3D acquisition process can be defined in detail.
A workflow consists of multiple steps that are executed sequentially to ensure a structured and repeatable scanning procedure.

Each step in the workflow is individually configurable.
This includes not only Z-axis movement, servo alignment, and camera control,
but also the integration of multiple cameras into the process.
Cameras can be added to the workflow as needed,
and each camera can be fully configured through the interface.
Camera parameters can be adjusted, active cameras for specific steps can be selected,
and their operation can be coordinated with other system components.

Future features such as lighting control can also be integrated into the workflow,
providing even more flexibility and control.
This approach allows all relevant settings for each step to be reviewed, adjusted, and optimized.

All required dependencies will be installed through the script. If anything is missing, the console will indicate what is required.


Scan config

In this window, as described in the previous chapter, you can set the variables for the scan.
The visualization mode generates a full visual evaluation of all results, while silent mode creates a CSV file based on the parameters.
The CSV file is automatically imported into the queue window.
The current command field helps you understand how to use the math engine if you want to integrate it elsewhere.

You can also select Cone details to view the configuration in detail.
Math engine: main.py

Scan Workflow Overview

  • Run Queue executes the scan process. The queue contains all scan steps and can be edited, reordered, or cleared as needed.
  • You can also import or export the queue for backup or reuse in other projects.
  • In the PhotoControl section, the Config button opens the Camera JSON Configurator, allowing you to adjust camera settings directly.

This workflow ensures a flexible and repeatable scan process, with full control over each step and camera configuration.


Camera JSON Configurator

The Camera JSON Configurator is located at the bottom right of the software interface.
It allows you to conveniently adjust camera settings via a JSON file.
Parameters such as resolution, frame rate, and exposure can be modified directly in the configurator.
Changes are immediately reflected in the camera's live view.

Cameras can currently be imported into the software either via a COM port (e.g., USB camera) or via a stream (e.g., IP camera).
The configuration stores important camera information, which can later also be used in external software such as Meshroom.
To ensure reliability, the JSON file is automatically checked for correct syntax with every change.

If there are errors with the camera connection, an error message will be displayed in the console window.
Additionally, the UI will show an error message describing the issue.
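As a rough illustration of the syntax check, the sketch below loads a camera JSON file and reports parse errors instead of crashing. The file name and field names (index, verbindung, beschreibung) are taken from the configuration functions documented later, but the exact schema may differ:

```python
import json

CONFIG_PATH = "cameras.json"  # illustrative file name

def load_camera_config(path=CONFIG_PATH):
    """Load the camera JSON and report syntax errors instead of raising."""
    try:
        with open(path, "r", encoding="utf-8") as f:
            return json.load(f)
    except json.JSONDecodeError as err:
        # Mirrors the configurator's behaviour: report the error to console and UI.
        print(f"Invalid camera JSON in {path}: line {err.lineno}, column {err.colno}: {err.msg}")
        return None

config = load_camera_config()
if config is not None:
    for cam in config.get("cameras", []):
        print(cam.get("index"), cam.get("verbindung"), cam.get("beschreibung"))
```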

Meshroom

Meshroom is an open-source software for photogrammetric 3D reconstruction,
developed by AliceVision.
It enables the automatic creation of detailed 3D models from a series of photos of an object or scene.
Meshroom provides a graphical user interface
where the entire workflow, from image selection and feature detection, through camera calibration and dense point cloud calculation,
to mesh and texture generation, is visually represented as a pipeline.

The software uses advanced image processing algorithms
and is completely free to use.
Meshroom also includes algorithms that can infer camera positions
by analyzing the overlap between images.

Meshroom Basic Pipeline
The image shows the default pipeline in Meshroom. Each node represents a processing step in the photogrammetry workflow, from image input to 3D model output.



Examples | generated with Meshroom


Screen Recording Vape Pole White


This example shows a coordinated scan of a metallic object placed on a white surface, with illumination coming from one side (as indicated by the visible shadow). Despite the typical complications associated with scanning reflective or metallic materials, the system successfully detected and reconstructed all non-metallic features of the object.

Module | Software & Parts


📋 Show Class Diagram
API Client Functions

| Function | Description | Parameters | Return Values | Endpoint |
| --- | --- | --- | --- | --- |
| make_request | Sends an HTTP request to the specified API endpoint | endpoint (str): API endpoint<br>params (dict, optional): Parameters<br>base_url (str): Base URL<br>timeout (int): Timeout in seconds | str: API response or error message | Variable |
| set_servo_angle | Sets the servo angle via the API | angle (int): Angle 0-90 degrees<br>base_url (str): Base URL | str: Confirmation message with result | /setServo |
| move_stepper | Controls the stepper motor via the API | steps (int): Number of steps<br>direction (int): Direction (1=up, -1=down)<br>speed (int, optional): Speed<br>base_url (str): Base URL | str: Confirmation message with result | /setMotor |
| set_led_color | Sets the LED color via the API | color_hex (str): Hex color code (e.g. "#FF0000")<br>base_url (str): Base URL | str: Confirmation message with result | /hexcolor |
| set_led_brightness | Sets the LED brightness via the API | brightness (int): Brightness 0-100%<br>base_url (str): Base URL | str: Confirmation message with result | /setBrightness |
| get_button_state | Queries the button status via the API | base_url (str): Base URL<br>nocache (bool): Prevent caching | str: Button status response | /getButtonState |
| is_button_pressed | Checks if button is pressed based on API response | response: API response to check | bool: True if pressed, False otherwise | - |
Device Control Functions

| Function | Description | Parameters | Return Values | Usage |
| --- | --- | --- | --- | --- |
| servo_cmd | Executes servo command directly | None (uses GUI values) | None (logs result) | Direct servo control |
| servo_auto_position_cmd | Automatic servo positioning based on Y-position | None (uses current position) | None (logs result) | Automatic alignment |
| update_servo_target_center | Updates target center for servo calculations | center_x (float): X coordinate<br>center_y (float): Y coordinate | None | Configuration |
| stepper_cmd | Executes stepper motor command directly | None (uses GUI values) | None (logs result) | Direct motor control |
| led_cmd | Sets LED color directly | None (uses GUI values) | None (logs result) | LED color control |
| bright_cmd | Sets LED brightness directly | None (uses GUI values) | None (logs result) | LED brightness control |
| button_cmd | Queries button status directly | None | None (logs status) | Button status query |
| home_func | Executes home function (reference movement) | None | None (logs result) | Initialization |
Servo Angle Calculator Functions

| Function | Description | Parameters | Return Values | Purpose |
| --- | --- | --- | --- | --- |
| calculate_servo_angle_from_position | Calculates servo angle based on Y-position | current_y_position (float): Current Y position | int: Servo angle 0-90° | Position calculation |
| calculate_targeting_angle | Calculates direct targeting angle to target center | current_y_position (float): Current Y position | tuple: (angle_in_degrees, servo_angle) | Target acquisition |
| get_angle_info | Returns detailed information about angle calculation | current_y_position (float): Current Y position | dict: Detailed angle information | Debug/Analysis |
| update_target_center | Updates target center coordinates | new_x (float): New X coordinate<br>new_y (float): New Y coordinate | None | Configuration |
| validate_servo_angle | Checks if servo angle is valid | angle (int): Angle to check | bool: True if valid (0-90°) | Validation |
Camera Configuration Functions

| Function | Description | Parameters | Return Values | Purpose |
| --- | --- | --- | --- | --- |
| load_config | Loads configuration from JSON file | None | bool: True on success | Initialization |
| save_config | Saves configuration to JSON file | None | bool: True on success | Persistence |
| create_default_config | Creates default configuration | None | None | Fallback |
| get_cameras | Gets all camera configurations | None | List[Dict]: Camera list | Query |
| get_enabled_cameras | Gets only enabled cameras | None | List[Dict]: Enabled cameras | Filtering |
| get_camera_by_index | Gets camera by index | index (int): Camera index | Optional[Dict]: Camera or None | Single query |
| add_camera | Adds new camera | index (int): Index<br>verbindung (str): Connection<br>beschreibung (str): Description<br>name (str, optional): Name | bool: True on success | Configuration |
| update_camera | Updates camera configuration | index (int): Index<br>`**kwargs`: Properties | bool: True on success | Modification |
| remove_camera | Removes camera from configuration | index (int): Index to remove | bool: True on success | Management |
| parse_verbindung | Parses connection string | verbindung (str): Connection string | Dict: Parsed connection data | Processing |
Queue Operations Functions

| Function | Description | Parameters | Return Values | Purpose |
| --- | --- | --- | --- | --- |
| add | Adds operation to queue | operation_type (str): Operation type<br>params (dict): Parameters<br>description (str): Description | None | Queue management |
| clear | Empties the queue | None | None | Reset |
| import_from_csv | Imports operations from CSV file | file_path (str): Path to CSV file | bool: True on success | Import |
| export_to_csv | Exports operations to CSV file | file_path (str): Target path | bool: True on success | Export |
| remove | Removes operation by index | index (int): Index to remove | None | Single removal |
| execute_all | Executes all operations in queue | base_url (str): API URL<br>widgets (dict): GUI widgets<br>position_var: Position<br>servo_angle_var: Servo angle<br>last_distance_value: Last distance<br>run_in_thread (bool): Threading | None | Batch execution |
| execute_single_operation | Executes single operation | operation: Operation<br>base_url (str): API URL<br>additional parameters as in execute_all | None | Single execution |
| pause_queue | Pauses queue execution | None | None | Control |
| resume_queue | Resumes queue execution | None | None | Control |
| stop_queue | Stops queue execution | None | None | Control |

📊 Show Activity Diagram

⚙️ Show Servo Execution

Example Usage

```python
# Using the API Client
from api_client import ApiClient

# Set servo to 45°
result = ApiClient.set_servo_angle(45, "http://192.168.137.7")

# Move motor 100 steps upward
result = ApiClient.move_stepper(100, 1, 50, "http://192.168.137.7")

# Set LED to red
result = ApiClient.set_led_color("#FF0000", "http://192.168.137.7")
```
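A similarly hedged sketch of the queue operations documented above; the module path and class name are illustrative, while the method names and parameters follow the table (GUI-related arguments of execute_all are omitted here):

```python
# Hypothetical usage of the queue operations; exact signatures may differ in the implementation.
from queue_operations import OperationQueue  # illustrative module and class name

queue = OperationQueue()

# Queue two scan steps: re-aim the servo, then raise the module.
queue.add("servo", {"angle": 45}, "Point camera at target center")
queue.add("stepper", {"steps": 100, "direction": 1}, "Raise module by 100 steps")

# Queues can be exported as CSV for reuse and re-imported later (e.g. from silent mode).
queue.export_to_csv("scan_run.csv")
queue.clear()
queue.import_from_csv("scan_run.csv")

# execute_all() then runs every queued operation against the module's REST API.
queue.execute_all(base_url="http://192.168.137.7", widgets={}, run_in_thread=True)
```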