# Raccoon Wiki > Open-Source Robotics Platform for Botball competitions. Raise the floor, don't lower the ceiling. ## Docs - [Raccoon CLI](/04-raccoon-cli/06-connect-disconnect/) - [Architecture Overview](/06-firmware/architecture/) - [Starting the Web IDE](/03-web-ide/00-starting-the-web-ide/) - [Architecture Overview](/02-programming/00-overview/) - [Line Following](/02-programming/algorithms/line-following/) - [Dashboard](/01-botui/00-dashboard/) - [Available Steps](/05-api-reference/01-available-steps/) - [Your First Robot Program](/02-programming/00a-first-robot-program/) - [Interface Overview](/03-web-ide/01-interface-overview/) - [SPI Communication Protocol](/06-firmware/spi-protocol/) - [Lineup](/02-programming/algorithms/lineup/) - [Project Structure](/02-programming/01-project-structure/) - [create](/04-raccoon-cli/01-create/) - [Sensors & Actors](/01-botui/01-sensors-actors/) - [connect](/04-raccoon-cli/02-connect/) - [Mission Panel (Left)](/03-web-ide/02-mission-panel/) - [Motor Control](/06-firmware/motor-control/) - [IR Sensor Calibration (K-Means)](/02-programming/algorithms/ir-calibration/) - [Robot Definition](/02-programming/02-robot-definition/) - [Programs](/01-botui/02-programs/) - [Synchronizing Two Robots](/02-programming/03a-synchronizing-robots/) - [Flowchart Editor (Center)](/03-web-ide/03-flowchart-editor/) - [run](/04-raccoon-cli/03-run/) - [Sensor Reading](/06-firmware/sensors/) - [Missions](/02-programming/03-missions/) - [Wait for Light](/02-programming/algorithms/wait-for-light/) - [Settings](/01-botui/03-settings/) - [Data Pipeline](/06-firmware/data-pipeline/) - [Step Library (Right)](/03-web-ide/04-step-library/) - [update](/04-raccoon-cli/04-update/) - [Wall Alignment](/02-programming/algorithms/wall-alignment/) - [Steps DSL](/02-programming/04-steps/) - [Build and Flash](/06-firmware/build-flash/) - [Settings Modal](/03-web-ide/05-settings-modal/) - [Custom Steps](/02-programming/05-custom-steps/) - [Stop Conditions](/02-programming/04a-stop-conditions/) - [Floating Panels](/03-web-ide/06-floating-panels/) - [Sensors](/02-programming/06-sensors/) - [Running a Mission](/03-web-ide/07-running-a-mission/) - [Drive System](/02-programming/07-drive-system/) - [Projects List](/03-web-ide/08-projects-list/) - [Odometry](/02-programming/08-odometry/) - [Servos](/02-programming/09-servos/) - [Calibration](/02-programming/10-calibration/) - [Advanced Topics](/02-programming/11-advanced/) - [UI Steps & Screens](/02-programming/12-ui-steps/) - [Configuration Reference](/02-programming/13-configuration-reference/) - [IMU](/02-programming/14-imu/) - [Making Your Robot Competition Ready](/02-programming/15-competition-ready/) --- # Raccoon CLI URL: /04-raccoon-cli/06-connect-disconnect/ # connect & disconnect These two commands connect to the Wombat and remove an existing saved connection. ## connect ```bash raccoon connect [OPTIONS] ADDRESS ``` Connect to a Wombat via its IP address or hostname. 
| Option | Description | Default value | |--------------------------|-------------------------------------|---------------| | `-p, --port PORT_NUMBER` | Pi server port | 8421 | | `-u, --user USERNAME` | SSH username | pi | | `--save / --no-save` | Save connection to project config | save | ### Examples ```bash raccoon connect 192.168.4.1 # Connect to the Wombat at 192.168.4.1 with default credentials raccoon connect -u fox1 --port 8222 192.168.4.1 # Connect to the Wombat with a custom user and port ``` ## disconnect Removes the saved connection state and disconnects the active connection to the Wombat. ```bash raccoon disconnect ``` --- # Architecture Overview URL: /06-firmware/architecture/ ## Why Two Processors? The Raspberry Pi runs Linux, which is a general-purpose preemptive operating system. Linux processes can be interrupted by the scheduler at any time. For most tasks — path planning, vision, networking, user code — this is fine. For motor control it is a problem. Back-EMF based position tracking requires the STM32 to briefly stop every motor, wait exactly 500 µs for the motor back-EMF signal to stabilise, then sample the ADC. This cycle runs every 5000 µs (200 Hz). If any step in that cycle is delayed by even a few hundred microseconds, the BEMF reading picks up noise from the PWM switching rather than the actual back-EMF, and position tracking degrades or breaks entirely. Linux cannot reliably guarantee sub-millisecond scheduling latency without a real-time patch (PREEMPT-RT), which the Wombat image does not use. The STM32 has no operating system. Its interrupt controller runs at known, deterministic latencies. Timer-triggered interrupts fire within nanoseconds of their programmed time. ## Responsibility Split ### STM32F427 (firmware, `Firmware-Stp/`) The STM32 owns everything that touches physical hardware at high frequency: - **PWM generation** for four DC motors (TIM1 and TIM8, ~25 kHz, 0–399 duty range) - **PWM generation** for four servos (TIM3 and TIM9, 50 Hz, 600–2600 µs pulse width) - **Back-EMF (BEMF) sampling** via ADC2 (eight channels, differential pair per motor) - **Motor control loop** execution triggered by each BEMF completion (200 Hz) - **Analog sensor ADC scanning** via ADC1 (six general-purpose ports + battery voltage) - **Digital input scanning** (ten general-purpose pins + one on-board button) - **IMU data acquisition and fusion** via SPI3 to the MPU-9250/AK8963 - **Position accumulation** (integrating BEMF ticks into an odometer) - **Odometry computation** (velocity and position in world frame, if kinematics config is loaded) - **SPI slave interface** to the Raspberry Pi (SPI2, DMA-driven, circular mode) The STM32 does not run any user code. It executes only the firmware compiled from `Firmware-Stp/`. ### Raspberry Pi (user code, libstp, stm32-data-reader) The Pi handles everything that is not timing-critical: - Running the `stm32-data-reader` C++ process that drives the SPI master transaction, reads the `TxBuffer` from the STM32, writes the `RxBuffer` to the STM32, and republishes all sensor data as LCM messages - Running `raccoon-transport` as an LCM message bus for inter-process communication - Hosting libstp Python bindings that user code imports - Running vision inference (YOLO), UI rendering, mission logic, calibration steps - Computing high-level motion profiles, odometry integration from BEMF, kinematics ## Hardware The STM32F427VIT6 is an ARM Cortex-M4F running at 180 MHz (HSI oscillator, PLL configured at PLLM=8, PLLN=180, PLLP=DIV2, with Over-Drive enabled). 
It provides an FPU with hardware single-precision float, which is used throughout the PID and BEMF processing code. The MCU is connected to the Pi via: - **SPI2** (slave, GPIO port B pins 12–15): the main communication channel - **UART3** (GPIO port B pins 10–11): optional debug/log output read by the Pi ## Startup Sequence When the STM32 powers on or resets: 1. `HAL_Init()` initialises the HAL tick (SysTick at 1 ms). 2. `SystemClock_Config()` configures the PLL for 180 MHz operation. 3. All peripherals are initialised: GPIO, DMA, ADC1, ADC2, SPI2, SPI3, TIM1, TIM3, TIM6, TIM8, TIM9. 4. `systemTimerStart()` starts TIM6 (the 1 µs system tick) and arms the BEMF and analog sampling schedules. 5. `initPiCommunication()` arms the first SPI2 DMA transfer — from this point the STM32 is ready to receive commands and send sensor data. 6. `initMotors()` initialises all PID controller state. 7. `setupImu()` runs the MPU-9250 self-test to calibrate gyro/accel biases, initialises the InvenSense Motion Processing Library (MPL), loads the DMP firmware, and starts the sensor fusion pipeline at 50 Hz. The main loop then runs forever, handling: - Servo position updates (10 Hz) - PID gain updates from the `updateFlags` bitmask set by the SPI interrupt - IMU orientation matrix updates - IMU polling (`readImu()`) - SPI buffer refresh (`updatingAnalogValuesInSpiBuffer()`, `updatingMotorsInSpiBuffer()`) The motor control loop itself does **not** run in the main loop — it is triggered exclusively by the ADC2 conversion-complete interrupt (`HAL_ADC_ConvCpltCallback`) which fires after each BEMF sample. --- # Starting the Web IDE URL: /03-web-ide/00-starting-the-web-ide/ ## Starting the Web IDE From inside your project folder: ```bash raccoon web ``` This starts a local server and opens the Web IDE automatically in your browser. The URL will look like: ``` http://localhost:4200/WebIDE/projects/ ``` --- # Architecture Overview URL: /02-programming/00-overview/ # Architecture Overview LibSTP is a modular robotics SDK written in C++20 with Python bindings. It runs on a Raspberry Pi inside the Wombat controller. You write mission code in Python; the heavy lifting (control loops at 100 Hz, kinematics math, motor drivers) happens in compiled C++ underneath. ## The Layer Cake Every piece of robot code sits on a specific layer. Higher layers are simpler to use but less flexible. Lower layers give you full control. ```mermaid graph TB subgraph "Your Code" M["Missions"] S["Steps (DSL)"] end subgraph "Motion Layer" MO["Motion Controller
LinearMotion, TurnMotion, ArcMotion"] end subgraph "Control Layer" DR["Drive
Velocity PID + Feedforward"] KI["Kinematics
Differential / Mecanum"] OD["Odometry
Fused / STM32"] end subgraph "Hardware Abstraction (HAL)" HAL_M["Motor"] HAL_S["Servo"] HAL_IMU["IMU"] HAL_IR["IR Sensor"] HAL_D["Digital Sensor"] HAL_A["Analog Sensor"] end subgraph "Platform" WOM["Wombat Driver
LCM-based IPC to firmware"] MOCK["Mock Driver
For testing without hardware"] end M --> S S --> MO MO --> DR DR --> KI KI --> HAL_M DR --> HAL_IMU MO --> OD OD --> HAL_IMU OD --> KI S --> HAL_S S --> HAL_IR S --> HAL_D S --> HAL_A HAL_M --> WOM HAL_S --> WOM HAL_IMU --> WOM HAL_IR --> WOM HAL_D --> WOM HAL_A --> WOM HAL_M --> MOCK HAL_S --> MOCK HAL_IMU --> MOCK style M fill:#4CAF50,color:#fff style S fill:#66BB6A,color:#fff style MO fill:#42A5F5,color:#fff style DR fill:#42A5F5,color:#fff style KI fill:#42A5F5,color:#fff style OD fill:#42A5F5,color:#fff style HAL_M fill:#AB47BC,color:#fff style HAL_S fill:#AB47BC,color:#fff style HAL_IMU fill:#AB47BC,color:#fff style HAL_IR fill:#AB47BC,color:#fff style HAL_D fill:#AB47BC,color:#fff style HAL_A fill:#AB47BC,color:#fff style WOM fill:#FF7043,color:#fff style MOCK fill:#FF7043,color:#fff ``` ## What Each Layer Does ### Missions & Steps (Green) — Your Code This is where you spend 90% of your time. Missions define *what* the robot should do. Steps are the building blocks — `drive_forward(25)`, `turn_right(90)`, `servo(arm, 160)`. You compose them into sequences without worrying about control theory. ### Motion Layer (Blue) — Trajectory Execution When you call `drive_forward(25)`, the motion layer plans a trapezoidal velocity profile (accelerate → cruise → decelerate) and feeds velocity targets to the drive system at 100 Hz. You don't interact with this layer directly unless you're tuning performance. ### Control Layer (Blue) — Velocity & Position - **Drive**: Takes a desired chassis velocity (forward, lateral, angular) and uses PID + feedforward control to achieve it - **Kinematics**: Translates chassis velocity into individual wheel speeds (and back). Knows the geometry of your drivetrain — differential (2 wheels) or mecanum (4 wheels) - **Odometry**: Tracks the robot's position on the field by integrating wheel encoder ticks and IMU heading ### Hardware Abstraction Layer (Purple) Abstract interfaces (`Motor`, `Servo`, `IMU`, `IRSensor`, etc.) that hide platform-specific details. The same mission code works whether you're running on a real Wombat or a mock driver for testing. ### Platform Layer (Orange) Concrete drivers that talk to actual hardware. The Wombat driver communicates with the STM32 firmware over LCM (a lightweight inter-process messaging protocol). The mock driver returns simulated values for testing. 
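The trapezoidal velocity profile mentioned for the motion layer above can be summarised in a few lines. The sketch below only illustrates the idea; the function name, speeds, and acceleration are example values, not LibSTP's actual implementation or defaults:

```python
import math

def trapezoidal_velocity(traveled_m: float, total_m: float,
                         v_max: float = 0.3, accel: float = 0.6,
                         v_min: float = 0.05) -> float:
    """Illustrative trapezoidal profile: accelerate, cruise, decelerate.

    Returns a velocity target (m/s) for the distance already traveled.
    """
    if traveled_m >= total_m:
        return 0.0
    # Velocity allowed by the acceleration ramp from the start (v^2 = 2*a*s)
    v_accel = math.sqrt(2.0 * accel * max(traveled_m, 0.0))
    # Velocity allowed by the deceleration ramp toward the target
    v_decel = math.sqrt(2.0 * accel * (total_m - traveled_m))
    # The profile is the minimum of ramp-up, cruise, and ramp-down,
    # floored so the robot actually starts moving
    return max(min(v_accel, v_max, v_decel), v_min)

# Example: velocity targets while driving 0.25 m (roughly drive_forward(25))
for s in (0.0, 0.05, 0.125, 0.20, 0.24):
    print(f"{s:.3f} m -> {trapezoidal_velocity(s, 0.25):.3f} m/s")
```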
## Module Map LibSTP is split into 21 independent C++ modules, each with its own headers, source, and optional Python bindings: | Module | Layer | Purpose | |--------|-------|---------| | `libstp-foundation` | Core | Types (`Pose`, `ChassisVelocity`), PID, math, logging | | `libstp-hal` | HAL | Abstract hardware interfaces | | `libstp-platforms` | Platform | Wombat and Mock drivers | | `libstp-drive` | Control | Chassis velocity controller | | `libstp-kinematics` | Control | Differential and mecanum models | | `libstp-odometry` | Control | Position tracking (fused, STM32) | | `libstp-motion` | Motion | Linear, turn, and arc motion profiles | | `libstp-sensor-ir` | Sensor | IR line sensor with calibration | | `libstp-sensor-et` | Sensor | Additional sensor types | | `libstp-servo` | Actuator | Servo control | | `libstp-button` | Input | Digital button handling | | `libstp-cam` | Sensor | Camera interface | | `libstp-calibration-store` | Support | Persistent calibration data | | `libstp-timing` | Support | Timing and synchronization | | `libstp-async` | Support | Async/coroutine utilities | | `libstp-kmeans` | Algorithm | K-means clustering | | `libstp-debug` | Support | Debug utilities | | `libstp-screen` | Output | Display rendering | | `libstp-step` | Framework | Step execution framework (Python) | | `libstp-mission` | Framework | Mission lifecycle (Python) | | `libstp-robot` | Framework | Robot integration point (Python) | ## Data Flow: What Happens When You Call `drive_forward(25)` ```mermaid sequenceDiagram participant You as Your Mission participant Step as DriveForward Step participant Motion as LinearMotion participant Drive as Drive Controller participant Kin as Kinematics participant Motor as Motors (HAL) You->>Step: drive_forward(25) Step->>Motion: start(target=0.25m) loop Every 10ms (100 Hz) Motion->>Motion: Compute velocity profile Motion->>Drive: setDesiredVelocity(vx=0.2, vy=0, wz=0) Drive->>Drive: PID + Feedforward Drive->>Kin: applyCommand(corrected velocity) Kin->>Motor: setVelocity(left_wheel, right_wheel) end Motion-->>Step: Target reached Step-->>You: Step complete, next step runs ``` ## Getting Started If you haven't set up a robot yet, follow the [Quick Start]({{< ref "00-quick-start" >}}) guide first. It walks you through creating a project, configuring hardware in the YAML file, generating code, calibrating, and running your first mission. ## Key Design Decisions **Python-first API, C++ core**: You never need to write C++ to build a competition robot. The Python API exposes everything you need. C++ is used where performance matters (100 Hz control loops, real-time odometry). **DSL with factory functions**: Instead of `DriveForward(cm=25, speed=0.8)`, you write `drive_forward(25, 0.8)`. The underlying classes are hidden — you interact with clean factory functions that return builder objects. This keeps the API surface small and discoverable. **Platform abstraction**: The HAL layer means your code doesn't depend on the Wombat hardware. You can test mission logic against mock drivers, and the same code runs on any future platform that implements the HAL interfaces. **Composition over inheritance**: Missions are composed of steps. Steps are composed of smaller steps via `Sequential` and `parallel`. There's no deep class hierarchy to learn — just combine building blocks. --- # Line Following URL: /02-programming/algorithms/line-following/ # Line Following Line following keeps the robot tracking along a black line (or its edge) using PID-controlled steering corrections. 
The system reads calibrated IR sensor probabilities each control cycle and adjusts the robot's heading or lateral position to stay on course. ## Quick Start ```python # Two-sensor: follow a line for 50 cm Defs.front.follow_right_edge(cm=50) # Single-sensor: follow the right edge of a line for 30 cm follow_line_single( sensor=Defs.front.right, side=LineSide.RIGHT, distance_cm=30, ) # Follow until another sensor triggers follow_line( left_sensor=Defs.front.left, right_sensor=Defs.front.right, distance_cm=100, ).until(on_black(Defs.rear.right)) ``` ## How It Works ### Sensor Error Signal After [calibration]({{< ref "10-calibration" >}}), each IR sensor provides `probabilityOfBlack()` — a float from 0.0 (pure white) to 1.0 (pure black), linearly interpolated between the calibrated thresholds. **Two-sensor mode** computes the error as the difference between left and right: ``` error = left.probabilityOfBlack() - right.probabilityOfBlack() ``` - Positive error → left sees more black → steer left - Negative error → right sees more black → steer right - Zero → centered on line **Single-sensor mode** tracks the line *edge* by targeting a reading of 0.5 (half on, half off): ``` error = sensor.probabilityOfBlack() - 0.5 ``` The sign is flipped when following the opposite edge (`LineSide.RIGHT`). ### PID Steering A PID controller converts the sensor error into a steering correction each cycle: ``` correction = Kp * error + Ki * integral(error) + Kd * d(error)/dt ``` | Term | Default (two-sensor) | Default (single) | Effect | |------|:--------------------:|:-----------------:|--------| | `kp` | 0.4 | 1.0 | Sharpness of response to current error | | `ki` | 0.0 | 0.0 | Eliminates steady-state drift over time | | `kd` | 0.1 | 0.3 | Dampens oscillation around the line edge | The correction is applied as an angular velocity override on top of the robot's forward motion. The result is smooth, proportional steering — not bang-bang switching. ### Velocity Profiling The profiled variants (`follow_line`, `follow_line_single`) use a **trapezoidal velocity profile** for forward motion: the robot accelerates smoothly, cruises at the target speed, then decelerates as it approaches the target distance. This prevents overshoot at the end of a line-follow segment. ## Variants The system provides two families of line-follow steps: ### Profiled (Forward/Backward) Best for straight-line following where you know the distance: ```python follow_line( left_sensor=Defs.front.left, right_sensor=Defs.front.right, distance_cm=50, speed=0.5, ) follow_line_single( sensor=Defs.front.right, side=LineSide.RIGHT, distance_cm=30, speed=0.4, ) ``` These use `LinearMotion` with trapezoidal profiling and odometry-based distance tracking. ### Directional (Any Heading + Strafe) For robots that can move laterally (omni/mecanum wheels), directional variants allow following a line while moving in any direction: ```python # Follow line while driving forward — correct with rotation directional_follow_line( left_sensor=Defs.front.left, right_sensor=Defs.front.right, heading_speed=0.5, distance_cm=40, ) # Follow line while driving forward — correct with strafing (heading stays locked) strafe_follow_line( left_sensor=Defs.front.left, right_sensor=Defs.front.right, speed=0.5, distance_cm=40, ) ``` **Angular correction** (default) applies the PID output as rotational velocity — the robot rotates to stay on the line. 
**Lateral correction** (`strafe_follow_line`) keeps the heading locked using a secondary gyro-hold PID and instead strafes left/right to correct position. This is useful when the robot must maintain a specific orientation while following a line. ## Stopping Line following stops when either condition is met (whichever comes first): 1. **Distance reached** — the robot has traveled `distance_cm` from the start 2. **Stop condition triggered** — a composable `.until()` condition fires ```python # Stop after 50 cm follow_line(..., distance_cm=50) # Stop when another sensor sees black follow_line(..., distance_cm=100).until(on_black(Defs.rear.right)) # Stop on timeout follow_line(...).until(after_seconds(5)) ``` If neither is provided, the step uses a very large default distance (effectively "follow forever until cancelled"). ## Parameters | Parameter | Type | Default | Description | |-----------|------|---------|-------------| | `left_sensor` / `right_sensor` | IRSensor | Required | Two-sensor variants: the sensors straddling the line | | `sensor` | IRSensor | Required | Single-sensor variants: the sensor tracking an edge | | `side` | `LineSide` | `LEFT` | Single-sensor: which edge of the line to track | | `distance_cm` | float | None | Distance to follow before stopping | | `speed` | float | 0.5 | Forward speed as fraction of max velocity (0.0–1.0) | | `kp`, `ki`, `kd` | float | See above | PID gains for steering correction | ## Tips 1. **Start with default PID gains.** Only tune if the robot oscillates (lower `kp`, raise `kd`) or drifts off the line (raise `kp`). 2. **Use two sensors when possible.** Two-sensor following is inherently more stable because the error signal is differential — ambient noise affects both sensors equally and cancels out. 3. **Single-sensor edge tracking works best at moderate speed.** At high speeds the sensor crosses the edge too quickly for accurate readings. 4. **Calibrate on the actual surface.** Line following accuracy depends directly on calibration quality — see [Calibration]({{< ref "10-calibration" >}}). --- # Dashboard URL: /01-botui/00-dashboard/ # Dashboard The dashboard is the main page, from which you can navigate to the sensor data, programs, and robot settings. It is the starting point after the robot boots and the place where your operations begin. ![IMG: Dashboard Screen on the Bot UI](/images/botui/botui-dashboard.png) --- # Available Steps URL: /05-api-reference/01-available-steps/ # Available Steps All steps available in the LibSTP DSL, grouped by category. These are the building blocks you use to compose robot missions. {{< dsl-steps >}} --- # Your First Robot Program URL: /02-programming/00a-first-robot-program/ # Your First Robot Program You've set up your robot, connected it, and `raccoon run` works. Now what? This page walks you through writing your first real program — step by step, one concept at a time. By the end, your robot will drive, move servos, react to sensors, and run a proper mission sequence. > **Prerequisites:** You've completed the [Quick Start]({{< ref "00-quick-start" >}}) and can run `raccoon run` successfully. Your hardware is configured in `raccoon.project.yml`. --- ## The Programming Workflow This is what you'll do over and over: 1. **Edit** your Python mission file on your laptop 2. **Run** `raccoon run` in the terminal 3. **Watch** the robot execute, read the terminal output 4. **Iterate** — tweak values, add steps, fix what went wrong That's it. 
There's no separate compile step, no firmware flashing, no manual file copying. `raccoon run` handles syncing, code generation, and execution in one command. > **Don't edit `defs.py` or `robot.py`** — these are generated from your YAML config every time you run. Your changes will be overwritten. Edit `raccoon.project.yml` (or the `config/*.yml` files) instead, then `raccoon run` regenerates them. --- ## Step 1: The Minimal Mission Open your first mission file. If you created a project with `raccoon create project`, you already have `src/missions/m01_mission.py` (or similar). If not, create one: ```bash raccoon create mission MyFirstMission ``` Replace the contents with: ```python from libstp import * from src.hardware.defs import Defs class M01MyFirstMission(Mission): def sequence(self) -> Sequential: return seq([ drive_forward(10), ]) ``` That's a complete mission. It drives the robot forward 10 cm, then stops. Run it: ```bash raccoon run ``` **What to expect:** The robot drives forward a short distance. The terminal shows timing info and step execution. If it doesn't move at all, check your motor config (ports, inversion) in the YAML. --- ## Step 2: Drive and Turn Now let's make the robot do something more interesting — drive a square: ```python class M01MyFirstMission(Mission): def sequence(self) -> Sequential: return seq([ drive_forward(25), turn_right(90), drive_forward(25), turn_right(90), drive_forward(25), turn_right(90), drive_forward(25), turn_right(90), ]) ``` **What to expect:** The robot drives a roughly square path and ends up near where it started. It won't be perfect — that's normal without calibration. ### Key drive steps | Step | What it does | |------|-------------| | `drive_forward(cm)` | Drive forward by a distance | | `drive_backward(cm)` | Drive backward by a distance | | `turn_left(degrees)` | Turn left in place | | `turn_right(degrees)` | Turn right in place | | `strafe_left(cm)` | Slide left (mecanum only) | | `strafe_right(cm)` | Slide right (mecanum only) | All of these accept an optional `speed` parameter (0.0 to 1.0, default 1.0): ```python drive_forward(25, speed=0.5) # Half speed — more controlled turn_right(90, speed=0.3) # Slow turn — more accurate ``` --- ## Step 3: Use a Servo Servos are how your robot interacts with the world — arms, claws, pushers, lifters. Let's make one move. ### Define the servo in your YAML In `raccoon.project.yml` (or `config/servos.yml`), add a servo definition: ```yaml definitions: # ... your motors, sensors, etc. my_arm_servo: type: Servo port: 0 # Which servo port on the Wombat (0-3) ``` ### Find the right angles Before writing code, you need to find the servo angles for each position: 1. Open **BotUI → Sensors & Actors** on the robot's touchscreen 2. Move the servo slider to find each position you need 3. Write down the angles (e.g., "down = 20, up = 150") ### Use it as a plain servo The simplest way — just tell it which angle to go to: ```python class M01MyFirstMission(Mission): def sequence(self) -> Sequential: return seq([ servo(Defs.my_arm_servo, 150), # Arm up drive_forward(25), servo(Defs.my_arm_servo, 20), # Arm down drive_backward(25), servo(Defs.my_arm_servo, 150), # Arm back up ]) ``` ### Use named presets (recommended) For servos you use a lot, define named positions. 
Update your YAML: ```yaml definitions: arm: type: ServoPreset servo: port: 0 positions: up: 150 down: 20 mid: 85 ``` Now in your code, you get named methods: ```python seq([ Defs.arm.up(), # Move to 150° drive_forward(25), Defs.arm.down(), # Move to 20° drive_backward(25), Defs.arm.mid(), # Move to 85° ]) ``` This is much more readable and less error-prone than remembering angle numbers. ### Control servo speed By default servos move at full speed. For gentle or controlled movements, pass a speed in degrees per second: ```python Defs.arm.down(60) # Move down at 60 deg/s (slow and controlled) Defs.arm.up(300) # Move up at 300 deg/s (fast but not instant) ``` --- ## Step 4: React to Sensors The robot has IR sensors that detect black lines on the game table, digital sensors for buttons and limit switches, and analog sensors for light and distance. ### IR sensors — drive until you see a line The most common pattern: drive forward until an IR sensor detects a black line. ```python seq([ # Drive forward until the front-right sensor hits a black line drive_forward(speed=0.8).until(on_black(Defs.front_right_ir_sensor)), # Drive a bit more to center over the line drive_forward(2), ]) ``` If you have a `SensorGroup` defined (recommended), it's even simpler: ```python seq([ Defs.front.drive_until_black(), # Drive forward → stop on black drive_forward(2), # Nudge forward ]) ``` > **IR sensors need calibration** to reliably tell black from white. Add `calibrate(distance_cm=50)` to your setup mission (covered in Step 6 below). Without it, the thresholds may be wrong for your surface. ### Digital sensors — wait for a button press ```python seq([ wait_for_button(), # Wait for the Wombat's built-in button drive_forward(25), # Then go ]) ``` ### Analog sensors — stop on a threshold ```python # Drive forward until a distance sensor reads above 2000 drive_forward(speed=0.5).until(on_analog_above(Defs.distance_sensor, 2000)) ``` --- ## Step 5: Combine Actions with Parallel So far everything runs one step after another. But often you need to do two things at the same time — like driving while moving a servo. ```python seq([ # Drive forward AND lower the arm at the same time parallel( drive_forward(30), Defs.arm.down(60), ), # Now do something with the arm down Defs.claw.closed(), Defs.arm.up(), ]) ``` A more advanced pattern — trigger an action at a specific point during a drive: ```python parallel( drive_forward(50), # Track 1: drive 50 cm seq([ # Track 2: arm control wait_until_distance(30), # wait until 30 cm traveled Defs.arm.down(), # then lower the arm ]), ) ``` `parallel()` finishes when **all tracks** are done. The framework checks that you don't use the same hardware in two tracks — you can't drive forward and turn at the same time. --- ## Step 6: Put It All Together — A Real Mission Here's a complete project with setup, main mission, and shutdown — the pattern every competition robot follows. ### Setup mission (`m00_setup_mission.py`) Runs before the match starts. 
Calibrates and homes all servos: ```python from libstp import * from src.hardware.defs import Defs class M00SetupMission(Mission): def sequence(self) -> Sequential: return seq([ # Home servos to known positions Defs.arm.up(), Defs.claw.closed(), # Calibrate distance + IR sensors # Place the robot on the board with room to drive 50 cm forward calibrate(distance_cm=50), ]) ``` ### Main mission (`m01_grab_object_mission.py`) The actual task — drive to an object, pick it up, bring it back: ```python from libstp import * from src.hardware.defs import Defs class M01GrabObjectMission(Mission): def sequence(self) -> Sequential: return seq([ # Drive to the object drive_forward(30), turn_right(45), drive_forward(20), # Pick it up Defs.arm.down(), Defs.claw.open(), drive_forward(5, speed=0.3), # Approach slowly Defs.claw.closed(120), # Close gently at 120 deg/s Defs.arm.up(), # Drive back drive_backward(5), turn_left(45), drive_backward(30), # Release Defs.arm.down(), Defs.claw.open(), Defs.arm.up(), ]) ``` ### Shutdown mission (`m99_shutdown_mission.py`) Runs when the match timer expires. Stop everything safely: ```python from libstp import * from src.hardware.defs import Defs class M99ShutdownMission(Mission): def sequence(self) -> Sequential: return seq([ Defs.arm.up(), Defs.claw.open(), fully_disable_servos(), ]) ``` ### Mission order in the YAML In `raccoon.project.yml`: ```yaml missions: - M00SetupMission: setup - M99ShutdownMission: shutdown - M01GrabObjectMission ``` The `setup` mission runs first (before the start signal). After the start signal, missions run in list order. When the `shutdown_in` timeout (in seconds) expires, the current mission is cancelled and the `shutdown` mission runs. --- ## Step 7: The Development Loop Now you know the basics. Here's how the real workflow goes: ### 1. Start simple, add complexity Don't write your whole mission at once. Start with just the first few steps: ```python return seq([ drive_forward(30), turn_right(45), ]) ``` Run it. Does it go the right distance? Is the angle right? Tweak the values, run again. ### 2. Add steps incrementally Once the first part works, add the next steps: ```python return seq([ drive_forward(30), turn_right(45), drive_forward(20), # NEW — added after verifying the above Defs.arm.down(), # NEW ]) ``` Run again. This way you always know which step is causing a problem. ### 3. Use print for debugging The `run()` step lets you print values mid-mission: ```python seq([ drive_forward(20), run(lambda robot: print(f"IR sensor: {robot.defs.front_right_ir_sensor.read()}")), drive_forward(20), ]) ``` Output appears in your terminal. ### 4. Use wait_for_button() as breakpoints Pause execution at any point to inspect the robot's state: ```python seq([ drive_forward(20), wait_for_button(), # Robot stops — press button to continue turn_right(90), wait_for_button(), # Stops again drive_forward(20), ]) ``` This is invaluable for debugging positioning. 
--- ## Common Patterns ### Drive to a line and align ```python seq([ Defs.front.drive_until_black(), # Drive forward until you see a line Defs.front.lineup_on_black(), # Square the robot on the line ]) ``` ### Follow a line for a distance ```python Defs.front.follow_right_edge(cm=50) # Follow right edge of the line for 50 cm ``` ### Follow a line until another sensor triggers ```python Defs.front.follow_right_edge(999).until( on_black(Defs.side_sensor) ) ``` ### Wall align Drive backward into a wall to reset your position/heading: ```python seq([ wall_align_backward(accel_threshold=0.3), # Back into wall until impact drive_forward(2), # Pull away from wall ]) ``` ### Grab and release ```python seq([ Defs.arm.down(), Defs.claw.open(), drive_forward(3, speed=0.3), Defs.claw.closed(120), # Slow close — controlled grip Defs.arm.up(120), # Slow lift ]) ``` --- ## See a Complete Project The code snippets on this page are intentionally minimal — each one teaches one thing. Once you understand the pieces, it helps to see them all working together in a real project. **[raccoon-example](https://github.com/htl-stp-ecer/raccoon-example)** is a clean reference robot that demonstrates everything on this page in a single, readable codebase: - `m00_setup_mission.py` — servo homing, `calibrate()`, `wait_for_button()` - `m01_navigate_to_object_mission.py` — `mark_heading_reference()`, `parallel()`, `.until()` with a sensor stop condition - `m02_collect_object_mission.py` — reusable custom step, `wall_align_backward()` - `m03_deliver_object_mission.py` — `strafe_follow_line_single()`, combined stop conditions - `steps/arm_steps.py` — `seq()` composition and `defer()` for runtime decisions It is built with the same patterns as competition robots, but written for clarity rather than speed. The competition robots ([drumbot](https://gitlab.com/fallgame2025/drumbot), [conebot](https://gitlab.com/fallgame2025/conebot), [packingbot](https://gitlab.com/fallgame2025/packingbot)) show what the platform looks like under real pressure — raccoon-example shows what the code *should* look like when you have time to write it properly. 
```bash git clone https://github.com/htl-stp-ecer/raccoon-example.git ``` --- ## What to Learn Next | Topic | When you need it | |-------|-----------------| | [Stop Conditions]({{< ref "04a-stop-conditions" >}}) | Combining sensor conditions with OR, AND, THEN | | [Custom Steps]({{< ref "05-custom-steps" >}}) | Packaging reusable behaviors as step functions | | [Calibration]({{< ref "10-calibration" >}}) | Making distances and turns accurate | | [Sensors]({{< ref "06-sensors" >}}) | All sensor types and usage patterns | | [Drive System]({{< ref "07-drive-system" >}}) | PID tuning, when the robot drifts or overshoots | | [Odometry]({{< ref "08-odometry" >}}) | Tracking position on the field | | [Servos]({{< ref "09-servos" >}}) | Advanced servo patterns, shake, slow servo | | [IMU]({{< ref "14-imu" >}}) | Using the gyroscope for heading correction | --- ## Quick Reference Card ```python # === DRIVING === drive_forward(cm) # Drive forward drive_forward(cm, speed=0.5) # Half speed drive_backward(cm) # Drive backward turn_left(degrees) # Turn in place turn_right(degrees) # Turn in place # === SERVOS === servo(Defs.my_servo, angle) # Move to angle Defs.arm.up() # Named preset (full speed) Defs.arm.down(60) # Named preset (60 deg/s) # === SENSORS === drive_forward(speed=0.8).until(on_black(Defs.front.right)) drive_forward(speed=0.5).until(on_analog_above(Defs.sensor, 2000)) Defs.front.drive_until_black() Defs.front.lineup_on_black() Defs.front.follow_right_edge(cm=50) # === CONTROL FLOW === seq([step1, step2, step3]) # Run in order parallel(track1, track2) # Run at the same time wait_for_seconds(1.0) # Pause wait_for_button() # Wait for button press wait_until_distance(cm) # Wait until driven distance run(lambda robot: print("hi")) # Run Python code # === CALIBRATION === calibrate(distance_cm=50) # Calibrate motors + sensors auto_tune() # Auto-tune PID + motion profiles ``` --- # Interface Overview URL: /03-web-ide/01-interface-overview/ ## Interface Overview ![Web IDE tour](/images/web-ide/webide-tour.gif) The Web IDE has a **three-panel layout**: | Panel | Location | Purpose | |-------|----------|---------| | **Mission Panel** | Left | List of all missions, add/remove missions | | **Flowchart Editor** | Center | Visual flowchart builder for the selected mission | | **Step Library** | Right | Searchable list of all available steps, drag into flowchart | The **top bar** shows the connected device name, IP address, and a live connection indicator (green = online, red = offline). --- # SPI Communication Protocol URL: /06-firmware/spi-protocol/ ## Overview The STM32 and Raspberry Pi communicate exclusively over SPI2. Every transaction is a full-duplex exchange: the Pi sends an `RxBuffer` (commands to the STM32) while simultaneously receiving a `TxBuffer` (sensor data from the STM32). Both buffers are raw packed C structs with no framing, no length field, and no checksum — the only integrity mechanism is a `transferVersion` field and a parity bit in the update-flags word. The canonical shared definition of both structs lives in: ``` stm32-data-reader/shared/spi/pi_buffer.h ``` This file is compiled into the `stm32-data-reader` process on the Pi and is mirrored (with minor version differences) in `Firmware-Stp/Firmware/include/Communication/pi_buffer_struct.h`. Both sides must agree on `TRANSFER_VERSION` (currently `15` in the shared header) or the STM32 ignores incoming commands. 
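The real SPI master is the C++ `stm32-data-reader` process, but the full-duplex exchange itself is easy to picture. Below is a rough sketch from the Pi side using the Python `spidev` library; the bus/device numbers, buffer length, and clock speed are assumptions for illustration, not the Wombat's actual configuration:

```python
import spidev

BUFFER_LENGTH = 256   # assumption: max(sizeof(TxBuffer), sizeof(RxBuffer))

spi = spidev.SpiDev()
spi.open(0, 0)                  # bus/device chosen for the example only
spi.max_speed_hz = 1_000_000    # example clock, not the production setting
spi.mode = 0                    # SPI mode 0 (see the physical-layer table below)

rx_buffer = bytes(BUFFER_LENGTH)                # commands for the STM32 (its RxBuffer)
tx_buffer = bytes(spi.xfer2(list(rx_buffer)))   # sensor data clocks back in the same transfer

print("transferVersion reported by the STM32:", tx_buffer[0])
spi.close()
```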
## SPI Physical Layer | Parameter | Value | |---|---| | Peripheral | SPI2 (STM32 pins PB12–PB15) | | STM32 role | Slave | | Pi role | Master | | Mode | CPOL=0, CPHA=0 (Mode 0) | | Data width | 8-bit | | Bit order | MSB first | | NSS | Hardware (PB12) | | DMA | Circular, both TX and RX (DMA1 stream 3/4, channel 0) | | Interrupt priority | Highest (preempt 0, sub 0) | The DMA runs in **circular mode**, which means that after each transfer of `BUFFER_LENGTH_DUPLEX_COMMUNICATION` bytes the DMA automatically restarts and fires `HAL_SPI_TxRxCpltCallback`. The transfer size is the larger of `sizeof(TxBuffer)` and `sizeof(RxBuffer)`, so both sides always transfer the same number of bytes. ## TxBuffer (STM32 → Pi) The `TxBuffer` contains all sensor readings that the STM32 exposes. It is declared `__attribute__((packed))` to eliminate alignment padding and ensure byte-exact layout on both ARM and aarch64/x86. ```c typedef struct __attribute__((packed)) TxBuffer_tag { uint8_t transferVersion; // Must match TRANSFER_VERSION uint32_t updateTime; // Microsecond timestamp (from TIM6) MotorData motor; // BEMF readings, positions, done flags int16_t analogSensor[6]; // 12-bit ADC readings, ports 0–5 int16_t batteryVoltage; // 12-bit ADC reading, ADC1 channel 10 uint16_t digitalSensors; // Bits 0–9: DIN0–DIN9, bit 10: onboard button ImuData imu; // Full IMU fusion output OdometryData odometry; // World-frame position/velocity (if configured) } TxBuffer; ``` ### MotorData ```c typedef struct __attribute__((packed)) { int32_t bemf[4]; // Filtered instantaneous BEMF (arbitrary units) int32_t position[4]; // Accumulated BEMF ticks (odometer) uint8_t done; // Bit N set when motor N reached its position goal } MotorData; ``` ### ImuData ```c typedef struct __attribute__((packed)) { SensorData gyro; // World-frame angular velocity (rad/s × scaling) SensorData accel; // World-frame acceleration (m/s²) SensorData compass; // Raw magnetometer counts (AK8963) SensorData linearAccel; // World-frame linear acceleration (gravity removed) SensorData accelVelocity; // Integrated linear velocity (experimental) QuaternionData dmpQuat; // DMP 6-axis quaternion (w, x, y, z) float heading; // Heading in degrees (0–360, CW from North) float temperature; // IMU die temperature (°C) } ImuData; ``` Each `SensorData` carries a `{x, y, z}` float vector and an `int8_t accuracy` (0 = least accurate, 3 = most accurate), matching the InvenSense MPL accuracy levels. ### OdometryData ```c typedef struct __attribute__((packed)) { float pos_x; // meters, world frame float pos_y; // meters, world frame float heading; // radians, CCW-positive (ENU convention) float vx; // m/s, body frame float vy; // m/s, body frame float wz; // rad/s, body frame } OdometryData; ``` Odometry is only meaningful once the Pi has sent a `KinematicsConfig` (see below). Until then, all odometry fields are zero. 
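Because both buffers are packed structs with no framing, the receiving side decodes them purely by offset. Here is a minimal sketch of unpacking the leading `TxBuffer` fields with Python's `struct` module, assuming little-endian byte order on both processors. The field names mirror the definitions above, but this is not the `stm32-data-reader` implementation:

```python
import struct

# '<' = little-endian, no padding, matching __attribute__((packed)).
# B: transferVersion, I: updateTime, 4i/4i/B: MotorData,
# 6h: analogSensor, h: batteryVoltage, H: digitalSensors
HEADER_FMT = "<B I 4i 4i B 6h h H"

def decode_tx_header(raw: bytes) -> dict:
    f = struct.unpack_from(HEADER_FMT, raw)
    return {
        "transferVersion": f[0],
        "updateTime_us": f[1],
        "bemf": f[2:6],
        "position": f[6:10],
        "done_bits": f[10],
        "analogSensor": f[11:17],
        "batteryVoltage": f[17],
        "digitalSensors": f[18],
    }

# Example on an all-zero buffer; bit 10 of digitalSensors is the on-board button
tx = decode_tx_header(bytes(struct.calcsize(HEADER_FMT)))
button_pressed = bool(tx["digitalSensors"] & (1 << 10))
```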
## RxBuffer (Pi → STM32) ```c typedef struct __attribute__((packed)) { uint8_t transferVersion; // Must match TRANSFER_VERSION uint32_t updates; // Bitmask of one-shot update flags uint8_t systemShutdown; // Bit 0: disable servos, bit 1: disable motors uint16_t motorControlMode; // 3 bits per motor (motors 0–3) int32_t motorTarget[4]; // PWM duty / velocity setpoint / speed limit int32_t motorGoalPosition[4]; // Absolute target position (MTP mode) uint8_t servoMode; // 2 bits per servo (servos 0–3) uint16_t servoPos[4]; // Servo PWM value in timer ticks (600–2600) MotorPidSettings motorPidSettings; // Full PID configuration block int8_t imuGyroOrientation[9]; // Row-major 3×3 orientation matrix int8_t imuCompassOrientation[9]; KinematicsConfig kinematics; // Inverse kinematics matrix + tick-to-rad } RxBuffer; ``` ### Motor Control Mode Encoding `motorControlMode` packs 3-bit mode values for four motors into a 16-bit word. Motor `n` occupies bits `[3n+2 : 3n]`: | Mode value | Name | Behaviour | |---|---|---| | `0b000` | `MOT_MODE_OFF` | PWM off, H-bridge pins LOW (coast) | | `0b001` | `MOT_MODE_PASSIV_BRAKE` | H-bridge pins both HIGH (short brake) | | `0b010` | `MOT_MODE_PWM` | Direct duty cycle, `motorTarget[n]` is duty (0–399) | | `0b011` | `MOT_MODE_MAV` | Velocity PID, `motorTarget[n]` is BEMF-tick velocity setpoint | | `0b100` | `MOT_MODE_MTP` | Position PID, `motorGoalPosition[n]` is target, `motorTarget[n]` caps speed | ### Servo Mode Encoding `servoMode` packs 2-bit values for four servos into a single byte: | Mode value | Name | Behaviour | |---|---|---| | `0` | `SERVO_FULLY_DISABLED` | PWM stopped, 6 V rail may be cut | | `1` | `SERVO_DISABLED` | PWM stopped, 6 V rail remains on | | `2` | `SERVO_ENABLED` | PWM active at position `servoPos[n]` | If all four servos are `SERVO_FULLY_DISABLED` the firmware cuts the 6 V servo power rail by driving `SERVO_6V0_ENABLE_Pin` low. This saves power and prevents servo jitter when no servos are in use. ### Update Flags The `updates` field in the `RxBuffer` is a bitmask of one-shot commands. The STM32 reads this field on every SPI completion and acts on bits that are set. Bit 7 (value `0x80`) is a **parity bit** — the firmware checks whether `rxBuffer.updates` is odd (parity bit set implies the lower 7 bits contain an even number of set bits; the Pi must encode this correctly) before trusting the update flags. | Bit | Constant | Effect | |---|---|---| | 0 | `PI_BUFFER_UPDATE_MOTOR_PID_SPEED` | Copy `motorPidSettings` into the velocity PID controllers | | 1 | `PI_BUFFER_UPDATE_MOTOR_PID_POS` | Copy `motorPidSettings` into the position PID controllers | | 2 | `PI_BUFFER_UPDATE_IMU_ORIENTATION` | Apply new gyro/compass orientation matrices to the MPL | | 3 | `PI_BUFFER_UPDATE_SAVE_IMU_CAL` | (Reserved — save IMU calibration) | | 4 | `PI_BUFFER_UPDATE_KINEMATICS` | Load new `KinematicsConfig` for odometry | | 5 | `PI_BUFFER_UPDATE_ODOM_RESET` | Reset odometry accumulators to zero | | 7 | `PI_BUFFER_UPDATE_PARITY_BIT` | Parity check bit | ### Shutdown Safety If `systemShutdown & SHUTDOWN_MOTOR` (bit 1) is set in the `RxBuffer`, the STM32 firmware calls `motors_forceOff()` immediately in the SPI completion callback, before any main-loop processing. This is the primary hardware safety interlock: the Pi sets this bit on process exit, and the STM32 guarantees motors stop within one SPI cycle (< 1 ms at normal SPI speeds) even if the Pi process crashes. 
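The bit packing for `motorControlMode`, `servoMode`, and the `updates` parity bit can be expressed compactly. The sketch below shows how the Pi side might assemble these fields; the constants mirror the tables above, and the parity rule follows the description on this page (an illustration, not the actual libstp or stm32-data-reader code):

```python
# Motor control modes: 3 bits per motor, motor n occupies bits [3n+2:3n]
MOT_MODE_OFF, MOT_MODE_PASSIV_BRAKE, MOT_MODE_PWM, MOT_MODE_MAV, MOT_MODE_MTP = range(5)

# Servo modes: 2 bits per servo
SERVO_FULLY_DISABLED, SERVO_DISABLED, SERVO_ENABLED = range(3)

PI_BUFFER_UPDATE_KINEMATICS = 1 << 4
PI_BUFFER_UPDATE_PARITY_BIT = 1 << 7

def pack_motor_control_mode(modes):
    """Pack four 3-bit motor modes into the 16-bit motorControlMode word."""
    word = 0
    for n, mode in enumerate(modes):
        word |= (mode & 0b111) << (3 * n)
    return word

def pack_servo_mode(modes):
    """Pack four 2-bit servo modes into the servoMode byte."""
    byte = 0
    for n, mode in enumerate(modes):
        byte |= (mode & 0b11) << (2 * n)
    return byte

def with_parity(update_flags):
    """Set bit 7 so the updates byte has an odd number of set bits overall,
    matching the parity rule described above (assumed interpretation)."""
    flags = update_flags & 0x7F
    if bin(flags).count("1") % 2 == 0:
        flags |= PI_BUFFER_UPDATE_PARITY_BIT
    return flags

# Example: motor 0 velocity PID, motor 1 position PID, rest off;
# servo 0 enabled, others fully disabled; request a kinematics reload
motor_mode = pack_motor_control_mode([MOT_MODE_MAV, MOT_MODE_MTP, MOT_MODE_OFF, MOT_MODE_OFF])
servo_mode = pack_servo_mode([SERVO_ENABLED] + [SERVO_FULLY_DISABLED] * 3)
updates = with_parity(PI_BUFFER_UPDATE_KINEMATICS)
```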
## SPI Interrupt Handling The SPI completion callback (`HAL_SPI_TxRxCpltCallback`) runs at interrupt priority 0 (highest possible). On every completion: 1. Timestamps `txBuffer.updateTime` with the current microsecond counter. 2. Reads digital inputs and writes them to `txBuffer.digitalSensors`. 3. Validates `rxBuffer.transferVersion`. If it does not match `TRANSFER_VERSION`, commands are silently ignored. 4. Validates the parity bit in `rxBuffer.updates`. If valid, copies the flags into `updateFlags` for main-loop processing. 5. Calls `sanitizeMotorCommandsForShutdown()` to enforce the motor shutdown flag immediately. The DMA then automatically restarts the transfer for the next cycle. ## BEMF Calibration in the Protocol The shared header previously included per-motor BEMF calibration fields (`bemfScale`, `bemfOffset`, `nominalVoltageAdc`) in the `RxBuffer`. These have been removed in `TRANSFER_VERSION 15` in favour of the `KinematicsConfig` approach where the Pi sends the full wheel-geometry configuration once at startup. Check `stm32-data-reader/shared/spi/pi_buffer.h` for the current definitive layout. --- # Lineup URL: /02-programming/algorithms/lineup/ # Lineup (Line Alignment) Lineup aligns the robot square on a black line. Unlike iterative correction approaches that repeatedly measure and adjust, the lineup algorithm computes the exact correction angle in a **single drive pass** using basic trigonometry. This makes it fast — alignment completes with almost no time lost. ## Quick Start ```python # Dual sensor (recommended): both sensors find the line forward_lineup_on_black(Defs.front.left, Defs.front.right) backward_lineup_on_black(Defs.front.left, Defs.front.right) # Via SensorGroup shortcut Defs.front.lineup_on_black() # Single sensor: when only one sensor is available forward_single_lineup( Defs.front.right, correction_side=CorrectionSide.RIGHT, ) ``` ## Dual-Sensor Lineup The dual-sensor algorithm is the recommended approach. It uses two IR sensors (left and right) and exploits the **stagger** between their line crossings to compute the robot's angular error. ### How It Works As the robot drives forward across a line, the two sensors don't hit the line at the same time (unless already perfectly aligned). The sensor closer to the line crosses first. The robot keeps driving until the second sensor crosses. The distance driven between the two crossings — the **stagger** — directly encodes the alignment error. ``` ┊ line L ┊ R ●────┤────● Robot approaching at angle ┊ ┊ ─────┼────── ┊ L hits line first ┊ then robot drives distance d ┊ R hits line ┊ angle = atan(d / sensor_gap) ``` The correction angle is computed from pure geometry: ``` angle = atan(stagger_distance / sensor_gap) ``` Where `sensor_gap` is the known physical distance between the two sensors on the robot. ### Algorithm Phases The full `forward_lineup_on_black` sequence has three phases: 1. **Measure** — Drive forward at constant speed. Record odometry when the first sensor crosses the black threshold, then when the second one does. The difference is the stagger distance. 2. **Turn** — Execute a point turn by the computed angle. The sign is determined by which sensor hit first (left first → turn right, right first → turn left). 3. **Clear** — Drive forward at reduced speed until both sensors see white again, leaving the robot just past the line, squared up and ready. All three phases execute back-to-back with no pauses. The measurement phase happens at full approach speed — there is no need to slow down. 
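The correction angle from the stagger measurement is pure geometry, which is why the dual-sensor lineup needs no tuning. A minimal sketch of the computation (illustrative only; the real step also chooses the turn direction and runs the clear phase):

```python
import math

def lineup_correction_deg(stagger_cm: float, sensor_gap_cm: float) -> float:
    """Angle the robot must turn to square up on the line.

    stagger_cm: distance driven between the first and second sensor crossing.
    sensor_gap_cm: known lateral distance between the two IR sensors.
    """
    return math.degrees(math.atan2(abs(stagger_cm), sensor_gap_cm))

# Example: sensors 8 cm apart, 2.5 cm of stagger -> about 17 degrees of skew.
# Which way to turn comes from which sensor crossed first:
# left first -> turn right, right first -> turn left.
angle = lineup_correction_deg(2.5, 8.0)
```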
### Why This Works So Well - **No iteration.** One pass through the line gives the exact correction angle. No PID convergence, no repeated approaches. - **No tuning.** The algorithm is pure geometry — there are no gains to adjust. - **Fast.** At 1.0 m/s approach speed, a typical lineup completes in well under a second. - **Robust.** Works at any initial angle. If the robot is already aligned, the stagger is ~0 and no turn is needed. ## Single-Sensor Lineup When only one sensor is available, the algorithm uses a different geometric approach: it measures the **apparent line width** as the sensor crosses the line. ### How It Works A line of known width `W` appears wider when crossed at an angle: ``` apparent_width = W / cos(angle) ``` So the correction angle can be recovered: ``` angle = acos(actual_width / apparent_width) ``` The algorithm drives forward, recording odometry at the **leading edge** (sensor confidence rises above `entry_threshold`) and **trailing edge** (confidence drops below `exit_threshold`). The distance between them is the apparent width. ### Limitation A single sensor cannot determine which *direction* the robot is skewed — only the magnitude of the skew. You must specify `correction_side` (LEFT or RIGHT) to tell the algorithm which way to turn. If the correction side is wrong, the robot will turn further out of alignment. Angles below 1 degree are ignored (no correction applied) to avoid unnecessary micro-turns. ## Strafe Lineup For robots with omnidirectional drive (mecanum or omni wheels), `strafe_lineup_on_black` works the same way as the dual-sensor algorithm but uses lateral movement instead of forward motion. The front and back sensors detect the line while the robot strafes sideways. ```python strafe_left_lineup_on_black(Defs.left.front, Defs.left.back) strafe_right_lineup_on_black(Defs.right.front, Defs.right.back) ``` ## Parameters | Parameter | Type | Default | Description | |-----------|------|---------|-------------| | `left_sensor` / `right_sensor` | IRSensor | Required | Dual-sensor: the two sensors that will cross the line | | `detection_threshold` | float | 0.7 | Sensor confidence (0.0–1.0) needed to register a line hit | | `forward_speed` | float | 1.0 | Approach speed in m/s (negative for backward) | **Single-sensor additional parameters:** | Parameter | Type | Default | Description | |-----------|------|---------|-------------| | `sensor` | IRSensor | Required | The single sensor crossing the line | | `line_width_cm` | float | 5.0 | Known physical width of the line | | `entry_threshold` | float | 0.7 | Confidence to detect leading edge | | `exit_threshold` | float | 0.3 | Confidence to detect trailing edge (should be lower than entry) | | `correction_side` | `CorrectionSide` | Required | Direction to turn for correction (LEFT or RIGHT) | ## Tips 1. **Prefer dual-sensor lineup.** It's faster, more accurate, and doesn't require you to know the correction direction. 2. **Mount sensors as far apart as possible.** A wider sensor gap gives better angle resolution — small stagger differences map to larger, more measurable distances. 3. **Use the SensorGroup shortcut** (`Defs.front.lineup_on_black()`) for the most common case. 4. **Lineup works at full speed.** There's no need to slow down before hitting the line. The measurement is taken during continuous motion. --- # Project Structure URL: /02-programming/01-project-structure/ # Project Structure Every robot project follows the same file layout. 
This page explains what each file does and how the pieces fit together. ## See It First Before reading the rest of this page, it helps to have a real project open in front of you. **[raccoon-example](https://github.com/htl-stp-ecer/raccoon-example)** is a clean reference robot built specifically for documentation purposes — no competition pressure, no half-finished experiments. It demonstrates every concept on this page in a single, readable project: ```bash git clone https://github.com/htl-stp-ecer/raccoon-example.git ``` Open it alongside this page. The file layout, naming conventions, and patterns described below all map directly to files in that repository. --- ## Creating a Project Use the Raccoon CLI to scaffold a new project: ```bash raccoon create project MyRobot ``` This generates the full directory layout, YAML configuration, and starter Python files. See [Raccoon CLI]({{< ref "04-raccoon-cli" >}}) for details on the CLI commands. > **Note:** You can also create the project files manually, but the CLI scaffold saves significant setup time and ensures the correct structure. ## Directory Layout A typical project generated by the Raccoon CLI looks like this: ``` my-robot/ ├── raccoon.project.yml # Project configuration (hardware, drive, missions) ├── racoon.calibration.yml # Sensor and motor calibration data (note: single 'c') ├── config/ # Split config files (included by project.yml) │ ├── robot.yml │ ├── hardware.yml │ ├── missions.yml │ └── connection.yml └── src/ ├── __init__.py # Required for Python imports ├── main.py # Entry point — creates Robot, calls start() ├── hardware/ │ ├── __init__.py # Required for Python imports │ ├── defs.py # Hardware definitions (motors, servos, sensors) │ └── robot.py # Robot class (kinematics, drive, odometry, missions) ├── missions/ │ ├── __init__.py # Required for Python imports │ ├── m00_setup_mission.py # Runs before the match (calibration, homing) │ ├── m01_first_task.py # First autonomous mission │ ├── m02_second_task.py # Second autonomous mission │ └── m99_shutdown.py # Cleanup after timeout ├── service/ │ ├── __init__.py # Required for Python imports │ └── my_service.py # Stateful logic shared across missions └── steps/ ├── __init__.py # Required for Python imports └── my_custom_steps.py # Reusable custom step functions ``` > **Important:** Every directory under `src/` needs an `__init__.py` file (can be empty) for Python imports to work. The Raccoon CLI creates these automatically, but if you add directories manually, don't forget them or you'll get `ModuleNotFoundError`. ## Key Files Explained ### `raccoon.project.yml` — Project Configuration This is the central configuration file. It defines your robot's hardware, drive parameters, physical dimensions, and mission list. The Raccoon CLI and BotUI both read this file. ```yaml name: ConeBot uuid: a1b2c3d4-e5f6-7890-abcd-ef1234567890 robot: !include 'config/robot.yml' missions: !include 'config/missions.yml' definitions: !include 'config/hardware.yml' connection: !include 'config/connection.yml' ``` The `!include` tags split configuration across multiple files to keep things manageable. 
You can also put everything in a single file — here's what the expanded version looks like (from a real project): ```yaml name: PackingBot uuid: 322a6cb2-b54d-4ad2-bb55-14d157403ae7 robot: shutdown_in: 120 # Emergency stop after 120 seconds drive: kinematics: type: mecanum # or "differential" wheel_radius: 0.0375 track_width: 0.2 wheelbase: 0.125 front_left_motor: front_left_motor front_right_motor: front_right_motor back_left_motor: rear_left_motor back_right_motor: rear_right_motor # ... PID and velocity config ... physical: width_cm: 23.5 length_cm: 29.6 rotation_center: x_cm: 11.75 y_cm: 18.5 definitions: front_left_motor: type: Motor port: 1 inverted: false imu: type: IMU front_right_light_sensor: type: IRSensor port: 4 # ... more hardware ... missions: - M01SetupMission: setup - M99ShutdownMission: shutdown connection: pi_address: 192.168.100.237 pi_port: 8421 pi_user: pi ``` ### `racoon.calibration.yml` — Calibration Data Stores calibration values measured on the actual robot. This file is updated automatically when you run calibration steps. Don't edit it by hand — use the calibration workflow instead (see [Calibration]({{< ref "10-calibration" >}})). ```yaml root: ir-calibration: default: white_tresh: 1469.84 black_tresh: 2490.58 default_port4: white_tresh: 543.45 black_tresh: 3647.12 ``` ### `src/main.py` — Entry Point The simplest file in the project. Creates the robot and starts execution: ```python from src.hardware.robot import Robot robot = Robot() if __name__ == "__main__": robot.start() ``` `robot.start()` handles everything: initializing hardware, running the setup mission, waiting for the start signal, executing main missions in sequence, and running the shutdown mission when time expires. ### `src/hardware/defs.py` — Hardware Definitions (Generated) > **This file is auto-generated** from the `definitions:` section of `raccoon.project.yml` by the Raccoon CLI. **Never edit this file by hand** — it gets overwritten every time code generation runs. Always make changes in the YAML file instead. Here's what the generated code looks like: ```python from libstp import ( DigitalSensor, IRSensor, Motor, MotorCalibration, SensorGroup, Servo, ServoPreset, ) from libstp import IMU as Imu class Defs: # IMU and start button imu = Imu() button = DigitalSensor(port=10) # Drive motors front_left_motor = Motor( port=0, inverted=False, calibration=MotorCalibration( ticks_to_rad=1.947e-05, vel_lpf_alpha=1.0 ), ) front_right_motor = Motor( port=1, inverted=False, calibration=MotorCalibration( ticks_to_rad=1.689e-05, vel_lpf_alpha=1.0 ), ) # Sensors front_right_ir = IRSensor(port=0) front = SensorGroup(right=front_right_ir) # Servos with named positions claw = ServoPreset(Servo(port=2), positions={"closed": 135, "open": 30}) arm = ServoPreset(Servo(port=1), positions={"up": 32, "down": 160}) # List of analog sensors for calibration analog_sensors = [front_right_ir] ``` See [Robot Definition]({{< ref "02-robot-definition" >}}) for details on each component type. ### `src/hardware/robot.py` — Robot Class (Generated) > **This file is also auto-generated** from the `robot:` section of `raccoon.project.yml`. The kinematics type, wheel dimensions, PID gains, axis constraints, physical dimensions, and sensor positions all come from the YAML. **Never edit this file by hand** — it gets overwritten every time code generation runs. Here's what the generated code looks like: ```python from libstp import ( DifferentialKinematics, Drive, FusedOdometry, GenericRobot, PidGains, Feedforward, # ... 
more imports ...
)

from src.hardware.defs import Defs
from src.missions.m00_setup_mission import M00SetupMission
from src.missions.m01_first_mission import M01FirstMission


class Robot(GenericRobot):
    defs = Defs()
    kinematics = DifferentialKinematics(
        left_motor=defs.front_left_motor,
        right_motor=defs.front_right_motor,
        wheel_radius=0.0345,
        wheelbase=0.16,
    )
    drive = Drive(kinematics=kinematics, vel_config=..., imu=defs.imu)
    odometry = FusedOdometry(imu=defs.imu, kinematics=kinematics, ...)

    shutdown_in = 120  # seconds
    missions = [M01FirstMission()]
    setup_mission = M00SetupMission()
    shutdown_mission = None
```

See [Robot Definition]({{< ref "02-robot-definition" >}}) for the full breakdown.

## How It All Connects

```mermaid
graph LR
    YAML["raccoon.project.yml"] -->|read by| CLI["Raccoon CLI"]
    YAML -->|read by| UI["BotUI"]
    CLI -->|generates| DEFS["defs.py"]
    CLI -->|generates| ROBOT["robot.py"]
    DEFS -->|imported by| ROBOT
    ROBOT -->|imported by| MAIN["main.py"]
    ROBOT -->|references| MISSIONS["missions"]
    MISSIONS -->|may use| STEPS["steps"]
    MAIN -->|"robot.start()"| RUN["Runtime"]
    style YAML fill:#FFA726,color:#fff
    style MAIN fill:#4CAF50,color:#fff
    style MISSIONS fill:#66BB6A,color:#fff
    style STEPS fill:#66BB6A,color:#fff
```

The YAML configuration is the **single source of truth** for your robot's hardware and drive setup. The Raccoon CLI generates `defs.py` and `robot.py` from it automatically — these files are overwritten on every code generation run.

**Never edit `defs.py` or `robot.py` by hand.** If you need to change motors, servos, kinematics, or PID config, edit `raccoon.project.yml` and let the CLI regenerate the Python files.

Mission files (`missions/*.py`) and custom step files (`steps/*.py`) are yours to write and edit freely — code generation does not touch them.

## Naming Conventions

### Classes — PascalCase (Java-style)

All class names use PascalCase with no underscores:

| What | Example | Notes |
|------|---------|-------|
| Mission classes | `M01DriveToConeMission` | Prefix matches file number |
| Robot class | `Robot` | Always `Robot` |
| Hardware defs | `Defs` | Always `Defs` |
| Custom step classes | `WaitForAnalogRange` | Descriptive, verb-first |
| Custom screens | `CalibrationScreen` | Suffix with `Screen` |
| Services | `HeadingReferenceService` | Suffix with `Service` |
| Result types | `CalibrationResult` | Suffix with `Result` |

### Functions and Variables — snake_case (Python-style)

All function names, variable names, and hardware definition attributes use lowercase with underscores:

| What | Example | Notes |
|------|-------------------------------------|-------|
| Step DSL functions | `drive_forward()`, `turn_right()` | Verb-first, describes the action |
| Custom step functions | `down_cone_container()` | Same pattern as built-in steps |
| Hardware attributes | `front_left_motor` | Position + side + component type |
| Sensor groups | `front`, `rear` | Short — they're used constantly |
| Servo presets | `claw`, `pom_arm`, `shield` | Named by the mechanism, not the port |
| Servo positions | `"open"`, `"closed"`, `"above_pom"` | Descriptive of the physical state |
| Calibration sets | `"default"`, `"upper"` | Named by the surface/context |
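Put together, a hypothetical mission using these conventions might look like the sketch below. All names are illustrative — `cone_container_steps` and `down_cone_container` mirror the examples in the tables above, and the `libstp` imports assume the names used by the generated mission template are importable directly:

```python
# Hypothetical example only: names mirror the convention tables above and are
# not part of a real project. Assumes Mission/Sequential/seq are importable
# from libstp, as the generated mission template's wildcard import suggests.
from libstp import Mission, Sequential, seq

from src.hardware.defs import Defs                              # class: Defs (generated)
from src.steps.cone_container_steps import down_cone_container  # custom step: snake_case, verb-first


class M02StackConesMission(Mission):          # mission class: M{NN} prefix, PascalCase
    def sequence(self) -> Sequential:
        return seq([
            down_cone_container(),            # custom step function
            Defs.claw.open(),                 # servo preset moving to a named position
        ])
```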
### File Names — snake_case

All Python files use lowercase with underscores. No hyphens, no spaces:

| What | Pattern | Example |
|------|---------|---------|
| Mission files | `m{NN}_{description}_mission.py` | `m01_drive_to_cone_mission.py` |
| Setup mission | `m00_setup_mission.py` | Always `m00` |
| Shutdown mission | `m99_shutdown_mission.py` | Always `m99` |
| Custom step files | `{description}_steps.py` | `cone_container_steps.py` |
| Hardware defs | `defs.py` | Always `defs.py` (generated) |
| Robot class | `robot.py` | Always `robot.py` (generated) |
| Entry point | `main.py` | Always `main.py` |

### Mission Numbering

Missions use a two-digit prefix (`m00`–`m99`) as a **naming convention** for readability:

- `m00` — Setup mission (calibration, servo homing) by convention.
- `m01`–`m98` — Main missions.
- `m99` — Shutdown mission by convention.

The numbering is purely for human readability and file sorting — it does **not** control execution order. Execution order is determined by the `missions:` list in `raccoon.project.yml`:

```yaml
missions:
  - M01SetupMission: setup        # Runs as setup mission
  - M02GrabPomsMission            # First main mission
  - M03DriveToBasketMission       # Second main mission
  - M99ShutdownMission: shutdown  # Runs as shutdown mission
```

The YAML list order defines the sequence. You could name a mission `m42_foo.py` and put it first in the list — it would run first regardless of the number.

### Hardware Naming Pattern

Hardware attributes in `Defs` follow a consistent `{position}_{side}_{type}` pattern:

```
front_left_motor          # position: front, side: left, type: motor
front_right_ir_sensor     # position: front, side: right, type: ir_sensor
rear_right_light_sensor   # position: rear, side: right, type: light_sensor
cone_arm_servo            # mechanism: cone_arm, type: servo
cone_arm_down_button      # mechanism: cone_arm, state: down, type: button
wait_for_light_sensor     # purpose: wait_for_light, type: sensor
```

Be descriptive — when you're debugging at 2am before competition, `front_left_motor` is much clearer than `motor0`.

### YAML Keys

YAML configuration keys use `snake_case` to match the Python attributes they generate:

```yaml
definitions:
  front_left_motor:           # becomes Defs.front_left_motor
    type: Motor
    port: 1
  front_right_light_sensor:   # becomes Defs.front_right_light_sensor
    type: IRSensor
    port: 4
```

---
# create
URL: /04-raccoon-cli/01-create/

# raccoon create

Create can be used to either create an entirely new project or add a mission to an existing project.

---

## raccoon create project

```bash
raccoon create project
raccoon create project --path /path/to/parent/dir
raccoon create project --no-wizard
```

- Scaffolds a complete project directory
- Initializes a local git history
- Unless `--no-wizard` is passed, immediately launches the setup wizard to configure hardware

### Options

| Option | Default | Description |
|--------|---------|-------------|
| `--path PATH` | current directory | Parent directory in which to create the project folder |
| `--no-wizard` | off | Skip the setup wizard. Run `raccoon wizard` later to configure the project. (Not recommended) |

### What it does

1. Creates a new directory `` at the target path
2. Renders the project scaffold templates into the directory (see structure below)
3. Assigns a unique UUID to the project in `raccoon.project.yml`
4. Initializes a local git repository with an initial snapshot commit
5. Unless `--no-wizard`: prompts for a Pi connection and launches the [setup wizard]({{< ref "/04-raccoon-cli" >}})
6.
Opens PyCharm if available and prints SSH interpreter setup instructions ### Generated project structure ``` / ├── config/ │ ├── connection.yml # Pi address, port, SSH user │ ├── hardware.yml # Hardware definitions (motors, servos, sensors) │ ├── missions.yml # Ordered list of missions to run │ ├── motors.yml # Per-motor port and calibration settings │ ├── robot.yml # Drivetrain, odometry, motion PID, physical dims │ └── servos.yml # Servo port and named-position settings ├── src/ │ ├── __init__.py │ ├── main.py # Entry point — instantiates Robot and calls start() │ ├── hardware/ │ │ └── __init__.py # Populated by raccoon codegen │ ├── missions/ │ │ ├── __init__.py │ │ └── setup_mission.py # Pre-built setup mission (calibrate + wait for button) │ └── steps/ │ └── __init__.py # Place for reusable step helpers ├── run.sh # Convenience script for local execution ├── upload.sh # Convenience script for uploading to the robot ├── raccoon.project.yml # Main project config — name, UUID, includes ├── .gitignore └── .raccoonignore # Files excluded from raccoon sync ``` ### Config files **`raccoon.project.yml`** — the root config file that raccoon reads for every command. It contains the project name and UUID, and pulls in the four subsections via YAML includes: ```yaml name: MyRobot uuid: 3f2a1b9c-... robot: !include 'config/robot.yml' missions: !include 'config/missions.yml' definitions: !include 'config/hardware.yml' connection: !include 'config/connection.yml' ``` --- **`config/connection.yml`** — stores the Pi's IP address, port, and SSH username. Written automatically by `raccoon connect`. --- **`config/hardware.yml`** — lists every hardware component (motors, servos, IMU, sensors). The `_motors` and `_servos` keys merge in the separate `motors.yml` and `servos.yml` files. --- **`config/motors.yml`** — one entry per drive motor with port, inversion flag, and calibration data (ticks-to-radians conversion, velocity low-pass filter alpha). Populated by `raccoon wizard` and `raccoon calibrate`. --- **`config/robot.yml`** — drivetrain kinematics (type, wheel radius, wheelbase), per-axis velocity PID/feedforward controllers, odometry type, motion PID tuning, and the robot's physical dimensions and start pose. --- **`config/missions.yml`** — the ordered list of missions the robot executes. Entries can include an optional mode key: ```yaml - SetupMission: setup - M01DriveToConeMission - M02CollectConeMission ``` **`config/servos.yml`** — servo port assignments and named positions (e.g. `up: 30`, `down: 160`). Commented-out examples are included in the scaffold. ### Example ```bash raccoon create project ConeBot # Creates ./ConeBot/, scaffolds all files, initializes git, launches wizard raccoon create project ConeBot --path ~/robots # Creates ~/robots/ConeBot/ raccoon create project ConeBot --no-wizard # Scaffolds only — run 'cd ConeBot && raccoon wizard' to configure later ``` --- ## raccoon create mission ```bash raccoon create mission ``` Must be run from inside a project directory (any subdirectory works — raccoon searches upward for `raccoon.project.yml`). ### What it does 1. Converts the name to `snake_case` and `PascalCase` 2. Creates `src/missions/_mission.py` from the mission template 3. Appends the mission class name to the `missions` list in `config/missions.yml` 4. 
Inserts the corresponding import into `src/main.py`

### Generated mission file

```python
from libstp import *

from src.hardware.defs import Defs


class DriveToConeMission(Mission):
    def sequence(self) -> Sequential:
        return seq([])
```

Fill in the `seq([...])` body with libstp steps. See the [Steps documentation]({{< ref "/02-programming/04-steps" >}}) for available steps.

### Naming conventions

raccoon accepts any casing — `PascalCase`, `kebab-case`, or `snake_case` — and normalises it automatically:

| Input | Class name | File |
|-------|-----------|------|
| `DriveToZone` | `DriveToZoneMission` | `drive_to_zone_mission.py` |
| `drive-to-zone` | `DriveToZoneMission` | `drive_to_zone_mission.py` |
| `drive_to_zone` | `DriveToZoneMission` | `drive_to_zone_mission.py` |

The `Mission` suffix is added automatically. If you accidentally include it in the name (e.g. `DriveToZoneMission`), raccoon strips the duplicate and prints a note.

**Recommended: use an `M##` prefix** to encode the execution order directly in the name. This keeps both the class names and filenames self-documenting:

```bash
raccoon create mission M01DriveToZone
raccoon create mission M02CollectSamples
raccoon create mission M03ReturnToBase
```

This produces:

```
src/missions/
├── m01_drive_to_zone_mission.py      → class M01DriveToZoneMission
├── m02_collect_samples_mission.py    → class M02CollectSamplesMission
└── m03_return_to_base_mission.py     → class M03ReturnToBaseMission
```

And in `config/missions.yml`:

```yaml
- SetupMission: setup
- M01DriveToZoneMission
- M02CollectSamplesMission
- M03ReturnToBaseMission
```

> The `M00` slot is reserved by convention for the setup mission that ships with every new project.

### Example

```bash
cd ConeBot
raccoon create mission M01DriveToGate
# Created: src/missions/m01_drive_to_gate_mission.py
# Added: M01DriveToGateMission to config/missions.yml
# Imported in src/main.py

raccoon create mission M02CollectBall
# Created: src/missions/m02_collect_ball_mission.py
```

---
# Sensors & Actors
URL: /01-botui/01-sensors-actors/

# Sensors & Actors

The first thing you see is a screen with multiple boxes. Each box represents either a single sensor or a group of sensors.

![IMG: Selection of graphs for each sensor and actor](/images/botui/sensors/botui-sensors.png)

## Graph views

Graph views display sensor data over time in a visual format. They are used to monitor and analyze values measured by different sensors connected to the system.

The graph below shows mock data from an analog sensor. The x-axis represents the elapsed time in seconds, while the y-axis represents the measured sensor value.

![IMG: Example graph of an analog sensor with mock data](/images/botui/sensors/botui-sensors-graphview.png)

Graph views are available on several pages, including:

- Analog
- Digital
- Gyro
- Accel
- Magneto
- Orientation
- Temperature
- Battery

## Slider Controls

Sliders allow users to set specific values for different components. For motors, sliders can be used to adjust power and velocity, while for servos they control the angle. The image below shows an example of a motor power control.

![IMG: Example slider controller from motor power](/images/botui/sensors/botui-sensors-slider.png)

For motors, it is also possible to open a graph view that displays **BEMF** (back electromotive force) and the **position over time**. As shown in the image below, you can also enter a target position for the motors at different velocities.
![IMG: Adjust the motor](/images/botui/sensors/botui-sensors-motor.png) --- # connect URL: /04-raccoon-cli/02-connect/ # raccoon connect ```bash raccoon connect ``` Connects raccoon to your Wombat robot. ## What it does 1. Checks that the robot's API server is reachable at `:8421` 2. Attempts SSH key authentication 3. If that fails, offers to set up key authentication automatically: - Prompts for the Wombat's password - Generates an SSH key pair and copies the public key to the robot - Retrieves an API access token from the server 4. Saves the connection config to the project's `config/connection.yml` After a successful connect, all other raccoon commands (`run`, `sync`, `codegen`, etc.) use the saved connection automatically. ## Default credentials The Wombat ships with: - **User:** `pi` - **Password:** `raspberrypi` > **Security:** Change the default password once your setup is complete. ## When to re-run - First time setting up a project - When the robot's IP address changes (common when switching between networks or connection modes) - When SSH authentication stops working ## Troubleshooting **"Connection refused" / API unreachable** Make sure your laptop is on the same network as the robot and the IP shown in BotUI matches what you're using. **SSH authentication keeps failing** Run `raccoon disconnect` then `raccoon connect ` again to redo the key setup. --- # Mission Panel (Left) URL: /03-web-ide/02-mission-panel/ ## Mission Panel (Left) Lists every mission in your project. Each mission shows its execution order number and name. The `SetupMission` is always present (order `-1`) and runs before the match starts. - Click a mission to open it in the flowchart editor - Click **+ Add Mission** at the bottom to create a new mission - Drag missions to reorder them (order determines execution sequence during the match) --- # Motor Control URL: /06-firmware/motor-control/ ## PWM Generation Each DC motor is driven by an H-bridge. The STM32 generates a PWM signal for the enable pin of the H-bridge and drives two direction pins (D0, D1) to control direction. Four timer peripherals generate the PWM signals: | Timer | Motors | Channels | PWM frequency | |---|---|---|---| | TIM1 | Motor 0 (CH1), Motor 1 (CH2), Motor 2 (CH3) | PA8, PA9, PA10 | ~25 kHz | | TIM8 | Motor 3 (CH1) | PC6 | ~25 kHz | **PWM frequency calculation for TIM1/TIM8:** - APB2 clock = 90 MHz (HCLK/2) - TIM1/TIM8 input clock = 180 MHz (2× APB2 because APB2 prescaler ≠ 1) - Prescaler = 18 − 1 = 17 - Timer frequency = 180 MHz / 18 = 10 MHz - Period = 400 − 1 = 399 - PWM frequency = 10 MHz / 400 = **25 kHz** The compare register (`__HAL_TIM_SET_COMPARE`) accepts values from 0 (always off) to 399 (always on). This is `MOTOR_MAX_DUTYCYCLE`. The PWM duty cycle in percent is `(duty / 399) × 100`. **Direction control:** The H-bridge direction pins map as follows: | D0 | D1 | Effect | |---|---|---| | LOW | LOW | Coast (motor floats) | | HIGH | LOW | Counter-clockwise (CCW) | | LOW | HIGH | Clockwise (CW) | | HIGH | HIGH | Short brake (motor windings shorted) | Direction is set by `motor_setDirection()` which writes to the GPIO pins directly using HAL. Changing direction does not automatically stop the motor; the duty cycle remains unchanged until explicitly set. ## Back-EMF (BEMF) Measurement The Wombat uses back-EMF rather than quadrature encoders for position feedback. This is an unusual design choice. When a brushed DC motor spins, it generates a voltage proportional to speed (back-EMF). 
By briefly stopping the PWM and sampling the voltage across the motor terminals, the STM32 can measure motor velocity. Each motor has two dedicated ADC pins connected to the motor terminals (the high and low sides of the H-bridge output). Back-EMF is the differential: `BEMF = ADC_high − ADC_low`. **ADC2 channel assignments (BEMF):** | Channel | GPIO | Motor | |---|---|---| | ADC2_IN0 | PA0 | Motor 0 BEMF high | | ADC2_IN1 | PA1 | Motor 0 BEMF low | | ADC2_IN2 | PA2 | Motor 1 BEMF high | | ADC2_IN3 | PA3 | Motor 1 BEMF low | | ADC2_IN4 | PA4 | Motor 2 BEMF high | | ADC2_IN5 | PA5 | Motor 2 BEMF low | | ADC2_IN6 | PA6 | Motor 3 BEMF high | | ADC2_IN7 | PA7 | Motor 3 BEMF low | ADC2 is configured for 12-bit resolution, software-triggered, 8 conversions per sequence, 84 cycles per channel sample time. DMA2 stream 2 transfers the 8 results to `adc_dma_bemf_buffer[8]`. ### BEMF Measurement Timing The BEMF measurement cycle is driven by TIM6 (`HAL_TIM_PeriodElapsedCallback`): 1. Every `BEMF_SAMPLING_INTERVAL` (5000 µs = 200 Hz), `stop_motors_for_bemf_conv()` is called. 2. This cuts all motor PWM drive (direction pins set to OFF) and transitions state to `WAITING_TO_START`. 3. After `BEMF_CONVERSION_START_DELAY_TIME` (500 µs), `startBemfReading()` is called. 4. This triggers `HAL_ADC_Start_DMA(&hadc2, ...)` and transitions state to `CONVERSION_ONGOING`. 5. When ADC2 completes, `HAL_ADC_ConvCpltCallback` fires. State transitions to `CONVERSION_DONE`. 6. `processBEMF()` computes the differential readings, applies a low-pass filter, updates `motor_data.position`, and then calls `update_motor()` for each channel. 7. `update_motor()` applies the current control mode (PWM / MAV / MTP) to regenerate the PWM command, which restores motor drive. The 500 µs delay between cutting PWM and sampling is necessary because the motor winding inductance and any snubber capacitance on the H-bridge output must settle before the ADC reading is meaningful. ### BEMF Processing ```c bemfRawReadings[0] = adc_dma_bemf_buffer[3] - adc_dma_bemf_buffer[2]; // motor 0 bemfRawReadings[1] = adc_dma_bemf_buffer[1] - adc_dma_bemf_buffer[0]; // motor 1 bemfRawReadings[2] = adc_dma_bemf_buffer[7] - adc_dma_bemf_buffer[6]; // motor 2 bemfRawReadings[3] = adc_dma_bemf_buffer[5] - adc_dma_bemf_buffer[4]; // motor 3 ``` Note the channel ordering within each motor pair (e.g., buffer indices 3 and 2 for motor 0) is hardware-specific and reflects the physical wiring of the H-bridge outputs. Each raw reading is passed through a **first-order IIR low-pass filter**: ```c filtered = alpha * newValue + (1.0f - alpha) * previousValue; // alpha = 0.2f ``` With α = 0.2, this is a relatively aggressive filter that rejects high-frequency switching noise but also somewhat attenuates the actual velocity signal at high speeds. If the filtered value exceeds 1700 (an error guard), the reading is discarded. **Dead zone:** BEMF values with absolute value ≤ 8 are not accumulated into the position counter. This prevents slow drift when the motor is nominally stopped. ### Position Accumulation Each BEMF cycle, the filtered reading (if outside the dead zone) is added to `motor_data.position[ch]`. This makes the position counter a running integral of BEMF, not a true encoder count. The unit of position is therefore "BEMF ticks" — not radians, not millimetres. The `KinematicsConfig.ticks_to_rad[4]` field in the SPI protocol provides the per-motor conversion factor from BEMF ticks to radians of wheel rotation. 
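Condensed into one place, the per-channel processing described above looks roughly like the sketch below. Names and structure are illustrative — the actual firmware spreads this logic across `processBEMF()` and its helpers:

```c
/* Condensed sketch of the per-channel BEMF processing described above.
 * Identifier names are illustrative, not the firmware's actual symbols. */
#define BEMF_LPF_ALPHA    0.2f     /* low-pass filter coefficient */
#define BEMF_ERROR_GUARD  1700.0f  /* readings beyond this are discarded */
#define BEMF_DEAD_ZONE    8.0f     /* readings at or below this are not integrated */

static float bemf_filtered[4];     /* filtered velocity, BEMF ticks per cycle */
static float position_ticks[4];    /* accumulated position, BEMF ticks */

void process_bemf_channel(int ch, float raw_differential, float ticks_to_rad)
{
    /* First-order IIR low-pass filter */
    float filtered = BEMF_LPF_ALPHA * raw_differential
                   + (1.0f - BEMF_LPF_ALPHA) * bemf_filtered[ch];

    /* Error guard: discard implausible readings */
    if (filtered > BEMF_ERROR_GUARD || filtered < -BEMF_ERROR_GUARD)
        return;

    bemf_filtered[ch] = filtered;

    /* Dead zone: near-zero readings are not integrated, so a stopped motor doesn't drift */
    if (filtered > BEMF_DEAD_ZONE || filtered < -BEMF_DEAD_ZONE)
        position_ticks[ch] += filtered;

    /* The position counter stays in BEMF ticks; consumers convert on demand
     * using the per-motor factor from KinematicsConfig. */
    float wheel_angle_rad = position_ticks[ch] * ticks_to_rad;
    (void)wheel_angle_rad;
}
```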
Position counters are never reset at the firmware level by the main loop. Reset is accomplished by the Pi: the `stm32-data-reader` process maintains software offsets (`positionOffsets_[port]`) and applies them when reading `rx->motor.position[i]`. ## PID Control The firmware implements a discrete-time PID controller without a derivative filter (the derivative term is just the first difference of the error). The `PidController` structure holds state for one motor: ```c typedef struct { float kP; float kI; float kD; float iMax; // Maximum integral contribution float outMax; // Output saturation limit float prevErr; float iErr; // Integral accumulator } PidController; ``` The update function implements positional PID with integral clamping: ```c pErr = goal - current; iErr += pErr; dErr = pErr - prevErr; iTerm = kI * iErr; if (iTerm > iMax) { iTerm = iMax; iErr = iMax / kI; } if (iTerm < -iMax) { iTerm = -iMax; iErr = -iMax / kI; } cmd = kP * pErr + iTerm + kD * dErr; cmd = clamp(cmd, -outMax, outMax); ``` The integral clamping is applied to the *contribution* rather than the raw accumulator, which avoids the issue where back-calculating the accumulator from a saturated term gives an incorrect value when `kI` changes. **Default velocity PID gains:** - `kP = 1.22`, `kI = 0.045`, `kD = 0.0` - `iMax = 399` (full duty), `outMax = 399` **Default position PID gains:** - `kP = 0.01`, `kI = 0.0`, `kD = 0.015` - `iMax = 399`, `outMax = 399` These are starting points. The optimal values depend on the motor, load, and battery voltage. The Pi can override them at any time via the `motorPidSettings` block in the `RxBuffer` combined with the appropriate update flag. ## Motor Operating Modes `update_motor()` is called once per BEMF cycle per motor. It reads `rxBuffer.motorControlMode`, extracts the 3-bit mode for the relevant channel, and implements the corresponding behaviour: ### `MOT_MODE_OFF` (0) Direction pins both LOW, duty cycle 0. Motor coasts. ### `MOT_MODE_PASSIV_BREAK` (1) Direction pins both HIGH, duty cycle 0. Both motor terminals are shorted through the H-bridge, providing regenerative braking. ### `MOT_MODE_PWM` (2) Direct open-loop PWM. `motorTarget[ch]` is the signed duty cycle (negative = reverse). The duty is passed to `applyMotorOutput()` which sets direction and calls `motor_setDutycycle()`. The range is −399 to +399. ### `MOT_MODE_MAV` — Move At Velocity (3) Closed-loop velocity control using the velocity PID. The BEMF filtered reading is the process variable; `motorTarget[ch]` is the setpoint (in BEMF ticks per 5 ms, approximately). The PID output is the signed PWM duty. ```c int32_t pidOut = pid_update(&pidControllers[ch], target, bemf_filtered); applyMotorOutput(ch, pidOut); ``` ### `MOT_MODE_MTP` — Move To Position (4) Cascaded position + velocity control. The outer loop is a proportional-derivative controller on position error; its output is a velocity setpoint. The inner loop is the velocity PID: ```c int32_t velTarget = pid_update(&posPidControllers[ch], goalPos, currentPos); // Clamp velTarget to motorTarget[ch] (speed limit) if (speedLimit > 0) velTarget = clamp(velTarget, -speedLimit, speedLimit); int32_t pidOut = pid_update(&pidControllers[ch], velTarget, bemf_filtered); applyMotorOutput(ch, pidOut); ``` When `|goalPos - currentPos| <= MTP_DONE_THRESHOLD` (50 BEMF ticks), bit `ch` of `motor_data.done` is set. The Pi reads this flag and uses it to determine when a `moveToPosition` command has completed. 
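The completion check itself is a simple threshold on the position error. A sketch of the idea (illustrative names, not the firmware's identifiers):

```c
/* Sketch of the move-to-position completion check described above. */
#include <stdint.h>

#define MTP_DONE_THRESHOLD 50  /* BEMF ticks */

void update_mtp_done_flag(int ch, int32_t goalPos, int32_t currentPos,
                          volatile uint8_t *done_bits)
{
    int32_t err = goalPos - currentPos;
    if (err < 0)
        err = -err;

    if (err <= MTP_DONE_THRESHOLD)
        *done_bits |= (uint8_t)(1u << ch);  /* Pi polls this bit to see the move is done */

    /* The flag is cleared again when the control mode changes
     * (see "Mode Change Handling" below). */
}
```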
### Mode Change Handling When the control mode changes between BEMF cycles, both PID controllers for that motor are reset (`prevErr = 0, iErr = 0`) to prevent integral windup carry-over from a previous command. The done flag for that motor is also cleared. ## Servo Control Four servo outputs are generated by TIM3 and TIM9. These timers run at 1 MHz (prescaler = 90) with period 20 000, giving a 50 Hz PWM with 1 µs resolution — exactly the standard RC servo signal. | Timer | Servos | GPIO pins | |---|---|---| | TIM3 CH2 | Servo 1 | PC7 | | TIM3 CH3 | Servo 0 | PC8 | | TIM9 CH1 | Servo 3 | PE5 | | TIM9 CH2 | Servo 2 | PE6 | A dedicated 6 V regulator powers the servo rail. The firmware controls the regulator's enable pin (`SERVO_6V0_ENABLE_Pin`, PE10). When all servos are fully disabled, the 6 V supply is cut. When any servo transitions to enabled state, the supply is first raised, then the PWM channel is started. The `update_servo_cmd()` function runs at 10 Hz from the main loop. Servo position is specified in timer ticks (600 = 0°, 2600 = 180°, 600 + degrees × 10 = position). The `stm32-data-reader` converts user-facing degrees to this scale before writing to `rxBuffer.servoPos[port]`. The 10 Hz update rate is deliberately slow. Updating servos every millisecond can cause jitter when the SPI bus is transferring large buffers and the timer compare register is written mid-cycle. --- # IR Sensor Calibration (K-Means) URL: /02-programming/algorithms/ir-calibration/ # IR Sensor Calibration — K-Means Clustering IR sensor calibration uses a **K-Means clustering** approach to automatically distinguish black from white surfaces. This technique is based on the research paper [*Applied Machine Learning in Sensor Calibration — A Clustering Technique*](/papers/liu-xie-jiang-2025-ml-sensor-calibration.pdf) by Abigail Liu, Aaron Xie, and Oliver Jiang (Los Altos Community Team 0399, GCER 2025). ## The Problem IR sensors return raw analog values that vary between sensors, surfaces, and environmental conditions. To make decisions like "am I on a black line?", the system needs a threshold separating black readings from white readings. Traditional approaches — such as taking a fixed percentile of the data — are vulnerable to skewed samples. If the robot spends most of its calibration drive on white surface with only a brief pass over black, a percentile-based threshold can land in the wrong place. The paper demonstrates that percentile methods achieve only 92–98% accuracy and are susceptible to false positives in skewed data. ## The Solution: K-Means Clustering (k=2) Instead of relying on percentiles, the calibration system uses **K-Means clustering with k=2** to separate sensor readings into two natural groups: one for white and one for black. **Sampling:** During calibration, the robot drives across the game surface while IR sensors are sampled at **10 ms intervals** (100 Hz). Each sensor accumulates a list of raw analog readings as the robot passes over both white and black areas. **Clustering:** The collected samples are fed into a 1D K-Means algorithm: 1. **Initialize** two centroids at the minimum and maximum of the data 2. **Assign** each data point to its nearest centroid 3. **Recompute** each centroid as the mean of its assigned points 4. **Repeat** for up to 10 iterations (convergence is typically reached within 5, since the data is semi-sorted from the WHITE-BLACK-WHITE driving pattern) 5. 
**Return** the two centroids in ascending order — the lower one becomes the **white threshold**, the higher one the **black threshold** ``` Samples: [180, 195, 210, 185, 2800, 3100, 2950, 190, 205, ...] └── white cluster ──┘ └── black cluster ──┘ └── white ──┘ K-Means centroids: white = 193.5, black = 2950.0 ``` ## Why Clustering Works Better The paper compares three calibration algorithms: | Algorithm | Approach | Success Rate | Handles Skewed Data? | |-----------|----------|:------------:|:--------------------:| | 90th percentile | Use 90th percentile as BLACK threshold | 92% | No | | Median of 80% range | Average 10th/90th percentile medians | 98% | No | | **K-Means clustering** | Cluster into two groups, threshold at midpoint | **100%** | **Yes** | The key advantage is robustness to **skewed data distributions**. If the robot's calibration drive crosses a black line only briefly, 90% of the samples may be white. Percentile methods get confused — they might place the "black" threshold at a white reading. K-Means correctly identifies even a small cluster of black readings and separates it from the white cluster. ## Validation After clustering, the calibration is validated before being accepted: - **Minimum range check:** The overall spread of readings must exceed 500 units. If all readings are similar, the sensor likely didn't see both surfaces. - **Minimum separation check:** The two centroids must be at least 700 units apart *and* at least 25% of the total data range. This prevents accepting calibrations where the clusters aren't meaningfully distinct. If validation fails, the BotUI shows a warning and lets you retry. ## Soft Classification After calibration, the IR sensor doesn't just return "black" or "white" — it also provides a **probability** via linear interpolation between the two thresholds: ``` probabilityOfBlack: value <= white_threshold → 0.0 value >= black_threshold → 1.0 otherwise → (value - white) / (black - white) ``` This enables more nuanced line-following behavior (e.g., proportional control) rather than binary on/off decisions. See [Line Following]({{< ref "line-following" >}}) for how the PID controller uses these probability values. --- # Robot Definition URL: /02-programming/02-robot-definition/ # Robot Definition Before you write a single mission, you need to tell LibSTP what hardware your robot has and how it's arranged. This is configured in `raccoon.project.yml` — the Raccoon CLI then generates two Python files from it: - **`defs.py`** — hardware inventory (motors, servos, sensors) - **`robot.py`** — how those parts work together (kinematics, drive, odometry) > **Important:** You define hardware in `raccoon.project.yml` and code generation produces `defs.py` and `robot.py` automatically. **Never edit these files by hand** — they are overwritten every time code generation runs. Always make changes in the YAML. This page explains what the generated code looks like and what each part means, so you understand what to configure in the YAML. ## Hardware Definitions (`defs.py`) The `Defs` class is a flat list of every physical component on your robot. Each attribute maps to a port on the Wombat controller. 
### Motors ```python from libstp import Motor, MotorCalibration front_left_motor = Motor( port=0, # Wombat motor port (0-3) inverted=False, # True if motor spins backwards calibration=MotorCalibration( ticks_to_rad=1.947e-05, # Encoder ticks → radians (set by calibration) vel_lpf_alpha=1.0, # Velocity low-pass filter (1.0 = no filtering) ), ) ``` **Parameters:** - `port`: Physical motor port on the Wombat (0–3) - `inverted`: Set to `True` if the motor is mounted backwards (spins the wrong way relative to the expected direction) - `calibration`: Conversion factors measured during [calibration]({{< ref "10-calibration" >}}). You generally don't set these by hand — they're populated by the calibration step ### Servos Servos can be created plain or with named presets: ```python from libstp import Servo, ServoPreset # Plain servo — you specify angles in your mission code plain_servo = Servo(port=0) # Servo with presets — named positions you can call directly claw = ServoPreset( Servo(port=2), positions={"closed": 135, "open": 30} ) # Multi-position servo arm = ServoPreset( Servo(port=1), positions={ "down": 10, "above_pom": 55, "up": 105, "start": 160, } ) ``` With `ServoPreset`, you can move to named positions directly in your missions: ```python Defs.claw.open() # Moves to angle 30 Defs.arm.above_pom() # Moves to angle 55 Defs.arm.up(300) # Moves to angle 105 at 300 degrees/sec (slow servo) ``` ### Sensors ```python from libstp import IRSensor, DigitalSensor, AnalogSensor, SensorGroup from libstp import IMU as Imu # Inertial measurement unit (one per robot, no port needed) imu = Imu() # Infrared line sensors — used for line detection and following front_right_ir = IRSensor(port=0) front_left_ir = IRSensor(port=1) # Digital sensors — buttons, limit switches (returns True/False) button = DigitalSensor(port=10) arm_down_limit = DigitalSensor(port=0) # Analog sensors — raw analog readings light_sensor = AnalogSensor(port=2) ``` ### Sensor Groups A `SensorGroup` bundles two IR sensors (left and right) and exposes convenience methods for common operations: ```python front = SensorGroup(left=front_left_ir, right=front_right_ir) rear = SensorGroup(right=rear_right_ir) # Single sensor is fine too ``` Sensor groups give you shorthand methods you can call directly in missions: ```python Defs.front.drive_until_black() # Drive forward until either sensor sees black Defs.front.drive_over_line() # Drive forward over a black line Defs.front.follow_right_edge(cm=50) # Follow the right edge of a line for 50 cm Defs.front.strafe_left_until_black() # Strafe left until sensor sees black Defs.front.lineup_on_black() # Align both sensors on a black line ``` ### Required Attributes: `button` and `wait_for_light_sensor` The `RobotDefinitionsProtocol` expects specially-named attributes in your `Defs` class: ```python from libstp import DigitalSensor, AnalogSensor button = DigitalSensor(port=10) # Required — exact name wait_for_light_sensor = AnalogSensor(port=2) # Optional — exact name ``` These names are **not arbitrary** — the framework looks for them by name: - **`button`** (required): Registered as the system-wide primary button. Used by `wait_for_button()`, UI interactions, and any step that needs physical button input. Must be named exactly `button`. - **`wait_for_light_sensor`** (optional): Used by the pre-start gate to detect the competition start light. If not present, the robot can only start via button press. For competition, you need this so the robot can start with the light signal. 
Must be named exactly `wait_for_light_sensor`. Both are defined in the YAML `definitions:` section with these exact names: ```yaml definitions: button: type: DigitalSensor port: 10 wait_for_light_sensor: # Optional, but needed for competition start type: AnalogSensor port: 2 ``` ### The `analog_sensors` List Include all IR/analog sensors in an `analog_sensors` list. The calibration system uses this to know which sensors need calibrating: ```python class Defs: # ... all your hardware above ... analog_sensors = [front_right_ir, front_left_ir] ``` --- ## Robot Class (`robot.py`) The `robot.py` file is **entirely code-generated** from the `robot:` section of `raccoon.project.yml`. It wires together your hardware definitions, drive system, kinematics, odometry, motion PID, physical dimensions, and mission list. **Never edit this file by hand** — all configuration goes through the YAML. Here's what each part of the generated code does and which YAML section it comes from: ### Generated Attributes Reference | Attribute | What It Does | YAML Source | |-----------|-------------|-------------| | `defs` | Hardware definitions instance | `definitions:` | | `kinematics` | Translates chassis velocity to/from wheel speeds | `robot.drive.kinematics` | | `drive` | Velocity controller (PID + feedforward per axis) | `robot.drive.vel_config` | | `odometry` | Tracks robot position and heading on the field | `robot.odometry` | | `motion_pid_config` | Controls trajectory following accuracy (distance/heading PID, axis constraints) | `robot.motion_pid` | | `shutdown_in` | Emergency stop timer in seconds | `robot.shutdown_in` | | `setup_mission` | Mission that runs before the start signal | `missions:` (entry tagged `setup`) | | `missions` | Main missions, run in order after start | `missions:` (untagged entries) | | `shutdown_mission` | Mission that runs when timer expires | `missions:` (entry tagged `shutdown`) | | `width_cm`, `length_cm` | Physical robot dimensions | `robot.physical` | | `rotation_center_forward_cm`, `rotation_center_strafe_cm` | Offset from geometric center to rotation center | `robot.physical.rotation_center` | | `_sensor_positions` | Where each sensor is mounted relative to rotation center | `robot.physical.sensors` | | `_wheel_positions` | Where each wheel is mounted (mecanum only) | Derived from kinematics geometry | ### YAML → Generated Code Here's how the YAML maps to the generated `robot.py`. You configure everything on the left; code generation produces the right: ```yaml # raccoon.project.yml robot: shutdown_in: 120 drive: kinematics: type: differential # or "mecanum" wheel_radius: 0.0345 wheelbase: 0.16 left_motor: front_left_motor right_motor: front_right_motor vel_config: vx: pid: { kp: 0.0, ki: 0.0, kd: 0.0 } ff: { kS: 0.0, kV: 1.0, kA: 0.0 } odometry: type: FusedOdometry # or "Stm32Odometry" motion_pid: distance: { kp: 7.875, ki: 0.0, kd: 0.0 } heading: { kp: 7.875, ki: 0.0, kd: 0.0625 } linear: max_velocity: 0.2368 acceleration: 0.2798 deceleration: 2.0532 angular: max_velocity: 2.9424 acceleration: 14.6122 deceleration: 7156.1491 physical: width_cm: 13.0 length_cm: 19.0 rotation_center: x_cm: 2.5 y_cm: 5.5 sensors: - name: front_right_ir_sensor x_cm: 14.0 y_cm: 7.5 clearance_cm: 1.0 missions: - M00SetupMission: setup - M01DriveToConeMission - M99ShutdownMission: shutdown ``` All the PID values, axis constraints, kinematics parameters, and physical dimensions are set in the YAML. Code generation turns them into the Python `Robot` class. 
Auto-tune and calibration steps update these YAML values automatically. ### Kinematics: Differential vs. Mecanum ```mermaid graph LR subgraph "Differential (2 motors)" DL["Left Motor"] --- DC["Robot"] DR["Right Motor"] --- DC DC -->|"forward/backward + turning"| DM["Can't strafe"] end subgraph "Mecanum (4 motors)" ML["Front Left"] --- MC["Robot"] MR["Front Right"] --- MC BL["Rear Left"] --- MC BR["Rear Right"] --- MC MC -->|"any direction + turning"| MM["Full omnidirectional"] end ``` | Feature | Differential | Mecanum | |---------|-------------|---------| | Motors | 2 | 4 | | Can drive forward/backward | Yes | Yes | | Can turn in place | Yes | Yes | | Can strafe sideways | No | Yes | In competition, you typically use both: one differential and one mecanum robot. Set `type: differential` or `type: mecanum` in the YAML kinematics section accordingly. ### Mission Lifecycle ```mermaid sequenceDiagram participant R as robot.start() participant S as Setup Mission participant G as Start Gate participant M1 as Mission 1 participant M2 as Mission 2 participant SD as Shutdown Mission participant T as Timer R->>S: Run setup_mission Note over S: Calibration, homing, servo init S-->>R: Setup complete R->>G: Wait for start signal Note over G: Button press or light sensor G-->>R: Go! R->>T: Start shutdown_in timer (120s) R->>M1: Run missions[0] M1-->>R: Complete R->>M2: Run missions[1] M2-->>R: Complete Note over R: All missions done R->>SD: Run shutdown_mission SD-->>R: Done Note over R: All motors disabled Note over T: Timer is a last resort
if missions take too long ``` **Setup mission** runs before the start signal — use it for calibration, homing servos, and any pre-match preparation. **Main missions** run in order after the start signal. Each mission runs to completion before the next one starts. **Shutdown mission** runs when all main missions have completed **or** when the `shutdown_in` timer expires — whichever comes first. Most of the time, your missions finish normally and shutdown runs as the final cleanup step. Use it for controlled shutdown (lowering arms, releasing objects). If no shutdown mission is set, all motors are simply disabled. ### The `shutdown_in` Timer `shutdown_in = 120` means the robot will force-stop 120 seconds after the start signal. This is a **last resort safety mechanism** required by Botball competition rules — your missions should normally finish well before the timer fires. When the timer does fire: 1. The currently running mission is cancelled 2. The shutdown mission runs (if defined) 3. All motors are disabled 4. The program exits Plan your missions to finish within the time limit. If everything goes well, your robot completes all missions, runs the shutdown mission, and stops cleanly — without ever hitting the timer. --- # Programs URL: /01-botui/02-programs/ # Programs This page lists all your programs that were created. ![IMG: A list of all programs](/images/botui/programs/program-list.png) ## Functions A program provides **Start** and **Calibrate** buttons. The **Start** button launches the program, while the **Calibrate** button recalibrates the Wombat and saves the calibration settings to a YAML file. There are two calibration modes available: **Aggressive** and **Standard**, allowing users to choose the level of adjustment needed. ![IMG: Shows functions for programs](/images/botui/programs/program-details.png) --- # Synchronizing Two Robots URL: /02-programming/03a-synchronizing-robots/ # Synchronizing Two Robots In a Botball match, your two robots run completely separate programs on separate Wombats — they don't talk to each other over the network, they don't share sensors, and neither one knows where the other is. What they **do** share is a common start signal: the **wait-for-light gate** on both robots fires the instant the start lamp turns on, and at that exact moment both mission clocks start measuring from T=0. From there on, each robot's clock ticks forward in lockstep with the other — not because they're communicating, but because they were both kicked off by the same physical event in the room. `wait_for_checkpoint()` uses that shared timeline to coordinate the two robots without any communication. You pick absolute mission times — *"enter the shared zone at T=20s"*, *"drop the cube at T=35s"* — and bake them into both robots' missions. As long as each robot started at the same moment, they'll arrive at the same wall-clock time even though neither knows what the other is doing. ## The Mental Model Think of the two robots as dancers who can't see each other, performing the same choreography to a shared song. Neither one is listening for the other — they're both listening to the beat. If both start when the music starts, step 1 at bar 4 and step 2 at bar 8 will line up perfectly, even though neither dancer has any idea where the other is on the stage. `wait_for_checkpoint()` is the beat. Your mission is the choreography. 
```mermaid sequenceDiagram participant L as Start Lamp participant A as Robot A Clock participant B as Robot B Clock L->>A: wait_for_light fires (T=0) L->>B: wait_for_light fires (T=0) Note over A,B: Both clocks run independently,
no communication between robots Note over A: T=8s
wait_for_checkpoint(12) Note over A: idle... Note over B: T=10s
still grabbing cube Note over A: T=12s
checkpoint released, drive forward Note over B: T=12s
finishes grab Note over B: T=14s
wait_for_checkpoint(18) Note over A: T=15s
clears shared zone Note over B: T=18s
checkpoint released, enters zone ``` Both robots look at their own clock. They never look at each other. ## How It Works Every robot has a `synchronizer` attached to it. The synchronizer captures `T=0` the moment the wait-for-light gate fires — i.e. the instant the start lamp is detected. This is what ties the two robots' clocks together: because the lamp turns on for both robots at the same physical moment, both synchronizers latch `T=0` at (nearly) the same instant, and `wait_for_checkpoint(20.0)` on Robot A fires at (nearly) the same wall-clock moment as `wait_for_checkpoint(20.0)` on Robot B. From that moment on, `wait_for_checkpoint(seconds)` pauses the current sequence until the mission clock reaches that absolute time. > If your robot is started by button press instead of a light (e.g. during development, or because the `wait_for_light_sensor` is not defined), the synchronizer latches `T=0` on the button press instead. See **[Making Your Robot Competition Ready]({{< ref "15-competition-ready" >}})** for how to enable the light-start gate. ```python from libstp.step.timing import wait_for_checkpoint seq([ drive_forward(30), wait_for_checkpoint(15.0), # Don't continue before T=15s drive_forward(20), wait_for_checkpoint(30.0), # Don't continue before T=30s Defs.claw.open(), ]) ``` The key word is **"before"**. If the robot arrives early, it waits. If the previous steps already ran long and the checkpoint has passed, the step logs a warning and returns immediately — it never rewinds the clock. Think of it as a *"no earlier than"* gate, not a precise trigger. > The synchronizer clock is **mission-relative**, not wall-clock. It starts at zero when your first mission begins and counts up from there. T=20.0 means "20 seconds after start", not "20 seconds past the minute". ## A Realistic Scenario Say Robot A has to cross the center of the table on its way to a scoring zone, and Robot B has to drop a cube into a bin that sits on the same center line. Both robots pass through the shared area, but at different times. They must not collide. Sit down with your partner before the match and agree on a timeline: | Time | Robot A | Robot B | |------|---------|---------| | T=0–12s | Drives from start to the center edge | Collects the cube from the home zone | | T=12s | **Crosses the center line** | Still in its own zone | | T=15s | Clear of the shared area | Finishing grab | | T=18s | Working the scoring zone | **Enters the shared area to drop** | | T=25s | Returning home | Clear of the shared area | The two risky moments are **T=12s** (A is in the middle) and **T=18s** (B is in the middle). Everything else is safe. **Robot A's mission:** ```python class M01CrossCenterMission(Mission): def sequence(self) -> Sequential: return seq([ drive_forward(40), # A can cross the center no earlier than T=12s. # By this time, we know B is still grabbing. wait_for_checkpoint(12.0), drive_forward(30), # Cross the center turn_right(90), drive_forward(20), # Enter scoring zone ]) ``` **Robot B's mission:** ```python class M01DropCubeMission(Mission): def sequence(self) -> Sequential: return seq([ drive_forward(20), Defs.claw.closed(), # Grab cube # B must wait until A is out of the shared area. # A clears the center by T=15s; we leave 3s of margin. wait_for_checkpoint(18.0), drive_forward(25), Defs.claw.open(), # Drop cube drive_backward(25), # Back out ]) ``` Neither robot knows where the other is. They only agree on *when* each action is allowed to happen. 
As long as both start at T=0 together, the checkpoints do the rest. ## Choosing Checkpoint Times This is the part that matters, and it's where most teams get it wrong. A checkpoint you overshoot on a bad run is a checkpoint that does nothing — the warning in the log is your only hint that timing slipped. A checkpoint that fires too early defeats the point of the synchronization. Picking the right numbers is a measurement problem, not a guessing problem. ### Measure First, Then Gate Run the mission a few times **without** checkpoints and note how long each phase actually takes. Write it down. Then add the checkpoints with margin. For example, you might find: - Robot A reaches the center in 8–11 seconds depending on how cleanly it follows the line. - Robot B finishes grabbing the cube in 13–17 seconds, depending on whether the cube is stuck against the wall. Given those numbers, `wait_for_checkpoint(12.0)` for A is tight (bad runs already arrive at 11s). Bump it to 13s. And `wait_for_checkpoint(18.0)` for B is only 1 second of margin past the worst case — bump it to 19s or 20s. ### Add Margin to the Slower Robot The checkpoint should be *after* the worst-case finish time of whichever robot's action blocks the other. If Robot B's grab usually finishes by T=14s but occasionally stretches to T=17s, set the checkpoint to at least T=18s — ideally T=19s or T=20s. Two to three seconds of slack is a good starting point. ### Checkpoint Just Before the Risky Action Place the `wait_for_checkpoint()` call **immediately before** the step that could collide, not at the start of the sequence. You want the pause to absorb any earlier slowdowns, so the robot arrives at the dangerous action exactly on schedule. ```python # BAD — checkpoint is too early seq([ wait_for_checkpoint(12.0), # Gate here... drive_forward(40), # ...but this takes variable time drive_forward(30), # Might enter shared zone any time between T=12 and T=25 ]) # GOOD — checkpoint right before the risky step seq([ drive_forward(40), wait_for_checkpoint(12.0), # Gate immediately before the risky step drive_forward(30), # Always enters shared zone at exactly T=12+ ]) ``` ### Use It Sparingly Every checkpoint is a hard synchronization point. Two or three well-placed checkpoints per mission is usually enough. Gating every step is a recipe for missed checkpoints and wasted time — if the first checkpoint slips, every subsequent one does too, and the log fills up with warnings you'll ignore. ## Catching Collisions Before They Happen Sketching out a timeline on paper is fine, but it's easy to miss a conflict when two missions are long and both involve sweeping arms, wide turns, or shared lanes. The **Web IDE** can do this check for you. If you open both robots' projects in the Web IDE, it can replay each mission against the simulated game table, time-stamp every motion step based on the mission clock, and overlay the two paths on the same timeline. Wherever the two robots occupy overlapping space *at the same timestamp*, the IDE flags a collision — before you ever run the robots on a real table. This turns picking checkpoint times from guesswork into a measurement exercise: you change a `wait_for_checkpoint()` value, the IDE re-simulates, and any remaining collision markers tell you exactly which checkpoints still need slack. Catching a collision in the browser costs nothing. Catching it at competition costs a run. Use this before every major mission change. 
Especially useful when: - You're adding a new shared-zone crossing and want to find a safe window - You changed a drive speed and need to re-verify the old timeline still holds - You're swapping who grabs which object between the two robots - You're debugging why your partner's robot keeps getting clipped The collision check reads the same timestamped path data that the mission clock uses at runtime, so the simulated timing closely matches what you'll see on the real robot — provided you've kept your `wait_for_checkpoint()` margins realistic. ## Doing Something Useful While Waiting Idling at a checkpoint is a waste of seconds you can't get back. `do_until_checkpoint(seconds, task)` runs a task *until* the checkpoint is reached, then cancels it. Use it to scan for a line, sweep a scanner servo, poll a sensor, or make any other productive use of the wait time: ```python seq([ drive_forward(40), # Sweep the scanner servo back and forth while waiting for the window do_until_checkpoint(12.0, loop_forever( seq([ Defs.scanner_servo.sweep_left(), wait_for_seconds(0.3), Defs.scanner_servo.sweep_right(), wait_for_seconds(0.3), ]) )), drive_forward(30), ]) ``` When T=12s hits, the loop is cancelled cleanly at the next `await` point and execution continues with `drive_forward(30)`. ## Limitations `wait_for_checkpoint()` is a simple tool, and it's important to know what it *isn't*. - **No cross-robot clock correction.** The two synchronizers run independently on two Raspberry Pis. If one robot starts half a second late because a button was pressed sluggishly, its checkpoints fire half a second late. There's no handshake — the two clocks drift independently for the whole match. - **Time only moves forward.** A missed checkpoint is logged and ignored. The step does not block waiting for a partner signal, because there is no partner signal. If you need "wait until my partner finishes", you need a sensor (e.g. the partner flips a reflector your sensor can see), not a checkpoint. - **Not a substitute for sensors.** If you need to know whether the *other robot* has physically left a zone, use a light sensor, a distance sensor, or a camera. `wait_for_checkpoint()` only knows about time, not about reality. If your partner's mission crashes at T=5s and never enters the shared zone, your "wait until T=18s" will still fire at T=18s — blind to the fact that there's nothing to wait for. - **One robot, one clock.** Every robot has its own synchronizer. Two robots on the same team have two independent timelines that happen to start together. ## Quick Reference ```python from libstp.step.timing import wait_for_checkpoint, do_until_checkpoint # Pause until the mission clock reaches T=seconds (or return immediately if past). wait_for_checkpoint(seconds) # Run `task` until the checkpoint, then cancel it. do_until_checkpoint(seconds, task) ``` Both live in `libstp.step.timing` and can be placed anywhere inside a `seq([...])` or `parallel(...)` block, just like any other step. --- # Flowchart Editor (Center) URL: /03-web-ide/03-flowchart-editor/ ## Flowchart Editor (Center) The main workspace. Each mission is represented as a visual flowchart of connected step nodes. ![Flowchart editor with SetupMission](/images/web-ide/webide-flowchart.png) **Nodes** represent steps (e.g. `calibrate`, `drive_forward`, `wait_for_button`). Each node shows its step name and any editable arguments inline. **Connections** show the execution order — arrows flow from top to bottom (vertical layout) or left to right (horizontal layout). 
### Editing - **Add a step**: Drag from the Step Library (right panel) onto the canvas - **Connect steps**: Drag from the output dot of one node to the input dot of another - **Edit arguments**: Click on an argument field directly on the node - **Delete**: Select a node and press Delete - **Undo / Redo**: Buttons in the toolbar, or Ctrl+Z / Ctrl+Y ### Toolbar | Button | Function | |--------|---------| | ⚙ (gear) | Open Settings modal | | ↩ ↪ | Undo / Redo | | 🕐 | Toggle Timing panel (step execution durations) | | 🗺 | Toggle Table Visualization panel (robot path on table) | | 📋 | Toggle Run Logs panel | | **Sim** | Toggle simulation mode for runs | | ▶ (green) | Run the current mission | | ⚙ (green) | Device settings | --- # run URL: /04-raccoon-cli/03-run/ # raccoon run ```bash raccoon run ``` The command you'll use most. Syncs your project to the robot, generates hardware code, runs the program, and pulls results back — all in one step. ## What it does 1. **Checkpoint** — saves a local git snapshot of your current files (requires git) 2. **Sync (push)** — uploads your project files to the robot via rsync 3. **Codegen** — runs on the robot: generates `defs.py`, `defs.pyi`, and `robot.py` from your YAML config 4. **Run** — executes `src/main.py` on the robot, streaming output live to your terminal 5. **Checkpoint** — saves another snapshot after the run 6. **Sync (pull)** — downloads any updated files back (e.g. calibration data written during the run) ## Typical output ``` INFO Authentication (publickey) successful! Checkpoint e4b6191 saved Syncing 'MyRobot' (pushing to ConeBot)... Sync complete! Uploaded Files: 16 Bytes Total: 8978 Running 'MyRobot' on ConeBot... Press Ctrl+C to stop INFO ✓ Generated defs.py INFO ✓ Generated robot.py INFO Running src.main... 2026-03-22 12:03:42 | info | [Robot]: Starting robot ...robot output streams here... ^C Cancelling... Sync complete! Downloaded: 4 ``` ## Stopping a run Press **Ctrl+C**. raccoon sends a shutdown signal to the robot, waits for it to stop cleanly (motors disabled), then pulls files back. ## Running individual steps manually ```bash raccoon codegen # regenerate defs.py / robot.py only raccoon sync # push or pull files only ``` These are rarely needed — `raccoon run` covers both. ## Version mismatch warning If the client and server versions differ, raccoon prints a warning before running: ``` Warning: version mismatch Client: 0.1.25 Server: 0.1.23 Run raccoon update to update the Pi. ``` Things usually still work across minor versions, but run `raccoon update` to keep both sides in sync. --- # Sensor Reading URL: /06-firmware/sensors/ ## Analog Sensor Ports The Wombat exposes six general-purpose analog input ports (AIN0–AIN5) plus a battery voltage monitor. All seven are scanned by ADC1. **ADC1 channel assignments:** | Port | GPIO | ADC channel | Rank | |---|---|---|---| | AIN0 | PB1 | ADC1_IN9 | 1 | | AIN1 | PC1 | ADC1_IN11 | 2 | | AIN2 | PC2 | ADC1_IN12 | 3 | | AIN3 | PC3 | ADC1_IN13 | 4 | | AIN4 | PC4 | ADC1_IN14 | 5 | | AIN5 | PC5 | ADC1_IN15 | 6 | | Battery | PC0 | ADC1_IN10 | 7 | ADC1 is software-triggered, scan mode, 12-bit resolution. The six sensor channels use 112-cycle sample time; the battery voltage channel uses the maximum 480-cycle sample time for best accuracy (battery ADC connects through a resistor divider with higher source impedance). `sampleAnalogPorts()` is called periodically from the TIM6 interrupt at `ANALOG_SENSOR_SAMPLING_INTERVAL`. 
It calls `HAL_ADC_Start_DMA(&hadc1, ...)` which fills `analogReader.adcRaw[7]` via DMA2 stream 0. The ADC completion callback for ADC1 sets a `NEW_DATA` flag in `analogReader.state`. The main loop's `updatingAnalogValuesInSpiBuffer()` then copies the raw counts directly into `txBuffer.analogSensor[6]` and `txBuffer.batteryVoltage` using `memcpy`, as these fields are adjacent in the struct. **Battery voltage conversion** is performed on the Pi side in `SpiReal::readSensorData()`: ```cpp const float stmVoltage = 3.3f; const float voltageDividerFactor = 11.0f; const float adcResolution = 4096.0f; float rawVoltage = adcCount * stmVoltage * voltageDividerFactor / adcResolution; ``` A further exponential moving average filter (α = 0.05) smooths the battery voltage before it is published to LCM. **Raw ADC values** in `txBuffer.analogSensor` are 12-bit counts (0–4095) stored as `int16_t`. Conversion to physical units (e.g., voltage or reflectance) is the responsibility of user code or libstp sensor wrappers. ## Digital Input Ports Ten general-purpose digital inputs (DIN0–DIN9) plus the on-board button are read each SPI cycle. **GPIO pin assignments:** | Port | GPIO | |---|---| | DIN0 | PD12 | | DIN1 | PD13 | | DIN2 | PD14 | | DIN3 | PD15 | | DIN4 | PB9 | | DIN5 | PB8 | | DIN6 | PC9 | | DIN7 | PE0 | | DIN8 | PE1 | | DIN9 | PE4 | | Button | PB0 | `readDigitalInputs()` iterates over all ten port pins. A pin LOW reads as 1 (logic is inverted — these inputs are typically pulled high and pulled low by the sensor). The button (PB0) uses normal logic (HIGH = pressed). The function packs all eleven bits into a `uint16_t`, which is written to `txBuffer.digitalSensors` inside the SPI completion callback for minimum latency. The Pi reads this word and demultiplexes individual bits into separate LCM topics (`libstp/digital/0/value`, `libstp/digital/1/value`, …, `libstp/digital/10/value`). ## IMU (MPU-9250 / AK8963) The IMU is an InvenSense MPU-9250, which integrates a 3-axis gyroscope, 3-axis accelerometer, and a 3-axis AK8963 magnetometer. It is connected to the STM32 via SPI3 (PC10 SCK, PC11 MISO, PC12 MOSI, PE2 CS0). The firmware uses the InvenSense **eMPL** (embedded Motion Processing Library) and the MPU-9250 DMP (Digital Motion Processor) to perform sensor fusion on-chip. The DMP outputs a 6-axis quaternion (gyro + accelerometer, no magnetometer) at 50 Hz. ### Self-Test and Bias Calibration On startup, `runImuSelfTest()` calls `mpu_run_6500_self_test()` to run the factory self-test routine. If the gyro and accelerometer pass (result bits 0 and 1 set), the function extracts the measured factory biases and pushes them into the hardware offset registers (`mpu_set_gyro_bias_reg`, `mpu_set_accel_bias_6500_reg`). This eliminates the static offset that every IMU chip has from manufacturing variation. Note: the magnetometer self-test is not performed. The comment in the code reads: "compass is fucked since we use SPI and I am too lazy to fix invensense shit." The InvenSense library was originally written for I²C; the SPI HAL shim (`mpu9250_hal.c`) maps the library's `hal_i2c_write`/`hal_i2c_read` calls to the SPI driver. The AK8963 is on an auxiliary I²C bus inside the MPU-9250 package and is accessed through the MPU-9250's I²C master mode regardless of whether the STM32→MPU-9250 bus is I²C or SPI. 
### MPL Configuration The MPL is initialised with: | Feature | Setting | |---|---| | Quaternion | Enabled (6-axis, DMP) | | 9-axis fusion | Disabled (commented out) | | Gyro temperature compensation | Enabled | | Fast no-motion detection | Enabled | | In-use auto-calibration | Enabled | | Heading from gyro | Enabled | | Sample rate | 50 Hz (20 ms period) | | Compass rate | 10 Hz (100 ms period) | | Gyro FSR | ±2000 dps (configured in `mpu9250_config.h`) | | Accel FSR | ±2 g | | Low-pass filter | 42 Hz | The DMP is loaded with the firmware from InvenSense (`dmp_load_motion_driver_firmware()`), then configured to output 6-axis LP quaternion, raw accelerometer, and calibrated gyro at 50 Hz. ### Orientation Matrices The IMU chip axes may not align with the robot body frame. The orientation matrix remaps chip axes to board axes. The default matrices in `mpu9250_config.h` are: ```c // Gyro/accel: chip X = +E, chip Y = −N, chip Z = +D #define IMU_GYRO_ORIENTATION_MATRIX { 0, 1, 0, -1, 0, 0, 0, 0, -1 } // Compass #define IMU_COMPASS_ORIENTATION_MATRIX { 1, 0, 0, 0, -1, 0, 0, 0, 1 } ``` The Pi can override these at runtime by setting new 3×3 signed-char matrices in `rxBuffer.imuGyroOrientation` and `rxBuffer.imuCompassOrientation` with the `PI_BUFFER_UPDATE_IMU_ORIENTATION` flag. The firmware calls `updateImuOrientation()` which re-applies the matrices to the MPL via `inv_set_gyro_orientation_and_scale` and `dmp_set_orientation`. ### Data Acquisition Loop `readImu()` is called on every main-loop iteration. It checks whether 20 ms have elapsed since the last gyro read and 100 ms since the last compass read, then: 1. Reads raw data from the DMP FIFO (`dmp_read_fifo`), which gives gyro, accelerometer, and quaternion data as integer arrays. 2. Pushes data into the MPL (`inv_build_gyro`, `inv_build_accel`, `inv_build_quat`). 3. Reads the magnetometer separately via `mpu_get_compass_reg` and stores raw counts (compass is not fed into the MPL fusion in the current build). 4. Calls `inv_execute_on_data()` to run the MPL fusion step. 5. Calls `read_from_mpl()` to extract fused outputs. `read_from_mpl()` computes: - **Quaternion** (w, x, y, z) via `inv_get_quaternion_set` - **Rotation matrix** from quaternion (for body→world transform) - **Gyro vector** in world frame (body gyro rotated by quaternion) - **Accelerometer vector** in world frame (scaled to m/s² by × 9.80665) - **Linear acceleration** (gravity removed) via `inv_get_linear_accel` - **Velocity** from integrated linear acceleration (experimental — decays with factor 0.998 per cycle to limit drift) - **Heading** in degrees derived from the quaternion: `atan2(q1*q2 - q0*q3, q0^2 + q2^2 - 0.5)` (NB: this uses q30 fixed-point intermediate values) When new data is available and the SPI bus is not busy, `txBuffer.imu` is updated atomically from the local `imu` struct. ### SPI3 Configuration SPI3 is configured as master at startup with prescaler 64 (giving ~1.4 MHz), then changed to prescaler 256 during the IMU initialisation to stay within the MPU-9250's configuration register speed limit. After DMP load it can run faster, but the `changeSPIBaudRatePrescaler` call to restore a higher rate is commented out in the current code. The SPI3 DMA (DMA1 stream 0/5) runs in normal (not circular) mode — each transaction is explicitly started by the MPU-9250 driver and completes with an interrupt. 
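As a closing note on the IMU path: the heading formula used by `read_from_mpl()` above is easier to read in plain floating point. The sketch below is illustrative only — the firmware computes it from q30 fixed-point intermediates, and the function name is made up for this example:

```python
import math

def heading_from_quaternion(q0: float, q1: float, q2: float, q3: float) -> float:
    """Heading in degrees from a unit quaternion (w, x, y, z) — a
    floating-point equivalent of the firmware's q30 computation."""
    return math.degrees(math.atan2(q1 * q2 - q0 * q3,
                                   q0 * q0 + q2 * q2 - 0.5))
```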
--- # Missions URL: /02-programming/03-missions/ # Missions A mission is a self-contained task your robot performs — "drive to the cone and pick it up", "follow the line to the basket", "go home". Missions are composed of steps arranged in sequences. ## Writing a Mission Every mission extends the `Mission` base class and implements a `sequence()` method that returns a `Sequential` of steps: ```python from libstp import * from src.hardware.defs import Defs class M01DriveToConeMission(Mission): def sequence(self) -> Sequential: return seq([ drive_forward(25), turn_right(90), drive_forward(15), Defs.claw.open(), ]) ``` That's it. The framework handles execution, timing, error handling, and hardware cleanup. ## Real-World Example: ConeBot Here's the actual mission from the Ecer2026 ConeBot that drives to a cone on the game table: ```python from libstp import * from src.hardware.defs import Defs class M01DriveToConeMission(Mission): def sequence(self) -> Sequential: return seq([ # Start position: against the wall Defs.cone_arm_servo.up(), drive_backward(3), turn_right(55), # Align against the back wall wall_align_backward(accel_threshold=0.3), drive_forward(2), # Sweep to check for cone Defs.cone_arm_servo.down(), turn_left(25), turn_right(25), Defs.cone_arm_servo.up(), # Drive to the line and follow it Defs.front.drive_until_black(), forward_single_lineup( Defs.front.right, entry_threshold=0.9, exit_threshold=0.7, correction_side=CorrectionSide.RIGHT, forward_speed=1.0, ), # Drive along the line to the cone drive_forward(27), turn_right(90), Defs.front.drive_until_black(), ]) ``` Notice how the mission reads almost like natural language: "raise arm, back up, turn, align against wall, find the line, follow it, drive to the cone." ## Composing Steps ### Sequential Execution `seq([...])` runs steps one after another. Each step must finish before the next one starts: ```python seq([ drive_forward(25), # Step 1: drive 25 cm turn_right(90), # Step 2: turn 90 degrees (after step 1 finishes) drive_forward(25), # Step 3: drive 25 cm (after step 2 finishes) ]) ``` ### Parallel Execution `parallel(...)` runs multiple things at the same time. Each argument is an independent track: ```python parallel( drive_forward(50), # Track 1: drive forward seq([ # Track 2: arm movement wait_until_distance(20), # wait until 20cm traveled Defs.arm.down(), # then lower the arm ]), ) ``` Parallel finishes when **all tracks** complete. You can pass individual steps, lists of steps (implicitly sequential), or explicit `seq([...])` blocks. **Resource safety**: Parallel validates that no two tracks use the same hardware resource. You can't drive forward and turn at the same time, because both need the drive system. The framework will raise an error before execution if it detects a conflict. ### Real Parallel Example From the PackingBot — driving forward while operating a grabber: ```python parallel( # Track 1: Follow the line edge Defs.front.follow_right_edge(999).until( after_cm(125) & on_black(Defs.front.right) ), # Track 2: Grab POMs during the drive seq([ wait_until_distance(35), # Wait until we've traveled 35cm Defs.pom_grab.closed(), # Close grabber Defs.pom_arm.up(), # Lift arm ]), # Track 3: Put arm back down after passing edge seq([ wait_until_distance(45), Defs.pom_arm.down(), ]), # Track 4: Pre-adjust grabber width Defs.pom_grab.slightly_open(), ) ``` Four things happening simultaneously: line following, grabbing objects at the right moment, repositioning the arm, and adjusting the grabber width. 
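As a counter-example to the resource-safety check described above, the following (hypothetical) parallel block would be rejected before execution, because both tracks need the drive system:

```python
# Hypothetical conflict — the framework raises an error before running this.
parallel(
    drive_forward(50),  # track 1 requires the drive system
    turn_right(90),     # track 2 also requires the drive system → conflict
)
```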
## Stop Conditions

Many steps accept a `.until(condition)` clause that controls when the step finishes:

```python
drive_forward(speed=0.8).until(on_black(Defs.front.right))
drive_forward(speed=1.0).until(on_black(Defs.front.right) | after_cm(50))
drive_forward(speed=1.0).until(after_cm(10) + on_black(Defs.front.right))
```

Conditions can be combined with `|` (OR), `&` (AND), `+` (THEN), and grouped with parentheses for complex logic. See **[Stop Conditions]({{< ref "04a-stop-conditions" >}})** for the full reference.

## Control Flow

### Looping

```python
# Repeat a step 5 times
loop_for(drive_forward(10), iterations=5)

# Repeat forever (use inside do_while_active to stop it)
loop_forever(seq([
    drive_forward(10),
    turn_right(90),
]))
```

### Do While Active

Run a task that stops when a reference step finishes:

```python
do_while_active(
    reference_step=drive_forward(100),  # This controls the lifetime
    task=loop_forever(                  # This runs alongside and gets cancelled
        seq([
            Defs.arm.up(),
            wait_for_seconds(0.5),
            Defs.arm.down(),
            wait_for_seconds(0.5),
        ])
    ),
)
```

### Inline Code with `run()`

Execute arbitrary Python code as a step:

```python
seq([
    drive_forward(25),
    run(lambda robot: print("Reached the cone!")),
    Defs.claw.open(),
])
```

`run()` accepts sync or async callables:

```python
async def check_sensor(robot):
    value = robot.defs.front_right_ir.read()
    if value > 2000:
        print("On black line")

seq([
    run(check_sensor),
    drive_forward(10),
])
```

### Deferred Steps

Build a step at runtime based on robot state:

```python
def choose_direction(robot):
    if robot.defs.front_right_ir.read() > 2000:
        return turn_left(90)
    else:
        return turn_right(90)

seq([
    drive_forward(25),
    defer(choose_direction),  # Decides at runtime
    drive_forward(25),
])
```

## Setup Mission Pattern

The setup mission runs before the match starts. Common pattern:

```python
class M00SetupMission(Mission):
    def sequence(self) -> Sequential:
        return seq([
            # Home all servos to known positions
            motor_off(Defs.cone_container_motor),
            Defs.claw.closed(),
            Defs.arm.up(),
            # Run distance calibration
            calibrate(distance_cm=50),
        ])
```

## Mission Registration

Missions are registered in the Robot class. This is also automatically generated for you. **Don't edit this!** To change it, edit the `raccoon.project.yaml` file:

```python
class Robot(GenericRobot):
    # ...
    setup_mission = M00SetupMission()  # Runs before start signal
    missions = [                       # Run in order after start
        M01DriveToConeMission(),
        M02CollectConeMission(),
        M03CollectBotguyMission(),
    ]
    shutdown_mission = M99ShutdownMission()  # Runs when timer expires
```

Missions execute in list order. If mission 1 finishes, mission 2 starts immediately. If the `shutdown_in` timer fires during any mission, that mission is cancelled and the shutdown mission runs.

---
# Wait for Light
URL: /02-programming/algorithms/wait-for-light/

# Wait for Light

In Botball competitions, robots start when a light signal turns on. The robot must detect this light reliably — without false-triggering from ambient changes (people walking past, overhead lighting shifting) and without missing a weak signal. LibSTP's `wait_for_light` step solves this using a 1D Kalman filter for baseline tracking and a multi-phase workflow with a built-in test mode.
The approach is based on the paper [*"Comprehensive Light-Start Methods in Botball"*](https://ecer.pria.at/archive/ecer-2024/papers/Comprehensive_Light-Start_Methods_in_Botball.pdf) by James Gosling, Matthias Rottensteiner, and Alexander Müllner (ECER 2024), which compares several noise reduction techniques and proposes the Kalman filter + downward-facing sensor combination used here. ## The Problem A light sensor returns a raw analog value that varies with ambient conditions. The start lamp adds brightness, which causes the sensor reading to drop (lower value = more light on a typical LDR). The challenge is distinguishing "the lamp turned on" from "the room got slightly brighter" or "someone's shadow moved." Naive approaches fail in predictable ways: - **Fixed threshold** (e.g. "trigger below 2000"): breaks when the room lighting changes between setup and match start, or between venues. - **Manual calibration** (measure dark/light, compute midpoint): works, but requires operator interaction before every run and is sensitive to the time gap between calibration and match start. - **Simple delta detection** (trigger on any sudden change): fires on noise spikes. ## How It Works ### Sensor Mounting The sensor faces **downward** toward the table surface, with no shielding (no black tape, no straw). When the start lamp turns on above the robot, light reflects off the table and reaches the sensor. This downward-facing mount reduces environmental noise by up to 76% compared to a horizontal mount (Gosling et al., 2024), because the sensor's field of view is dominated by the table (a stable reflector) rather than the room. ### Phase 1: Warm-Up (Baseline Establishment) During a configurable warm-up period (default: 1 second), the step reads the sensor at ~200 Hz and feeds every reading into a 1D Kalman filter. The filter tracks the slowly-changing ambient baseline while smoothing out sensor noise. The Kalman filter maintains two values: - **Estimate** — the current best guess of the true ambient reading - **Error** — how uncertain the estimate is Each new sensor reading updates both via the standard Kalman predict/update cycle. During warm-up, the process variance is set relatively high (0.01) so the filter adapts quickly to the current lighting conditions. ### Phase 2: Test Mode After warm-up, the step enters **test mode**. In this mode, the lamp can be turned on and off to verify detection works — but the robot **will not start**. The UI shows "TEST MODE" and counts how many times the lamp was successfully detected. This is critical at competition: you can verify the sensor picks up the lamp signal from its actual position on the starting board before committing to an armed state. If it doesn't trigger, you can adjust the sensor position or the `drop_fraction` parameter without wasting a run. While in test mode, the Kalman filter continues updating the baseline on non-triggering readings, keeping it current. ### Phase 3: Armed Once the operator has confirmed detection works (by pressing the button after at least one successful test trigger), the step transitions to **armed** mode. Now a trigger will actually start the mission. A key detail: when transitioning from test mode to armed, the step sets a **needs_clear gate**. The sensor must first see an above-threshold reading before it will accept triggers. This prevents an instant false start if the lamp happens to still be on from the last test. 
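To make the baseline tracking and trigger check above concrete, here is a minimal, hypothetical sketch — the names, structure, and measurement-variance value are illustrative and do not mirror libstp's actual implementation:

```python
class Baseline1D:
    """Scalar Kalman filter tracking the ambient light reading."""

    def __init__(self, first_reading: float,
                 process_var: float = 0.01,       # high during warm-up
                 measurement_var: float = 25.0):  # placeholder sensor-noise value
        self.estimate = first_reading
        self.error = measurement_var
        self.process_var = process_var
        self.measurement_var = measurement_var

    def update(self, reading: float) -> float:
        self.error += self.process_var                        # predict
        gain = self.error / (self.error + self.measurement_var)
        self.estimate += gain * (reading - self.estimate)     # correct
        self.error *= 1.0 - gain
        return self.estimate


def is_below_threshold(reading: float, baseline: float,
                       drop_fraction: float = 0.15) -> bool:
    # The lamp makes the reading *drop* below the tracked baseline.
    return reading < baseline * (1.0 - drop_fraction)
```

A real trigger additionally requires `confirm_count` consecutive below-threshold samples and, once armed, the needs_clear gate described above.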
The Kalman filter's process variance is also reduced (from 0.01 to 0.001) after warm-up, making the baseline more stable — it tracks slow ambient drift but won't chase the lamp signal. ### Trigger Detection A trigger fires when the raw sensor reading drops below `baseline * (1 - drop_fraction)` for `confirm_count` consecutive samples. With the defaults (15% drop, 3 consecutive samples at 200 Hz): - The lamp must cause at least a 15% brightness increase - This must persist for ~15 ms (3 samples) — effectively instant, but rejects single-sample noise spikes ## Parameters | Parameter | Default | Effect | |-----------|---------|--------| | `drop_fraction` | 0.15 | How much the reading must drop below baseline. Lower = more sensitive, higher = safer | | `confirm_count` | 3 | Consecutive below-threshold samples needed. Higher = more noise-resistant, slower response | | `warmup_seconds` | 1.0 | Baseline establishment time. Longer = more stable baseline | | `poll_interval` | 0.005 | Seconds between reads (~200 Hz). Faster = more responsive | ## Usage The framework handles wait-for-light automatically when a `wait_for_light_sensor` is defined in your hardware. It runs as part of the pre-start gate — after the setup mission, before the first main mission. For manual use: ```python # Default — works for most setups wait_for_light(Defs.wait_for_light_sensor) # More sensitive for a weak lamp signal wait_for_light(Defs.wait_for_light_sensor, drop_fraction=0.10, confirm_count=2) ``` ### Excluding from Calibration The wait-for-light sensor is typically not an IR line sensor — it doesn't need black/white calibration. Exclude it from the standard calibration step: ```python calibrate(distance_cm=50, exclude_ir_sensors=[ Defs.wait_for_light_sensor, ]) ``` ## Legacy Method If the automatic method doesn't work for your setup (e.g. the sensor isn't mounted downward, or the lamp signal is too weak for flank detection), there's a manual fallback: ```python wait_for_light_legacy(Defs.wait_for_light_sensor) ``` This runs the traditional two-step flow: the operator measures the sensor with the lamp off, then with it on, confirms the midpoint threshold, and the robot waits for the reading to cross it. It works, but requires manual interaction before every run. --- # Settings URL: /01-botui/03-settings/ # Settings This page provides access to basic system and display settings. ![IMG: Homepage for settings](/images/botui/settings/botui-settings.png) ## Function Tiles ### Wi-Fi The Wi-Fi settings provide three different operating modes: **Wi-Fi Client Mode (1)** Allows the device to connect to existing wireless networks. New networks can be added manually. Once connected, networks are saved automatically, so the password does not need to be entered again. If a saved network is nearby and the robot has no internet connection, it will automatically reconnect to that network. > **Note:** A USB keyboard must be plugged into the Wombat to type the WiFi password. Once connected, tap **Device Info** on the WiFi page to see the assigned IP address — you will need this to connect raccoon-cli from your laptop. **Troubleshooting:** If the robot connects successfully (status turns green) but your laptop cannot reach it, reboot the Wombat. A known network manager bug occasionally requires this. **Hotspot Mode (2)** Allows the robot to create its own wireless hotspot. 
The following settings can be configured: - **Network name (SSID)** (*default*: auto-generated) - **Password** (*default*: auto-generated) - **Wi-Fi band** (5 GHz, 2.4 GHz or Auto, *default*: Auto) - **Security type** (WPA3 Personal, WPA2 Personal or Open, *default*: WPA3 Personal) - **Hidden** (*default*: off) The hotspot status is also displayed, showing whether it is **inactive** or **active**. When active, the assigned IP address is shown. ![IMG: Example of using default values to create a new hotspot](/images/botui/settings/wifi-hotspot.png) **LAN Only Mode (3)** Used when the device is connected via an Ethernet (LAN) cable. In this mode, information about the currently connected network is displayed. All three modes provide a dedicated page that displays detailed network information such as: - **IP address** - **SSID** - **Encryption type** - **MAC address** ![IMG: Device info example screen](/images/botui/settings/wifi-device-info.png) ### Power This section contains options for controlling the device's power state. - **Shutdown**: Powers off the device completely. - **Reboot**: Restarts the device. ### Calibrate This option starts the touchscreen calibration process. A Wombat icon appears in the corners and in the center of the screen. The user must tap each one as accurately as possible. The more precise the taps are, the more accurate the screen calibration will be for user input. ### Rotate Changes the screen orientation. The display can be rotated in 90° increments (90°, 180°, 270°, and 360°). ![IMG: Screen of the rotation options](/images/botui/settings/rotate-screen.png) ### Hide UI Hides the current user interface and displays the system logs. ### Status The status page provides an overview of different **systemd services**. Each service has its own page where users can **start**, **restart**, **view logs**, and **enable** the service. The current **service status** is also displayed. ![IMG: Example status page of the flutter-ui.service](/images/botui/settings/systemd-status.png) ## Toggles - **Screensaver**: Enables or disables the screensaver. --- # Data Pipeline URL: /06-firmware/data-pipeline/ This page traces a complete data path from a physical sensor reading to a value accessed in user Python code, and a complete command path from Python code to motor movement. ## Sensor Data Path (STM32 → Python) ``` Physical sensor │ ▼ STM32 ADC / GPIO / IMU │ (interrupt-driven, µs latency) ▼ txBuffer (volatile struct in STM32 RAM) │ (updated after each measurement cycle) ▼ SPI2 DMA transfer to Raspberry Pi │ (circular DMA, Pi polls at main-loop rate) ▼ stm32-data-reader: SpiReal::readSensorData() │ (spi_update() call in C, copies RxBuf → TxBuf, │ returns SensorData struct) ▼ stm32-data-reader: DataPublisher::publishSensorData() │ (serialises fields to LCM messages, publishes │ via raccoon::Transport) ▼ LCM UDP multicast (localhost) │ ▼ libstp: LcmReader (background thread in C++ process) │ (raccoon::Transport::spinOnce, updates caches) ▼ LcmReader caches (mutex-protected std::unordered_map │ / scalar fields) ▼ Python binding call (e.g., motor.get_position(), analog.read(), imu.heading()) ``` ### Step 1: STM32 Hardware Layer **Analog sensors and battery:** TIM6 ISR fires every 1 µs and counts to `ANALOG_SENSOR_SAMPLING_INTERVAL` (by default configurable). When the interval elapses, `sampleAnalogPorts()` starts ADC1 DMA. On completion, `HAL_ADC_ConvCpltCallback` sets `NEW_DATA`. The main loop then copies the 7 ADC values into `txBuffer`. 
**Digital sensors:** `readDigitalInputs()` is called directly from `HAL_SPI_TxRxCpltCallback` and the result written into `txBuffer.digitalSensors`. Digital sensors are thus updated on every single SPI transaction. **BEMF and motor state:** TIM6 ISR fires `stop_motors_for_bemf_conv()` every 5 ms. After 500 µs settle time, ADC2 DMA conversion runs. On ADC2 completion, BEMF is processed and position is accumulated. The motor control loop re-applies the PWM command. The main loop's `updatingMotorsInSpiBuffer()` copies `motor_data` into `txBuffer.motor` after checking the SPI bus is not busy. **IMU:** Main loop calls `readImu()`. When the MPL has new fused data, `txBuffer.imu` is updated. ### Step 2: SPI Transfer The Pi initiates SPI transactions by calling `spi_update()` in the C SPI layer (in `stm32-data-reader/src/wombat/hardware/Spi.cpp`). This performs a synchronous `ioctl(SPI_IOC_MESSAGE)` on `/dev/spidev`. The Pi sends the current `RxBuffer` (commands) while simultaneously receiving the STM32's `TxBuffer` (sensor data) in a single full-duplex transfer. The transfer length is always `BUFFER_LENGTH_DUPLEX_COMMUNICATION` — the larger of the two struct sizes. Both ends pad with whatever was last in the struct if the other side's struct is smaller. The Pi's `stm32-data-reader` calls `spi_update()` on every main-loop iteration. The default `mainLoopDelay` is a very short sleep to avoid 100% CPU usage while still polling as fast as possible. ### Step 3: stm32-data-reader → LCM `SpiReal::readSensorData()` unpacks the raw `TxBuffer` into a C++ `SensorData` struct with named fields. This is where unit conversions happen: - Battery voltage is converted from ADC counts to volts using the known 11× resistor divider and 3.3 V reference, then filtered with a 5% EMA. - Analog sensor values are passed as raw `int16_t` counts. No physical conversion is applied in the SPI layer. - Motor positions are adjusted by per-motor software offsets (`positionOffsets_[port]`) to implement the "reset position counter" feature without writing to the STM32. `DataPublisher::publishSensorData()` then serialises each field as an LCM message on the raccoon transport. Key topics: | LCM channel | Content | |---|---| | `libstp/gyro/value` | `vector3f_t` (rad/s) | | `libstp/accel/value` | `vector3f_t` (m/s²) | | `libstp/linear_accel/value` | `vector3f_t` (m/s², gravity removed) | | `libstp/mag/value` | `vector3f_t` (raw counts) | | `libstp/imu/quaternion` | `quaternion_t` (w, x, y, z) | | `libstp/imu/heading` | `scalar_f_t` (degrees, retained) | | `libstp/imu/temp/value` | `scalar_f_t` (°C) | | `libstp/battery/voltage` | `scalar_f_t` (volts, retained) | | `libstp/analog/N/value` | `scalar_i32_t` (raw ADC counts) | | `libstp/digital/N/value` | `scalar_i32_t` (0 or 1) | | `libstp/bemf/N/value` | `scalar_i32_t` (filtered ticks, retained) | | `libstp/motor/N/position` | `scalar_i32_t` (accumulated ticks, retained) | | `libstp/motor/N/done` | `scalar_i32_t` (0 or 1, retained) | | `libstp/odometry/pos_x` | `scalar_f_t` (meters, retained) | | `libstp/odometry/pos_y` | `scalar_f_t` (meters, retained) | | `libstp/odometry/heading` | `scalar_f_t` (radians, retained) | "Retained" channels use `publishRetained()`, which stores the last value in the raccoon retain store. A new subscriber that calls `subscribeWithRetain` immediately receives the last published value without waiting for the next SPI cycle. ### Step 4: raccoon-transport The raccoon-transport is a thin C++ wrapper around the LCM library. LCM uses UDP multicast on localhost. 
The `Transport::spinOnce(0)` call pumps pending messages with zero timeout. Messages are serialised using LCM's code-generated (de)serialisers — each message type (scalar, vector3f, quaternion) has a corresponding `.lcm` schema file in `raccoon-transport/messages/types/`. ### Step 5: libstp HAL `LcmReader` runs as a singleton with a background `listenLoop` thread. This thread calls `transport_.spinOnce(0)` continuously and updates mutex-protected caches. Each HAL class (Motor, Analog, Digital, IMU) calls the appropriate `LcmReader::read*()` method which takes the lock and returns the cached value. This means sensor reads in user code are non-blocking and always return the most recently observed value. ## Command Path (Python → STM32) ``` Python: motor.set_speed(50) │ ▼ libstp::hal::motor::Motor::setSpeed() │ (maps percent to signed duty, applies inversion) ▼ LcmDataWriter::setMotor(port, duty) │ (publishes scalar_i32_t to libstp/motor/N/power_cmd) ▼ LCM UDP multicast (localhost) │ ▼ stm32-data-reader: CommandSubscriber::onMotorPowerCommand() │ (maps percent → duty: duty = percent × 4) ▼ DeviceController::setMotorPwm() │ ▼ SpiReal::setMotorPwm() │ (calls C API: set_motor_pwm(port, duty)) ▼ C SPI layer: updates rxBuffer in Pi's memory │ ▼ Next spi_update() call: Pi sends updated rxBuffer to STM32 │ (full-duplex SPI transaction) ▼ STM32: rxBuffer updated by DMA │ (SPI completion callback fires) ▼ HAL_SPI_TxRxCpltCallback validates version and parity │ ▼ Next BEMF cycle (≤5 ms): update_motor() reads motorControlMode │ and motorTarget, calls motor_setDutycycle() ▼ TIM1/TIM8 compare register updated │ ▼ Motor PWM changes ``` ### Reliable Commands Commands that must not be lost (position targets, PID configuration, servo mode changes, shutdown) are sent with `reliableOpts` in the LCM publish call. The raccoon transport's `ReliablePublisher` layer wraps these in an envelope with a sequence number and retransmits until an ACK is received. The `stm32-data-reader`'s `CommandSubscriber` uses `reliableOpts` when subscribing to ensure it participates in the ACK protocol. Continuous commands (motor power percentage, motor velocity) are sent without reliable delivery to avoid queuing stale commands. If a power command is dropped, the next iteration of the user's control loop sends a fresh one. ### Timestamp-Based Deduplication The `CommandSubscriber` tracks the timestamp of the most recently applied command per channel. If a message arrives with a timestamp older than the last applied message (possible if LCM delivers messages out of order), it is silently dropped. This is the `isTimestampNewer()` check. ## Timing Budget | Stage | Typical latency | |---|---| | Sensor measurement to txBuffer | 0–5 ms (BEMF-dominated) | | SPI transfer | < 1 ms | | stm32-data-reader main loop | < 1 ms | | LCM UDP delivery | < 1 ms on localhost | | LcmReader cache update | < 0.1 ms | | Python read call | < 0.1 ms (non-blocking) | | **Total sensor→Python** | **< 10 ms** | Command latency follows the same path in reverse; the dominant factor is again the BEMF cycle (up to 5 ms) from when the STM32 receives the new command to when the motor output actually changes. --- # Step Library (Right) URL: /03-web-ide/04-step-library/ ## Step Library (Right) A searchable panel listing all steps available from libstp. Steps are grouped by category (motion, sensors, calibration, etc.). 
- **Filter** by typing in the search box at the top - **Drag** any step onto the flowchart canvas to add it to the mission > **Note:** The step library requires the device to be online. When offline, it shows "No cached steps yet." Connect to the robot first, then use **Re-index now** in Settings → Project to populate the library. --- # update URL: /04-raccoon-cli/04-update/ # raccoon update ```bash raccoon update ``` Updates raccoon on both your laptop and the robot to the latest release. ## What it does 1. Checks the current version on your laptop and on the connected robot 2. Fetches the latest release from GitHub 3. Downloads and installs updated packages on both sides automatically ## Requirements The [GitHub CLI (`gh`)](https://cli.github.com/) must be installed and authenticated, as the raccoon repository is currently private. Install `gh`: ```bash # macOS brew install gh # Ubuntu / Debian sudo apt install gh ``` Authenticate: ```bash gh auth login ``` ## When to run - After first connecting to a new robot (to ensure both sides match) - When you see a version mismatch warning from `raccoon run` - Before a competition to make sure you're on the latest stable release --- # Wall Alignment URL: /02-programming/algorithms/wall-alignment/ # Wall Alignment Wall alignment drives the robot into a physical boundary — a wall, a pipe, a divider panel — and uses the IMU's gravity-compensated linear acceleration to detect the moment of impact. Instead of stopping the instant contact is made, the robot keeps pushing for a short settle period, which lets the chassis rotate flush against the surface. The result is a reliable positional reset that requires no sensors, no encoders, and no prior knowledge of the robot's exact heading. This is one of the most robust re-localization techniques in Botball. Odometry drifts over a full mission, but a wall always stays where it is. Driving into a wall and resetting the heading afterward cancels accumulated heading error completely. ## Quick Start ```python # Align the front of the robot against a wall ahead wall_align_forward() # Align the back of the robot, using a lower threshold for a softer impact wall_align_backward(accel_threshold=0.3) # Align the left side when strafing (mecanum/omni drive) wall_align_strafe_left(speed=0.4) ``` ## How It Works ### No Heading Correction Unlike normal drive steps, the wall align steps apply constant velocity **without any heading correction**. The robot drives in a straight line, but if it approaches the wall at a small angle, no correction is applied. This is deliberate: heading correction would fight the wall's physical feedback and prevent the chassis from rotating flush. The angular freedom lets the contact force rotate the robot into full surface contact during the settle period. ### IMU-Based Impact Detection The IMU provides gravity-compensated linear acceleration in three axes. During normal driving the horizontal (XY) acceleration magnitude stays low — typically well below 0.5 m/s². When the robot hits a wall, the sudden deceleration spike is clearly visible in the linear acceleration signal. Each control cycle (100 Hz), the step reads the linear acceleration and computes: ``` accel_magnitude = sqrt(ax² + ay²) ``` When `accel_magnitude >= accel_threshold`, a bump has been detected. ### Grace Period When the robot starts moving from rest, the drive motors cause a brief acceleration transient. Without protection, this startup transient could look like a bump and fire a false detection immediately. 
The `grace_period` parameter suppresses detection for a configurable window after the step starts (default: 0.3 s), giving the robot time to reach a stable cruising speed before monitoring begins. ### Settle Period After the first threshold crossing, the step does **not** immediately stop. It continues applying drive velocity for `settle_duration` seconds (default: 0.2 s). During this window: - The robot pushes against the wall, letting friction and contact force rotate the chassis flush. - The step tracks the peak acceleration magnitude across all samples in the window — the true impact peak often arrives 1–2 samples after the threshold crossing. - The IMU heading at impact and the heading at the end of the settle window are both recorded. When the settle window expires, the step stops and populates `bump_result`. ### BumpResult After the step completes, `step.bump_result` holds diagnostic information about the impact: | Field | Description | |-------|-------------| | `accel_magnitude` | Peak XY acceleration magnitude at impact in m/s² | | `impact_angle_deg` | Estimated misalignment angle at impact, computed from the peak accel vector direction. 0° = hit square-on. Positive = wall angled CCW from perpendicular | | `heading_correction_deg` | How many degrees the robot's heading changed during the settle push — the more reliable alignment metric, derived from the IMU heading rather than a single acceleration sample | ### Safety Timeout If no impact is detected before `max_duration` seconds elapse, the step finishes anyway with a warning. This prevents a robot from driving indefinitely when the wall is further away than expected or when the threshold is set too high. ### Direction Variants Four variants cover all standard drive directions. The expected deceleration angle differs per variant, which is used internally to compute `impact_angle_deg`: | Step | Robot motion | Surface contacted | |------|-------------|------------------| | `wall_align_forward` | Forward (+X) | Front | | `wall_align_backward` | Backward (-X) | Rear | | `wall_align_strafe_left` | Strafe left (-Y) | Left side | | `wall_align_strafe_right` | Strafe right (+Y) | Right side | The strafe variants require a mecanum or omni drivetrain capable of lateral motion. ## Parameters All four variants share the same parameter set: | Parameter | Type | Default (forward/backward) | Default (strafe) | Description | |-----------|------|:--------------------------:|:----------------:|-------------| | `speed` | float | 1.0 | 0.5 | Drive speed in m/s | | `accel_threshold` | float | 0.5 | 0.5 | Minimum XY linear-acceleration magnitude (m/s²) to classify as a bump | | `settle_duration` | float | 0.2 | 0.2 | Seconds to keep pushing after impact, letting the chassis rotate flush | | `max_duration` | float | 5.0 | 5.0 | Safety timeout in seconds — step finishes even if no bump is detected | | `grace_period` | float | 0.3 | 0.3 | Seconds to ignore acceleration after starting, suppressing the startup transient | ### Choosing accel_threshold The threshold is the most important parameter to tune for a specific situation: - **Default (0.5 m/s²):** Suitable for a hard impact at 1.0 m/s against a rigid wall. The robot decelerates sharply and the spike is clearly above noise. - **Lower (0.15–0.3 m/s²):** Use when approaching slowly, hitting a soft or yielding object (a pipe, a lightweight divider), or when the robot's mass is small. The drumbot missions use 0.15 when aligning against a thin pipe at 0.3 m/s. 
- **Higher (>0.5 m/s²):** Use when driving fast over a rough surface that generates vibration — raising the threshold prevents false triggers from floor bumps. The noise floor of the IMU at rest is typically 0.05–0.1 m/s². Setting `accel_threshold` below 0.15 is not recommended. ### Choosing settle_duration - **0.0–0.1 s:** Effectively stop on contact. Use when the robot must not push an object, or when the wall contact is already square and no rotation correction is needed. - **0.2 s (default):** Enough time for a slightly skewed robot to rotate flush. Suitable for most walls. - **Longer:** Rarely needed. A longer settle does not improve alignment once the chassis is already flush, and wastes time. ## Usage ### Typical Pattern: Drive Near, Then Align The most common pattern is to drive most of the distance with odometry, then finish with wall alignment: ```python from libstp import * from src.hardware.defs import Defs class M02AlignOnBackWall(Mission): def sequence(self) -> Sequential: return seq([ turn_right(90), # Drive close to the back wall, then let the IMU detect contact wall_align_backward(speed=1.0, accel_threshold=0.4, settle_duration=0.1, max_duration=1.0), # Heading is now reliable — reset reference mark_heading_reference(), # Use the aligned heading to strafe-find a line strafe_right().until(on_black(Defs.rear.right)), strafe_left(speed=0.5).until(on_white(Defs.rear.right)), strafe_right(speed=0.3).until(on_black(Defs.rear.right)), ]) ``` ### Aligning Against a Thin Object When aligning against something narrow like a pipe or post, lower the threshold and the approach speed: ```python # Approach at 0.3 m/s, very sensitive threshold for a thin pipe wall_align_forward(speed=0.3, accel_threshold=0.15, settle_duration=0, max_duration=3, grace_period=0.4) ``` A `settle_duration` of 0 makes sense here — pushing a thin pipe further does not help alignment, and may push the object away. ### Reading the Result After the step, inspect `bump_result` to decide on a recovery action or to log diagnostics: ```python step = wall_align_forward() # ... run mission ... if step.bump_result: print(f"Impact: {step.bump_result.accel_magnitude:.2f} m/s²") print(f"Heading corrected by: {step.bump_result.heading_correction_deg:.1f}°") ``` ## Tips 1. **Always follow wall alignment with `mark_heading_reference()`.** The whole purpose of wall alignment is to give you a known physical reference. Lock that into the odometry heading immediately after. 2. **Approach speed affects threshold.** A faster approach produces a sharper deceleration spike. At 1.0 m/s the default threshold of 0.5 m/s² works well. At 0.3 m/s, drop the threshold to 0.15–0.3. 3. **Rough or uneven floors can trigger false bumps.** If the robot drives over a cable, a mat edge, or an uneven joint in the game field, the vibration may exceed the threshold. Raise `accel_threshold` slightly, or increase `grace_period` if the false trigger happens early in the move. 4. **Strafe alignment is slower than forward/backward alignment.** Mecanum drive is generally less efficient laterally — use the lower default speed (0.5 m/s) and be prepared to tune the threshold if the wheels generate more lateral vibration noise than expected. 5. **Do not use wall alignment as the primary distance mechanism.** Drive most of the distance with odometry steps, then use wall alignment only for the final contact and heading correction. Relying on the timeout to stop rather than an actual bump means the robot's position is undefined. 6. 
**`settle_duration=0` is valid.** For situations where any physical push is undesirable (a fragile target, a sensor array, a light pipe), set `settle_duration=0` to stop immediately on contact. The chassis will not rotate flush, but you get the positional reset without damaging anything. --- # Steps DSL URL: /02-programming/04-steps/ # Steps DSL Steps are the building blocks of every mission. Each step is a single action: drive forward, turn, move a servo, wait for a sensor. You combine steps into sequences and parallel blocks to build complex behaviors. > For a complete, always up-to-date list of every available step function and its parameters, see the **[API Reference]({{< ref "05-api-reference" >}})** — specifically the DSL Steps section. This page explains the concepts and patterns behind the DSL, not every individual function. ## How the DSL Works You never instantiate step classes directly. Instead, you call **factory functions** that return **builder objects**: ```python # You write this: drive_forward(25, speed=0.8).until(on_black(Defs.front.right)) # NOT this: DriveForward(cm=25, speed=0.8, until=OnBlack(Defs.front.right)) ``` The underlying classes (`DriveForward`, `TurnLeft`, etc.) are hidden. You interact with clean `snake_case` factory functions. ### The Builder Pattern Every factory function returns a builder — an object that you can configure with chained method calls before it executes: ```python # Simple — just parameters drive_forward(25) # Chained — add a stop condition drive_forward(speed=0.8).until(on_black(Defs.front.right)) # Multiple chains drive_forward(25).on_anomaly(lambda step, dt: print(f"Slow: {dt}s")).skip_timing() ``` Builders are steps themselves — you can place them directly into `seq([...])` without calling `.build()`. When the mission runs, the builder constructs the real step and executes it. ### How It's Built: Annotations and Code Generation Under the hood, the DSL is powered by two decorators and a code generator: ```mermaid graph LR A["@dsl_step on class
DriveForward"] -->|"codegen scans AST"| B["Generated Builder
DriveForwardBuilder"] B --> C["Generated Factory
drive_forward()"] C -->|"user calls"| D["Builder instance
with fluent setters"] D -->|"seq() executes"| E["Real DriveForward
step runs"] style A fill:#AB47BC,color:#fff style B fill:#42A5F5,color:#fff style C fill:#4CAF50,color:#fff style D fill:#66BB6A,color:#fff style E fill:#FF7043,color:#fff ``` **1. `@dsl_step`** — Marks a step class for code generation. Applied to the internal class: ```python @dsl_step(tags=["motion", "drive"]) class DriveForward(MotionStep): def __init__(self, cm: float = None, speed: float = 1.0, until: StopCondition = None): ... ``` The decorator marks the class as hidden from the DSL discovery system (e.g., the API reference listing) and registers it for the code generator. The class still exists — you can find it if you search for it — but you're expected to use the generated factory function instead. **2. Code generator** — At build time, scans all `@dsl_step` classes via Python's AST. For each one, it generates: - A **Builder class** (`DriveForwardBuilder`) with fluent setter methods for every `__init__` parameter (`.cm()`, `.speed()`, `.until()`) - A **factory function** (`drive_forward()`) with the same signature as the original `__init__`, decorated with `@dsl` for metadata **3. `@dsl`** — Applied to the generated factory function. Attaches metadata (name, tags) used by the API reference and BotUI to discover and display available steps. The generated files are named `*_dsl.py` and sit alongside the original source. You never edit them — they're regenerated on every build. ### Available Builder Methods Every builder inherits these methods from `StepBuilder`: | Method | What It Does | |--------|-------------| | `.until(condition)` | Add a stop condition (motion steps) | | `.on_anomaly(callback)` | Register a timing anomaly watchdog | | `.skip_timing()` | Exclude from timing instrumentation | | Per-parameter setters | One for each `__init__` parameter (e.g., `.cm()`, `.speed()`) | ## Stop Conditions Many steps accept a `.until(condition)` clause that controls when the step finishes. Conditions can be combined with `|` (OR), `&` (AND), and `+` (THEN), and grouped with parentheses for complex logic: ```python drive_forward(speed=0.8).until(on_black(Defs.front.right)) drive_forward(speed=1.0).until(on_black(Defs.front.right) | after_cm(50)) drive_forward(speed=1.0).until( after_cm(5) + (on_black(Defs.front.left) & on_black(Defs.front.right)) ) ``` See **[Stop Conditions]({{< ref "04a-stop-conditions" >}})** for the full reference — all available conditions, operators, parenthesized grouping, and common patterns. For the full, always up-to-date list of every available step function and its parameters, see the **[API Reference]({{< ref "05-api-reference" >}})** — specifically the DSL Steps section. --- # Build and Flash URL: /06-firmware/build-flash/ ## Toolchain The firmware is compiled with the **ARM Embedded GCC** toolchain targeting the Cortex-M4F with hardware floating-point: | Tool | Package (Debian/Ubuntu) | |---|---| | Compiler | `arm-none-eabi-gcc` | | Assembler | `arm-none-eabi-as` | | Linker | `arm-none-eabi-ld` (via gcc) | | Object copy | `arm-none-eabi-objcopy` | | Size reporter | `arm-none-eabi-size` | Install on Ubuntu/Debian: ```bash sudo apt install gcc-arm-none-eabi binutils-arm-none-eabi ``` CMake >= 3.24 is also required. ## Build System The firmware uses CMake. The top-level `CMakeLists.txt` is at `Firmware-Stp/CMakeLists.txt`. It sets the target MCU family (`STM32F427xx`), defines the HSE oscillator frequency (`HSE_VALUE = 24000000` — note: the board uses the internal HSI oscillator, not HSE; the HSE define is legacy), and includes the libraries subdirectory. 
The application sources are compiled by `Firmware-Stp/Firmware/CMakeLists.txt`, which globs all `.c` files from each subdirectory and links against: - `stm32f4xx` — the STM32 HAL library - `mpl_prebuilt` — the InvenSense Motion Processing Library (binary blob) - `motion_driver` — the InvenSense MPU-9250 DMP firmware loader ### Compiler Flags ``` -mcpu=cortex-m4 # Target CPU -mthumb # Thumb-2 instruction set -mfloat-abi=hard # Hardware FPU -mfpu=fpv4-sp-d16 # VFPv4 single-precision -O2 # Optimisation (Release build) -Wall # All warnings -gdwarf-2 # Debug information format ``` The linker script is `Firmware-Stp/linker/STM32F427VITx_FLASH.ld`, which defines the memory regions (flash at 0x08000000, RAM at 0x20000000) and places the vector table at the flash origin. ### Build Steps ```bash cd Firmware-Stp mkdir -p build cd build cmake .. -DCMAKE_BUILD_TYPE=Release make -j$(nproc) ``` The output files are generated in `build/Firmware/`: - `wombat.elf` — ELF binary with debug symbols - `wombat.bin` — flat binary for flashing - `wombat.hex` — Intel HEX format - `wombat.map` — linker map file - `wombat.lss` — extended listing with interleaved source CMake also runs `arm-none-eabi-size -B wombat.elf` to print the flash and RAM usage. ## Flashing ### Via ST-Link (recommended) The Wombat board exposes an SWD (Serial Wire Debug) header connected to an ST-Link programmer or any compatible SWD probe. Use `openocd`: ```bash openocd -f interface/stlink.cfg \ -f target/stm32f4x.cfg \ -c "program build/Firmware/wombat.elf verify reset exit" ``` Or with `st-flash`: ```bash st-flash write build/Firmware/wombat.bin 0x08000000 ``` ### Via DFU (USB, no debugger needed) The STM32F427 has a built-in USB DFU bootloader in system memory. To enter DFU mode, hold BOOT0 high at reset. Then: ```bash sudo apt install dfu-util dfu-util -d 0483:df11 -a 0 -s 0x08000000:leave -D build/Firmware/wombat.bin ``` ### Verifying the Flash After flashing, open a serial terminal on the UART3 pins (PB10/PB11) at 115200 baud. The firmware does not print a startup banner by default, but the `stm32-data-reader` on the Pi can forward UART output to the application log when `uart.enabled = true` is set in its configuration. A running STM32 can also be verified by observing the `txBuffer.updateTime` field via the `stm32-data-reader` log, which prints IMU values every 500 SPI cycles. If the timestamp increments, the STM32 is alive. ## Building stm32-data-reader (Pi-side bridge) The `stm32-data-reader` is a separate CMake project that runs on the Raspberry Pi (aarch64): ```bash cd stm32-data-reader mkdir -p build cd build cmake .. -DCMAKE_BUILD_TYPE=Release make -j$(nproc) ``` It depends on: - **LCM** (Lightweight Communications and Marshalling) — the underlying transport library - **raccoon-transport** — the message schema library (included as a subdirectory or installed) - **spdlog** — logging The build produces a single binary `stm32-data-reader` which should be run as a daemon on the robot: ```bash ./stm32-data-reader ``` A mock SPI mode is available for development without hardware: ```bash cmake .. -DUSE_SPI_MOCK=ON ``` The mock mode generates synthetic sensor data and does not require `/dev/spidev`. ## Interrupt Priority Table Understanding the interrupt priority hierarchy is important for debugging timing issues. Higher preempt priority = lower number = can interrupt a lower-priority ISR. 
| ISR | Preempt | Sub | Purpose | |---|---|---|---| | SPI2 | 0 | 0 | Pi SPI completion (safety-critical) | | DMA1 Stream 3 (SPI2 RX) | 0 | 1 | SPI2 DMA RX | | DMA1 Stream 4 (SPI2 TX) | 0 | 2 | SPI2 DMA TX | | DMA1 Stream 0 (SPI3 RX) | 1 | 1 | IMU SPI3 DMA RX | | SPI3 | 1 | 0 | IMU SPI completion | | DMA1 Stream 5 (SPI3 TX) | 1 | 2 | IMU SPI3 DMA TX | | ADC (ADC2) | 2 | 0 | BEMF ADC completion | | ADC (ADC2) | 2 | 3 | BEMF DMA completion | | ADC (ADC1) | 2 | 4 | Analog sensor completion | | DMA2 Stream 0 (ADC1) | 2 | 1 | Analog sensor DMA | | DMA2 Stream 2 (ADC2) | 2 | 0 | BEMF DMA | | TIM6 | 3 | 0 | 1 µs system tick + scheduling | The SPI2 completion ISR runs at the highest priority to ensure that new Pi commands are processed with minimum latency and the shutdown flag is enforced immediately. The BEMF ADC runs at lower priority so the SPI ISR can always preempt a BEMF processing routine if a new Pi command arrives during BEMF sampling. ## Modifying the Firmware ### Adding a New Sensor 1. Add ADC channel configuration in `adcInit.c` (for analog sensors) or GPIO initialisation in `gpio.c` (for digital sensors). 2. Add reading/processing logic in the appropriate `Sensors/` file. 3. Add a field to `TxBuffer` in `pi_buffer_struct.h` (STM32 side) and the corresponding field in `stm32-data-reader/shared/spi/pi_buffer.h` (shared header). Both must stay in sync. 4. Populate the new field before the main loop calls `updatingAnalogValuesInSpiBuffer()`. 5. In `stm32-data-reader`, unpack the field in `SpiReal::readSensorData()` and add a publish call in `DataPublisher`. 6. Increment `TRANSFER_VERSION` in both `communication_with_pi.h` and `pi_buffer.h` so stale Pi processes are rejected. ### Changing PID Gains Default gains are in `pid.c`: ```c #define PID_DEFAULT_P 1.22f #define PID_DEFAULT_I 0.045f #define PID_DEFAULT_D 0.000f ``` These are applied at startup. They can be overridden at runtime by sending a `motorPidSettings` block via the SPI protocol without reflashing. From libstp Python: ```python motor.set_pid(kp=1.5, ki=0.05, kd=0.0) ``` This publishes to `libstp/motor/N/pid_cmd`, which `CommandSubscriber` picks up and forwards to the STM32 via the SPI `updates` bitmask. ### Changing BEMF Timing The BEMF cycle parameters are in `bemf.h`: ```c #define BEMF_SAMPLING_INTERVAL 5000 // µs #define BEMF_CONVERSION_START_DELAY_TIME 500 // µs ``` Reducing `BEMF_SAMPLING_INTERVAL` increases the position tracking update rate but also increases the fraction of time the motors are switched off for sampling, which reduces maximum effective torque. Reducing `BEMF_CONVERSION_START_DELAY_TIME` risks reading residual PWM switching noise rather than the true back-EMF. Both parameters must be chosen together for the specific H-bridge hardware used on the Wombat. --- # Settings Modal URL: /03-web-ide/05-settings-modal/ ## Settings Modal Open with the ⚙ gear icon in the top-left of the toolbar. Has five tabs: ### Project Tab ![Settings - Project tab](/images/web-ide/webide-settings-project.png) | Setting | Description | |---------|-------------| | **Orientation** | Flowchart direction: **Vertical** (top-to-bottom) or **Horizontal** (left-to-right) | | **Auto Layout** | Automatically arrange nodes on the canvas | | **Step Indexing** | Cache libstp steps from the connected device. Click **Re-index now** to refresh after updating raccoon. | ### Robot Tab ![Settings - Robot tab](/images/web-ide/webide-settings-robot.png) This is the most important tab for correct robot behaviour. 
Configure it before running any missions that use sensors or precise movement. #### Sensors If your robot has IR sensors, analog sensors, or other positional sensors defined in `config/hardware.yml`, you must place them on the virtual robot diagram here. 1. Select a sensor from the list on the left 2. The right side shows a top-down view of your robot (dimensions in cm) 3. Drag the sensor icon to match its physical position on the real robot 4. Set any clearance values if required Getting this wrong will cause lineup and line-following steps to behave incorrectly. #### Rotation Center The point your robot physically rotates around — for a differential drive robot, this is the midpoint between the two driven wheels. The default is `7.5 × 7.5 cm` (center of a 15×15 cm robot). Adjust to match your robot's actual rotation center. Getting this wrong will cause `turn_left()` / `turn_right()` to produce incorrect arcs. #### Drivetrain | Field | Description | |-------|-------------| | **Type** | `Differential` or `Mecanum` | | **Track width** | Distance between left and right wheels (cm) | | **Wheel radius** | Radius of the driven wheels (cm) | These mirror the values in `config/robot.yml` and are kept in sync. ### Map Tab ![Settings - Map tab](/images/web-ide/webide-settings-map.png) A canvas editor for the competition table (200×100 cm by default). - **Draw lines**: Select line type (Black line, White line, etc.) and draw directly on the canvas — these represent the lines your robot will follow or line up on - **Load Map**: Upload a background image (photo of the actual table) - **Units**: Switch between cm and mm - Zoom controls for precision editing The lines you draw here are used by the Table Visualization panel to show your robot's planned path relative to the table layout. ### Start Tab ![Settings - Start tab](/images/web-ide/webide-settings-start.png) Sets the robot's starting position and orientation on the table. | Field | Description | |-------|-------------| | **X (cm)** | Starting X position from left edge of table | | **Y (cm)** | Starting Y position from bottom edge of table | | **Rotation (deg)** | Starting heading in degrees | You can also **click directly on the table canvas** to place the robot visually. ### Keybindings Tab Assign keyboard shortcuts to frequently used steps. Requires a modifier key (Ctrl, Alt, Cmd) or a function key (F1–F12). Steps you've used recently appear under **Recent Steps** for quick binding. --- # Custom Steps URL: /02-programming/05-custom-steps/ # Custom Steps When the built-in steps don't cover what you need, you can create your own reusable steps. There are two approaches: **function-based** (simple, recommended) and **class-based** (for complex behavior). ## Function-Based Custom Steps (Recommended) The simplest way to create a reusable step: write a function that returns a composition of existing steps. 
### Example: Lower a Container Motor From the Ecer2026 ConeBot — a motor-driven container that moves down until a limit switch is hit: ```python # src/steps/cone_container_steps.py from libstp import * from src.hardware.defs import Defs def down_cone_container() -> Sequential: return seq([ set_motor_velocity(Defs.cone_container_motor, -100), wait_for_digital(Defs.cone_arm_down_button), motor_passive_brake(Defs.cone_container_motor), ]) ``` Use it in a mission like any other step: ```python from src.steps.cone_container_steps import down_cone_container class M02CollectMission(Mission): def sequence(self) -> Sequential: return seq([ drive_forward(20), down_cone_container(), # Your custom step drive_backward(10), ]) ``` ### Example: Grab and Lift A reusable step that grabs an object and lifts it: ```python def grab_and_lift(delay: float = 0.2) -> Sequential: # you can use args just like in a function return seq([ Defs.claw.open(), Defs.arm.down(), wait_for_seconds(delay), # Wait for arm to settle Defs.claw.closed(), wait_for_seconds(delay), # Wait for claw to grip Defs.arm.up(), ]) def drop_object() -> Sequential: return seq([ Defs.arm.down(), Defs.claw.open(), wait_for_seconds(0.3), Defs.arm.up(), ]) ``` > **Remember:** Passing a number to a `ServoPreset` method like `Defs.claw.closed(120)` sets the servo *speed* in degrees per second — it does **not** add a delay. Use `wait_for_seconds()` when you need to pause between actions. ### Example: Wall-Align-and-Mark Pattern A common competition pattern — align against a wall and mark the heading as reference: ```python def align_and_mark() -> Sequential: return seq([ wall_align_backward(accel_threshold=0.3), mark_heading_reference(), ]) ``` ## Class-Based Custom Steps For steps that need custom logic, extend the `Step` base class directly. > **Never use threads or `time.sleep()` in steps.** The SDK is single-threaded by design and not thread-safe. Always use `await asyncio.sleep()` to wait. See [Advanced Topics]({{< ref "11-advanced" >}}) for why this matters. ### The Step Interface Add `@dsl_step` to your class so the code generator creates a factory function and builder for it (see [Steps DSL]({{< ref "04-steps" >}}) for how this works): ```python from libstp import Step, GenericRobot from libstp.step.annotation import dsl_step @dsl_step(tags=["custom"]) class MyCustomStep(Step): async def _execute_step(self, robot: GenericRobot) -> None: """Called once when the step runs. Must be async.""" # Your logic here pass def required_resources(self) -> frozenset[str]: """Return hardware resources this step uses (for parallel safety).""" return frozenset() # e.g., frozenset({"drive", "servo:0"}) ``` The `@dsl_step` decorator generates: - `MyCustomStepBuilder` — a builder class with fluent setters for every `__init__` parameter - `my_custom_step()` — a `snake_case` factory function you call in missions If you don't need a generated builder (e.g., the step has no parameters), you can use `@dsl` instead to just register it for discovery without code generation: ```python from libstp.step.annotation import dsl @dsl(tags=["custom"]) class SimpleStep(Step): async def _execute_step(self, robot: GenericRobot) -> None: # ... 
pass ``` ### Example: Wait Until Sensor Threshold ```python import asyncio from libstp import Step, GenericRobot, AnalogSensor from libstp.step.annotation import dsl_step @dsl_step(tags=["sensor", "wait"]) class WaitForAnalogRange(Step): """Wait until an analog sensor reads within a specified range.""" def __init__(self, sensor: AnalogSensor, min_value: float, max_value: float, poll_hz: int = 50) -> None: super().__init__() self.sensor = sensor self.min_value = min_value self.max_value = max_value self.poll_interval: float = 1.0 / poll_hz async def _execute_step(self, robot: GenericRobot) -> None: while True: value = self.sensor.read() if self.min_value <= value <= self.max_value: return await asyncio.sleep(self.poll_interval) ``` Because of `@dsl_step`, the code generator produces a `wait_for_analog_range()` factory function. You use it like any built-in step: ```python # Generated factory function — same parameters as __init__ seq([ wait_for_analog_range(Defs.light_sensor, min_value=1000, max_value=2000), drive_forward(25), ]) # Or use the builder pattern for chaining wait_for_analog_range(Defs.light_sensor, min_value=1000, max_value=2000).skip_timing() ``` ### Example: Custom Motion Step (100 Hz Loop) For steps that need a tight control loop, use `MotionStep` as your base. It provides a fixed-rate 100 Hz update loop: ```python from libstp import GenericRobot from libstp.step.annotation import dsl_step from libstp.step.motion.motion_step import MotionStep @dsl_step(tags=["motion", "custom"]) class DriveUntilBump(MotionStep): """Drive forward until the IMU detects a collision.""" def __init__(self, speed: float = 0.3, accel_threshold: float = 0.5) -> None: super().__init__() self.speed = speed self.accel_threshold = accel_threshold def on_start(self, robot: GenericRobot) -> None: """Called once before the loop starts.""" self.drive = robot.drive def on_update(self, robot: GenericRobot, dt: float) -> bool: """Called every 10ms. Return True when done.""" self.drive.set_desired_velocity(self.speed, 0, 0) accel = abs(robot.defs.imu.get_accel()[0]) if accel > self.accel_threshold: return True # Done — hit something return False # Keep going def on_stop(self, robot: GenericRobot) -> None: """Called after the loop ends. Default: hard stop.""" robot.drive.hard_stop() def required_resources(self) -> frozenset[str]: return frozenset({"drive"}) ``` This generates `drive_until_bump(speed=0.3, accel_threshold=0.5)` with `.speed()` and `.accel_threshold()` builder methods. ## Resource Declarations When you create class-based steps, declare what hardware resources they use. This enables the framework to validate parallel blocks — if two steps use the same resource, they can't run in parallel. Resource format: `"<resource>"` or `"<resource>:<port>"` ```python def required_resources(self): return frozenset({ "drive", # Uses the drive system "motor:2", # Uses motor port 2 "servo:0", # Uses servo port 0 "servo:*", # Uses ALL servos (wildcard) }) ``` ## Organizing Custom Steps Keep custom steps in `src/steps/`: ``` src/ ├── steps/ │ ├── arm_steps.py # Arm-related custom steps │ ├── grabber_steps.py # Grabber operations │ └── alignment_steps.py # Custom alignment routines └── missions/ └── m01_mission.py # Imports from steps/ ``` Function-based steps can reference `Defs` directly (they're coupled to your robot's hardware).
If you want portable steps that work across robots, accept hardware as parameters: ```python # Portable: works with any servo from libstp import ServoPreset def grab_with(claw_preset: ServoPreset, arm_preset: ServoPreset) -> Sequential: return seq([ claw_preset.open(), arm_preset.down(), wait_for_seconds(0.2), claw_preset.closed(), arm_preset.up(), ]) # Usage: grab_with(Defs.claw, Defs.arm) ``` --- # Stop Conditions URL: /02-programming/04a-stop-conditions/ # Stop Conditions Stop conditions control *when* a step finishes. Many steps (especially motion steps) can run indefinitely — `drive_forward(speed=0.8)` drives forever unless you tell it when to stop. That's what `.until()` does. ```python drive_forward(speed=0.8).until(on_black(Defs.front.right)) ``` ## Basic Conditions | Condition | What It Checks | |-----------|---------------| | `on_black(sensor, threshold=0.7)` | IR sensor reads black | | `on_white(sensor, threshold=0.7)` | IR sensor reads white | | `over_line(sensor)` | Sensor crosses a line (black then white) — shorthand for `on_black(sensor) + on_white(sensor)` | | `after_seconds(s)` | Fixed time elapsed | | `after_cm(cm)` | Distance traveled (via odometry) | | `after_degrees(deg)` | Heading changed by N degrees (via IMU) | | `on_digital(sensor, pressed=True)` | Digital sensor state | | `on_analog_above(sensor, threshold)` | Analog reading above value | | `on_analog_below(sensor, threshold)` | Analog reading below value | | `stall_detected(motor, threshold_tps=10, duration=0.25)` | Motor is stalling | | `custom(fn)` | Your own lambda/function | ## Combining Conditions Conditions can be combined with three operators: | Operator | Meaning | Triggers when... | |----------|---------|-----------------| | `A \| B` | OR | Either A or B is true | | `A & B` | AND | Both A and B are true | | `A + B` | THEN | A becomes true, then B becomes true after that | ### OR — Either Condition ```python # Stop when sensor sees black OR after 50 cm (whichever comes first) drive_forward(speed=1.0).until( on_black(Defs.front.right) | after_cm(50) ) ``` Use OR as a safety fallback: "stop at the line, but if we miss it, stop after 50 cm anyway." ### AND — Both Conditions ```python # Stop when sensor sees black AND at least 10 cm traveled drive_forward(speed=1.0).until( on_black(Defs.front.right) & after_cm(10) ) ``` Use AND to prevent false triggers: "don't count a black reading until we've driven at least 10 cm." This is critical when the robot starts on or near a line. ### THEN — Sequential Conditions ```python # First wait 10 cm, THEN start checking for black drive_forward(speed=1.0).until( after_cm(10) + on_black(Defs.front.right) ) ``` THEN is like AND but with a strict ordering: the first condition must become true before the second one is even checked. This is different from AND — with AND, both are checked simultaneously from the start; with THEN, the second condition is completely ignored until the first fires. > **`>` is deprecated — use `+` instead.** The `>` operator works for two-element chains (`A > B`), but Python expands longer chains like `a > b > c` into `(a > b) and (b > c)`, which produces incorrect behavior. The `+` operator does the same thing without this problem and works for any number of chained conditions. ### Real Example: Driving Over a Line Imagine the robot needs to drive forward, cross over a black line, and stop at the *next* black line. The THEN operator makes this natural: ```python # Cross a line, then stop at the next one: # 1. First, detect the line (on_black) # 2. 
Then drive 5 cm to pass over it (after_cm) # 3. Then stop at the next black reading (on_black again) drive_forward(speed=0.8).until( on_black(Defs.front.right) # Hit the first line + after_cm(5) # Drive past it + on_black(Defs.front.right) # Stop at the next line ) ``` You can chain `+` multiple times — each condition must fire in sequence before the next one starts checking. ### When to Use THEN vs AND Both can work for "skip the first N cm" scenarios, but they behave differently: ```python # AND: both checked from the start drive_forward(speed=1.0).until( on_black(Defs.front.right) & after_cm(10) ) # If the sensor flickers black at cm 5, it won't stop (AND not satisfied). # If the robot is on a black line at cm 10, it stops immediately. # THEN: second condition is OFF until the first fires drive_forward(speed=1.0).until( after_cm(10) + on_black(Defs.front.right) ) # Black detection is completely disabled for the first 10 cm. # After 10 cm, it starts checking for black. ``` In most cases the result is the same, but THEN is clearer in intent and avoids edge cases where both conditions happen to be true simultaneously. ## Complex Combinations with Parentheses You can nest and group conditions with parentheses to build arbitrarily complex logic: ```python # Drive until: # (black on left OR black on right) AND at least 15 cm traveled drive_forward(speed=0.8).until( (on_black(Defs.front.left) | on_black(Defs.front.right)) & after_cm(15) ) ``` ```python # Drive until: # After 5 cm, THEN (black on sensor OR 3 seconds elapsed) drive_forward(speed=1.0).until( after_cm(5) + (on_black(Defs.front.right) | after_seconds(3)) ) ``` ```python # Drive until: # (after 10 cm AND on black) OR after 60 cm (absolute safety limit) drive_forward(speed=1.0).until( (after_cm(10) & on_black(Defs.front.right)) | after_cm(60) ) ``` ```python # Really complex: # After 5 cm, THEN: # (black on left AND black on right) — both sensors see the line # OR after 100 cm — safety limit drive_forward(speed=0.8).until( after_cm(5) + ( (on_black(Defs.front.left) & on_black(Defs.front.right)) | after_cm(100) ) ) ``` Use parentheses whenever the intent isn't obvious. They make the logic readable and prevent operator precedence surprises. ## Custom Conditions Write your own condition with `custom()`: ```python # Stop when analog reading is in a specific range drive_forward(speed=0.5).until( custom(lambda robot: 1000 < robot.defs.distance_sensor.read() < 2000) ) ``` The lambda receives the robot instance and must return `True` when the condition is met. It's polled at the step's update rate (typically 100 Hz for motion steps). ## Conditions on Different Step Types Stop conditions work on motion steps — they have a continuous 100 Hz update loop that checks the condition every cycle: ```python drive_forward(speed=0.8).until(on_black(Defs.front.right)) strafe_right(speed=0.5).until(on_black(Defs.rear.right)) ``` > **Note:** Motor steps like `set_motor_velocity()` complete instantly — they send the command and return. Adding `.until()` on them won't work as expected because the step is already finished before the condition is ever checked. To run a motor until a condition, use a separate wait step: > > ```python > seq([ > set_motor_velocity(Defs.arm_motor, -100), > wait_for_digital(Defs.arm_limit), > motor_passive_brake(Defs.arm_motor), > ]) > ``` ## Common Patterns ### Line with Safety Limit Always add a distance or time safety limit when driving to a line. 
If the sensor misses the line (dust, ambient light, damage), the robot drives forever: ```python # BAD — drives forever if line is missed drive_forward(speed=1.0).until(on_black(Defs.front.right)) # GOOD — stops after 80 cm even if no line detected drive_forward(speed=1.0).until(on_black(Defs.front.right) | after_cm(80)) ``` ### Skip Starting Line When the robot starts on or near a line, skip it before looking for the next one: ```python # Skip first 10 cm, then look for black drive_forward(speed=1.0).until(after_cm(10) + on_black(Defs.front.right)) ``` ### Dual-Sensor Confirmation Wait for both sensors to see the line (more reliable than a single sensor): ```python drive_forward(speed=0.5).until( on_black(Defs.front.left) & on_black(Defs.front.right) ) ``` ### Stall Detection with Timeout Detect a motor stall but don't wait forever: ```python drive_forward(speed=0.3).until( stall_detected(Defs.front_left_motor) | after_seconds(5) ) ``` --- # Floating Panels URL: /03-web-ide/06-floating-panels/ ## Floating Panels Three floating panels can be toggled on/off from the toolbar. They can be moved and repositioned on the canvas. ### Run Logs Shows live output from the robot while a mission is running. Displays the same log stream as `raccoon run` in the terminal. Shows **IDLE** when no mission is running. ### Table Visualization A live top-down view of the table showing the robot's current position and planned path. Updates in real-time during a run. - **Edit Path** button opens path planning mode — place waypoints on the table and the IDE generates the corresponding steps automatically ### Timing Panel Shows execution time for each step in the last run, both as a list and as a chart. Useful for identifying slow steps and optimising mission timing. --- # Sensors URL: /02-programming/06-sensors/ # Sensors LibSTP supports four sensor types: **IR line sensors**, **digital sensors**, **analog sensors**, and a **camera** interface. All sensors are declared in `defs.py` and accessed throughout your mission code. ## IR Sensors (Line Detection) IR sensors are the primary tool for line following and line detection. They measure surface reflectivity — black surfaces absorb light (high value), white surfaces reflect it (low value). ### Shielding from Ambient Light An IR line sensor works by shining its own infrared LED at the table and measuring the bounce-back. The problem is that it can't tell *its* light apart from anyone else's. Overhead fluorescent tubes, halogen stage lamps, a sunbeam through a window, the IR emitters on a neighbouring robot — all of it leaks into the reading. A sensor that was rock-solid on your workshop bench can suddenly see "black" everywhere because the competition venue's ceiling lights are flooding it, or swing between readings every time someone walks past. The fix is cheap and mechanical: put a physical shroud around each sensor so its field of view is limited to the patch of table directly underneath it. A small collar cut from black paper, black foam board, or even a few layers of electrical tape is usually enough. What matters is: - **It extends past the front face of the sensor.** A few millimeters of overhang blocks light coming in at an angle, which is where most of the stray IR enters. - **It sits close to the table.** Leave about 2–3 mm of clearance so the shroud doesn't scrape the surface, but no more than that — the bigger the gap, the more ambient light slips in from the side. - **It's actually opaque to IR.** Matte black paper and foam board are fine. 
Some glossy black plastics look opaque to your eye but still pass infrared; if in doubt, double it up, or hold it up to a TV remote and see whether the remote still works through it. Shield the sensors **before** you run `calibrate()`. The K-Means thresholds are only meaningful under the lighting conditions in which the samples were taken — if you calibrate an unshielded sensor in your workshop and then add a shroud at the venue (or vice versa), every threshold on the robot will be wrong and line following will fall apart. > **Quick check:** After fitting the shrouds, watch a raw IR reading in BotUI while waving a flashlight around the robot — not shining it down through the shroud, but sweeping it past the sides and in front. A well-shielded sensor barely moves. An unshielded one jumps by hundreds or thousands of units, and that is exactly the noise you're going to be fighting at competition. ### Declaration ```python from libstp import IRSensor, SensorGroup front_right_ir = IRSensor(port=0) front_left_ir = IRSensor(port=1) # Group sensors for convenience methods front = SensorGroup(left=front_left_ir, right=front_right_ir) rear = SensorGroup(right=rear_right_ir) # Single sensor groups work too ``` ### Calibration IR sensors need calibration to distinguish black from white on your specific game table. During calibration, the robot drives across the surface while sampling sensors at 100 Hz. A K-Means clustering algorithm (k=2) automatically separates the readings into white and black clusters, producing per-sensor thresholds that are persisted across runs. Run calibration as part of your setup mission: ```python calibrate(distance_cm=50) # Calibrates both distance and IR sensors ``` For a detailed explanation of how the calibration algorithm works, see [Calibration]({{< ref "10-calibration" >}}). ### Using IR Sensors in Steps The most common way to use IR sensors is through **stop conditions** and **SensorGroup shortcuts**: ```python # Stop conditions drive_forward(speed=0.8).until(on_black(Defs.front.right)) drive_forward(speed=0.8).until(on_white(Defs.front.right)) drive_forward(speed=0.8).until(on_black(Defs.front.right, threshold=0.9)) # SensorGroup shortcuts (the easiest way) Defs.front.drive_until_black() # Drive forward → black Defs.front.drive_over_line() # Drive forward over a line Defs.front.strafe_left_until_black() # Strafe until black Defs.front.strafe_right_until_black() # Strafe until black Defs.front.follow_right_edge(cm=50) # Follow right edge of line Defs.front.lineup_on_black() # Square up on a line # Using a specific sensor from the group Defs.front.strafe_left_until_black(sensor=Defs.front.right) ``` ### Line Following Line following uses a PID controller to keep a sensor on the edge between black and white. For a detailed explanation of the algorithm, variants, and parameters, see [Line Following]({{< ref "algorithms/line-following" >}}). ### Lineup (Line Alignment) Lineup aligns the robot square on a line using a geometric single-pass approach — no iterative correction loops, so it completes with almost no time lost. For a detailed explanation, see [Lineup]({{< ref "algorithms/lineup" >}}). ## Digital Sensors Digital sensors return `True` (pressed) or `False` (released). Used for buttons, limit switches, and bump sensors. 
### Declaration ```python from libstp import DigitalSensor button = DigitalSensor(port=10) # Wombat's built-in button arm_down_limit = DigitalSensor(port=0) # Limit switch arm_up_limit = DigitalSensor(port=1) # Limit switch ``` ### Usage in Steps ```python # Wait until pressed wait_for_digital(Defs.arm_down_limit, pressed=True) # Use as stop condition drive_forward(speed=0.5).until( on_digital(Defs.arm_down_limit) ) # Wait for the Wombat button wait_for_button() ``` ### Example: Motor with Limit Switch Drive a motor until it hits a limit switch: ```python def lower_arm(): return seq([ set_motor_velocity(Defs.arm_motor, -100), wait_for_digital(Defs.arm_down_limit), motor_passive_brake(Defs.arm_motor), ]) ``` ## Analog Sensors Raw analog readings from the Wombat's analog ports. Values depend on the sensor type and wiring. ### Declaration ```python from libstp import AnalogSensor light_sensor = AnalogSensor(port=2) distance_sensor = AnalogSensor(port=3) ``` ### Usage in Steps ```python # Stop conditions drive_forward(speed=0.5).until(on_analog_above(Defs.distance_sensor, 2000)) drive_forward(speed=0.5).until(on_analog_below(Defs.distance_sensor, 500)) # Read in custom code run(lambda robot: print(f"Light: {robot.defs.light_sensor.read()}")) ``` ## Camera Sensor The camera interface provides access to the Raccoon camera system for object detection. ### Declaration ```python from libstp import CamSensor cam = CamSensor() ``` Camera functionality depends on the `raccoon-cam` and `object-detector` services running on the robot. See the Raccoon platform documentation for camera setup. ## ET Sensor (Distance) The ET (electro-topographic) sensor is a distance/range finder. It wraps an analog input but has a distinct type for clarity: ```python from libstp import ETSensor distance = ETSensor(port=3) ``` Use it like an analog sensor — read raw values with `distance.read()` or use it with `on_analog_above()` / `on_analog_below()` conditions. ## Sensor Groups in Depth `SensorGroup` is a convenience wrapper that pre-binds common step patterns to your sensors. It accepts `left` and/or `right` IR sensors: ```python front = SensorGroup(left=front_left_ir, right=front_right_ir) ``` ### Available SensorGroup Methods | Method | What It Does | |--------|-------------| | `.drive_until_black()` | Drive forward until any sensor sees black | | `.drive_over_line()` | Drive forward through a line to the other side | | `.lineup_on_black()` | Align both sensors on a black line | | `.follow_right_edge(cm)` | Follow the right edge of a line | | `.follow_right_until_black()` | Follow line until right sensor sees black | | `.strafe_left_until_black()` | Strafe left until a sensor sees black | | `.strafe_right_until_black()` | Strafe right until a sensor sees black | | `.strafe_left_until_black(sensor=...)` | Use a specific sensor | All methods return step objects that work in `seq([...])` and `parallel(...)` just like any other step. ## Stall Detection You can detect when a motor is stalling (trying to move but physically blocked): ```python # Drive forward until the motor stalls (e.g., hit a wall) drive_forward(speed=0.3).until( stall_detected(Defs.front_left_motor, threshold_tps=10, duration=0.25) ) ``` Parameters: - `threshold_tps`: Speed below this (ticks per second) counts as stalling - `duration`: Must stall for this many seconds to trigger --- # Running a Mission URL: /03-web-ide/07-running-a-mission/ ## Running a Mission 1. Select a mission in the left panel 2. Click the green **▶ Run** button in the toolbar 3. 
Output streams live in the Run Logs panel 4. Press the stop button (or Ctrl+C in the terminal) to stop By default, runs execute in **Sim** (simulation) mode — the mission logic runs but motor commands are suppressed. Toggle **Sim** off to run on the real robot. > **Note:** Running from the Web IDE does NOT create checkpoints or sync files the way `raccoon run` does. For competition runs, use `raccoon run` from the terminal. --- # Drive System URL: /02-programming/07-drive-system/ # Drive System The drive system translates high-level velocity commands ("go forward at 0.2 m/s") into individual motor speeds. It handles PID control, feedforward compensation, and kinematics — all at 100 Hz. You rarely interact with the drive system directly. The [Steps DSL]({{< ref "04-steps" >}}) calls it for you. This page explains how it works under the hood, and how to tune it when your robot isn't driving straight or overshooting targets. ## Architecture ```mermaid graph LR subgraph "Motion Layer" MP["Motion Profile
Trapezoidal velocity"] end subgraph "Drive Controller" CV["Desired Velocity
vx, vy, wz"] PID["PID + Feedforward
Per-axis correction"] end subgraph "Kinematics" IK["Inverse Kinematics
Chassis → Wheels"] FK["Forward Kinematics
Wheels → Chassis"] end subgraph "HAL" M1["Motor 0"] M2["Motor 1"] M3["Motor 2"] M4["Motor 3"] end MP --> CV CV --> PID PID --> IK IK --> M1 IK --> M2 IK --> M3 IK --> M4 M1 -.-> FK M2 -.-> FK M3 -.-> FK M4 -.-> FK FK -.->|"actual velocity"| PID style MP fill:#42A5F5,color:#fff style PID fill:#66BB6A,color:#fff style IK fill:#AB47BC,color:#fff style FK fill:#AB47BC,color:#fff ``` ## Velocity Control The drive controller operates on three axes: | Axis | Symbol | Unit | Direction | |------|--------|------|-----------| | Forward | `vx` | m/s | Positive = forward | | Lateral | `vy` | m/s | Positive = right | | Angular | `wz` | rad/s | Positive = counter-clockwise | Each axis has its own PID controller and feedforward term: ```python vel_config = ChassisVelocityControlConfig( vx=AxisVelocityControlConfig( pid=PidGains(kp=0.001, ki=0.86, kd=0.0002), ff=Feedforward(kS=0.0, kV=1.0, kA=0.0), ), vy=AxisVelocityControlConfig( pid=PidGains(kp=0.0, ki=0.0, kd=0.0), ff=Feedforward(kS=0.0, kV=1.0, kA=0.0), ), wz=AxisVelocityControlConfig( pid=PidGains(kp=0.0, ki=0.0, kd=0.0), ff=Feedforward(kS=0.0, kV=1.0, kA=0.0), ), ) ``` ### PID Gains - **kp** (proportional): Corrects based on current error. Higher = more aggressive correction. - **ki** (integral): Corrects for accumulated error over time. Eliminates steady-state offset. - **kd** (derivative): Dampens oscillation. Reacts to rate of change of error. ### Feedforward Feedforward predicts what motor command is needed *before* the error happens: - **kS** (static friction): Minimum voltage to overcome friction. Applied whenever the robot is moving. - **kV** (velocity): Scales velocity command to motor voltage. Usually 1.0. - **kA** (acceleration): Compensates for inertia during acceleration. For most robots, `kV=1.0` with zero PID is a reasonable starting point. Add PID only if the robot doesn't track the target velocity accurately. ## Kinematics Kinematics defines the relationship between chassis velocity and wheel speeds. ### Differential Drive Two motors, one on each side: ```python from libstp import DifferentialKinematics kinematics = DifferentialKinematics( left_motor=defs.front_left_motor, right_motor=defs.front_right_motor, wheel_radius=0.0345, # meters wheelbase=0.16, # distance between wheel centers, meters ) ``` - Both wheels forward → robot drives forward - Left faster than right → robot turns right - Wheels opposite directions → robot spins in place - **Cannot strafe sideways** ### Mecanum Drive Four mecanum wheels that enable omnidirectional movement: ```python from libstp import MecanumKinematics kinematics = MecanumKinematics( front_left_motor=defs.front_left_motor, front_right_motor=defs.front_right_motor, back_left_motor=defs.rear_left_motor, back_right_motor=defs.rear_right_motor, track_width=0.2, # side-to-side distance, meters wheel_radius=0.0375, # meters wheelbase=0.125, # front-to-back distance, meters ) ``` - All wheels forward → robot drives forward - All wheels inward → robot strafes right - Diagonal pattern → robot drives at an angle - **Full omnidirectional movement** ### Measuring Wheel Radius and Wheelbase **Wheel radius**: Measure the wheel diameter with calipers and divide by 2. Convert to meters. **Wheelbase** (differential): Measure the distance between the center of the left wheel contact patch and the center of the right wheel contact patch. Convert to meters. **Track width** (mecanum): Distance between left and right wheel centers. **Wheelbase** (mecanum): Distance between front and rear wheel centers. 
Getting these values wrong will cause the robot to over- or under-turn and drive inaccurate distances. ## Motion PID Config The motion PID controls *how accurately the robot follows planned trajectories* (distance and heading). This is separate from the velocity PID above. ```python motion_pid_config = UnifiedMotionPidConfig( # How accurately the robot reaches target distance distance=PidConfig( kp=7.875, ki=0.0, kd=0.0, integral_max=10.0, integral_deadband=0.01, derivative_lpf_alpha=0.5, output_min=-10.0, output_max=10.0, ), # How accurately the robot maintains heading heading=PidConfig( kp=7.875, ki=0.0, kd=0.0625, integral_max=10.0, integral_deadband=0.01, derivative_lpf_alpha=0.5, output_min=-10.0, output_max=10.0, ), # Maximum velocities and accelerations linear=AxisConstraints( max_velocity=0.2368, # m/s (your robot's top speed) acceleration=0.2798, # m/s^2 deceleration=2.0532, # m/s^2 (can be higher than accel) ), angular=AxisConstraints( max_velocity=2.9424, # rad/s acceleration=7.6122, # rad/s^2 deceleration=16.1491, # rad/s^2 ), lateral=AxisConstraints( # Only for mecanum max_velocity=0.2209, acceleration=0.6485, deceleration=0.4498, ), # Tolerances: when to consider "arrived" distance_tolerance_m=0.005, # 5mm angle_tolerance_rad=0.017, # ~1 degree velocity_ff=1.0, ) ``` ### Axis Constraints `AxisConstraints` define the robot's physical limits. The motion planner uses these to create trapezoidal velocity profiles: {{< figure src="/images/trapezoidal-velocity-profile.svg" alt="Trapezoidal velocity profile showing acceleration, cruise, and deceleration phases" >}} - `max_velocity`: The fastest the robot will ever go. Set this conservatively — it's the speed at `speed=1.0`. - `acceleration`: How quickly the robot ramps up to max velocity. - `deceleration`: How quickly the robot slows down. Can be higher than acceleration (braking is usually faster). These values are measured automatically by the `calibrate()` and `auto_tune()` steps. ## Auto-Tuning LibSTP includes automatic tuning steps that measure your robot's actual performance and set the PID gains and axis constraints: ```python # In your setup mission: auto_tune() ``` This drives the robot through a series of test maneuvers, measures the response, and updates the configuration. Run it on a flat surface with enough room (at least 1m in each direction). ## Common Tuning Issues | Symptom | Likely Cause | Fix | |---------|-------------|-----| | Robot curves when it should drive straight | Unequal wheel calibration or `inverted` flag wrong | Re-run `calibrate()`, check motor `inverted` | | Robot overshoots target distance | `deceleration` too low or `distance.kp` too high | Increase deceleration, reduce kp | | Robot oscillates at target | `distance.kp` too high or `distance.kd` too low | Reduce kp, increase kd | | Turns over/undershoot | `wheelbase` measurement wrong or `heading.kp` wrong | Re-measure wheelbase, tune heading PID | | Robot is sluggish | `max_velocity` too low or `acceleration` too low | Increase values or re-run characterization | | Robot jerks when starting | `kS` (static friction feedforward) too low | Increase kS | --- # Projects List URL: /03-web-ide/08-projects-list/ ## Projects List Click **Back to Projects** in the top-left to return to the projects list view, which shows all raccoon projects found on your laptop. --- # Odometry URL: /02-programming/08-odometry/ # Odometry Odometry tracks the robot's position and heading on the field. It answers the question: "Where am I, and which way am I facing?" 
The motion system uses odometry to know when the robot has traveled 25 cm or turned 90 degrees. ## How It Works ```mermaid graph LR subgraph "Inputs" ENC["Wheel Encoders
Ticks → distance"] IMU_H["IMU
Heading (gyroscope)"] end subgraph "Odometry" FUSE["Fuse readings
Weighted combination"] POSE["Pose
x, y, heading"] end ENC --> FUSE IMU_H --> FUSE FUSE --> POSE style ENC fill:#42A5F5,color:#fff style IMU_H fill:#42A5F5,color:#fff style FUSE fill:#66BB6A,color:#fff style POSE fill:#FF7043,color:#fff ``` The odometry system integrates two data sources: 1. **Wheel encoders**: Motor tick counts converted to distance via calibration. Gives x/y displacement. 2. **IMU gyroscope**: Measures angular velocity. Integrated over time to get heading. Much more accurate for heading than wheel-based estimation. ## Odometry Types ### FusedOdometry Combines wheel encoder velocity (derived from back-EMF) with IMU heading. This is the recommended default. ```python from libstp import FusedOdometry, FusedOdometryConfig odometry = FusedOdometry( imu=defs.imu, kinematics=kinematics, config=FusedOdometryConfig(bemf_trust=1.0), ) ``` **`bemf_trust`** (0.0–1.0): How much to trust the wheel encoder velocity vs. other sources. At `1.0`, wheel velocity is fully trusted. Lower values blend in other estimates. Start at `1.0` and reduce if you see drift. ### Stm32Odometry Uses the STM32 firmware's built-in odometry calculation. This runs on the microcontroller and may have lower latency. ```python from libstp import Stm32Odometry odometry = Stm32Odometry(imu=defs.imu, kinematics=kinematics) ``` Use `Stm32Odometry` if you're using the STM32 firmware's native encoder processing. Use `FusedOdometry` if you want more control over the fusion parameters. ## Pose The odometry output is a `Pose` — a position (x, y) and heading: ``` Pose: position: (x, y) in meters, relative to start heading: radians, 0 = initial heading, positive = counter-clockwise ``` The pose is relative to where the robot was when it started. There's no absolute field positioning — odometry only tracks relative movement. ## Heading Reference The heading reference system lets you mark a known heading and then turn relative to it later. This is useful when you align against a wall (known orientation) and then want to make precise turns. ```python seq([ # Align against wall — now we know our heading wall_align_backward(accel_threshold=0.3), mark_heading_reference(), # Save this heading as "0 degrees" # Later in the mission... drive_forward(50), turn_to_heading(0), # Return to the wall-aligned heading turn_to_heading(-90), # Face 90° clockwise from wall turn_to_heading(180), # Face the opposite direction ]) ``` ### How `mark_heading_reference()` works ```mermaid sequenceDiagram participant M as Mission participant HR as HeadingReferenceService participant IMU as IMU M->>HR: mark_heading_reference() HR->>IMU: Read current heading IMU-->>HR: heading = 1.234 rad HR->>HR: Store reference = 1.234 rad Note over M: Later... M->>HR: turn_to_heading(0) HR->>HR: Compute: target = reference + 0 = 1.234 rad HR->>IMU: Turn until heading ≈ 1.234 rad ``` The heading reference is stored as a `RobotService` — it persists across missions within a single run. ### When to Use Heading References - After wall-aligning, mark the reference. All subsequent `turn_to_heading()` calls will be relative to that wall. - If you need to face the same direction at multiple points in a mission. - When cumulative turn errors would otherwise build up. ```python # Bad: cumulative error turn_right(90) drive_forward(50) turn_left(90) # Might not be exactly back to original heading # Good: absolute heading mark_heading_reference() turn_right(90) drive_forward(50) turn_to_heading(0) # Guaranteed back to reference heading ``` ## Odometry Accuracy Odometry drifts over time. 
Every measurement has small errors that accumulate: | Source | Error Type | Impact | |--------|-----------|--------| | Wheel slip | Position drift | Robot thinks it traveled further/shorter than it did | | Wheel diameter variation | Scale error | All distances are off by a constant factor | | IMU gyro drift | Heading drift | Slow rotation of coordinate frame | | Uneven surfaces | Position + heading | Bumps cause encoder miscounts | ### Mitigating Drift 1. **Calibrate regularly**: Run `calibrate(distance_cm=50)` at the start of every match. This measures the actual ticks-per-cm for your robot on that surface. 2. **Use wall alignment**: `wall_align_backward()` + `mark_heading_reference()` resets heading drift. 3. **Use line detection**: `drive_until_black()` and `lineup_on_black()` provide absolute position references from the game table. 4. **Keep missions short**: Less driving = less accumulated error. 5. **Don't rely on pure odometry for long distances**: After driving 2+ meters, re-align using a wall or line. --- # Servos URL: /02-programming/09-servos/ # Servos Servos are position-controlled actuators used for arms, claws, shields, and other mechanisms. ## Declaration ### Plain Servo ```python from libstp import Servo my_servo = Servo(port=0) ``` Use plain servos when you only need one or two positions, or when you compute angles dynamically. ### Servo with Presets (Recommended) ```python from libstp import Servo, ServoPreset claw = ServoPreset( Servo(port=2), positions={"closed": 135, "open": 30} ) arm = ServoPreset( Servo(port=1), positions={ "down": 10, "above_pom": 55, "up": 105, "start": 160, "high_up": 165, } ) ``` `ServoPreset` creates callable methods for each position name. The angle values (0–180) correspond to the servo's physical range. ### Servo Offsets `ServoPreset` supports an `offset` parameter that shifts all positions by a fixed number of degrees: ```python claw = ServoPreset( Servo(port=2), positions={"closed": 135, "open": 30}, offset=5 # Adds 5 degrees to every position ) ``` This is useful when replacing a broken servo. A new servo mounted on the same shaft may land a few teeth off from the original, causing all positions to be shifted by the same amount. Since all positions are just angle values, adding a constant offset shifts every position by the same amount — no need to re-tune each angle individually. > **Tip:** When choosing your servo angles, try to avoid values very close to 0 or 180. Keeping some margin (e.g. 10–170) leaves room to apply a positive or negative offset when a servo needs to be swapped at competition — without hitting the physical limits. ## Usage in Missions ### Preset Servos ```python # Move to a named position (auto-waits for travel time) Defs.claw.open() Defs.claw.closed() Defs.arm.up() Defs.arm.down() # With speed control (degrees per second) — returns a slow servo step Defs.arm.up(300) # Move to "up" at 300 deg/s Defs.claw.closed(120) # Move to "closed" at 120 deg/s ``` When called without an argument, the servo moves at full speed. When called with a number, it moves at that speed in degrees per second — useful for gentle or controlled movements. 
### Plain Servo ```python # Move to a specific angle servo(Defs.my_servo, 90) # Move to 90 degrees servo(Defs.my_servo, 0) # Move to 0 degrees ``` ### Shake Servo Oscillate between two angles — useful for shaking objects loose: ```python shake_servo(Defs.claw_servo, duration=2.0, angle_a=30, angle_b=135) ``` ### Slow Servo Move to a position at a controlled speed (degrees per second) — useful for gentle placement. By default, `slow_servo` uses ease-in-ease-out interpolation, which smoothly accelerates and decelerates the servo for fluid motion: ```python slow_servo(Defs.my_servo, angle=90, speed=30.0) # 30 degrees/sec (slower than default 60) ``` ### Disable All Servos Turn off all servo outputs (servos go limp): ```python fully_disable_servos() ``` ## Real-World Patterns ### Grab and Release ```python # From ConeBot: grab a cone seq([ Defs.claw.open(), Defs.arm.down(), Defs.claw.closed(120), # Close at 120 deg/s (controlled grip) Defs.arm.up(120), # Lift at 120 deg/s Defs.claw.open(60), # Release slowly at 60 deg/s ]) ``` ### Parallel Servo + Drive Move servos while the robot is driving: ```python # From PackingBot: prepare arm while turning parallel( turn_right(90), seq([ Defs.pom_arm.above_pom(), Defs.pom_grab.open(), ]), ) ``` ### Servo Initialization in Setup Always home your servos in the setup mission so they're in a known position: ```python class M00SetupMission(Mission): def sequence(self) -> Sequential: return seq([ Defs.claw.closed(), Defs.arm.up(), Defs.shield.down(), Defs.grabber.closed(), # ... rest of setup ]) ``` ## Finding Servo Angles To find the right angle values for each position: 1. Open **BotUI → Sensors & Actors** on the robot's touchscreen 2. Use the servo slider to move the servo manually 3. Note the angle when it's in the position you want 4. Enter that angle in your `ServoPreset` positions Repeat for each named position. The angles depend on how the servo is mounted — there's no universal "up" or "down" angle. ## Timing Considerations Servo steps **block** until the servo reaches its target position. When called without a speed argument, the servo moves at full speed and the step estimates travel time based on the angle difference. When called with a speed (degrees per second), it uses a slow servo step that controls the movement rate: ```python Defs.arm.down() # Full speed, auto-estimated travel time Defs.arm.down(60) # 60 deg/s — slower, more controlled ``` Use a slower speed for heavy or delicate mechanisms where full-speed movement could cause problems (slamming, overshooting, dropping objects). --- # Calibration URL: /02-programming/10-calibration/ # Calibration Calibration measures your robot's actual hardware characteristics and stores correction factors. Without calibration, `drive_forward(25)` might actually drive 22 cm or 28 cm. After calibration, it drives 25 cm (within a few mm). ## What Gets Calibrated | What | Why | How | |------|-----|-----| | **Motor ticks-to-rad** | Encoders report ticks, the SDK needs radians | `calibrate()` — drives a known distance | | **Distance scaling** | Compensates for wheel diameter and surface grip | `calibrate_distance(distance_cm=50)` | | **IR sensor thresholds** | Every sensor reads differently; every surface is different | `calibrate()` or `calibrate_sensors()` | | **Motor deadzone** | Minimum power to overcome static friction | `calibrate_deadzone()` | | **Axis constraints** | Max velocity, acceleration, deceleration | `auto_tune()` — characterizes the robot | ## Calibration Workflow ```mermaid graph TD A["1. 
Build robot"] --> B["2. Run motor calibration"] B --> C["3. Run distance calibration"] C --> D["4. Run sensor calibration"] D --> E["5. Run auto-tune"] E --> F["6. Test missions"] F -->|"Robot drifts or
inaccurate"| G["7. Re-calibrate"] G --> F F -->|"Works correctly"| H["Ready for competition"] style A fill:#FF7043,color:#fff style H fill:#4CAF50,color:#fff ``` ### Step 1: Unified Calibration (Recommended) The `calibrate()` step is an all-in-one calibration that handles both distance and IR sensor calibration in a single guided flow: ```python calibrate(distance_cm=50) ``` This step uses the BotUI to guide you through the process: 1. The robot drives a known distance while continuously sampling IR sensors 2. You measure the actual distance driven and enter it via the BotUI 3. The system calculates per-wheel `ticks_to_rad` correction factors 4. IR sensor thresholds are computed from the samples using K-Means clustering Run this on a flat, smooth surface that includes both black and white areas (e.g., a game board with black lines). Make sure the robot has room to drive forward (at least 50 cm). ### Individual Calibration Steps (Alternative) If you only need to calibrate one thing, use the individual steps: ```python calibrate_distance(distance_cm=50) # Distance only calibrate_sensors() # IR sensors only ``` ### Step 4: Auto-Tune (Optional but Recommended) Measures the robot's dynamic characteristics — maximum velocity, acceleration, and deceleration for each axis: ```python auto_tune( vel_axes=["vx"], # Tune velocity PID tune_velocity=True, # Tune velocity PID tune_motion=True, # Tune motion PID characterize_axes=["linear", "angular"], # Measure physical limits characterize_trials=3, # Repetitions per test ) ``` Auto-tune drives the robot through test maneuvers and measures the response. It needs about 1m of clear space in each direction. ## How IR Sensor Calibration Works IR sensor calibration uses **K-Means clustering (k=2)** to automatically separate sensor readings into white and black groups. During the calibration drive, sensors are sampled at 100 Hz. The collected readings are clustered, and the two centroids become the white and black thresholds. This approach achieves 100% accuracy even with skewed data (e.g., brief passes over black lines), compared to 92–98% for percentile-based methods. For a full explanation of the algorithm, validation checks, and the research behind it, see [IR Sensor Calibration (K-Means)]({{< ref "algorithms/ir-calibration" >}}). ## Distance Calibration Distance calibration corrects for differences in wheel diameter, encoder resolution, and surface grip. The process works in four phases: 1. **Prepare** — Place the robot at a starting mark 2. **Drive** — The robot drives forward a known distance (default 30 cm) while recording encoder ticks per wheel 3. **Measure** — You measure the *actual* distance traveled and enter it via the BotUI 4. **Compute** — For each drive motor, the system calculates a corrected `ticks_to_rad` ratio: ``` theta_rad = measured_distance / wheel_radius new_ticks_to_rad = theta_rad / abs(delta_ticks) ``` Each wheel is calibrated independently, which also corrects for slight differences between left and right motors that would otherwise cause drift. ### Exponential Moving Average (EMA) Calibration values are smoothed across runs using an **exponential moving average**: ``` new_baseline = old_baseline * alpha + measured_value * (1 - alpha) ``` With the default `ema_alpha=0.7`, the baseline converges toward the true value over multiple calibration runs rather than jumping to each new measurement. This filters out one-off measurement errors while still adapting to real changes (like new wheels). 
## Typical Setup Mission ```python class M00SetupMission(Mission): def sequence(self) -> Sequential: return seq([ # Home servos Defs.claw.closed(), Defs.arm.up(), # Calibrate calibrate(distance_cm=50), ]) ``` The setup mission runs before the match start signal. Use it to calibrate and verify that sensors and servos are working. ## Calibration Data Storage Calibration values are automatically persisted so the system can remember them between runs, removing the need to recalibrate every time you restart your program. **Distance calibration** (per-wheel `ticks_to_rad`) is stored in `raccoon.project.yml` alongside the motor definitions. **IR sensor thresholds** are stored in `racoon.calibration.yml`: ```yaml root: ir-calibration: default: white_tresh: 1469.84 black_tresh: 2490.58 default_port0: white_tresh: 543.45 black_tresh: 3647.12 default_port4: white_tresh: 1451.85 black_tresh: 3550.00 ``` The naming scheme uses the calibration set name and port: - `default` — Global default thresholds for the "default" set - `default_port0` — Per-port override (port 0) in the "default" set - `upper_port4` — Port 4 in calibration set "upper" These files are managed automatically by the calibration system. You should not edit them by hand — just run `calibrate()` again if you need new values. ## Calibration Sets If your robot operates on surfaces at different heights (e.g., a ramp vs. the ground), sensors may need different thresholds. Use calibration sets: ```python # During setup: calibrate both surfaces calibrate( distance_cm=50, calibration_sets=["default", "upper"], ) # During missions: switch between sets seq([ switch_calibration_set("default"), # Ground level Defs.front.drive_until_black(), drive_forward(50), # Drive up a ramp switch_calibration_set("upper"), # Elevated surface Defs.front.follow_right_edge(30), ]) ``` ## When to Re-Calibrate - **Different surface**: Game tables vary. Calibrate on the actual competition surface. - **Changed wheels or motors**: Any mechanical change invalidates motor calibration. - **Battery level**: Very low batteries can affect motor performance. Re-calibrate if behavior changes. - **Between matches**: Quick sensor calibration takes 30 seconds and prevents surprises. ## Calibration Tips 1. **Calibrate on the actual game table** if possible. Table surface affects both IR sensor readings and wheel grip. 2. **Use fresh batteries** during calibration. Low batteries = different motor characteristics. 3. **Calibrate distance on a straight, flat section** with clear markings at the measured distance. 4. **Drive over both black and white areas** during sensor calibration. The robot needs to see both surfaces to form two clusters. 5. **Commit calibration files** to your repository so teammates can use the same values (but re-calibrate on the competition table). --- # Advanced Topics URL: /02-programming/11-advanced/ # Advanced Topics This page covers the internals and advanced features you'll need when building complex competition robots. ## Async Execution Model Missions run inside Python's `asyncio` event loop. Every step is an `async` function. This is what makes `parallel(...)` and `do_while_active(...)` possible — they use asyncio concurrency, not threads. 
```mermaid graph TD subgraph "asyncio Event Loop" ML["Mission Loop"] ML --> S1["Step 1 (await)"] S1 --> S2["Step 2 (await)"] S2 --> P["parallel()"] P --> PA["Track A (await)"] P --> PB["Track B (await)"] PA --> S3["Step 3"] PB --> S3 end subgraph "100 Hz Timer" T["Drive.update()"] T --> OD["Odometry.update()"] end style ML fill:#4CAF50,color:#fff style P fill:#42A5F5,color:#fff ``` You don't need to understand asyncio to write missions — `seq()` and `parallel()` handle it. But if you're writing custom steps, know that: - `_execute_step(self, robot)` must be `async` - Use `await asyncio.sleep(duration)` instead of `time.sleep()` - Never block the event loop with synchronous I/O or long computations ### No Threads — This Is Critical > **The SDK is NOT thread-safe. Never use threads.** Do not use `threading`, `Thread`, `concurrent.futures`, or any other threading mechanism. This is a deliberate design decision, not a limitation. **Why no threads?** The robot spends ~90% of its time busy-waiting (polling sensors, waiting for motors to reach a position, sleeping between control loop iterations). Threads would be overkill and introduce race conditions. Instead, `asyncio` schedules everything as coroutines on a single thread — essentially acting as a cooperative CPU scheduler. Each coroutine runs until it hits an `await`, then yields control so other coroutines can run. This design is what makes safe shutdown possible. When the 120-second timer fires, the framework cancels the running mission by cancelling its asyncio task. Because everything is cooperative and single-threaded, the cancellation happens cleanly at the next `await` point — no motor is left spinning, no half-finished operation corrupts state. With threads, you'd have race conditions between the shutdown logic and the mission code, and motors could keep running after the robot is supposed to stop. **The golden rule: always yield control with `await`.** ```python # CORRECT — yields control, other coroutines can run await asyncio.sleep(0.1) # WRONG — blocks the entire event loop, nothing else runs import time time.sleep(0.1) # NEVER do this! ``` If you block the event loop with `time.sleep()`, synchronous network calls, or heavy computation: - The 100 Hz control loop stops updating → the robot jerks or drifts - `parallel()` tracks freeze → only one track runs at a time - The shutdown timer can't fire → the robot won't stop at 120 seconds - UI screens stop responding → the touchscreen freezes ### The 100 Hz Control Loop Motion steps run a fixed-rate loop at 100 Hz (every 10ms): ```python class MotionStep: async def run_step(self, robot): self.on_start(robot) while True: dt = self._calculate_dt() if self.on_update(robot, dt): # Returns True when done break await asyncio.sleep(0.01) # Yield to event loop self.on_stop(robot) ``` The `on_update()` method is called every 10ms. It must return quickly (< 1ms). If it takes too long, the control loop falls behind and motion becomes jerky. ## Resource Conflict Detection The step framework tracks which hardware resources each step uses. When you create a `parallel(...)` block, the framework validates that no two tracks use the same resource *before* execution starts. ```python # This will raise an error at startup: parallel( drive_forward(50), # Uses resource "drive" turn_right(90), # Also uses resource "drive" — CONFLICT! 
) ``` Resources are declared in `required_resources()`: ```python class DriveForward(MotionStep): def required_resources(self): return frozenset({"drive"}) class SetServoPosition(Step): def required_resources(self): return frozenset({f"servo:{self.servo.port}"}) ``` Resource format: `"<resource>"` or `"<resource>:<port>"`. The wildcard `"servo:*"` conflicts with any `"servo:<port>"`. This is a compile-time safety net. It catches common mistakes like accidentally driving and turning simultaneously, or moving two mechanisms that share a motor. ## Robot Services Services are lazily-instantiated singletons attached to the robot. They persist across missions within a single run and provide a place for stateful logic that doesn't belong in missions or steps. ### When to Use a Service Not every piece of logic needs a service. Use one when: - **State persists across missions.** A mission runs, stores a result, and a later mission needs that result. Steps are stateless — they execute and return. Services hold onto data for the entire run. - **Multiple hardware components work together as a single mechanism.** If a motor, a sensor, and a servo need to be coordinated (e.g. a drum collector that spins a motor while reading a light sensor to count pockets), that coordination logic belongs in a service — not spread across individual steps. - **Calibration produces thresholds used later.** A calibration step collects samples and computes thresholds. The service stores those thresholds so subsequent steps can use them without recalibrating. If your logic is stateless and only touches one hardware component, a plain step function is simpler and sufficient. ### Creating a Service Services extend `RobotService` and receive the robot instance in their constructor: ```python from libstp import RobotService class CounterService(RobotService): def __init__(self, robot): super().__init__(robot) self.count = 0 def increment(self): self.count += 1 return self.count ``` Access a service from any step via `robot.get_service()`. The first call creates the instance; subsequent calls return the same one (lazy singleton): ```python run(lambda robot: robot.get_service(CounterService).increment()) ``` ### Example: Combined Hardware Mechanism A drum collector robot uses a motor to rotate a drum and a light sensor to detect pocket positions. The `DrumMotorService` owns both pieces of hardware and provides higher-level operations: ```python from libstp import RobotService, Motor, AnalogSensor class DrumMotorService(RobotService): def __init__(self, robot): super().__init__(robot) self._blocked_threshold = None self._pocket_threshold = None self._current_index = 0 @property def motor(self) -> Motor: return self.robot.defs.drum_motor @property def light_sensor(self) -> AnalogSensor: return self.robot.defs.drum_light_sensor @property def is_calibrated(self) -> bool: return self._blocked_threshold is not None @property def midpoint(self) -> float: """Midpoint between blocked and pocket thresholds.""" return (self._blocked_threshold + self._pocket_threshold) / 2 @property def current_index(self) -> int: return self._current_index def apply_calibration(self, blocked, pocket): """Store thresholds computed during calibration.""" self._blocked_threshold = blocked self._pocket_threshold = pocket async def advance(self, count=1): """Spin motor forward, counting pocket transitions on the sensor.""" # Uses stored thresholds to detect transitions ... async def go_to(self, index): """Navigate to a specific pocket by shortest path.""" ...
``` Properties give the service a clean interface: `motor` and `light_sensor` provide convenient access to the hardware without exposing `self.robot.defs` to callers, `is_calibrated` and `midpoint` derive values from internal state, and `current_index` exposes read-only tracking. Steps that use the service only see the properties and methods — never the raw internals. The calibration step calls `apply_calibration()`, and later mission steps call `advance()` or `go_to()` — all sharing the same thresholds and position tracking through the service. ### Where to Put Services Services live in `src/service/`: ``` src/ ├── service/ │ ├── __init__.py │ └── drum_motor_service.py ├── missions/ ├── steps/ └── hardware/ ``` ### Call Flow The typical pattern is **mission → step → service**: 1. **Missions** define the sequence of what happens and when. 2. **Steps** are the atomic operations that missions compose (drive, servo move, sensor read). 3. **Services** provide the stateful logic and hardware coordination that steps call into. Steps should call service methods rather than managing state themselves. This keeps steps reusable and testable — the state lives in one place. ### Built-in Services The `HeadingReferenceService` is a built-in service that stores the marked heading reference. You can create your own services for any cross-mission state. ## Robot Geometry The `RobotGeometry` mixin provides methods for computing sensor positions, wheel positions, and distances relative to the robot's center of rotation: ```python class Robot(GenericRobot): width_cm = 23.5 length_cm = 29.6 rotation_center_forward_cm = 3.7 rotation_center_strafe_cm = 0.0 _sensor_positions = { defs.front_right_ir: SensorPosition( forward_cm=14.2, # 14.2 cm in front of rotation center strafe_cm=-3.55, # 3.55 cm to the right clearance_cm=0, # Height above ground ), } _wheel_positions = { defs.front_left_motor: WheelPosition(forward_cm=6.25, strafe_cm=10.0), } ``` The geometry system uses these to compute: - Distance between two sensors - Sensor offset from rotation center - Angle a sensor makes from center - Used by lineup and line-following to compensate for sensor placement ## Raccoon Transport (IPC) Raccoon Transport is the inter-process communication layer between the Python SDK and the Wombat's STM32 firmware. It uses LCM (Lightweight Communications and Marshalling) for message passing. ```mermaid graph LR subgraph "Raspberry Pi" PY["Python SDK"] -->|"LCM messages"| RT["Raccoon Transport"] end subgraph "STM32 Firmware" FW["Motor/Servo/Sensor
Control Loop"] end RT <-->|"USB / UART"| FW style PY fill:#4CAF50,color:#fff style RT fill:#42A5F5,color:#fff style FW fill:#FF7043,color:#fff ``` The transport provides: - **Publish/Subscribe**: Send commands and receive sensor data - **Reliable delivery**: Messages are retransmitted if lost - **Retain**: Last-known values are cached for late subscribers - **Channels**: Typed message channels for motors, servos, sensors, IMU, etc. You don't interact with Raccoon Transport directly unless you're building platform-level extensions. The HAL layer wraps it for you. ## Logging LibSTP uses spdlog (C++) and Python's logging for diagnostics: ```python import libstp.foundation as logging # Set log level for specific source files logging.set_file_level("stm32_odometry.cpp", logging.Level.trace) logging.set_file_level("drive.py", logging.Level.trace) logging.set_file_level("linear_motion.cpp", logging.Level.trace) logging.set_file_level("libstp.step.base", logging.Level.debug) ``` Log levels (from most to least verbose): `trace`, `debug`, `info`, `warn`, `error`, `critical`. Enable verbose logging when debugging motion issues — the drive system, odometry, and motion planner log detailed telemetry at `trace` level. > **Warning:** Setting motion-related files (e.g. `drive.py`, `linear_motion.cpp`, `stm32_odometry.cpp`) to `trace` level produces a high volume of log output. This significantly drops the control loop update rate, causing noticeably worse driving behavior — the robot may steer erratically or overshoot targets. Use `trace` on motion files only for stationary debugging or short diagnostic runs, not during actual competition missions. For normal operation, `info` or `debug` is sufficient. ## Timing Instrumentation Every step is automatically timed. The framework tracks: - Execution duration per step - Anomalies (steps that take unexpectedly long) Register an anomaly callback on any step: ```python drive_forward(25).on_anomaly(lambda step, duration: print(f"Slow: {duration}s")) ``` Or skip timing for steps where you don't care: ```python wait_for_button().skip_timing() ``` ## Simulation Support Steps can export simulation data via `to_simulation_step()`. This is used by the Botball Simulator to visualize mission execution without real hardware. The mock platform driver (`libstp-platforms/mock/`) returns simulated sensor values and accepts motor commands without real hardware. Use it for testing mission logic on your laptop. ## Build and Deploy ### Building LibSTP LibSTP cross-compiles from an x86 host to ARM64 (Raspberry Pi) using Docker: ```bash cd /path/to/library bash build.sh # Builds ARM64 .whl inside Docker ``` The output is a Python wheel in `build-docker/`. The build uses: - CMake 3.15+ with Ninja - scikit-build-core for Python packaging - pybind11 2.13.6 for C++/Python bindings - ccache for incremental builds ### Deploying to the Robot ```bash RPI_HOST=192.168.100.237 bash deploy.sh ``` This copies the wheel to the Raspberry Pi and installs it via pip. Environment variables: | Variable | Default | Purpose | |----------|---------|---------| | `RPI_HOST` | (required) | Raspberry Pi IP address | | `RPI_USER` | `pi` | SSH username | | `RPI_DIR` | `/home/pi/python-libs` | Install directory | | `BUILD_TYPE` | `Release` | CMake build type | ### Installing Locally (for development) ```bash pip install . ``` This builds the native extensions and installs the Python package locally. Useful for running tests against the mock driver. 
--- # UI Steps & Screens URL: /02-programming/12-ui-steps/ # UI Steps & Screens The Wombat controller has a touchscreen. LibSTP lets you display custom screens on it — progress indicators, sensor readings, confirmation dialogs, input forms, and live visualizations. This is how calibration wizards, wait-for-button prompts, and debug dashboards work under the hood. You can use the built-in screens for common tasks, or build your own custom screens with a full widget toolkit. ## Architecture ```mermaid graph LR subgraph "Your Step (Python)" US["UIStep"] SC["UIScreen"] US -->|"show() / display()"| SC end subgraph "Raccoon Transport" LCM["LCM Messages"] end subgraph "BotUI (Flutter)" FL["Touchscreen Renderer"] end SC -->|"Widget tree (JSON)"| LCM LCM -->|"screen_render"| FL FL -->|"screen_render/answer"| LCM LCM -->|"clicks, values"| SC style US fill:#4CAF50,color:#fff style SC fill:#66BB6A,color:#fff style FL fill:#42A5F5,color:#fff ``` Your Python code sends a widget tree to BotUI via Raccoon Transport. BotUI renders it on the touchscreen. When the user taps a button or changes an input, BotUI sends the event back. Your screen's event handlers react to it. ## Quick Helpers (One-Liners) The fastest way to show UI — no custom screens needed. These are methods on `UIStep`: ```python from libstp import UIStep class MyMissionStep(UIStep): async def _execute_step(self, robot): # Simple message with OK button await self.message("Calibration complete!") # Yes/No confirmation — returns True or False confirmed = await self.confirm("Ready to start?") # Number input — returns the entered value distance = await self.input_number("Enter distance", unit="cm", min_value=10, max_value=200) # Text input name = await self.input_text("Enter robot name") # Slider input speed = await self.input_slider("Set speed", min=0, max=100, default=50) # Multiple choice — returns the selected option choice = await self.choose("Pick a strategy", options=["Aggressive", "Safe", "Custom"]) # Wait for physical button press await self.wait_for_button("Press button to continue...") ``` Each helper displays a pre-built screen, waits for the user to interact, and returns the result. Use these when you need a quick interaction without building a custom screen. ## Built-In Screens LibSTP ships with ready-to-use screens for common patterns. Import them from `libstp.ui.screens`: ### Message & Confirmation ```python from libstp.ui.screens import MessageScreen, ConfirmScreen class MyStep(UIStep): async def _execute_step(self, robot): # Message with custom icon await self.show(MessageScreen( title="Done!", message="All POMs collected successfully.", button_label="Continue", icon_name="check_circle", )) # Confirmation dialog result = await self.show(ConfirmScreen( title="Dangerous Action", message="This will reset calibration. Continue?", confirm_label="Yes, reset", cancel_label="Cancel", )) if result.confirmed: # ... reset calibration pass ``` ### Progress & Status ```python from libstp.ui.screens import ProgressScreen, StatusScreen class MyStep(UIStep): async def _execute_step(self, robot): # Show progress while doing work async with self.showing(ProgressScreen( message="Calibrating motors...", show_spinner=True, show_progress=True, )) as ctx: for i in range(100): ctx.screen.progress = i # 0-100 integer, shown as percentage ctx.screen.message = f"Motor {i // 25 + 1} of 4..." 
await ctx.screen.refresh() await asyncio.sleep(0.05) # Show a status indicator (non-blocking — has no close button) await self.display(StatusScreen( message="Robot ready", icon_name="rocket_launch", icon_color="green", status="All systems go", )) await asyncio.sleep(2) await self.close_ui() ``` ### Input Screens ```python from libstp.ui.screens import NumberInputScreen, SliderInputScreen, ChoiceScreen class MyStep(UIStep): async def _execute_step(self, robot): # Number with keypad result = await self.show(NumberInputScreen( title="Distance Calibration", prompt="Measure how far the robot actually drove:", unit="cm", initial_value=50.0, min_value=1.0, max_value=500.0, )) measured = result.value # NumberInputResult # Slider result = await self.show(SliderInputScreen( title="Sensitivity", prompt="Adjust IR threshold:", min=0, max=4095, default=2000, )) threshold = result.value # SliderInputResult # Multiple choice — choices are (id, label) or (id, label, description) tuples result = await self.show(ChoiceScreen( title="Calibration Set", message="Which surface are you calibrating for?", choices=[ ("ground", "Ground level", "Default table surface"), ("upper", "Upper platform", "Elevated surface"), ("ramp", "Ramp", "Angled surface"), ], cancel_label="Cancel", )) selected = result.choice # Returns the chosen id string ``` ### Wait-for-Button Screen ```python from libstp.ui.screens import WaitForButtonScreen class MyStep(UIStep): async def _execute_step(self, robot): await self.show(WaitForButtonScreen( message="Place robot on the starting line", icon_name="sports_score", icon_color="blue", )) ``` ## Building Custom Screens For anything beyond the built-in screens, create your own by extending `UIScreen`. ### Minimal Example ```python from libstp import UIStep, UIScreen from libstp.ui.widgets import * from libstp.ui.events import on_click class GreetingScreen(UIScreen): title = "Hello!" 
def build(self): return Center(children=[ Text("Welcome to my robot", size="large", bold=True), Spacer(height=16), Text("Press the button to start the mission"), Spacer(height=24), Button("start", "Start Mission", style="success"), ]) @on_click("start") async def on_start(self): self.close("started") class MyStep(UIStep): async def _execute_step(self, robot): result = await self.show(GreetingScreen()) # result == "started" ``` ### Screen with Typed Results Screens can return typed results using generics: ```python from dataclasses import dataclass @dataclass class CalibrationResult: white_value: float black_value: float confirmed: bool class CalibrationScreen(UIScreen[CalibrationResult]): title = "Sensor Calibration" _primary_button_id = "confirm" # Physical button triggers this def __init__(self, port: int): super().__init__() self.port = port self.white_value = 0.0 self.black_value = 0.0 def build(self): return Column(children=[ SensorValue(port=self.port, sensor_type="analog"), SensorGraph(port=self.port, sensor_type="analog", max_points=100), Divider(), Row(children=[ Button("read_white", "Read White", style="secondary"), Button("read_black", "Read Black", style="secondary"), ]), Spacer(height=16), ResultsTable(rows=[ ("White", f"{self.white_value:.0f}"), ("Black", f"{self.black_value:.0f}"), ]), Spacer(height=16), Row(children=[ Button("cancel", "Cancel", style="secondary"), Button("confirm", "Confirm", style="success"), ]), ]) @on_click("read_white") async def on_read_white(self): self.white_value = await self.read_sensor(self.port) await self.refresh() # Re-render with new value @on_click("read_black") async def on_read_black(self): self.black_value = await self.read_sensor(self.port) await self.refresh() @on_click("confirm") async def on_confirm(self): self.close(CalibrationResult( white_value=self.white_value, black_value=self.black_value, confirmed=True, )) @on_click("cancel") async def on_cancel(self): self.close(CalibrationResult( white_value=0, black_value=0, confirmed=False, )) ``` ### Screen with Live Updates Screens can update in real time while background work runs: ```python import asyncio class LiveSensorScreen(UIScreen): title = "Sensor Monitor" def __init__(self, ports): super().__init__() self.ports = ports self.values = {p: 0 for p in ports} def build(self): rows = [] for port in self.ports: rows.append(Row(children=[ Text(f"Port {port}", bold=True), SensorValue(port=port, sensor_type="analog"), ])) rows.append(Spacer(height=24)) rows.append(Button("done", "Done", style="primary")) return Column(children=rows) @on_click("done") async def on_done(self): self.close(self.values) class MonitorStep(UIStep): async def _execute_step(self, robot): # Display screen without blocking — mission continues screen = LiveSensorScreen(ports=[0, 1, 2]) result = await self.show(screen) ``` ## Widget Reference All widgets are imported from `libstp.ui.widgets`. 
### Display Widgets | Widget | Purpose | Key Parameters | |--------|---------|---------------| | `Text(text)` | Text display | `size` ("small", "medium", "large", "title"), `color`, `bold`, `muted`, `align` | | `Icon(name)` | Material icon | `size`, `color` — see [Material Icons](https://fonts.google.com/icons) for names | | `Spacer(height)` | Vertical space | `height` in pixels | | `Divider()` | Horizontal line | `thickness`, `color` | | `StatusBadge(text)` | Colored pill | `color`, `glow` (boolean) | | `StatusIcon(icon)` | Icon in circle | `color`, `animated` (boolean) | | `HintBox(text)` | Highlighted box | `icon`, `style` ("normal", "prominent") | | `DistanceBadge(value)` | Distance display | `unit`, `color` | | `ResultsTable(rows)` | Key-value table | `rows` — list of `(label, value)` tuples | ### Input Widgets | Widget | Purpose | Key Parameters | |--------|---------|---------------| | `Button(id, label)` | Clickable button | `style` ("primary", "secondary", "success", "danger", "warning"), `icon`, `disabled` | | `Slider(id)` | Slider control | `min`, `max`, `value`, `label`, `show_value` | | `Checkbox(id, label)` | Toggle | `value` (boolean) | | `Dropdown(id, options)` | Select menu | `value`, `label` | | `NumericKeypad()` | Touch keypad | 0–9, decimal, backspace | | `NumericInput(id)` | Number display | `value`, `unit`, `min_value`, `max_value` | | `TextInput(id)` | Text field | `value`, `label`, `placeholder` | ### Visualization Widgets | Widget | Purpose | Key Parameters | |--------|---------|---------------| | `SensorValue(port)` | Large sensor reading | `sensor_type` ("analog", "digital") — live updates automatically | | `SensorGraph(port)` | Real-time line graph | `sensor_type`, `max_points` | | `LightBulb(is_on)` | Animated light bulb | Boolean state | | `AnimatedRobot(moving)` | Robot animation | `size` — shows spinning wheels when `moving=True` | | `CircularSlider(id)` | Circular control | `min`, `max`, `value`, `label` | | `ProgressSpinner()` | Loading spinner | `size`, `color` | | `PulsingArrow(direction)` | Directional arrow | `direction` ("up", "down", "left", "right") | | `RobotDrivingAnimation(target_distance)` | Driving visualization | Shows robot moving with distance counter | | `MeasuringTape(distance)` | Tape animation | `distance` in cm | | `CalibrationChart(samples)` | Scatter/line chart | `thresholds`, `height` | ### Layout Widgets | Widget | Purpose | Key Parameters | |--------|---------|---------------| | `Row(children)` | Horizontal layout | `align`, `spacing` | | `Column(children)` | Vertical layout | `align`, `spacing` | | `Center(children)` | Center content | Centers both horizontally and vertically | | `Card(children)` | Card container | `title`, `padding` | | `Split(left, right)` | Side-by-side layout | `ratio` — tuple of ints, e.g. 
`(1, 1)` or `(5, 3)` | | `Expanded(child)` | Fill available space | `flex` weight | ## Event Decorators Bind screen methods to user interactions: ```python from libstp.ui.events import ( on_click, # Button tapped on_change, # Input value changed on_slider, # Slider moved on_submit, # Form submitted on_keypad, # Keypad key pressed on_button_press, # Physical Wombat button pressed on_screen_tap, # Screen tapped anywhere ) class MyScreen(UIScreen): title = "Events Demo" def build(self): return Column(children=[ Button("go", "Go!", style="success"), Slider("speed", min=0, max=100, value=50, label="Speed"), NumericKeypad(), NumericInput("distance", value=0, unit="cm"), ]) @on_click("go") async def handle_go(self): speed = self.get_value("speed") self.close(speed) @on_slider("speed") async def handle_speed(self, value): # Called every time slider moves pass @on_keypad() async def handle_key(self, key): # key is "0"-"9", ".", or "back" current = self.get_value("distance") or "" if key == "back": current = current[:-1] else: current += key self.set_value("distance", current) await self.refresh() @on_button_press() async def handle_physical_button(self): # Physical button on the Wombat self.close("button") ``` ## Display Modes ### Blocking (`show`) The most common pattern. Shows a screen and waits for `close()` to be called: ```python result = await self.show(MyScreen()) # Execution pauses here until the screen calls self.close(result) ``` ### Non-Blocking (`display` + `pump_events`) Show a screen while doing background work. You must pump events manually: ```python screen = StatusScreen(message="Working...") await self.display(screen) for i in range(100): await self.pump_events() # Handle any UI events screen.message = f"Step {i}/100" await screen.refresh() await asyncio.sleep(0.1) await self.close_ui() ``` ### Context Manager (`showing`) Cleaner syntax for non-blocking display: ```python async with self.showing(ProgressScreen(message="Calibrating...")) as ctx: for i in range(100): ctx.screen.progress = i # 0-100 integer await ctx.screen.refresh() await asyncio.sleep(0.05) # Screen automatically closed when exiting the context ``` ### Background Task (`run_with_ui`) Show a screen while running a coroutine. Screen stays visible until the task completes: ```python result = await self.run_with_ui( ProgressScreen("Running diagnostics..."), self.run_diagnostics, # async method ) ``` ## The `_primary_button_id` Pattern Set `_primary_button_id` on your screen to link the physical Wombat button to a specific on-screen button. When the user presses the physical button, it triggers the click handler for that button: ```python class MyScreen(UIScreen): _primary_button_id = "confirm" # Physical button = clicking "confirm" def build(self): return Column(children=[ Text("Ready?"), Button("confirm", "OK", style="success"), ]) @on_click("confirm") async def on_confirm(self): self.close(True) ``` This is important for screens that need to work without touching the screen — the physical button acts as a universal "OK". ## Accessing the Robot from Screens Screens have access to the robot instance via `self.robot`: ```python class DiagnosticsScreen(UIScreen): def build(self): return Column(children=[ Text("Motor Diagnostics", size="large"), Button("test", "Run Test"), ]) @on_click("test") async def on_test(self): # Read sensor directly value = await self.read_sensor(0, sensor_type="analog") # Access robot hardware motor = self.robot.defs.front_left_motor # ... use motor ... 
``` `read_sensor(port, sensor_type="analog")` is a convenience method that reads a sensor value from within a screen. The `sensor_type` parameter defaults to `"analog"`, so `await self.read_sensor(0)` is enough for most cases. ### Widget State: `get_value` / `set_value` Screens track input widget values automatically. Access them with: ```python # Read the current value of an input widget speed = self.get_value("speed_slider") # Set a widget's value programmatically self.set_value("distance_input", "42.5") await self.refresh() # Re-render to show the new value ``` Values are automatically updated when users interact with input widgets (sliders, text inputs, keypads, etc.). ## Patterns from Real Competition Code ### Calibration Loop with Retry From the actual calibration system — measure, confirm, retry if needed: ```python class CalibrateStep(UIStep): async def _execute_step(self, robot): while True: # Step 1: Measure white = await self.show(MeasureScreen(port=0, surface="white")) black = await self.show(MeasureScreen(port=0, surface="black")) # Step 2: Confirm result = await self.show(ConfirmScreen( white_value=white.value, black_value=black.value, )) if result.confirmed: # Save and exit store_calibration(result) break # Otherwise loop and re-measure ``` ### Wait-for-Light with State Machine The competition start light detection uses a multi-state UI: ```mermaid stateDiagram-v2 [*] --> WarmingUp: Display screen WarmingUp --> TestMode: Warmup complete TestMode --> TestMode: Light triggers (test count++) TestMode --> Armed: Button press Armed --> Triggered: Light detected Triggered --> [*]: GO! ``` The screen updates in real time showing the current state, raw sensor value, baseline, and threshold — all while running a Kalman filter for robust detection. ### Driving with Visual Feedback Show an animation while the robot drives: ```python class DriveWithFeedback(UIStep): async def _execute_step(self, robot): screen = ProgressScreen(message="Driving to target...") async with self.showing(screen) as ctx: # Run a drive step while showing progress await drive_forward(50).run_step(robot) await self.show(MessageScreen( title="Arrived!", message="Robot reached the target.", icon_name="check_circle", )) ``` ## When to Use UI Steps | Situation | Use | |-----------|-----| | Quick "press button to continue" | `wait_for_button()` or `self.wait_for_button()` | | Simple yes/no | `self.confirm("Ready?")` | | Show a message | `self.message("Done!")` | | Get a number from the user | `self.input_number("Enter distance", unit="cm")` | | Sensor dashboard during testing | Custom screen with `SensorValue` + `SensorGraph` | | Calibration workflow | Custom screen with `read_sensor()` + `ResultsTable` | | Visual feedback during autonomous | `self.showing(ProgressScreen(...))` context manager | | Multi-step wizard | Chain multiple `self.show()` calls with different screens | > **Tip:** Start with the quick helpers (`message`, `confirm`, `input_number`). Only build custom screens when you need live sensor data, complex layouts, or multi-step flows. --- # Configuration Reference URL: /02-programming/13-configuration-reference/ This page is a complete reference for every key in every configuration file used by a raccoon/libstp project. Configuration is split across several YAML files that are assembled by raccoon's include-aware loader at build time. All paths are relative to the project root. 
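For orientation, here is a minimal sketch of what the assembled root file could look like, based on the keys and include targets documented in the sections below. The `name` and `uuid` values are placeholders, and the exact quoting of the `!include` paths is assumed to follow the style shown for `config/hardware.yml`:

```yaml
# raccoon.project.yml — root descriptor; every section is delegated via !include
name: my-botball-robot
uuid: 00000000-0000-0000-0000-000000000000   # placeholder; set once by `raccoon create`
robot: !include 'config/robot.yml'           # drive, odometry, motion_pid, physical
missions: !include 'config/missions.yml'     # ordered mission list
definitions: !include 'config/hardware.yml'  # sensors, motors, servos
connection: !include 'config/connection.yml' # network / SSH settings
```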
## File layout and include graph ``` raccoon.project.yml ├── config/robot.yml (robot: key) ├── config/missions.yml (missions: key) ├── config/hardware.yml (definitions: key) │ ├── config/motors.yml (!include-merge) │ └── config/servos.yml (!include-merge) └── config/connection.yml (connection: key) ``` `raccoon.project.yml` is the root file. It uses `!include` to delegate each top-level section to the files listed above. `config/hardware.yml` in turn uses `!include-merge` for motors and servos, which means all motor and servo entries are promoted to the same level as the sensor definitions. --- ## `raccoon.project.yml` The root project descriptor. Only a handful of keys live here directly; the rest are delegated. | Key | Type | Description | |-----|------|-------------| | `name` | `string` | Human-readable project name. Displayed in the raccoon CLI and web IDE. Set once at `raccoon create`. | | `uuid` | `string` | A unique identifier for the project (UUID v4). Used by the raccoon daemon to route commands to the correct project on the Pi. Set once at `raccoon create`, never change manually. | | `robot` | `!include` | Delegates to `config/robot.yml`. The entire robot subsystem configuration lives there. | | `missions` | `!include` | Delegates to `config/missions.yml`. The ordered list of missions. | | `definitions` | `!include` | Delegates to `config/hardware.yml`. All hardware object definitions. | | `connection` | `!include` | Delegates to `config/connection.yml`. SSH and daemon connection settings. | --- ## `config/robot.yml` Top-level robot configuration. Maps directly to the generated `Robot` class in `src/hardware/robot.py`. ### `shutdown_in` | Attribute | Value | |-----------|-------| | Type | `integer` | | Default | `120` | | Required | Yes | Number of seconds after program start after which `GenericRobot` will automatically shut down the running mission sequence. Prevents the robot from running indefinitely if something goes wrong. Set to `0` to disable. --- ### `drive` Container for the drive subsystem. All motion steps read their kinematics and velocity control from here. #### `drive.kinematics` Specifies the kinematic model that maps chassis velocities to per-wheel commands. | Key | Type | Default | Description | |-----|------|---------|-------------| | `type` | `string` | — | **Required.** Kinematic model. Supported values: `differential`, `mecanum`. | | `wheel_radius` | `float` | — | **Required.** Wheel radius in metres. Used to convert angular wheel velocity (rad/s) to linear velocity (m/s). | | `wheelbase` | `float` | — | **Required for differential and mecanum.** Distance between the left and right wheel contact patches in metres. For mecanum this is the front-to-back axle distance. | | `track_width` | `float` | — | **Required for mecanum only.** Left-to-right distance between wheel contact patches in metres. | | `left_motor` | `string` | — | **Required for differential.** Name of the left drive motor as it appears in `definitions`. Translated to `defs.` in generated code. | | `right_motor` | `string` | — | **Required for differential.** Name of the right drive motor as it appears in `definitions`. | | `front_left_motor` | `string` | — | **Required for mecanum.** Name of the front-left motor in `definitions`. | | `front_right_motor` | `string` | — | **Required for mecanum.** Name of the front-right motor in `definitions`. | | `back_left_motor` | `string` | — | **Required for mecanum.** Name of the back-left motor in `definitions`. 
| | `back_right_motor` | `string` | — | **Required for mecanum.** Name of the back-right motor in `definitions`. | The `type` value is case-insensitive. `DifferentialKinematics` is instantiated as `DifferentialKinematics(left_motor, right_motor, wheelbase, wheel_radius)` and `MecanumKinematics` as `MecanumKinematics(front_left_motor, front_right_motor, back_left_motor, back_right_motor, wheelbase, track_width, wheel_radius)`. Example — differential drive: ```yaml drive: kinematics: type: differential wheel_radius: 0.03 wheelbase: 0.16 left_motor: left_motor right_motor: right_motor ``` #### `drive.vel_config` Per-axis chassis velocity control configuration. Each axis has a PID controller and a feedforward term that together compute the motor command voltage. The three axes are: - `vx` — forward/backward linear velocity (m/s) - `vy` — lateral (strafe) velocity (m/s); only relevant for mecanum - `wz` — angular (yaw) velocity (rad/s) Each axis block has the same structure: ```yaml vel_config: vx: pid: kp: 0.0 ki: 0.0 kd: 0.0 ff: kS: 0.0 kV: 1.0 kA: 0.0 vy: pid: { kp: 0.0, ki: 0.0, kd: 0.0 } ff: { kS: 0.0, kV: 1.0, kA: 0.0 } wz: pid: { kp: 0.0, ki: 0.0, kd: 0.0 } ff: { kS: 0.0, kV: 1.0, kA: 0.0 } ``` **`pid` sub-keys** (maps to `PidGains`): | Key | Type | Default | Description | |-----|------|---------|-------------| | `kp` | `float` | `0.0` | Proportional gain. | | `ki` | `float` | `0.0` | Integral gain. | | `kd` | `float` | `0.0` | Derivative gain. | **`ff` sub-keys** (maps to `Feedforward`): | Key | Type | Default | Description | |-----|------|---------|-------------| | `kS` | `float` | `0.0` | Static friction compensation. Added as a constant offset whenever the commanded velocity is non-zero. | | `kV` | `float` | `1.0` | Velocity feedforward coefficient. Scales the commanded velocity to produce the open-loop motor command. `1.0` means the motor command equals the setpoint directly when no error exists. | | `kA` | `float` | `0.0` | Acceleration feedforward coefficient. | The `vel_config` is translated into a `ChassisVelocityControlConfig` struct at code-generation time. --- ### `odometry` Selects the odometry model used to estimate the robot's pose (position + heading) during autonomous motion. Can be specified in two forms: **Short form** (type name only): ```yaml odometry: FusedOdometry ``` **Long form** with configuration: ```yaml odometry: type: FusedOdometry config: bemf_trust: 1.0 ``` Supported `type` values (case-insensitive): `FusedOdometry`, `fused`, `ImuOdometry`, `imu`. When `type` is `FusedOdometry`, the optional `config` sub-block maps to `FusedOdometryConfig`: | Key | Type | Default | Description | |-----|------|---------|-------------| | `config.bemf_trust` | `float` | `0.95` | Complementary filter coefficient. `1.0` = rely entirely on BEMF-derived velocity; `0.0` = rely entirely on IMU accelerometer integration. Values between `0.8` and `1.0` are typical. | | `config.imu_ready_timeout_ms` | `integer` | `1000` | Maximum milliseconds to wait for the IMU to become ready on startup. | | `config.enable_accel_fusion` | `bool` | `true` | Enable the complementary filter that fuses BEMF estimates with IMU linear acceleration. When `false`, only BEMF is used for velocity. | | `config.turn_axis` | `string` | `"world_z"` | Which axis to use for yaw rate. Accepted values: `"world_z"`, `"body_x"`, `"body_y"`, `"body_z"`. Change to a body axis only if the robot's up-axis is not the sensor's body Z. 
| --- ### `motion_pid` Unified configuration for the position/heading PID controllers used by all drive-based motion steps (`Drive`, `Turn`, `DriveAngle`, etc.). **Required.** This entire block maps to the `UnifiedMotionPidConfig` C++ struct. #### Distance PID (`motion_pid.distance`) Controls convergence toward a position target (linear or 2-D). | Key | Type | Default | Description | |-----|------|---------|-------------| | `kp` | `float` | `2.0` | Proportional gain (velocity per metre of error). | | `ki` | `float` | `0.0` | Integral gain. | | `kd` | `float` | `0.0` | Derivative gain. | | `integral_max` | `float` | `10.0` | Maximum accumulated integral value (anti-windup). | | `integral_deadband` | `float` | `0.01` | Error magnitude below which the integrator stops accumulating. | | `derivative_lpf_alpha` | `float` | `0.1` | Low-pass filter coefficient for the derivative term. `0.1` = heavy filtering; `1.0` = no filtering. | | `output_min` | `float` | `-10.0` | Minimum PID output (velocity command, m/s or rad/s). | | `output_max` | `float` | `10.0` | Maximum PID output. | #### Heading PID (`motion_pid.heading`) Controls convergence toward an angular target. Same sub-keys as `distance` (`kp`, `ki`, `kd`, `integral_max`, `integral_deadband`, `derivative_lpf_alpha`, `output_min`, `output_max`). Scaffold defaults: `kp: 3.0`, all others matching the distance block. #### Velocity feedforward | Key | Type | Default | Description | |-----|------|---------|-------------| | `velocity_ff` | `float` | `1.0` | Scalar multiplied by the profiled velocity target and added directly to the PID output as open-loop feedforward. `1.0` means full feedforward. | #### Convergence tolerances | Key | Type | Default | Description | |-----|------|---------|-------------| | `distance_tolerance_m` | `float` | `0.01` | Position error below which the robot is considered to have reached its linear target (metres). | | `angle_tolerance_rad` | `float` | `0.02` | Heading error below which the robot is considered to have reached its angular target (radians). | #### Linear saturation handling When the drive command saturates the actuators, the controller de-rates its output to prevent integral windup. | Key | Type | Default | Description | |-----|------|---------|-------------| | `saturation_derating_factor` | `float` | `0.85` | Multiplier applied to the output scale when saturation is detected. Values below `1.0` cause the controller to back off. | | `saturation_min_scale` | `float` | `0.1` | Minimum scale the de-rating is allowed to reach. Prevents the controller from going completely silent. | | `saturation_recovery_rate` | `float` | `0.02` | Per-cycle increment to the output scale during recovery from saturation. | | `saturation_hold_cycles` | `integer` | `5` | Number of consecutive unsaturated cycles required before recovery begins. | | `saturation_recovery_threshold` | `float` | `0.95` | Output scale at which the robot is considered to have fully recovered from saturation. | #### Heading-specific saturation handling | Key | Type | Default | Description | |-----|------|---------|-------------| | `heading_saturation_derating_factor` | `float` | `0.85` | Same as `saturation_derating_factor` but applied to heading-axis saturation only. | | `heading_min_scale` | `float` | `0.25` | Minimum heading output scale. Higher than the linear minimum because heading corrections are always needed. | | `heading_recovery_rate` | `float` | `0.05` | Per-cycle recovery rate for heading saturation. 
| | `heading_saturation_error_rad` | `float` | `0.01` | Heading error below which saturation de-rating engages for the heading axis. | | `heading_recovery_error_rad` | `float` | `0.005` | Heading error threshold for considering heading saturation fully recovered. | #### Motion profile constraints These three sub-blocks are **populated automatically by `raccoon calibrate characterize_drive`** and should not be edited manually. They are commented out in the scaffold until characterization is run. ```yaml motion_pid: linear: max_velocity: 0.0 # m/s acceleration: 0.0 # m/s² deceleration: 0.0 # m/s² lateral: max_velocity: 0.0 acceleration: 0.0 deceleration: 0.0 angular: max_velocity: 0.0 # rad/s acceleration: 0.0 # rad/s² deceleration: 0.0 # rad/s² ``` Each sub-block maps to `AxisConstraints`: | Key | Type | Description | |-----|------|-------------| | `max_velocity` | `float` | Physical speed ceiling for the axis. The motion profiler will not request more than this. | | `acceleration` | `float` | Maximum acceleration rate used to ramp up from rest. | | `deceleration` | `float` | Maximum deceleration rate used to ramp down to the target. | When all three values in a block are `0.0` the profiler for that axis is disabled and the PID controller drives directly without trapezoidal profiling. --- ### `physical` Geometric description of the robot body. Used to compute sensor positions relative to the robot center and to populate `_wheel_positions` in the generated `Robot` class for collision and pose reasoning. | Key | Type | Default | Description | |-----|------|---------|-------------| | `width_cm` | `float` | `15.0` | Robot body width in centimetres (left-to-right). | | `length_cm` | `float` | `15.0` | Robot body length in centimetres (front-to-back). | | `rotation_center.x_cm` | `float` | `width_cm / 2` | X position of the point the robot rotates around, measured from the left edge in centimetres. | | `rotation_center.y_cm` | `float` | `length_cm / 2` | Y position of the rotation centre, measured from the rear edge in centimetres. | | `start_pose.x_cm` | `float` | `30.0` | Starting X position on the game field in centimetres. Used for absolute pose tracking when a field map is available. | | `start_pose.y_cm` | `float` | `20.0` | Starting Y position on the game field in centimetres. | | `start_pose.theta_deg` | `float` | `90.0` | Starting heading in degrees. `0` = facing right (+X), `90` = facing up (+Y). | | `sensors` | `list` | `[]` | List of sensor placement descriptors. Each entry has `name`, `x_cm`, `y_cm`, and optionally `clearance_cm`. The `name` must match a key in `definitions`. | **Sensor entry sub-keys:** | Key | Type | Description | |-----|------|-------------| | `name` | `string` | Must match a sensor key in `definitions`. | | `x_cm` | `float` | Sensor X position from the robot's left edge in centimetres. | | `y_cm` | `float` | Sensor Y position from the robot's rear edge in centimetres. | | `clearance_cm` | `float` | Vertical distance the sensor sits above the floor (default `0`). Used for line-detection geometry compensation. | --- ## `config/hardware.yml` Defines all hardware objects. Every key (other than `_motors` and `_servos`) maps to one attribute on the generated `Defs` class. Keys must be valid Python identifiers. `hardware.yml` also contains the lines: ```yaml _motors: !include-merge 'motors.yml' _servos: !include-merge 'servos.yml' ``` These merge all motor and servo definitions from the separate files directly into the `definitions` namespace. 
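To make the merge concrete, here is a sketch with one sensor declared directly in `config/hardware.yml` and one motor declared in `config/motors.yml` (both files shown in a single block for brevity; the entry names are illustrative, and the hardware types themselves are documented in the following sections):

```yaml
# config/hardware.yml
light_sensor:
  type: AnalogSensor
  port: 0
_motors: !include-merge 'motors.yml'
_servos: !include-merge 'servos.yml'

# config/motors.yml
left_motor:
  type: Motor
  port: 0
```

After merging, `light_sensor` and `left_motor` sit side by side in the `definitions` namespace, so both become attributes on the generated `Defs` class and are referenced from robot code as `defs.light_sensor` and `defs.left_motor`.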
### Required entry: `button` Every project must define a hardware entry named `button` of type `DigitalSensor`. The calibration and wait-for-light steps use `button` as the confirmation input. The code generator will reject a project that lacks this entry. ```yaml button: type: DigitalSensor port: 10 ``` ### Built-in entry: `imu` An `IMU` entry named `imu` is always generated, even if not listed in `hardware.yml`. If an `imu` key is present, any extra parameters in it are forwarded to the `Imu()` constructor. If absent, `imu = Imu()` is emitted with no arguments. ```yaml imu: type: IMU ``` ### Hardware type: `DigitalSensor` Maps to `libstp.hal.DigitalSensor`. | Key | Type | Default | Description | |-----|------|---------|-------------| | `type` | `string` | — | Must be `DigitalSensor`. | | `port` | `integer` | — | **Required.** Hardware port number. | ### Hardware type: `AnalogSensor` Maps to `libstp.hal.AnalogSensor`. | Key | Type | Default | Description | |-----|------|---------|-------------| | `type` | `string` | — | Must be `AnalogSensor`. | | `port` | `integer` | — | **Required.** Hardware port number. | All `AnalogSensor` instances are also collected into the `Defs.analog_sensors` list, which is used by the analog-channel calibration steps. ### Hardware type: `IRSensor` Maps to `libstp.sensor_ir.IRSensor` (subclass of `AnalogSensor`). | Key | Type | Default | Description | |-----|------|---------|-------------| | `type` | `string` | — | Must be `IRSensor`. | | `port` | `integer` | — | **Required.** Hardware port number (analog channel). | | `calibrationFactor` | `float` | `1.0` | Raw ADC scale factor applied before probability computation. **Set automatically by `raccoon calibrate sensors`.** | ### Hardware type: `SensorGroup` Maps to `libstp.step.motion.sensor_group.SensorGroup`. Binds a pair of IR sensors together with shared defaults for threshold, speed, and line-follow PID. At least one of `left` or `right` is required. | Key | Type | Default | Description | |-----|------|---------|-------------| | `type` | `string` | — | Must be `SensorGroup`. | | `left` | `string` | — | Name of the left sensor in `definitions`. Referenced as a bare attribute (`defs.name`), not a string. | | `right` | `string` | — | Name of the right sensor in `definitions`. | | `threshold` | `float` | `0.7` | Default confidence threshold (0.0–1.0) for black/white classification. | | `speed` | `float` | `1.0` | Default motion speed fraction (0.0–1.0) for sensor-triggered moves. | | `follow_speed` | `float` | `0.8` | Default speed used by line-following steps. | | `follow_kp` | `float` | `0.5` | Proportional gain for the line-follow PID controller. | | `follow_ki` | `float` | `0.02` | Integral gain for the line-follow PID. | | `follow_kd` | `float` | `0.0` | Derivative gain for the line-follow PID. | Example: ```yaml front: type: SensorGroup left: front_left_ir_sensor right: front_right_ir_sensor threshold: 0.6 follow_kp: 0.6 ``` --- ## `config/motors.yml` Defines drive and auxiliary motor objects. All entries are merged into the `definitions` namespace by `hardware.yml`. Each key must be a valid Python identifier and must match the name referenced in `drive.kinematics`. ### Motor entry Maps to `libstp.hal.Motor`. | Key | Type | Default | Description | |-----|------|---------|-------------| | `type` | `string` | — | Must be `Motor`. | | `port` | `integer` | — | **Required.** Hardware port number (0–3 on the Wombat). 
| | `inverted` | `bool` | `false` | When `true`, all velocity and position commands are negated before sending to hardware. Use this to correct for motors mounted in reverse. | | `calibration` | `mapping` | See below | Motor calibration data. Sub-keys are described below. | #### `calibration` sub-keys | Key | Type | Default (C++) | Description | Set by calibration | |-----|------|---------------|-------------|-------------------| | `ticks_to_rad` | `float` | `2π / 1440 ≈ 0.00436` | Encoder ticks per radian of shaft rotation. Converts raw position counter increments to radians. The scaffold default `0.00002` is a placeholder; the correct value depends on the encoder resolution and gear ratio. | `raccoon calibrate rpm` | | `vel_lpf_alpha` | `float` | `0.5` | Low-pass filter coefficient for the velocity estimate derived from BEMF ticks. `1.0` = no filtering (raw BEMF); `0.0` = completely frozen. Lower values smooth at the cost of lag. | `raccoon calibrate motors` | | `ff.kS` | `float` | `0.0` | Feedforward static friction constant. Added to every non-zero velocity command to overcome stiction. | `raccoon calibrate motors` | | `ff.kV` | `float` | `0.0` | Feedforward velocity constant. Scales the target velocity to compute an open-loop motor command. | `raccoon calibrate motors` | | `ff.kA` | `float` | `0.0` | Feedforward acceleration constant. | `raccoon calibrate motors` | | `pid.kp` | `float` | `0.0` | Proportional gain for the per-motor velocity PID loop. | `raccoon calibrate motors` | | `pid.ki` | `float` | `0.0` | Integral gain for the per-motor velocity PID. | `raccoon calibrate motors` | | `pid.kd` | `float` | `0.0` | Derivative gain for the per-motor velocity PID. | `raccoon calibrate motors` | | `bemf_scale` | `float` | — | Linear scale applied to raw BEMF ticks before converting to rad/s. | `raccoon calibrate rpm` | | `bemf_offset` | `float` | — | Constant offset added after `bemf_scale` to zero out BEMF noise at rest. | `raccoon calibrate rpm` | | `ticks_per_revolution` | `float` | — | Encoder ticks per full shaft revolution, computed from RPM data. Informational; not read at runtime. | `raccoon calibrate rpm` | The `ff` and `pid` sub-keys within `calibration` are nested mappings. The code generator uses docstring introspection to detect that `MotorCalibration` accepts `ff` and `pid` as nested `Feedforward` and `PidGains` objects respectively, so the YAML nesting is preserved correctly. Minimal scaffold entry: ```yaml left_motor: type: Motor port: 0 inverted: false calibration: ticks_to_rad: 0.00002 vel_lpf_alpha: 1.0 ``` After motor calibration: ```yaml left_motor: type: Motor port: 0 inverted: false calibration: ticks_to_rad: 0.00436 vel_lpf_alpha: 0.8 bemf_scale: 0.000312 bemf_offset: -0.0041 ticks_per_revolution: 1440.0 ff: kS: 0.042 kV: 0.981 kA: 0.003 pid: kp: 0.12 ki: 0.0 kd: 0.0 ``` --- ## `config/servos.yml` Defines servo hardware objects with optional named positions. All entries are merged into the `definitions` namespace by `hardware.yml`. ### Servo entry Maps to `libstp.hal.Servo` wrapped in `libstp.step.servo.preset.ServoPreset` when `positions` is specified. | Key | Type | Default | Description | |-----|------|---------|-------------| | `type` | `string` | — | Must be `Servo`. | | `port` | `integer` | — | **Required.** Servo port number (0–5 on the Wombat). | | `positions` | `mapping` | — | Optional. A mapping from position name to servo angle in degrees. 
When present, the entry is wrapped in a `ServoPreset` and each key becomes a callable attribute on `Defs` (e.g., `Defs.arm_servo.up()`). | | `offset` | `float` | `0` | Angle offset in degrees added to every position value. Useful when a servo is remounted slightly off from its original alignment without needing to update each position individually. Only effective when `positions` is also specified. | The `port` is the only argument passed to the underlying `Servo` constructor. The `positions` and `offset` keys are consumed by `ServoPreset` at the wrapper level and are not forwarded to the hardware object. Example: ```yaml arm_servo: type: Servo port: 0 offset: 0 positions: up: 30 down: 160 claw_servo: type: Servo port: 1 positions: open: 30 closed: 135 ``` Generated code (simplified): ```python class Defs: arm_servo = ServoPreset(Servo(0), positions={'up': 30, 'down': 160}, offset=0) claw_servo = ServoPreset(Servo(1), positions={'open': 30, 'closed': 135}) ``` Usage in a mission: ```python seq([ Defs.arm_servo.up(), Defs.claw_servo.open(), Defs.claw_servo.closed(speed=120), # eased motion at 120 deg/s ]) ``` --- ## `config/missions.yml` An ordered YAML list of missions to run. The code generator uses this to produce the `missions`, `setup_mission`, and `shutdown_mission` attributes on the `Robot` class. ### Entry formats **Normal mission** (runs in sequence with others): ```yaml - MyMission ``` **Typed mission** (dict with a single key): ```yaml - SetupMission: setup - AnotherMission: normal - TeardownMission: shutdown ``` The value is one of three strings: | Type | Description | |------|-------------| | `normal` | Added to `Robot.missions` list. Presented to the user for selection at runtime. | | `setup` | Assigned to `Robot.setup_mission`. Runs automatically before any normal mission is selected. Only one setup mission is allowed; if multiple are listed, the last one wins. | | `shutdown` | Assigned to `Robot.shutdown_mission`. Runs automatically after the program ends (timeout or explicit shutdown). | If a bare string entry (without a type key) is used, it is treated as `normal`. Each mission class name is converted to snake_case to derive the source filename. `DriveToBoxMission` → `src/missions/drive_to_box_mission.py`. The scaffold generates: ```yaml - SetupMission: setup ``` --- ## `config/connection.yml` Controls how `raccoon` connects to the robot over the network and copies files via SSH/SFTP. | Key | Type | Default | Description | |-----|------|---------|-------------| | `pi_address` | `string` | `192.168.4.1` | IP address or hostname of the Wombat controller. When connected to the Wombat's Wi-Fi hotspot, the default `192.168.4.1` is correct. | | `pi_port` | `integer` | `8421` | TCP port on which the raccoon daemon (`raccoon server`) is listening on the Pi. | | `pi_user` | `string` | `pi` | SSH username used for SFTP file transfers. | | `remote_path` | `string` | *(empty)* | Absolute path on the Pi where the project is deployed. When empty, raccoon uses a default path derived from the project UUID. | | `auto_connect` | `bool` | `true` | When `true`, raccoon CLI commands that require a connection (`run`, `sync`, `calibrate`, etc.) will attempt to connect automatically without prompting. Set to `false` to always require an explicit `raccoon connect`. | --- ## Calibration-populated keys summary Several keys are written back to config files automatically by raccoon calibration commands. These should not be edited by hand unless you know the exact values. 
| Config path | Calibration command | What it measures | |-------------|---------------------|------------------| | `definitions.<motor>.calibration.ticks_to_rad` | `raccoon calibrate rpm` | Encoder ticks per radian (requires physical rotation measurement jig) | | `definitions.<motor>.calibration.bemf_scale` | `raccoon calibrate rpm` | Linear BEMF scaling factor | | `definitions.<motor>.calibration.bemf_offset` | `raccoon calibrate rpm` | BEMF zero-offset correction | | `definitions.<motor>.calibration.ticks_per_revolution` | `raccoon calibrate rpm` | Informational ticks-per-revolution value | | `definitions.<motor>.calibration.vel_lpf_alpha` | `raccoon calibrate motors` | LPF coefficient for velocity estimate | | `definitions.<motor>.calibration.ff.kS` | `raccoon calibrate motors` | Motor static friction (stiction) feedforward | | `definitions.<motor>.calibration.ff.kV` | `raccoon calibrate motors` | Motor velocity feedforward constant | | `definitions.<motor>.calibration.ff.kA` | `raccoon calibrate motors` | Motor acceleration feedforward constant | | `definitions.<motor>.calibration.pid.kp` | `raccoon calibrate motors` | Per-motor velocity PID proportional gain | | `definitions.<motor>.calibration.pid.ki` | `raccoon calibrate motors` | Per-motor velocity PID integral gain | | `definitions.<motor>.calibration.pid.kd` | `raccoon calibrate motors` | Per-motor velocity PID derivative gain | | `robot.motion_pid.linear.max_velocity` | `raccoon calibrate characterize_drive` | Measured maximum forward speed (m/s) | | `robot.motion_pid.linear.acceleration` | `raccoon calibrate characterize_drive` | Measured linear acceleration (m/s²) | | `robot.motion_pid.linear.deceleration` | `raccoon calibrate characterize_drive` | Measured linear deceleration (m/s²) | | `robot.motion_pid.lateral.max_velocity` | `raccoon calibrate characterize_drive` | Measured maximum strafe speed (m/s) — mecanum only | | `robot.motion_pid.lateral.acceleration` | `raccoon calibrate characterize_drive` | Measured lateral acceleration (m/s²) — mecanum only | | `robot.motion_pid.lateral.deceleration` | `raccoon calibrate characterize_drive` | Measured lateral deceleration (m/s²) — mecanum only | | `robot.motion_pid.angular.max_velocity` | `raccoon calibrate characterize_drive` | Measured maximum turn speed (rad/s) | | `robot.motion_pid.angular.acceleration` | `raccoon calibrate characterize_drive` | Measured angular acceleration (rad/s²) | | `robot.motion_pid.angular.deceleration` | `raccoon calibrate characterize_drive` | Measured angular deceleration (rad/s²) | | `definitions.<ir_sensor>.calibrationFactor` | `raccoon calibrate sensors` | IR sensor K-means calibration factor | --- # IMU URL: /02-programming/14-imu/ # IMU The Wombat's IMU (Inertial Measurement Unit) is the primary source of heading information for the robot. It is what lets `turn_left(90)` know that exactly 90 degrees have elapsed, and what keeps the robot driving straight rather than veering sideways. ## The Sensor Hardware The Wombat uses an **InvenSense MPU-9250**, a nine-axis MEMS sensor package that combines three independent sensors: | Sensor | Measures | Unit | |--------|----------|------| | Gyroscope | Angular velocity around each axis | degrees/s | | Accelerometer | Linear acceleration along each axis | m/s² | | Magnetometer (AK8963) | Magnetic field strength along each axis | µT | The chip is connected to the **STM32 coprocessor** via SPI. The STM32 reads the sensor, runs orientation estimation firmware, and forwards the results to the Raspberry Pi over a shared SPI buffer at approximately 75 Hz.
User programs on the Pi access these values through the libstp HAL. ### Axes and Orientation The IMU defines three axes relative to the Wombat's board: - **X**: Points forward (toward the front of the robot) - **Y**: Points left (toward the left side of the robot) - **Z**: Points upward (perpendicular to the table surface) For a robot sitting flat on a table, the gyroscope's Z axis measures yaw — the rotation the robot performs when turning in place. That is the axis the drive system uses for heading estimation. The **sign convention** used by libstp is counter-clockwise positive (CCW = positive, CW = negative). This is the standard mathematical convention. The firmware reports heading in a clockwise-positive convention, so `getHeading()` internally negates the firmware value before returning it. ## The Digital Motion Processor (DMP) The MPU-9250 includes a **Digital Motion Processor** (DMP) — a small dedicated compute block inside the sensor chip. The DMP runs a proprietary 6-axis fusion algorithm (gyroscope + accelerometer) entirely on-chip at 200 Hz, without using the host CPU. The STM32 reads the DMP's quaternion output from a FIFO buffer at 50 Hz and exposes it to the Pi at ~75 Hz. The DMP also handles gyroscope bias calibration internally at startup: it estimates the bias of each gyro axis while the robot is still, and subtracts it from subsequent readings. This calibrated gyroscope signal is what the rest of the pipeline uses. The heading value available via `imu.get_heading()` comes from this DMP pipeline. In dynamic conditions — the kind of fast rotation that a Botball match involves — the DMP's 6-axis fusion has been shown to track heading with a Mean Absolute Error of around 12° over a 2-minute test, which is significantly better than raw gyroscope integration or software-based filters running at the lower 75 Hz rate available on the Pi. ## How the Drive System Uses the IMU The IMU is the heading source for all motion primitives. It is used in two distinct ways: ### Turns `turn_left()` and `turn_right()` use a **profiled PID controller** whose measured variable is the IMU heading. When a turn is commanded, the odometry is reset (heading back to zero), and the PID drives the motors until the IMU-integrated heading matches the target angle. The controller declares done only when the heading error is within tolerance *and* the filtered angular velocity has dropped below a settling threshold — preventing the robot from stopping prematurely while still rotating. ```python turn_left(90) # Rotates until IMU heading = +90° turn_right(45) # Rotates until IMU heading = -45° ``` ### Straight Driving `drive_forward()` and related steps use a **heading-hold PID** running in parallel with the distance PID. At the moment the drive starts, the current heading is captured. On each control cycle, the heading PID measures how far the robot has drifted from that initial heading and applies a corrective angular velocity. A well-tuned heading PID makes the robot drive in a straight line even on uneven surfaces or with slight motor imbalances. The heading error signal is: `yaw_error = current_heading - initial_heading`. If the robot drifts right, `yaw_error` becomes negative, and the heading PID commands a left correction. ## Orientation Estimation: Why One Sensor Is Not Enough Raw gyroscope integration is simple — integrate angular velocity over time to get angle — but gyroscopes have **bias**: a small non-zero output even when stationary. 
A bias of only 0.05 °/s accumulates to 3° of heading error per minute, and several centimetres of sideways offset over a match. Without correction, heading drift is unavoidable. The accelerometer and magnetometer provide long-term stable references (gravity and magnetic north respectively) but are noisy and susceptible to vibration and motor-induced interference. Sensor fusion combines both types to get the benefits of each. ### The DMP's Approach The DMP's 6-axis fusion combines the gyroscope and accelerometer. The accelerometer's gravity vector provides a reference for roll and pitch, and the gyroscope provides high-rate rotation tracking. In static tests, this keeps roll and pitch below 0.35°. However, in static yaw tests the DMP showed surprising behaviour: it drifted 19° over 10 minutes — about six times more than raw gyroscope integration (3° drift) over the same period. This extra drift is most likely caused by **gravity-vector coupling**: when the DMP corrects roll and pitch using the accelerometer, small errors in its gravity estimate leak into the yaw channel. This is a known artefact of 6-axis fusion when gravity is not perfectly aligned with the sensor Z axis. Despite this static disadvantage, during fast dynamic rotation (the actual working condition for Botball) the DMP outperforms software alternatives because it runs at the full internal 200 Hz rate. ### Software Fusion Comparison A **Mahony complementary filter** (6-axis, implemented in software) was tested using the same sensor data. It showed only 3° of yaw drift over 10 minutes — matching raw gyroscope integration and far better than the DMP in static conditions. The Mahony filter uses a PI correction term with gains k_P = 2.0 and k_I = 0.005 to correct gyroscope integration using the accelerometer reference. Because it uses the DMP's already-bias-corrected gyroscope data as input, it benefits from the DMP's signal conditioning without inheriting its fusion artefact. Adding the magnetometer (9-axis Mahony) reduces static yaw drift to under 0.2 °/h — about 150× better than the DMP in static conditions. The filter converges to magnetic north within 1.3 seconds. However, this advantage reverses during fast rotation: the magnetometer's correction term continuously pulls the heading estimate toward magnetic north, fighting the gyroscope during rapid yaw changes, and the resulting Mean Absolute Error during a 2-minute dynamic test was 113° — the worst of all tested methods. ### Summary of Trade-offs | Method | Static yaw drift | Dynamic MAE | Notes | |--------|-----------------|-------------|-------| | DMP 6-axis | 109 °/h | **12°** | Best for motion; runs on-chip at 200 Hz | | Mahony 6-axis | 21 °/h | 32° | Good static; software filter at 75 Hz | | Raw gyro integration | 27 °/h | 30° | Simplest; no correction | | Mahony 9-axis (with magnetometer) | **0.19 °/h** | 113° | Best static; degrades under fast rotation | **For Botball competition use, the DMP is the right choice.** Matches involve fast turns and the 12° dynamic MAE is far better than the alternatives. The DMP is what libstp uses by default. ## Calibration Low-cost MEMS sensors like the MPU-9250 have systematic errors that require calibration. The standard MEMS error model is: ``` y = K(x + b) + n ``` Where `y` is the raw reading, `x` is the true value, `b` is bias, `K` is a scale factor, and `n` is noise. On a tested Wombat unit, the uncalibrated Z-axis accelerometer read 19.43 m/s² instead of the expected 9.81 m/s² — a scale error of almost 2×. 
The magnetometer had a hard-iron offset of 14.87 µT along Z, roughly 30% of the local geomagnetic field. ### Gyroscope Bias The DMP handles gyroscope bias automatically. At startup, while the robot is stationary, the DMP measures the mean output of each gyro axis and stores it as the bias. Subsequent readings subtract this bias. This process happens before the robot begins moving and requires the robot to be still for a brief settling period (typically under a second). The calibrated gyroscope Z-axis bias is around 0.005 °/s, which over 10 minutes gives approximately 3° of drift — the theoretical minimum for a gyroscope without a magnetic reference. ### Accelerometer Calibration Accelerometer calibration corrects for per-axis biases and scale factors. The method requires no special equipment: 1. Place the robot with the Z axis vertical and collect static samples (~30,000 samples on a test unit). 2. Compute the per-axis bias as the mean of those samples, then subtract 9.81 m/s² from the gravity axis to get the true bias. 3. Compute the scale factor for each axis from the ratio of measured gravity magnitude to expected gravity. 4. Validate by rotating the robot freely and confirming that the corrected acceleration vectors form a sphere centred at the origin in 3D space (ellipsoid fitting on ~25,000 free-rotation samples). The corrected accelerometer output is: ``` a_corr = K_a^-1 * (a_raw - b_a) ``` ### Magnetometer Calibration Magnetometer calibration is more involved because nearby ferromagnetic parts — motor housing, mounting hardware, battery — distort the local field. Two types of distortion are corrected: - **Hard-iron offset** (`b_m`): A constant additive shift from permanently magnetised parts near the sensor. Shifts the measurement sphere off-centre. - **Soft-iron distortion** (`S_m`): A multiplicative stretch and rotation caused by magnetically soft (non-permanent) materials. Deforms the sphere into an ellipsoid. The combined error model is: ``` m_raw = S_m * m_true + b_m + n_m ``` Calibration collects magnetic field measurements during slow full 3D rotation of the robot (~12,000 samples over 8 minutes), then fits an ellipsoid to the data using least-squares. The result is a hard-iron vector and a soft-iron matrix. At runtime, correction is applied as: ``` m_corr = S_m^-1 * (m_raw - b_m) ``` Outlier rejection using MAD (Median Absolute Deviation) removes transient disturbances (e.g. from another robot nearby) before fitting. **The magnetometer calibration must be repeated whenever the robot's mechanical configuration changes** — adding or moving motors, batteries, or metal structural parts all change the local magnetic field and invalidate the previous calibration. Motor currents also contribute to magnetic distortion during operation, which is an open question for the in-match case. ### Calibration Tools Calibration scripts and collected data are available at: https://github.com/ToberoCat/wombat-imu-calibration The calibration scripts are Python-based and run offline on data collected from the Wombat. ## Accessing IMU Data Directly Most missions do not need to read the IMU directly — the motion system handles it. 
But if you need raw sensor values: ```python from libstp.hal import IMU imu = IMU() # Full read: returns (accel, gyro, magneto) as tuples of (x, y, z) accel, gyro, magneto = imu.read() # accel: (ax, ay, az) in m/s² # gyro: (wx, wy, wz) in rad/s # magneto: (mx, my, mz) in µT # Angular velocity only (rad/s) wx, wy, wz = imu.get_angular_velocity() # Firmware-computed heading (radians, CCW-positive) heading_rad = imu.get_heading() # Gravity-compensated linear acceleration (m/s²) lax, lay, laz = imu.get_linear_acceleration() # Configure which body axis is used as the turn (yaw) axis # Options: "world_z" (default), "body_x", "body_y", "body_z" # Prefix with "-" to negate: "-body_z" imu.set_yaw_rate_axis_mode("world_z") # Angular rate around the configured turn axis (rad/s) yaw_rate = imu.get_yaw_rate() ``` The firmware-computed heading (`get_heading()`) is the value used by the motion system. It is the DMP quaternion integrated into a scalar heading, converted from firmware's clockwise-positive convention to libstp's CCW-positive convention. ### Turn Axis Modes By default the system tracks yaw rotation as the world-Z axis — which is correct when the robot is flat on a table. If your robot turns while tilted (e.g. a robot that tips up on ramps), you can configure the relevant body axis instead: | Mode | Description | |------|-------------| | `"world_z"` | Yaw in the world frame (default, robot flat on surface) | | `"body_x"` | Robot's X axis (forward/back roll) | | `"body_y"` | Robot's Y axis (left/right roll) | | `"body_z"` | Robot's Z axis (raw body yaw, without world correction) | This is configured on the odometry level and is passed to the IMU automatically: ```python from libstp import Stm32OdometryConfig config = Stm32OdometryConfig() config.turn_axis = "body_z" # Use raw body Z for a tilted robot ``` ## The `after_degrees()` Stop Condition `after_degrees(deg)` is a stop condition that fires after the robot has rotated by a given angle. Unlike `after_cm()` which uses encoder-based odometry, `after_degrees()` reads the **absolute IMU heading** directly. This means it is not affected by odometry resets that occur at the start of each motion step. ```python # Turn right until 60 degrees have elapsed, with safety timeout turn_right(speed=0.5).until(after_degrees(60) | after_seconds(3)) # Combined: drive forward, stop when the robot has drifted 15 degrees drive_forward(speed=0.3).until(after_degrees(15) | after_cm(50)) ``` **Implementation detail:** `after_degrees()` records the absolute heading at start, then on each check computes the angular difference accounting for wrapping at ±180°: ```python delta = abs(current - start_heading) if delta > π: delta = 2π - delta ``` This means it measures total unsigned rotation, regardless of direction. It works for both left and right turns. Unlike an angle-based `turn_left(deg)`, which uses a full PID controller to reach a precise target, `after_degrees()` is a simpler "how far have we rotated" check. Use `turn_left(deg)` when you need precision; use `after_degrees(deg)` inside a `.until()` when you need a rotation threshold as part of more complex logic. ## Practical Notes ### Let the IMU Settle The DMP needs a moment at startup to estimate gyroscope bias. The system calls `waitForReady()` at odometry initialization and waits up to 1 second for the IMU to produce valid data. Do not command motion immediately at the start of a program without letting this settle. 
The standard `wait_for_light()` gate serves this purpose implicitly — by the time the light fires, the IMU has been running long enough. ### Mounting Matters The MPU-9250 is soldered to the Wombat PCB, so its orientation is fixed relative to the board. For heading estimation to be correct, the Wombat must be mounted flat — the Z axis must be pointing up. If the Wombat is mounted sideways or at an angle, the world-Z heading computation will be wrong. Use the `turn_axis` configuration to specify the correct body axis if your mounting is non-standard. ### Motors Disturb the Magnetometer Motor currents and the ferromagnetic parts inside motors create a time-varying magnetic field near the sensor. During operation, this distorts the magnetometer reading. This is one reason the 9-axis Mahony filter (which uses the magnetometer) performs worse during dynamic testing than the 6-axis DMP: the magnetometer reference is contaminated when motors are running. The DMP's 6-axis approach — gyroscope and accelerometer only — is immune to this problem and is the correct choice for in-match navigation. ### Temperature and Warm-Up MEMS gyroscopes exhibit temperature-dependent bias. The DMP's startup calibration measures bias at the current temperature. If the robot was stored in a cold environment and then placed on a warm table, the bias will shift during warm-up and the heading will drift slightly. For competition use, power on the robot several minutes before the match start to let the sensor reach thermal equilibrium. ### Noise and Vibration The accelerometer is sensitive to vibration from motors and the surface. At high drive speeds, motor vibration appears as high-frequency noise in the accelerometer readings. This does not affect the DMP's heading output much (the gyroscope dominates during motion), but it will show up in `get_linear_acceleration()` readings. If you are reading accelerometer data directly for vibration detection or distance estimation, expect significant noise during motor operation. ### When to Recalibrate - **Gyroscope bias**: Handled automatically by the DMP at every power-on. No manual action needed. - **Accelerometer**: Recalibrate if you add significant weight to the Wombat, move the board to a different mounting angle, or observe that the gravity reading is noticeably wrong. In practice, the accelerometer calibration is stable across power cycles. - **Magnetometer**: Recalibrate whenever the robot's mechanical configuration changes — different motors, different battery placement, added metal parts. This includes between competition seasons. Do not skip magnetometer calibration if you plan to use 9-axis heading estimation. --- # Making Your Robot Competition Ready URL: /02-programming/15-competition-ready/ # Making Your Robot Competition Ready At some point your robot stops being a development toy and has to survive on a real table, in front of real judges, with no laptop tethered to it. Competition mode is different from running from the Web IDE or with `raccoon run`: the robot has to start on a **light signal**, it has to **stop itself after 120 seconds** without anyone pressing a key, and it has to do it reliably whether the Wi-Fi works or not. This page walks you through the three things you need to turn your development robot into a competition robot: 1. **Wait-for-light** — starting the robot from the competition start lamp 2. **`shutdown_in`** — the automatic emergency stop timer 3. 
**The shutdown mission** — the safe cleanup that runs when time is up Each of these is opt-in. Without them, your robot still works — it just won't be legal at competition. ## The Competition Flow Before touching any config, it helps to understand what the framework does on your behalf when everything is wired up correctly. This is what happens from the moment the robot boots at the table to the moment the match ends: ```mermaid graph TD Boot["Robot boots"] --> Setup["setup_mission runs
servos home, calibration"] Setup --> Gate{"wait_for_light_sensor
defined?"} Gate -->|Yes| WFL["wait_for_light gate
warmup → test → armed"] Gate -->|No| Button["Wait for button press"] WFL --> Clock["T=0: mission clock starts"] Button --> Clock Clock --> Main["Main missions run
in list order"] Main --> Timer{"shutdown_in
reached?"} Timer -->|Yes, timer fires| Cancel["Current mission cancelled"] Timer -->|No, missions finish| Done["Missions complete"] Cancel --> Shutdown["shutdown_mission runs
arms up, servos off"] Done --> Shutdown Shutdown --> Stop["Robot idle"] style Setup fill:#4CAF50,color:#fff style WFL fill:#FFA726,color:#fff style Clock fill:#42A5F5,color:#fff style Main fill:#66BB6A,color:#fff style Shutdown fill:#EF5350,color:#fff ``` The three highlighted stages — the **wait-for-light gate**, the **`shutdown_in` timer**, and the **shutdown mission** — are the pieces you explicitly enable. Everything else (running the setup mission, starting the mission clock, executing missions in order) happens automatically. ## Wait for Light Botball matches start when a lamp above the table turns on. The robot has to detect that lamp and begin its first main mission the instant the light fires — not a second later, not on a false trigger from the overhead ballroom lighting. The full algorithm (Kalman filter, warmup, test mode, armed state) is covered in the **[Wait for Light algorithm page]({{< ref "algorithms/wait-for-light" >}})**. This section is just about *enabling* it for competition. ### Hardware: Mount the Sensor Downward The framework's light-start algorithm expects the sensor to be mounted **facing down** at the table surface, with no black tape or shielding. When the start lamp fires above the robot, light reflects off the table and reaches the sensor. This mount is up to 76% less noisy than a horizontal mount, because the sensor's field of view is dominated by the table (a stable reflector) rather than by people moving around the venue. Pick a spot on your chassis where: - The sensor has a clear view of the table below it - It isn't shaded by any mechanism that moves during the match - It's over the starting area (not dangling off the edge) ### Definition: Name the Sensor Exactly The framework looks up the start sensor **by name**. It has to be called exactly `wait_for_light_sensor` in your hardware config — not `start_sensor`, not `light_sensor_1`, not anything else. The pre-start gate finds it by attribute name on your `Defs` class. In `config/hardware.yml` (included from `raccoon.project.yml`): ```yaml definitions: button: type: DigitalSensor port: 10 wait_for_light_sensor: # Exact name required type: AnalogSensor port: 2 # ... other sensors ``` Once this entry exists and the code is regenerated, the framework automatically inserts the light-start gate between your setup mission and your first main mission. You don't call `wait_for_light()` yourself — the framework does it. > **Omitting the sensor:** If `wait_for_light_sensor` is not defined, the robot will fall back to starting on a physical button press. That's fine for development on your desk, but at competition you need the lamp signal — the judges won't let you touch the robot after placing it. ### Exclude It from Calibration The wait-for-light sensor is usually a raw light-dependent resistor or photodiode pointed at the table — not an IR line sensor. It doesn't need the black-vs-white threshold that `calibrate()` computes for line sensors. Exclude it explicitly: ```python seq([ Defs.arm.up(), Defs.claw.closed(), calibrate(distance_cm=50, exclude_ir_sensors=[ Defs.wait_for_light_sensor, ]), ]) ``` If you forget this, `calibrate()` will try to find a black/white threshold for the downward-facing sensor, which never sees a black line — it will either pick nonsense values or slow the calibration down waiting for a transition. ### Using It at the Table Because the gate runs automatically, the workflow on match day is: 1. Power on the robot. 
The setup mission runs — servos move to their home positions, calibration completes. 2. The UI enters **warmup** — the Kalman filter is building its baseline of ambient light (about 1 second). 3. The UI enters **test mode** — cover/uncover the sensor, or toggle a flashlight above it, to verify detection works. The test counter ticks up on each successful trigger. **The robot will not start during test mode**, so you can freely experiment. 4. Once you see successful triggers, press the button. The gate transitions to **armed**. A small "needs clear" gate prevents an immediate false start if the lamp is still on. 5. Place the robot in its start position and step back. 6. The judge turns on the start lamp. The gate fires, the mission clock starts at T=0, and your first main mission begins. If the test mode never triggers, something is wrong: the sensor isn't seeing the lamp, the `drop_fraction` is too high, or the sensor is shielded. **Fix it in test mode, not in the armed state.** That's what test mode exists for. ### Manually Tuning Sensitivity The defaults work for most setups. If your venue has dim overhead lighting or the start lamp is unusually weak, you may need to make the detector more sensitive. This is done by calling `wait_for_light()` manually with custom parameters — but you only need this if the automatic gate isn't detecting the lamp. ```python # More sensitive — triggers on a 10% drop instead of 15% wait_for_light(Defs.wait_for_light_sensor, drop_fraction=0.10, confirm_count=2) ``` See the **[Wait for Light algorithm page]({{< ref "algorithms/wait-for-light" >}})** for the full parameter reference. ## The `shutdown_in` Timer Every Botball match has a hard time limit. If your robot is still driving when the clock hits zero, you can knock over your own scoring, hit the partner robot, or damage the game elements. The rules require the robot to stop on its own. The framework enforces this with a single config value: `shutdown_in`. ### Enabling It `shutdown_in` lives in `config/robot.yml`: ```yaml shutdown_in: 120 # seconds ``` That's all you need. The framework arms a timer the moment the mission clock starts (after the wait-for-light gate fires) and when the timer expires, **it cancels whatever mission is currently running and jumps straight to the shutdown mission**. The mission code doesn't need to check the clock — the cancellation happens automatically at the next `await` point, cleanly, without leaving motors spinning. ### Choosing a Value For standard Botball matches, `120` is the right number — it matches the 2-minute match length. Some tournaments run shorter or longer matches; check your specific event's rules. | Value | Meaning | |-------|---------| | `120` | Standard Botball match length (2 minutes) | | `60` | Short-format or timed practice runs | | `0` | **Disabled** — the robot runs until missions finish or you power it off. Only use during development. | Setting `shutdown_in: 0` is useful when you're debugging a long-running mission on your bench and don't want the timer to kill it mid-test. **Never leave it at 0 when you travel to competition** — a forgotten zero is the kind of mistake that ends a match with a disqualification. ### Why It's Cooperative, Not Forcible The shutdown timer relies on the single-threaded asyncio model (see **[Advanced Topics → No Threads]({{< ref "11-advanced" >}})**). When the timer fires, the framework cancels the running mission's asyncio task. 
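In plain `asyncio` terms the mechanism is ordinary task cancellation. A minimal illustrative sketch (not the framework's actual code; the names are made up):

```python
import asyncio

async def long_drive_step():
    try:
        # A well-behaved step yields control here, so a pending
        # cancellation can take effect at this await point.
        await asyncio.sleep(10)          # stand-in for "drive for 10 s"
    except asyncio.CancelledError:
        print("step cancelled: motors stopped cleanly")
        raise

async def match():
    mission = asyncio.create_task(long_drive_step())
    await asyncio.sleep(0.5)             # stand-in for shutdown_in expiring
    mission.cancel()                     # what the framework does on timeout
    try:
        await mission
    except asyncio.CancelledError:
        pass                             # shutdown mission would run next

asyncio.run(match())
```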
Because every step yields control at its `await` points, the cancellation propagates within a few milliseconds — no motor is left spinning, no half-finished operation corrupts state. This is **why you must never use `time.sleep()` or block the event loop** in custom steps. If a step doesn't yield, the cancellation can't propagate, and the shutdown timer can't actually stop the robot. The 120-second timer depends on cooperative scheduling to work at all. A single synchronous `time.sleep(5)` in your code means 5 seconds of the shutdown timer being unable to fire. ## The Shutdown Mission When the timer fires, the framework hands control to a special mission — the **shutdown mission**. Its job is to leave the robot in a safe state: arms up, claws off motors, no limbs sticking into other robots' paths. ### Writing One A shutdown mission is just a regular `Mission` subclass with a short, safe sequence: ```python from libstp import * from src.hardware.defs import Defs class M99ShutdownMission(Mission): def sequence(self) -> Sequential: return seq([ # Lift anything that could snag another robot Defs.arm.up(), Defs.claw.open(), # Release all servo holding torque fully_disable_servos(), ]) ``` Rules of thumb for shutdown missions: - **Keep it short.** There's no hard time limit after the main timer expires, but long shutdown missions are a code smell — if you need 30 seconds of recovery, something is wrong with your main mission. - **No driving.** Do not add `drive_forward()` or similar to the shutdown mission. If the main timer fires while you're mid-drive, the last thing you want is *more* driving. - **No sensor waits.** Don't use `wait_for_digital()` or any blocking condition — if the sensor never fires, the shutdown mission hangs forever. - **`fully_disable_servos()` at the end.** This removes holding torque so servos don't burn out if the robot sits there waiting for someone to retrieve it. ### Registering It In `config/missions.yml`: ```yaml - M00SetupMission: setup - M99ShutdownMission: shutdown - M01DriveToConeMission - M02CollectConeMission ``` The tag `shutdown` is what makes it a shutdown mission. Only one shutdown mission is allowed per robot — if you list multiple, the last one wins. If you don't list one at all, **no shutdown cleanup runs when the timer fires** — the current mission is simply cancelled and the robot goes idle with whatever state the cancellation left it in. This is usually fine (the cancellation already stops motors), but you won't get the "arms up" safety. ## The Competition-Ready Checklist Before you travel, walk through this list once. Every item here is something we've seen teams forget at the table. ### Hardware Config - [ ] `wait_for_light_sensor` is defined in `config/hardware.yml` with the **exact** name. - [ ] The sensor is physically mounted **facing down** and isn't shadowed by any moving part. - [ ] The sensor is excluded from `calibrate(..., exclude_ir_sensors=[...])`. - [ ] `button` is defined and physically reachable for the test-mode → armed transition. ### Robot Config - [ ] `shutdown_in: 120` in `config/robot.yml` (or your event's match length). - [ ] `shutdown_in` is **not** `0`. ### Missions - [ ] `SetupMission: setup` is listed first in `config/missions.yml`. - [ ] A `ShutdownMission: shutdown` is listed. - [ ] The shutdown mission has no drive steps, no sensor waits, and ends with `fully_disable_servos()`. - [ ] Main missions are listed in the order you want them to execute. ### Testing on the Bench - [ ] Run the full program from `raccoon run`. 
The setup mission finishes, then the UI enters warmup → test mode. - [ ] Cover and uncover the light sensor (or shine a flashlight above it). The test counter should tick up on every trigger. If it doesn't, fix the sensor position or lower `drop_fraction`. - [ ] Press the button. The UI should transition to "armed". - [ ] Simulate the start light (flashlight works). The main missions should start immediately. - [ ] Let the program run to completion. Verify the shutdown mission runs after the main missions finish. - [ ] Now run it again, but this time set `shutdown_in: 10` temporarily and confirm the shutdown mission fires when the timer expires mid-mission. **Put it back to 120 before traveling.** ### At the Table - [ ] Power on the robot. Setup mission runs. - [ ] Test mode activates. Test the light detection from the actual starting position, with the actual lamp — **not** with a flashlight. Ambient lighting at the venue is different from your workshop. - [ ] Confirm at least 2–3 successful test triggers before arming. - [ ] Press the button to arm. Do not touch the robot again. - [ ] Match starts. The robot runs. 120 seconds later, the shutdown mission fires and the robot safes itself. ## Related Topics - **[Wait for Light algorithm]({{< ref "algorithms/wait-for-light" >}})** — the Kalman filter, warmup, test mode, and armed state in detail. - **[Advanced Topics → Async Execution Model]({{< ref "11-advanced" >}})** — why the `shutdown_in` timer depends on cooperative scheduling. - **[Synchronizing Two Robots]({{< ref "03a-synchronizing-robots" >}})** — the mission clock that `shutdown_in` counts against is the same clock that `wait_for_checkpoint()` uses. - **[Configuration Reference → `shutdown_in`]({{< ref "13-configuration-reference" >}})** — the full config schema entry. ---