Introduction
Kinetic is a fast, Rust-native motion planning library for robotics. It computes collision-free paths, solves inverse kinematics, and generates time-parameterized trajectories — all without ROS2, without a GPU, and without external dependencies.
use kinetic::prelude::*;

let result = plan("ur5e", &[0.0, -1.0, 0.8, 0.0, 0.0, 0.0],
    &Goal::joints([1.0, -0.5, 0.3, 0.2, -0.3, 0.5]))?;
import kinetic
start = [0.0, -1.0, 0.8, 0.0, 0.0, 0.0]
traj = kinetic.plan("ur5e", start, kinetic.Goal.joints([1.0, -0.5, 0.3, 0.2, -0.3, 0.5]))
That’s it. No launch files, no YAML configuration, no parameter server. Load a robot, define a goal, get a trajectory.
What Kinetic Does
- Forward & Inverse Kinematics — 10 IK solvers including analytical (OPW, Paden-Kahan) and iterative (DLS, FABRIK, SQP, Bio-IK). FK in 324 ns, IK in 10 us.
- Collision Detection — SIMD-accelerated sphere-based checking with CAPT broadphase. Self-collision in 9 ns, environment collision in 507 ns.
- Motion Planning — 14 algorithms: RRT-Connect, RRT*, BiRRT*, BiTRRT, EST, KPIECE, PRM, CHOMP, STOMP, GCS, Cartesian, constrained, and dual-arm.
- Trajectory Generation — Time-optimal (TOTP), trapezoidal, jerk-limited S-curve, and cubic spline profiles with per-joint limit enforcement.
- Reactive Control — 500 Hz servo with twist/jog/pose-tracking inputs, RMP multi-policy blending, collision deceleration, and singularity avoidance.
- Task Planning — Pick, place, and multi-stage sequences with grasp generation and DAG-based stage composition.
- GPU Optimization — Optional wgpu compute shaders for parallel-seed trajectory optimization (cross-vendor: NVIDIA, AMD, Intel, Apple).
- Python Bindings — Full API via PyO3 with numpy integration. Type stubs for IDE autocomplete.
- 54 Built-in Robots — UR, Franka Panda, KUKA, ABB, Fanuc, Kinova, xArm, and 47 more, ready to use.
Performance vs MoveIt2
| Operation | Kinetic | MoveIt2 | Speedup |
|---|---|---|---|
| Forward kinematics (7-DOF) | 324 ns | 5-10 us | 15-30x |
| Inverse kinematics (DLS) | 10.6 us | 5 ms (KDL) | 470x |
| Self-collision check | 9 ns | 50-100 us | 5,000x |
| Environment collision (10 obstacles) | 507 ns | 50-100 us | 100x |
| Servo control tick | 9.9 us | ~1 ms | 100x |
Architecture
graph LR
subgraph Foundation
Core[kinetic-core<br/>Pose, Goal, Trajectory]
Robot[kinetic-robot<br/>URDF, 54 robots]
end
subgraph Physics
FK[kinetic-kinematics<br/>FK, IK, Jacobian]
Col[kinetic-collision<br/>SIMD, CAPT, LOD]
Dyn[kinetic-dynamics<br/>Featherstone]
end
subgraph Planning
Scene[kinetic-scene<br/>Obstacles, ACM]
Plan[kinetic-planning<br/>14 planners]
Traj[kinetic-trajectory<br/>TOTP, S-curve]
React[kinetic-reactive<br/>Servo, RMP]
end
subgraph Integration
Exec[kinetic-execution<br/>500Hz streaming]
GPU[kinetic-gpu<br/>wgpu optimization]
Task[kinetic-task<br/>Pick, Place]
Py[kinetic-python<br/>PyO3 + numpy]
end
Core --> FK --> Plan
Core --> Robot --> FK
Robot --> Col --> Plan
Core --> Traj
Col --> Scene --> Plan
Plan --> Task
Plan --> Exec
FK --> React
Col --> GPU
Dyn -.-> Exec
style Core fill:#4a9eff,color:#fff
style Plan fill:#ff6b6b,color:#fff
style GPU fill:#ffd93d,color:#000
style Py fill:#3ddc84,color:#000
What Kinetic Does NOT Do
Kinetic is a planning and kinematics library, not a complete robotics framework. It does not include:
- Hardware drivers — Use terra (HORUS HAL), ros2_control, or your own driver
- Perception / object detection — Feed detected shapes to kinetic’s Scene API
- Visualization — Export trajectories to JSON/CSV, use matplotlib (Python), or connect via horus-monitor
- ROS2 middleware — Kinetic runs standalone. For ROS2 integration, use the horus-kinetic bridge
Who Kinetic Is For
- Robotics engineers building manipulation systems who need fast, reliable motion planning
- Teams migrating from MoveIt2 who want 10-5000x faster operations without ROS2 dependency
- Python developers prototyping robot behavior with numpy-native APIs
- Product teams deploying kinetic in production with 54 pre-configured robots and 85%+ test coverage
Getting Started
New to kinetic? Start here:
- Installation — `cargo add kinetic` or `pip install kinetic`
- Hello World — Your first motion plan in 5 lines
- Your First Robot — Load a robot and compute FK
- Your First Plan — Plan, time-parameterize, and validate
Coming from another framework?
- From MoveIt2 — Step-by-step migration with config translation
- From Other Frameworks — Concept mapping for cuRobo, Drake, OMPL, PyBullet
New to robotics?
- Robotics Primer — What are joints, FK, IK, and planning? Explained from zero, no prior knowledge needed
Want to understand the algorithms?
- Core Concepts — Glossary, FK, IK, collision, planning, trajectories explained
- Planner Selection — Decision flowchart for choosing the right algorithm
Installation
Rust
Add kinetic to your project:
cargo add kinetic
Or add it to your Cargo.toml manually:
[dependencies]
kinetic = "0.1"
Requirements: Rust 1.75 or later. Works on Linux (primary), macOS, and Windows (experimental).
Python
pip install kinetic
This installs a native extension built with PyO3 — no Rust toolchain needed on your machine.
Requirements: Python 3.9 or later. numpy is installed automatically.
From Source
git clone https://github.com/softmata/kinetic.git
cd kinetic
cargo build --release
For the Python bindings:
cd crates/kinetic-python
pip install maturin
maturin develop --release
Verify Installation
Rust:
cargo run --example plan_simple -p kinetic
You should see output like:
=== KINETIC Simple Planning ===
One-liner: 14 waypoints, 237ms, path length 2.145
Python:
python -c "import kinetic; r = kinetic.Robot('ur5e'); print(f'{r.name}: {r.dof} DOF')"
Expected output:
ur5e: 6 DOF
Next
Hello World
Let’s plan a robot motion in 5 lines.
Rust
use kinetic::prelude::*;
fn main() -> Result<()> {
let start = [0.0, -1.0, 0.8, 0.0, 0.0, 0.0];
let goal = Goal::joints([1.0, -0.5, 0.3, 0.2, -0.3, 0.5]);
let result = plan("ur5e", &start, &goal)?;
println!("{} waypoints in {:?}", result.num_waypoints(), result.planning_time);
Ok(())
}
Python
import kinetic
import numpy as np
start = np.array([0.0, -1.0, 0.8, 0.0, 0.0, 0.0])
goal = kinetic.Goal.joints(np.array([1.0, -0.5, 0.3, 0.2, -0.3, 0.5]))
traj = kinetic.plan("ur5e", start, goal)
print(f"{traj.num_waypoints} waypoints, {traj.duration:.3f}s")
What Just Happened?
In those 5 lines, kinetic:
- Loaded the UR5e robot — a 6-DOF industrial arm from Universal Robots. Kinetic parsed its URDF model, extracted joint limits, and built a collision sphere model. (See: Robots and URDF)
- Defined start and goal — joint configurations in radians. The UR5e has 6 joints, so each array has 6 values. `start` is where the arm is now; `goal` is where you want it. (See: Glossary → Joint Configuration)
- Planned a path — using RRT-Connect, a sampling-based algorithm that grows two search trees (one from start, one from goal) until they connect. The result is a sequence of collision-free waypoints through joint space. (See: Motion Planning)
- Time-parameterized the path — converted the geometric path into a timed trajectory with velocity and acceleration profiles that respect the robot's joint limits. (See: Trajectory Generation)
- Returned the result — a `PlanningResult` (Rust) or `Trajectory` (Python) containing timed waypoints you can send to a robot controller.
What Do the Numbers Mean?
- `start = [0.0, -1.0, 0.8, 0.0, 0.0, 0.0]` — Joint angles in radians. Joint 1 at 0°, joint 2 at -57°, joint 3 at 46°, etc.
- `goal = [1.0, -0.5, 0.3, 0.2, -0.3, 0.5]` — The target joint configuration.
- `14 waypoints` — The planner found a path through 14 intermediate configurations.
- `237ms` — Planning took 237 milliseconds (this varies — RRT is probabilistic).
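If radians are unfamiliar, a quick sanity check helps. This sketch (plain Python, nothing kinetic-specific) converts the start configuration above to degrees:

```python
import math

start = [0.0, -1.0, 0.8, 0.0, 0.0, 0.0]   # joint angles in radians

# Convert each joint angle to degrees for a human-readable view.
degrees = [math.degrees(q) for q in start]
print([f"{d:.0f}" for d in degrees])  # ['0', '-57', '46', '0', '0', '0']
```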
Try This
- Change the goal to `[0.0, 0.0, 0.0, 0.0, 0.0, 0.0]` (zero configuration) and plan again
- Try a different robot: replace `"ur5e"` with `"franka_panda"` (7 DOF — the arrays need 7 values)
- Print all waypoints: `for wp in result.waypoints { println!("{:?}", wp); }`
Next
Your First Robot
Before planning motions, let’s understand how kinetic represents a robot.
Loading a Robot
Kinetic ships with 54 pre-configured robots. Load one by name:
Rust:
use kinetic::prelude::*;

let robot = Robot::from_name("ur5e")?;
println!("Robot: {} ({} DOF)", robot.name, robot.dof);
println!("Joints: {:?}", robot.joints.iter().map(|j| &j.name).collect::<Vec<_>>());
Python:
import kinetic
robot = kinetic.Robot("ur5e")
print(f"Robot: {robot.name} ({robot.dof} DOF)")
print(f"Joints: {robot.joint_names}")
Output:
Robot: ur5e (6 DOF)
Joints: ["shoulder_pan_joint", "shoulder_lift_joint", "elbow_joint",
"wrist_1_joint", "wrist_2_joint", "wrist_3_joint"]
Forward Kinematics
Forward kinematics (FK) answers: “If the joints are at these angles, where is the end-effector?”
Rust:
let joints = [0.0, -1.5708, 0.0, -1.5708, 0.0, 0.0]; // UR5e "home" position
let pose = robot.fk(&joints)?;
let t = pose.translation();
println!("End-effector position: x={:.3}, y={:.3}, z={:.3}", t.x, t.y, t.z);
let (roll, pitch, yaw) = pose.rpy();
println!("Orientation (RPY): roll={:.2}°, pitch={:.2}°, yaw={:.2}°",
    roll.to_degrees(), pitch.to_degrees(), yaw.to_degrees());
Python:
import numpy as np
joints = np.array([0.0, -1.5708, 0.0, -1.5708, 0.0, 0.0])
pose_4x4 = robot.fk(joints)
print(f"End-effector position: {pose_4x4[:3, 3]}")
print(f"Rotation matrix:\n{pose_4x4[:3, :3]}")
The result is a pose — a position (x, y, z in meters) plus an orientation (rotation matrix or quaternion). This tells you exactly where the robot’s tool tip is in 3D space.
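To make the pose representation concrete, here is a small sketch in plain Python (a hypothetical 4x4 matrix, not one computed by kinetic) showing where the position and orientation live inside a homogeneous transform:

```python
import math

# A hypothetical 4x4 pose: 90 degree rotation about Z, translation (0.4, 0.0, 0.6).
c, s = math.cos(math.pi / 2), math.sin(math.pi / 2)
pose = [[c, -s, 0.0, 0.4],
        [s,  c, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.6],
        [0.0, 0.0, 0.0, 1.0]]

x, y, z = pose[0][3], pose[1][3], pose[2][3]   # position: last column
yaw = math.atan2(pose[1][0], pose[0][0])        # rotation about Z (ZYX convention)
print(x, y, z, round(math.degrees(yaw), 1))     # 0.4 0.0 0.6 90.0
```

The same layout applies to the 4x4 arrays returned by `robot.fk()` in the Python examples above.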
What’s Inside a Robot?
A kinetic Robot contains:
| Component | What it is | Example |
|---|---|---|
| Joints | Rotary or linear actuators | shoulder_pan_joint (revolute, ±360°) |
| Links | Rigid bodies between joints | upper_arm_link (0.425m long) |
| Joint limits | Min/max angle, velocity, effort | [-2π, 2π] rad, 3.14 rad/s |
| Planning groups | Named subsets of joints | "arm": all 6 joints |
| Named poses | Pre-defined configurations | "home": [0, -π/2, 0, -π/2, 0, 0] |
| Collision geometry | Simplified shapes for collision | Spheres approximating each link |
All of this comes from the robot’s URDF (Unified Robot Description Format) file — an XML file that describes the robot’s geometry, joints, and limits. Kinetic parses URDF automatically when you call from_name().
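For orientation, here is a minimal URDF fragment (illustrative only, not one of kinetic's shipped files) showing how a link, a revolute joint, and its limits are declared:

```xml
<robot name="two_link">
  <link name="base_link"/>
  <link name="upper_arm_link"/>
  <joint name="shoulder_pan_joint" type="revolute">
    <parent link="base_link"/>
    <child link="upper_arm_link"/>
    <axis xyz="0 0 1"/>
    <!-- lower/upper in radians, velocity in rad/s, effort in N·m -->
    <limit lower="-6.2832" upper="6.2832" velocity="3.14" effort="150.0"/>
  </joint>
</robot>
```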
Named Poses
Most robots come with pre-defined poses:
Rust:
if let Some(home) = robot.named_pose("home") {
    let pose = robot.fk(&home)?;
    println!("Home EE position: {:?}", pose.translation());
}
Python:
home = robot.named_pose("home")
if home is not None:
pose = robot.fk(home)
print(f"Home EE position: {pose[:3, 3]}")
Joint Limits
Every joint has physical limits. Kinetic enforces them:
Rust:
for (i, limit) in robot.joint_limits.iter().enumerate() {
    println!("Joint {}: [{:.2}, {:.2}] rad, max vel {:.2} rad/s",
        i, limit.lower, limit.upper, limit.velocity);
}
Loading Custom Robots
If your robot isn’t in the built-in list, load it from a URDF file:
Rust:
let robot = Robot::from_urdf("path/to/my_robot.urdf")?;
Python:
robot = kinetic.Robot.from_urdf("path/to/my_robot.urdf")
For MoveIt2 users, kinetic reads SRDF files too:
let robot = Robot::from_urdf_srdf("my_robot.urdf", "my_robot.srdf")?;
See the Custom Robots guide for creating a full kinetic configuration.
Available Robots
See the Supported Robots reference for the complete list of 54 built-in robots with manufacturer, DOF, and IK solver.
A quick sample:
| Robot | Manufacturer | DOF | Config name |
|---|---|---|---|
| UR5e | Universal Robots | 6 | ur5e |
| Franka Panda | Franka Emika | 7 | franka_panda |
| KUKA iiwa7 | KUKA | 7 | kuka_iiwa7 |
| xArm6 | UFACTORY | 6 | xarm6 |
| Kinova Gen3 | Kinova | 7 | kinova_gen3 |
Try This
- Load `"franka_panda"` (7 DOF) — note it has one more joint than the UR5e
- Compute FK at the zero configuration `[0.0; 7]` — where is the Panda's EE?
- Compare the EE position of two different joint configurations — how far apart are they?
- Check the joint limits — which joint has the smallest range?
Next
Your First Plan
Now let’s plan a collision-free motion, time-parameterize it, and validate it for execution on a real robot.
Step 1: Create a Planner
Rust:
use kinetic::prelude::*;
fn main() -> Result<()> {
let robot = Robot::from_name("ur5e")?;
let planner = Planner::new(&robot)?;
Python:
import kinetic
import numpy as np
robot = kinetic.Robot("ur5e")
planner = kinetic.Planner(robot)
Planner::new() auto-detects the robot’s kinematic chain (which joints to plan for) and builds a collision model. You can customize it with .with_config(PlannerConfig::realtime()) for faster but less optimal plans.
Step 2: Define Start and Goal
let start = vec![0.0, -1.57, 0.0, -1.57, 0.0, 0.0];
let goal = Goal::joints([1.0, -1.0, 0.5, -1.0, -0.5, 0.3]);
start = np.array([0.0, -1.57, 0.0, -1.57, 0.0, 0.0])
goal = kinetic.Goal.joints(np.array([1.0, -1.0, 0.5, -1.0, -0.5, 0.3]))
Goals can be specified in several ways:
| Goal type | Rust | Python | When to use |
|---|---|---|---|
| Joint angles | Goal::joints([...]) | Goal.joints(array) | You know the exact configuration |
| Cartesian pose | Goal::Pose(pose) | Goal.pose(matrix_4x4) | You know where the EE should be |
| Named pose | Goal::named("home") | Goal.named("home") | Using pre-defined configurations |
Step 3: Plan
let result = planner.plan(&start, &goal)?;
println!("Path: {} waypoints, {:.1?}", result.num_waypoints(), result.planning_time);
println!("Path length: {:.3} rad", result.path_length());
traj = planner.plan(start, goal)
print(f"Path: {traj.num_waypoints} waypoints, {traj.duration:.3f}s")
The planner returns a sequence of joint configurations (waypoints) that form a collision-free path from start to goal.
What if planning fails? The planner returns an error with a reason:
- `StartInCollision` — your start configuration collides with something
- `GoalUnreachable` — the goal is outside the robot's workspace
- `PlanningTimeout` — the planner ran out of time (try `PlannerConfig::offline()` for more time)
See Troubleshooting for systematic diagnosis.
Step 4: Time-Parameterize
The planner outputs a geometric path (positions only). To execute on a real robot, you need velocities and timing:
let vel_limits = robot.velocity_limits();
let acc_limits = robot.acceleration_limits();
let timed = kinetic::trajectory::trapezoidal_per_joint(
    &result.waypoints, &vel_limits, &acc_limits,
).map_err(|e| KineticError::Other(e))?;
println!("Trajectory: {:.3}s, {} timed waypoints",
    timed.duration.as_secs_f64(), timed.waypoints.len());
vel = np.array(robot.velocity_limits)
acc = np.array(robot.acceleration_limits)
timed = traj.time_parameterize("trapezoidal", vel, acc)
print(f"Trajectory: {timed.duration:.3f}s, {timed.num_waypoints} waypoints")
The trapezoidal profile accelerates, cruises, and decelerates each joint, respecting its velocity and acceleration limits. Other profiles are available:
| Profile | Best for | Smoothness |
|---|---|---|
| `trapezoidal` | General use | Velocity-continuous |
| `totp` | Time-optimal duration | Velocity-continuous |
| `jerk_limited` | Delicate tasks | Acceleration-continuous |
| `cubic_spline` | Smooth motion | C2-continuous |
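The timing math behind a trapezoidal profile is simple enough to sketch by hand. This plain-Python example (illustrative, not kinetic's implementation) computes the minimum travel time for a single joint, assuming symmetric acceleration and deceleration:

```python
import math

def trapezoid_duration(distance, v_max, a_max):
    """Minimum time to travel `distance` with a trapezoidal velocity profile."""
    # Distance consumed by accelerating to v_max and decelerating back to zero.
    d_ramp = v_max * v_max / a_max
    if distance < d_ramp:
        # Triangular profile: the joint never reaches v_max.
        return 2.0 * math.sqrt(distance / a_max)
    # Accelerate, cruise at v_max, decelerate.
    t_ramp = v_max / a_max
    t_cruise = (distance - d_ramp) / v_max
    return 2.0 * t_ramp + t_cruise

# One joint moving 2.0 rad with limits 3.14 rad/s and 6.0 rad/s^2.
print(f"{trapezoid_duration(2.0, 3.14, 6.0):.3f} s")  # 1.160 s
```

A per-joint parameterizer runs this for every joint and uses the slowest joint's time so all joints arrive together.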
Step 5: Validate
Before sending to a real robot, validate the trajectory:
// Check every waypoint is within joint limits
for wp in &timed.waypoints {
    for (j, &pos) in wp.positions.iter().enumerate() {
        assert!(pos >= robot.joint_limits[j].lower - 1e-6);
        assert!(pos <= robot.joint_limits[j].upper + 1e-6);
    }
}
println!("All waypoints within joint limits");
violations = timed.validate(
np.array([-6.28] * 6), # position lower limits
np.array([6.28] * 6), # position upper limits
vel, acc
)
if not violations:
print("Trajectory is valid")
else:
print(f"Found {len(violations)} violations")
Step 6: Export
Export the trajectory for your robot controller:
let json = trajectory_to_json(&timed);
std::fs::write("trajectory.json", &json)?;
println!("Exported {} bytes", json.len());
Ok(())
}
times, positions, velocities = timed.to_numpy()
# times: shape (N,), positions: shape (N, 6), velocities: shape (N, 6)
# Plot with matplotlib
import matplotlib.pyplot as plt
for j in range(6):
plt.plot(times, positions[:, j], label=f"Joint {j}")
plt.xlabel("Time (s)")
plt.ylabel("Position (rad)")
plt.legend()
plt.savefig("trajectory.png")
Complete Rust Example
use kinetic::prelude::*;
fn main() -> Result<()> {
let robot = Robot::from_name("ur5e")?;
let planner = Planner::new(&robot)?;
let start = vec![0.0, -1.57, 0.0, -1.57, 0.0, 0.0];
let goal = Goal::joints([1.0, -1.0, 0.5, -1.0, -0.5, 0.3]);
let result = planner.plan(&start, &goal)?;
let vel = robot.velocity_limits();
let acc = robot.acceleration_limits();
let timed = kinetic::trajectory::trapezoidal_per_joint(
&result.waypoints, &vel, &acc,
).map_err(|e| KineticError::Other(e))?;
println!("{} waypoints, {:.3}s trajectory", timed.waypoints.len(), timed.duration.as_secs_f64());
Ok(())
}
Try This
- Add an obstacle: create a `Scene`, add a box with `scene.add_box(...)`, and plan with `.with_scene(&scene)` — see Planning with Obstacles
- Try `PlannerConfig::realtime()` vs `PlannerConfig::offline()` — how do planning time and path quality change?
- Use a `Goal::Pose(...)` instead of `Goal::joints(...)` — kinetic will solve IK internally
- Export with `trajectory_to_csv(&timed)` and plot in your favorite tool
Next
Next Steps
You can now load robots, compute FK, and plan motions. Here’s where to go based on what you want to do.
“I want to…”
…avoid obstacles during planning
Add collision objects to a Scene and plan around them.
→ Planning with Obstacles (Rust) or Planning in Python
…control a robot in real-time
Use Servo for teleoperation with joystick, spacemouse, or programmatic twist commands at 500 Hz.
→ Servo Control (Rust) or Servo Control in Python
…do pick and place
Plan approach, grasp, retreat, transport, and place with task composition.
→ Pick and Place (Rust) or Pick and Place in Python
…choose the right IK solver
10 solvers with different trade-offs. The flowchart tells you which to use.
…choose the right planner
14 algorithms. RRT-Connect is the default, but GCS is globally optimal and EST handles narrow passages.
…use Python
Full API with numpy integration, type stubs for IDE autocomplete, and matplotlib-friendly trajectory export.
…understand the algorithms
Learn what FK, IK, RRT, and RMP actually do, with diagrams and examples.
→ Core Concepts — start with the Glossary
…deploy to production
Checklist for safety validation, error handling, monitoring, and trajectory verification.
…migrate from MoveIt2
Step-by-step migration with config translation and API mapping.
…add my own robot
Load custom URDFs and create kinetic configuration files.
…integrate with HORUS
Use kinetic as PlannerNode, ServoNode, and SceneNode in the HORUS robotics framework.
…optimize with GPU
Parallel-seed trajectory optimization using wgpu compute shaders.
…contribute to kinetic
Add planners, IK solvers, robot configs, or documentation.
→ Contributing and Extending Kinetic
Learning Path
If you want to learn kinetic systematically, follow this order:
- Core Concepts — understand the fundamentals
- FK and IK Tutorial — kinematics hands-on
- Planning Basics — your first planner
- Planning with Obstacles — scenes and collision
- Servo Control — real-time control
- Pick and Place — complete workflow
- Production Deployment — deploying on real robots
Robotics Primer
New to robotics? This page explains the core ideas you need before using kinetic. No prior robotics knowledge assumed – just basic programming.
What is a robot arm?
A robot arm is a chain of rigid metal pieces (links) connected by motorized hinges (joints). Think of your own arm: your upper arm and forearm are links, your elbow is a joint.
graph LR
Base[Base<br/>bolted to table] --- J1((Joint 1<br/>rotate left/right))
J1 --- L1[Link 1<br/>shoulder]
L1 --- J2((Joint 2<br/>tilt up/down))
J2 --- L2[Link 2<br/>upper arm]
L2 --- J3((Joint 3<br/>tilt up/down))
J3 --- L3[Link 3<br/>forearm]
L3 --- J4((Joint 4<br/>rotate wrist))
J4 --- EE[End-effector<br/>gripper / tool]
style Base fill:#666,color:#fff
style EE fill:#3ddc84,color:#000
style J1 fill:#4a9eff,color:#fff
style J2 fill:#4a9eff,color:#fff
style J3 fill:#4a9eff,color:#fff
style J4 fill:#4a9eff,color:#fff
The end-effector (EE) is whatever’s at the tip: a gripper, a welding torch, a camera, or a suction cup. That’s the part that does useful work.
Degrees of freedom (DOF)
Each joint adds one degree of freedom – one independent axis the robot can move along. A typical industrial arm has 6 DOF (6 joints), which is exactly enough to place the end-effector at any position and orientation in 3D space. A 7-DOF arm (like the Franka Panda) has one extra joint, giving it flexibility to reach the same pose in multiple ways.
| DOF | What it means | Example robots |
|---|---|---|
| 3 | Can reach positions, but can’t control orientation | Simple pick-place |
| 6 | Full position + orientation control | UR5e, KUKA, ABB |
| 7 | Redundant – multiple ways to reach the same pose | Franka Panda, KUKA iiwa |
Joint values: the robot’s “configuration”
At any moment, each joint has an angle (in radians). The list of all joint angles is called the configuration or joint state. For a 6-DOF robot, it’s a list of 6 numbers:
joints = [0.0, -1.57, 0.0, -1.57, 0.0, 0.0] # 6 angles in radians
This is the most fundamental concept in kinetic: nearly everything takes a list of joint values as input.
Forward kinematics: “where is the tool?”
Given a configuration (list of joint angles), forward kinematics (FK) computes where the end-effector is in 3D space. The result is a pose: a position (XYZ in meters) plus an orientation (which way the tool points).
graph LR
Joints["[0.0, -1.57, 0.0, -1.57, 0.0, 0.0]<br/>joint angles"] -->|FK| Pose["Position: (0.4, 0.0, 0.6)<br/>Orientation: pointing down"]
style Joints fill:#4a9eff,color:#fff
style Pose fill:#3ddc84,color:#000
FK is fast (324 nanoseconds) and always has exactly one answer.
robot = kinetic.Robot("ur5e")
pose = robot.fk(joints) # 4x4 matrix encoding position + orientation
print(pose[:3, 3]) # XYZ position in meters
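To see the math behind FK without any library at all, here is a planar 2-link arm in plain Python. The link lengths are illustrative (loosely UR5e-scale), not real robot parameters:

```python
import math

def fk_planar(q1, q2, l1=0.425, l2=0.392):
    """Planar 2-link FK: joint angles in, tool (x, y) out."""
    x = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
    y = l1 * math.sin(q1) + l2 * math.sin(q1 + q2)
    return x, y

# Stretched out along x: the tool sits at l1 + l2 from the base.
x, y = fk_planar(0.0, 0.0)
print(round(x, 3), round(y, 3))  # 0.817 0.0
```

A real 6-DOF arm does the same thing with 4x4 transforms chained through six joints instead of two cosines.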
Inverse kinematics: “how do I reach this spot?”
Inverse kinematics (IK) is the reverse: given a desired pose for the end-effector, find the joint angles that put it there.
graph LR
Pose["Target: reach (0.4, 0.2, 0.5)<br/>pointing down"] -->|IK| Joints["[0.3, -1.2, 0.5, ...]<br/>joint angles"]
style Pose fill:#3ddc84,color:#000
style Joints fill:#4a9eff,color:#fff
IK is harder than FK because:
- There might be multiple solutions (elbow up vs. elbow down)
- There might be no solution (the target is out of reach)
- Near singularities, solutions become unstable
Kinetic has 10 different IK solvers – it automatically picks the best one for your robot.
target = robot.fk([0.5, -1.0, 0.5, -0.5, 0.5, 0.0])
joints = robot.ik(target) # finds joint angles that reach this pose
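The "multiple solutions" point is easiest to see on a planar 2-link arm, where IK has a closed form. This sketch (plain Python, illustrative link lengths, not kinetic's solver) returns either the elbow-up or elbow-down solution:

```python
import math

L1, L2 = 0.425, 0.392   # illustrative link lengths

def ik_planar(x, y, elbow_up=True):
    """Analytic IK for a planar 2-link arm; None if the target is unreachable."""
    cos_q2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2.0 * L1 * L2)
    if abs(cos_q2) > 1.0:
        return None                      # outside the reachable annulus
    q2 = math.acos(cos_q2) if elbow_up else -math.acos(cos_q2)
    q1 = math.atan2(y, x) - math.atan2(L2 * math.sin(q2), L1 + L2 * math.cos(q2))
    return q1, q2

# Two valid configurations for the same target: elbow-up and elbow-down.
up = ik_planar(0.3, 0.4, elbow_up=True)
down = ik_planar(0.3, 0.4, elbow_up=False)

# Verify one solution with forward kinematics: it should land on (0.3, 0.4).
q1, q2 = up
x = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
y = L1 * math.sin(q1) + L2 * math.sin(q1 + q2)
print(round(x, 3), round(y, 3))  # 0.3 0.4
```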
Collision detection: “will I hit something?”
Before moving, we need to check: will the robot collide with the table, a wall, or itself? Collision detection answers this in under a microsecond by approximating the robot’s links as simple shapes (spheres) and checking for overlaps.
graph TB
Config[Joint angles] --> FK[Compute link positions]
FK --> Spheres[Approximate each link<br/>as spheres]
Spheres --> Check{Any overlaps?}
Check -->|No| Safe["Safe to move"]
Check -->|Yes| Blocked["Collision! Don't move"]
style Safe fill:#3ddc84,color:#000
style Blocked fill:#ff6b6b,color:#fff
scene = kinetic.Scene(robot)
scene.add("table", kinetic.Shape.cuboid(0.5, 0.5, 0.01), table_pose)
print(scene.check_collision(joints)) # True or False
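The sphere test underneath is just a distance comparison. A minimal sketch of the core primitive (plain Python, made-up coordinates):

```python
import math

def spheres_collide(c1, r1, c2, r2):
    """Two spheres overlap iff the distance between centers < sum of radii."""
    dx, dy, dz = (a - b for a, b in zip(c1, c2))
    return math.sqrt(dx * dx + dy * dy + dz * dz) < r1 + r2

# A link sphere hovering just above the table vs. one well clear of it.
print(spheres_collide((0.4, 0.0, 0.05), 0.06, (0.4, 0.0, 0.0), 0.02))  # True
print(spheres_collide((0.4, 0.0, 0.30), 0.06, (0.4, 0.0, 0.0), 0.02))  # False
```

A full checker runs this pairwise test over every link sphere against every obstacle sphere, which is why per-check costs can stay in the nanosecond range.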
Motion planning: “find a safe path”
You know where you are (start) and where you want to be (goal). But you can’t just move in a straight line – there might be obstacles in the way.
A motion planner explores possible paths through the robot’s configuration space and finds one that avoids all collisions. Think of it like a GPS navigator for the robot’s joints.
graph LR
Start["Start config<br/>[0, -1.57, 0, ...]"] --> Planner["Motion Planner<br/>explores safe paths"]
Planner --> Path["Path: 50 waypoints<br/>all collision-free"]
Obs["Obstacles<br/>table, mug, wall"] --> Planner
style Start fill:#4a9eff,color:#fff
style Path fill:#3ddc84,color:#000
style Obs fill:#ff6b6b,color:#fff
planner = kinetic.Planner(robot, scene=scene)
goal = kinetic.Goal.joints([1.0, -0.8, 0.6, -0.5, 0.8, 0.3])
traj = planner.plan(start, goal)
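Real planners like RRT-Connect sample a high-dimensional C-space, but the core idea, search for a collision-free route around blocked regions, shows up even in a toy 2D grid search. This stand-in (plain Python, not kinetic's algorithm) plans around a wall:

```python
from collections import deque

def bfs_plan(start, goal, blocked, size=10):
    """Breadth-first search on a 2D grid: a toy stand-in for C-space planning."""
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        node = frontier.popleft()
        if node == goal:
            path = []
            while node is not None:   # walk parents back to the start
                path.append(node)
                node = came_from[node]
            return path[::-1]
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in blocked and nxt not in came_from):
                came_from[nxt] = node
                frontier.append(nxt)
    return None   # no collision-free path exists

wall = {(5, y) for y in range(9)}      # obstacle column with a gap at y = 9
path = bfs_plan((0, 0), (9, 0), wall)
print(len(path), "waypoints")
```

The straight line from start to goal is blocked, so the planner detours through the gap, exactly what a sampling-based planner does in joint space.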
Trajectory: “how fast should I move?”
A planner gives you a path (a list of waypoints), but not when to be at each point. A real motor needs to know: how fast should I spin at each moment? Trajectory generation adds timing, velocity, and acceleration to the path so the robot moves smoothly within its physical limits.
graph LR
Path["Path<br/>50 waypoints<br/>no timing"] --> TimeParam["Time parameterization<br/>add velocity + acceleration"]
TimeParam --> Traj["Trajectory<br/>50 waypoints<br/>with timestamps"]
Traj --> Motor["Send to motors<br/>at 500 Hz"]
style Path fill:#4a9eff,color:#fff
style Traj fill:#3ddc84,color:#000
style Motor fill:#ffd93d,color:#000
# traj already has timing (planner does it automatically)
print(f"Duration: {traj.duration:.2f} seconds")
# Sample at any point in time
joints_at_halfway = traj.sample(traj.duration / 2)
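Sampling a trajectory is just interpolation between timed waypoints. A minimal sketch (plain Python, linear interpolation; real trajectories interpolate velocity-aware profiles):

```python
def sample(times, positions, t):
    """Linearly interpolate joint positions at time t from timed waypoints."""
    if t <= times[0]:
        return positions[0]
    for (t0, p0), (t1, p1) in zip(zip(times, positions),
                                  zip(times[1:], positions[1:])):
        if t <= t1:
            alpha = (t - t0) / (t1 - t0)   # fraction of the way through the segment
            return [a + alpha * (b - a) for a, b in zip(p0, p1)]
    return positions[-1]                    # past the end: hold the final waypoint

times = [0.0, 1.0, 2.0]
positions = [[0.0, 0.0], [0.5, -0.5], [1.0, -1.0]]
print(sample(times, positions, 0.5))  # [0.25, -0.25]
```

A 500 Hz executor calls exactly this kind of sampler every 2 ms and streams the result to the motors.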
URDF: the robot’s blueprint
How does kinetic know what a UR5e looks like? From a URDF file (Unified Robot Description Format) – an XML file that describes every link’s shape, every joint’s type and limits, and how they connect. Think of it as a blueprint.
Kinetic ships 54 built-in robots so you don't need to write URDF files. Just say `Robot("ur5e")` and it loads everything.
Putting it all together
A typical robotics workflow in kinetic:
graph TB
Load["1. Load robot<br/>Robot('ur5e')"] --> Scene["2. Build scene<br/>add table, objects"]
Scene --> Plan["3. Plan motion<br/>find collision-free path"]
Plan --> Time["4. Time parameterize<br/>add velocity limits"]
Time --> Validate["5. Validate<br/>check all limits"]
Validate --> Execute["6. Execute<br/>stream to hardware at 500Hz"]
style Load fill:#4a9eff,color:#fff
style Execute fill:#3ddc84,color:#000
import kinetic
import numpy as np
# 1. Load
robot = kinetic.Robot("ur5e")
# 2. Scene
scene = kinetic.Scene(robot)
scene.add("table", kinetic.Shape.cuboid(0.5, 0.5, 0.01), table_pose)
# 3. Plan
planner = kinetic.Planner(robot, scene=scene)
start = np.array([0.0, -1.57, 0.0, -1.57, 0.0, 0.0])
goal = kinetic.Goal.joints(np.array([1.0, -0.8, 0.6, -0.5, 0.8, 0.3]))
traj = planner.plan(start, goal)
# 4-5. Already done (planner auto-parameterizes and validates)
# 6. Execute
def send(positions, velocities):
my_robot_driver.set_joints(positions)
executor = kinetic.RealTimeExecutor(rate_hz=500)
result = executor.execute(traj, send)
print(f"Done in {result['actual_duration']:.2f}s")
Jargon cheat sheet
| Term | Plain English |
|---|---|
| DOF | Number of independent joints |
| Configuration | List of all joint angles |
| FK | Joint angles in, tool position out |
| IK | Tool position in, joint angles out |
| C-space | The imaginary space where each joint angle is one dimension |
| Waypoint | One configuration along a path |
| Trajectory | A path with timestamps and velocities |
| End-effector | The tool at the tip of the arm |
| Singularity | A configuration where the robot “locks up” mathematically |
| URDF | XML file describing a robot’s geometry and joints |
| Scene | The robot’s environment (tables, walls, objects) |
| Planner | Algorithm that finds collision-free paths |
Next
Ready to code? Start here:
- Installation – get kinetic running
- Hello World – your first 5 lines
- Python Quickstart – full Python tutorial
Glossary
Alphabetical definitions of key terms used throughout kinetic. Each entry gives a concise definition followed by a short explanation. Cross-references point to the concept page where the term is discussed in depth.
ACM (Allowed Collision Matrix) — A symmetric boolean matrix specifying which link pairs can safely be ignored during collision checking. Adjacent links in the kinematic tree always collide geometrically, so the ACM lets planners skip those known-benign pairs. Built from SRDF <disable_collisions> entries. See Collision Detection.
C-space (Configuration Space) — The space of all possible joint-value vectors for a robot. Each point in C-space is a vector of DOF joint values that fully determines the robot’s posture. A 6-DOF arm lives in a 6-dimensional C-space. Motion planning searches for collision-free paths through C-space. See Motion Planning.
CCD (Continuous Collision Detection) — Collision checking that considers the swept volume between two configurations, not just the start and end. Prevents the robot from passing through thin obstacles between waypoints. See Collision Detection.
Configuration — A specific set of joint values that fully determines every link’s pose. In kinetic, represented as &[f64] of length robot.dof or as a JointValues struct. The zero configuration has all joints at 0; the mid-configuration centers each joint between its limits. See Forward Kinematics.
Constraint — A requirement imposed on a planned motion. Position constraints restrict the end-effector to a region of space. Orientation constraints restrict which way it faces. Joint constraints restrict individual joint ranges. In kinetic, the Constraint type is used by planners. See Motion Planning.
DOF (Degrees of Freedom) — The number of independent joint values that define a configuration. A typical industrial arm has 6 DOF (exactly enough for arbitrary 6D pose). A 7-DOF arm (e.g., Franka Panda) has one redundant DOF, giving a null space. Stored as robot.dof. See Robots and URDF.
DLS (Damped Least Squares) — An iterative IK solver that uses the pseudo-inverse of the Jacobian with an added damping term (Levenberg-Marquardt style). The damping prevents instability near singularities at the cost of slightly slower convergence. In kinetic: IKSolver::DLS { damping: 0.05 }. See Inverse Kinematics.
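To make the damping term concrete, here is one DLS update for a planar 2-link arm in plain Python (an illustrative sketch, not kinetic's solver; link lengths and the seed are made up):

```python
import math

def fk(q, l1=1.0, l2=1.0):
    """Tool position of a planar 2-link arm."""
    return [l1 * math.cos(q[0]) + l2 * math.cos(q[0] + q[1]),
            l1 * math.sin(q[0]) + l2 * math.sin(q[0] + q[1])]

def jacobian(q, l1=1.0, l2=1.0):
    s1, c1 = math.sin(q[0]), math.cos(q[0])
    s12, c12 = math.sin(q[0] + q[1]), math.cos(q[0] + q[1])
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [l1 * c1 + l2 * c12, l2 * c12]]

def dls_step(q, target, damping=0.05):
    """One update: dq = J^T (J J^T + lambda^2 I)^-1 * error."""
    J = jacobian(q)
    e = [t - p for t, p in zip(target, fk(q))]
    # Build A = J J^T + lambda^2 I (2x2) and invert it in closed form.
    a = J[0][0] ** 2 + J[0][1] ** 2 + damping ** 2
    b = J[0][0] * J[1][0] + J[0][1] * J[1][1]
    d = J[1][0] ** 2 + J[1][1] ** 2 + damping ** 2
    det = a * d - b * b
    w = [(d * e[0] - b * e[1]) / det, (-b * e[0] + a * e[1]) / det]
    return [q[0] + J[0][0] * w[0] + J[1][0] * w[1],
            q[1] + J[0][1] * w[0] + J[1][1] * w[1]]

q = [0.5, 1.5]                 # seed away from the stretched-out singularity
for _ in range(100):
    q = dls_step(q, [1.2, 0.8])
print([round(v, 4) for v in fk(q)])   # converges toward [1.2, 0.8]
```

Without the `damping ** 2` term this is a plain pseudo-inverse step, which blows up when `J J^T` becomes singular.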
EE (End-Effector) — The functional tip of a kinematic chain, typically where a tool or gripper is attached. Defined by a parent link and an optional grasp-frame offset. In kinetic: Robot::end_effectors and the EndEffector struct. See Robots and URDF.
FABRIK (Forward And Backward Reaching Inverse Kinematics) — An iterative IK solver that alternates between reaching forward from the base and backward from the target. Works position-first, then refines orientation. Fast and intuitive but less precise on orientation than DLS. In kinetic: IKSolver::FABRIK. See Inverse Kinematics.
FK (Forward Kinematics) — Computing the end-effector pose from joint values by chaining transforms through the kinematic tree. The fundamental operation in robotics: given angles, where is the tool? In kinetic: forward_kinematics(), robot.fk(). See Forward Kinematics.
FrameTree — A named-frame graph that stores SE(3) transforms between coordinate frames. Equivalent to ROS2 TF2 but standalone. Supports chain resolution (A to C via A to B to C) and automatic inversion. In kinetic: FrameTree::new(), set_transform(), lookup_transform(). See Coordinate Frames.
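Chain resolution is just transform composition. A minimal sketch of the A-to-C-via-B idea using plain 4x4 matrices (illustrative frames, not the FrameTree API):

```python
def compose(t_ab, t_bc):
    """Chain two 4x4 homogeneous transforms: T_ac = T_ab * T_bc."""
    return [[sum(t_ab[i][k] * t_bc[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(x, y, z):
    return [[1.0, 0.0, 0.0, x],
            [0.0, 1.0, 0.0, y],
            [0.0, 0.0, 1.0, z],
            [0.0, 0.0, 0.0, 1.0]]

# Resolve world -> camera by chaining world -> base and base -> camera.
t_world_base = translation(1.0, 0.0, 0.0)
t_base_cam = translation(0.0, 0.5, 1.2)
t_world_cam = compose(t_world_base, t_base_cam)
print([row[3] for row in t_world_cam[:3]])  # [1.0, 0.5, 1.2]
```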
Goal — The target specification for a motion plan. Can be a Cartesian pose, a joint configuration, a named pose, or a set of constraints. In kinetic, the Goal enum specifies what the planner is trying to reach. See Motion Planning.
IK (Inverse Kinematics) — Finding joint values that place the end-effector at a desired pose. The inverse of FK. Often has multiple solutions, no solution, or infinite solutions. In kinetic: solve_ik(), robot.ik(). See Inverse Kinematics.
Isometry3 — The nalgebra type Isometry3<f64> representing a rigid-body transform: rotation (as UnitQuaternion) plus translation. Preserves distances and angles. The mathematical representation of SE(3) used throughout kinetic. See Coordinate Frames.
Jacobian — A 6-by-DOF matrix mapping joint velocities to end-effector spatial velocity. Rows 0-2 are linear velocity (m/s), rows 3-5 are angular velocity (rad/s). Used for IK, manipulability analysis, and velocity control. In kinetic: jacobian(). See Forward Kinematics.
Joint (revolute) — A joint that rotates around an axis within position limits. The most common joint type in robot arms. Each revolute joint contributes one rotational DOF. In kinetic: JointType::Revolute. See Forward Kinematics.
Joint (prismatic) — A joint that translates along an axis within position limits. Common in linear stages and Cartesian robots. Each prismatic joint contributes one translational DOF. In kinetic: JointType::Prismatic. See Forward Kinematics.
Joint (continuous) — A revolute joint with no position limits. It can spin indefinitely (e.g., a wheel). Treated identically to revolute for FK/IK but limit checking is skipped. In kinetic: JointType::Continuous. See Robots and URDF.
Joint (fixed) — A non-actuated joint that rigidly attaches two links. Contributes zero DOF. Used for sensor mounts, tool flanges, and structural connections. In kinetic: JointType::Fixed. See Robots and URDF.
Joint Limits — Position, velocity, effort, and acceleration bounds on a joint. Position limits define the range of legal values (radians or meters). Stored in JointLimits and enforced by planners and IK solvers. See Robots and URDF.
KinematicChain — An ordered subset of joints connecting a base link to a tip link. Extracted from the robot’s kinematic tree for FK/IK computation. In kinetic: KinematicChain::extract(&robot, "base_link", "ee_link"). See Forward Kinematics.
Manipulability — A scalar measuring how dexterous the robot is at a given configuration. Computed as sqrt(det(J * J^T)) (Yoshikawa’s measure). Zero at singularities, high when the robot can move freely in all directions. In kinetic: manipulability(). See Forward Kinematics.
Null Space — The set of joint velocities that produce zero end-effector velocity. Exists only for redundant robots (DOF > 6). Null-space motion lets the robot reconfigure internally without moving the tool. Used for secondary objectives like joint centering. In kinetic: NullSpace enum. See Inverse Kinematics.
OPW (Ortho-Parallel-Wrist) — An analytical IK solver for 6-DOF robots with a spherical wrist (last three axes intersect). Returns up to 8 closed-form solutions in under 50 microseconds. Covers UR, ABB, KUKA, Fanuc, and similar industrial arms. In kinetic: IKSolver::OPW. See Inverse Kinematics.
Planning Group — A named subset of joints used for planning. Separates the arm from the gripper, or the left arm from the right. Defined in a TOML config or SRDF file. In kinetic: PlanningGroup with base_link, tip_link, and joint_indices. See Robots and URDF.
Pose — A position plus an orientation in 3D space; a point in SE(3). In kinetic, the Pose struct wraps Isometry3<f64> and provides constructors like Pose::from_xyz_rpy() and Pose::from_xyz_quat(). See Coordinate Frames.
Quaternion — A 4-component representation of 3D rotation (qx, qy, qz, qw). Unit quaternions avoid gimbal lock, interpolate smoothly, and compose efficiently. In kinetic, UnitQuaternion<f64> from nalgebra is the primary rotation representation. See Coordinate Frames.
RMP (Riemannian Motion Policy) — A reactive control framework that combines multiple motion policies on a Riemannian manifold. Each policy pulls the robot toward a goal, away from obstacles, or along a preferred direction. See Reactive Control.
RRT (Rapidly-exploring Random Tree) — A sampling-based motion planner that grows a tree from the start configuration toward random C-space samples. Probabilistically complete: guaranteed to find a path if one exists, given enough time. In kinetic: multiple RRT variants. See Motion Planning.
SE(3) — The Special Euclidean group in 3D: the set of all rigid-body transforms (rotation + translation). Every pose in the physical world is a point in SE(3). Mathematically, SE(3) = SO(3) x R^3. In kinetic, represented by Isometry3<f64> and the Pose wrapper. See Coordinate Frames.
Self-Collision — A collision between two links of the same robot. Must be checked during planning to prevent the arm from hitting itself. The ACM (from SRDF) disables expected self-collision pairs. See Collision Detection.
Singularity — A configuration where the Jacobian loses rank, meaning the robot cannot move in some direction. At a singularity, manipulability is zero and IK becomes ill-conditioned. Common examples: fully extended arm, wrist axes aligned. See Forward Kinematics, Inverse Kinematics.
SRDF (Semantic Robot Description Format) — An XML companion to URDF that adds planning-level semantics: planning groups, disabled collision pairs, end-effectors, and named poses. Originated from MoveIt. In kinetic: SrdfModel::from_file(), Robot::from_urdf_srdf(). See Robots and URDF.
TOTP (Time-Optimal Trajectory Parameterization) — An algorithm that computes the fastest timing for a geometric path while respecting velocity and acceleration limits. Turns a path (sequence of waypoints) into a time-parameterized trajectory. See Trajectory Generation.
Trajectory — A time-parameterized sequence of waypoints: each waypoint has joint values plus a timestamp. Distinguished from a path (no timing). In kinetic, Trajectory stores timed waypoints in struct-of-arrays layout. See Trajectory Generation.
URDF (Unified Robot Description Format) — An XML format describing a robot’s links, joints, geometry, and limits. The standard robot model format in ROS and the robotics industry. In kinetic: Robot::from_urdf(), Robot::from_urdf_string(). See Robots and URDF.
Waypoint — A single configuration (joint values) along a planned path. A trajectory is built from waypoints plus timing information. In kinetic, Waypoint holds positions, and TimedWaypoint adds a timestamp. See Trajectory Generation.
Workspace — The set of all positions (and orientations) the end-effector can reach. The workspace is bounded by arm geometry and joint limits. Points outside the workspace have no IK solution. In kinetic: ReachabilityMap samples the workspace. See Inverse Kinematics.
Bio-IK — An evolutionary IK solver using a population-based strategy (CMA-ES inspired). Maintains a population of candidate joint configurations, selects the fittest, and mutates. Effective for highly redundant robots (7+ DOF) and multi-objective IK. See Inverse Kinematics.
CHOMP (Covariant Hamiltonian Optimization for Motion Planning) — A trajectory optimizer that minimizes a cost functional over the entire trajectory simultaneously, using covariant gradient descent. Smooths paths while avoiding obstacles. See Motion Planning.
Condition Number — The ratio of the largest to smallest singular value of the Jacobian. Measures proximity to singularity. Below 50 is good; above 100 is marginal; above 1000 is near-singular. Reported in IKSolution::condition_number. See Inverse Kinematics.
JointValues — A named vector of joint positions. Wraps Vec<f64> with indexing, arithmetic, and interop with nalgebra. Length equals robot.dof. The primary type for passing joint configurations through the kinetic API. See Forward Kinematics.
PRM (Probabilistic Road Map) — A sampling-based planner that pre-builds a graph of collision-free configurations connected by local paths. Multi-query: the graph is reusable across different start/goal pairs. Efficient when the robot repeatedly plans in the same environment. See Motion Planning.
Seed Configuration — The starting joint values for an iterative IK solver. The solver converges toward the nearest solution to the seed. Choosing a good seed avoids unnecessary arm reconfiguration and helps escape local minima. In kinetic: IKConfig::seed. See Inverse Kinematics.
STOMP (Stochastic Trajectory Optimization for Motion Planning) — A trajectory optimizer that explores the cost landscape by sampling noisy trajectories around a seed. Does not require gradient information, making it robust to non-smooth cost functions. See Motion Planning.
Twist — A 6-DOF velocity vector: three linear components (m/s) and three angular components (rad/s). Represents spatial velocity of a rigid body. In kinetic: the Twist struct. Related to the Jacobian: twist = J * joint_velocities. See Forward Kinematics.
Wrench — A 6-DOF force/torque vector: three force components (N) and three torque components (Nm). Represents forces and moments applied to a rigid body. In kinetic: the Wrench struct. Dual of Twist under the power pairing. See Forward Kinematics.
See Also
- Robots and URDF — how robots are loaded and configured
- Coordinate Frames — SE(3), poses, and the FrameTree
- Forward Kinematics — computing end-effector poses
- Inverse Kinematics — finding joint angles for a target pose
- Motion Planning — collision-free path search
- Trajectory Generation — timing and smoothing of paths
- Collision Detection — self-collision and environment checking
- Reactive Control — real-time motion policies
Robots and URDF
A robot in kinetic is a kinematic tree of links connected by joints. This page explains how kinetic represents that tree, how it loads robot models from standard file formats, and what additional metadata (planning groups, end-effectors, collision pairs) enriches the model for motion planning.
The Robot struct
The Robot struct is the central data structure in kinetic. Every FK, IK, and planning operation takes a &Robot. It contains:
- Joints — an ordered list of all joints, including fixed ones. Each joint has a type, an axis, an origin transform, and optional limits.
- Links — an ordered list of all links (rigid bodies). Each link has a name and optional geometry.
- Active joints — indices of non-fixed joints. The number of active joints is the robot’s DOF.
- Planning groups — named subsets of joints for planning (e.g., “arm” vs. “gripper”).
- End-effectors — tool-tip definitions with parent link and grasp frame offset.
- Named poses — pre-defined joint configurations like “home” or “tucked”.
- Joint limits — position, velocity, effort, and acceleration bounds.
#![allow(unused)]
fn main() {
use kinetic::prelude::*;
let robot = Robot::from_name("ur5e")?;
println!("Name: {}", robot.name); // "ur5e"
println!("DOF: {}", robot.dof); // 6
println!("Joints: {}", robot.joints.len()); // includes fixed joints
println!("Active: {}", robot.active_joints.len()); // 6
}
Loading robots from URDF
URDF (Unified Robot Description Format) is an XML format that describes the kinematic and visual properties of a robot. It defines links, joints, origins, axes, and limits. Kinetic parses URDF files into a Robot.
#![allow(unused)]
fn main() {
// From a file path
let robot = Robot::from_urdf("path/to/my_robot.urdf")?;
// From an XML string (useful for testing and embedded models)
let robot = Robot::from_urdf_string(r#"
<robot name="simple">
<link name="base_link"/>
<link name="tool"/>
<joint name="j1" type="revolute">
<parent link="base_link"/>
<child link="tool"/>
<origin xyz="0 0 0.5"/>
<axis xyz="0 0 1"/>
<limit lower="-3.14" upper="3.14" velocity="2.0" effort="100"/>
</joint>
</robot>
"#)?;
}
A URDF file encodes the kinematic tree using <link> and <joint> elements. Each <joint> connects a parent link to a child link, has a type (revolute, prismatic, continuous, or fixed), an origin transform (where the joint frame sits relative to the parent), an axis of motion, and limits.
Joint types
Kinetic supports four joint types, matching the URDF spec:
| Type | Motion | DOF | Limits | Typical use |
|---|---|---|---|---|
| Revolute | Rotates around axis | 1 | Bounded | Most arm joints |
| Prismatic | Translates along axis | 1 | Bounded | Linear stages, lifts |
| Continuous | Rotates without bounds | 1 | None | Wheels, spindles |
| Fixed | No motion | 0 | N/A | Sensor mounts, flanges |
#![allow(unused)]
fn main() {
use kinetic_robot::JointType;
match joint.joint_type {
JointType::Revolute => println!("Rotation with limits"),
JointType::Prismatic => println!("Translation with limits"),
JointType::Continuous => println!("Unlimited rotation"),
JointType::Fixed => println!("Rigid connection"),
}
}
The 54 built-in robots
Kinetic ships with 54 pre-configured robots covering major manufacturers and research platforms. Each has a URDF file and a kinetic.toml configuration defining planning groups, named poses, IK solver preferences, and collision settings.
Load any built-in robot by name:
#![allow(unused)]
fn main() {
let panda = Robot::from_name("franka_panda")?; // 7-DOF research arm
let ur5e = Robot::from_name("ur5e")?; // 6-DOF industrial
let xarm7 = Robot::from_name("xarm7")?; // 7-DOF collaborative
let fetch = Robot::from_name("fetch")?; // 8-DOF mobile manipulator
}
Coverage includes Universal Robots (UR3e through UR30), KUKA (iiwa7, iiwa14, KR6), ABB (IRB1200, IRB4600, YuMi), Fanuc (CRX-10iA, LR Mate 200iD), xArm (5/6/7), Franka Panda, Kinova Gen3, Sawyer, Baxter, and many more. See the robot_configs/ directory for the full list.
Planning groups and end-effectors
A planning group isolates a kinematic sub-chain for planning. A dual-arm robot might have “left_arm” and “right_arm” groups. A mobile manipulator might have “arm” (the joints that matter for manipulation) and “base” (the mobile platform).
#![allow(unused)]
fn main() {
let robot = Robot::from_name("franka_panda")?;
// Access a planning group
let arm = &robot.groups["arm"];
println!("Base: {}", arm.base_link); // "panda_link0"
println!("Tip: {}", arm.tip_link); // "panda_link8"
println!("Joints: {:?}", arm.joint_indices);
// Access end-effectors
if let Some(ee) = robot.end_effectors.get("hand") {
println!("EE parent: {}", ee.parent_link);
}
}
End-effectors define where the tool is. The EndEffector struct stores the parent link, the parent planning group, and an optional grasp-frame offset (Pose) from the parent link to the tool center point (TCP).
Named poses
Named poses are pre-defined joint configurations. Common examples: “home” (a safe starting posture), “tucked” (folded compactly), “ready” (poised for manipulation).
#![allow(unused)]
fn main() {
let robot = Robot::from_name("ur5e")?;
// Look up a named pose
if let Some(home) = robot.named_pose("home") {
println!("Home joints: {:?}", home.as_slice());
robot.check_limits(&home)?; // always valid
}
// Built-in utility configurations
let zero = robot.zero_configuration(); // all joints at 0
let mid = robot.mid_configuration(); // center of joint ranges
}
SRDF: semantic annotation
SRDF (Semantic Robot Description Format) is an XML companion to URDF that adds planning-level metadata. It originates from MoveIt and is widely used in the ROS ecosystem. Kinetic parses SRDF to set up:
- Planning groups — chains of joints or explicit joint lists
- Disabled collision pairs — link pairs to skip during self-collision checking
- End-effectors — tool-tip definitions
- Named group states — pre-defined joint configurations
#![allow(unused)]
fn main() {
// Load URDF + SRDF together
let robot = Robot::from_urdf_srdf(
"robot.urdf",
"robot.srdf",
)?;
// Or apply SRDF to an existing robot
use kinetic_robot::srdf::SrdfModel;
let mut robot = Robot::from_urdf("robot.urdf")?;
let srdf = SrdfModel::from_file("robot.srdf")?;
srdf.apply_to_robot(&mut robot)?;
}
A typical SRDF <disable_collisions> section lists pairs of links that are always in contact (adjacent links) or geometrically cannot collide (distant links). This builds the Allowed Collision Matrix (ACM) used by collision checkers.
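As an illustration, a `<disable_collisions>` section follows the MoveIt SRDF convention of one element per link pair with a `reason` attribute (the link names here are hypothetical):

```xml
<robot name="my_robot">
  <!-- Adjacent links are always in contact; skip checking them. -->
  <disable_collisions link1="base_link" link2="shoulder_link" reason="Adjacent"/>
  <!-- Geometrically unreachable pair; checking would be wasted work. -->
  <disable_collisions link1="base_link" link2="wrist_3_link" reason="Never"/>
</robot>
```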
MJCF and SDF support
Beyond URDF, kinetic also loads models from two other common formats:
- MJCF (MuJoCo XML) — the native format for MuJoCo physics simulations. Common in RL research and sim-to-real workflows.
- SDF (SDFormat) — the model format used by Gazebo. Supports multiple models, world descriptions, and sensor definitions.
#![allow(unused)]
fn main() {
// Load from MJCF
let robot = Robot::from_mjcf("model.mjcf")?;
// Load from SDF
let robot = Robot::from_sdf("model.sdf")?;
// Auto-detect format from extension
let robot = Robot::from_file("model.urdf")?; // detects .urdf
let robot = Robot::from_file("model.mjcf")?; // detects .mjcf
let robot = Robot::from_file("model.sdf")?; // detects .sdf
}
Robot::from_file() inspects the file extension and dispatches to the appropriate loader. For .xml files, it tries MJCF first (more common in robotics), falling back to SDF.
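The dispatch logic described above can be sketched in a few lines. This is an illustrative std-only sketch of the pattern, not kinetic's actual implementation; the `loader_for` function and its return strings are hypothetical:

```rust
// Extension-based dispatch in the spirit of Robot::from_file (illustrative).
fn loader_for(path: &str) -> &'static str {
    match std::path::Path::new(path).extension().and_then(|e| e.to_str()) {
        Some("urdf") => "urdf",
        Some("mjcf") => "mjcf",
        Some("sdf") => "sdf",
        // Ambiguous .xml: try MJCF first (more common in robotics), then SDF.
        Some("xml") => "mjcf-then-sdf",
        _ => "unknown",
    }
}

fn main() {
    println!("{}", loader_for("model.urdf")); // urdf
    println!("{}", loader_for("scene.xml")); // mjcf-then-sdf
}
```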
Configuration files
Each built-in robot has a kinetic.toml alongside its URDF. This TOML file defines the planning-level configuration that URDF cannot express: planning groups, end-effectors, named poses, IK solver preference (e.g., OPW for 6-DOF spherical wrist robots), and collision settings. When you call Robot::from_name("ur5e"), kinetic finds the robot_configs/ur5e/ directory and loads both the URDF and the TOML config.
If you are bringing your own robot, you can either load just the URDF (the robot will have no planning groups or named poses) or add a TOML config / SRDF file to get the full feature set.
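The exact TOML schema is defined by the files in robot_configs/; purely as an illustration of the kind of data involved, a config in this spirit might look like the following (all table and key names here are hypothetical, not kinetic's actual schema):

```toml
# Hypothetical sketch of a per-robot config; consult robot_configs/ for the real schema.
[groups.arm]
base_link = "base_link"
tip_link = "ee_link"

[named_poses]
home = [0.0, -1.5708, 1.5708, 0.0, 1.5708, 0.0]

[ik]
solver = "OPW"   # analytical solver preference for a 6-DOF spherical-wrist arm
```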
See Also
- Glossary — definitions of DOF, joint types, planning group, EE, and SRDF
- Coordinate Frames — how joint origins and link frames form a transform tree
- Forward Kinematics — computing link poses from joint values
- Inverse Kinematics — how IK solver selection depends on robot geometry
- Motion Planning — how planning groups define what gets planned
Coordinate Frames
Every point on a robot exists in some coordinate frame. The base of the robot defines one frame. Each joint defines another. The end-effector tool tip is yet another. Understanding how these frames relate to each other – and how to transform points between them – is the foundation of all robotics computation.
What is a coordinate frame?
A coordinate frame is an origin point plus three orthogonal axes (X, Y, Z). Any position in 3D space can be described as coordinates relative to a frame. A point at (0.5, 0, 0.3) means “0.5 meters along the frame’s X axis and 0.3 meters along its Z axis.”
But a point described in one frame has different coordinates in another frame. If the robot’s base frame is rotated 90 degrees relative to the world frame, a point at (1, 0, 0) in the base frame is at (0, 1, 0) in the world frame. Coordinate transforms convert between frames.
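The 90-degree example can be checked numerically. A minimal std-only sketch (independent of kinetic's types) rotating a point about the Z axis:

```rust
// Rotate a 3D point about the Z axis. A base frame yawed +90 degrees
// relative to the world maps base-frame (1, 0, 0) to world-frame (0, 1, 0).
fn rotate_z(p: [f64; 3], angle: f64) -> [f64; 3] {
    let (s, c) = angle.sin_cos();
    [c * p[0] - s * p[1], s * p[0] + c * p[1], p[2]]
}

fn main() {
    let p_base = [1.0, 0.0, 0.0];
    let p_world = rotate_z(p_base, std::f64::consts::FRAC_PI_2);
    // (0, 1, 0) up to floating-point rounding
    println!("{:.3} {:.3} {:.3}", p_world[0], p_world[1], p_world[2]);
}
```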
Position and orientation
A rigid body in 3D space has six degrees of freedom: three for position (x, y, z) and three for orientation (how it is rotated). Together, position and orientation form a pose.
Position is straightforward: a 3D vector [x, y, z].
Orientation has several representations, each with trade-offs:
| Representation | Parameters | Pros | Cons |
|---|---|---|---|
| Rotation matrix | 3x3 (9 values) | Intuitive, composes by multiplication | Redundant, can drift from orthogonality |
| Euler angles | 3 values (roll, pitch, yaw) | Human-readable | Gimbal lock, order-dependent |
| Quaternion | 4 values (qx, qy, qz, qw) | Compact, no gimbal lock, smooth interpolation | Less intuitive |
| Axis-angle | axis (3) + angle (1) | Geometric meaning | Singular at zero rotation |
Kinetic uses unit quaternions (UnitQuaternion<f64> from nalgebra) as the primary rotation representation. Euler angles are supported for input/output convenience, but all internal computation uses quaternions.
Why quaternions over Euler angles
Euler angles describe rotation as three sequential rotations (e.g., first yaw around Z, then pitch around Y, then roll around X). They are intuitive for small rotations but suffer from gimbal lock: when two axes align (e.g., pitch = 90 degrees), one degree of freedom is lost and the system cannot represent arbitrary rotations smoothly.
Quaternions avoid this entirely. A unit quaternion (qx, qy, qz, qw) with magnitude 1 uniquely represents any 3D rotation. Composing two rotations is a quaternion multiplication. Interpolating between two orientations (slerp) produces smooth, constant-speed rotation. No singularities, no ambiguity.
#![allow(unused)]
fn main() {
use kinetic_core::{Pose, UnitQuaternion, Vector3};
// Euler angles: human-readable but susceptible to gimbal lock
let pose = Pose::from_xyz_rpy(1.0, 0.0, 0.5, 0.0, 1.5708, 0.0);
// Quaternion: unambiguous and composable
let pose = Pose::from_xyz_quat(1.0, 0.0, 0.5, 0.0, 0.7071, 0.0, 0.7071);
// Extract either representation from a Pose
let (roll, pitch, yaw) = pose.rpy();
let q: &UnitQuaternion<f64> = pose.rotation();
}
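The claim that composing rotations is a single quaternion multiplication is easy to verify by hand. A std-only sketch (not kinetic's implementation) using the Hamilton product with the (qx, qy, qz, qw) component order from above:

```rust
// Hamilton product of quaternions stored (qx, qy, qz, qw): composing two
// rotations is one multiplication.
fn quat_mul(a: [f64; 4], b: [f64; 4]) -> [f64; 4] {
    let [ax, ay, az, aw] = a;
    let [bx, by, bz, bw] = b;
    [
        aw * bx + ax * bw + ay * bz - az * by,
        aw * by - ax * bz + ay * bw + az * bx,
        aw * bz + ax * by - ay * bx + az * bw,
        aw * bw - ax * bx - ay * by - az * bz,
    ]
}

// Unit quaternion for a rotation about Z by `angle` radians.
fn yaw_quat(angle: f64) -> [f64; 4] {
    let h = angle / 2.0;
    [0.0, 0.0, h.sin(), h.cos()]
}

fn main() {
    // Two 45-degree yaws compose to one 90-degree yaw.
    let q45 = yaw_quat(std::f64::consts::FRAC_PI_4);
    let q90 = quat_mul(q45, q45);
    let expected = yaw_quat(std::f64::consts::FRAC_PI_2);
    for i in 0..4 {
        assert!((q90[i] - expected[i]).abs() < 1e-12);
    }
    println!("45 deg * 45 deg = 90 deg yaw");
}
```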
SE(3): the rigid-body transform group
SE(3) (Special Euclidean group in 3D) is the mathematical group of all rigid-body transforms: rotation plus translation. Every pose lives in SE(3). Two key properties make SE(3) essential:
- Composition: if T_AB transforms from frame A to frame B, and T_BC from B to C, then T_AC = T_AB * T_BC. Chaining transforms is just multiplication.
- Invertibility: every transform has an inverse. If T_AB goes from A to B, then T_AB.inverse() goes from B to A.
SE(3) = SO(3) x R^3, where SO(3) is the group of 3D rotations and R^3 is 3D translation.
Isometry3 and the Pose type
In kinetic, SE(3) transforms are represented by nalgebra::Isometry3<f64>. An Isometry3 stores a UnitQuaternion<f64> (rotation) and a Translation3<f64> (position). It guarantees that the rotation component stays a valid rotation – no scale, shear, or drift.
The Pose type wraps Isometry3<f64> with ergonomic constructors:
#![allow(unused)]
fn main() {
use kinetic_core::Pose;
// Identity: origin, no rotation
let origin = Pose::identity();
// Translation only
let p = Pose::from_xyz(0.5, 0.0, 0.3);
// Position + Euler angles (radians)
let p = Pose::from_xyz_rpy(0.5, 0.0, 0.3, 0.0, 0.0, 1.5708);
// Position + quaternion (qx, qy, qz, qw)
let p = Pose::from_xyz_quat(0.5, 0.0, 0.3, 0.0, 0.0, 0.7071, 0.7071);
// From a 4x4 homogeneous matrix
let p = Pose::from_matrix(&matrix_4x4);
// Compose: A_to_C = A_to_B * B_to_C
let a_to_c = a_to_b.compose(&b_to_c);
// Inverse: B_to_A from A_to_B
let b_to_a = a_to_b.inverse();
}
Pose dereferences to Isometry3<f64>, so you can call any nalgebra method on it directly.
Homogeneous transformation matrices
A 4x4 homogeneous matrix combines rotation and translation into a single matrix:
T = | R t | R = 3x3 rotation matrix
| 0 1 | t = 3x1 translation vector
Matrix multiplication chains transforms, matching SE(3) composition. Homogeneous matrices are widely used in textbooks, URDF origins, and DH parameters. Kinetic can convert to/from them:
#![allow(unused)]
fn main() {
let mat: nalgebra::Matrix4<f64> = pose.to_matrix();
let recovered = Pose::from_matrix(&mat);
}
Internally, kinetic prefers Isometry3 because it cannot accumulate numerical drift (rotation is always a valid quaternion), whereas a 4x4 matrix must be periodically re-orthogonalized.
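To make the composition rule concrete, here is a std-only sketch (no kinetic or nalgebra types) that builds 4x4 homogeneous matrices and checks that applying the product T_AB * T_BC to a point matches applying the two transforms in sequence:

```rust
type Mat4 = [[f64; 4]; 4];

// Homogeneous transform: rotation about Z by `yaw`, then translation (x, y, z).
fn transform(yaw: f64, x: f64, y: f64, z: f64) -> Mat4 {
    let (s, c) = yaw.sin_cos();
    [
        [c, -s, 0.0, x],
        [s, c, 0.0, y],
        [0.0, 0.0, 1.0, z],
        [0.0, 0.0, 0.0, 1.0],
    ]
}

// Matrix product: chains transforms, matching SE(3) composition.
fn matmul(a: &Mat4, b: &Mat4) -> Mat4 {
    let mut out = [[0.0; 4]; 4];
    for i in 0..4 {
        for j in 0..4 {
            for k in 0..4 {
                out[i][j] += a[i][k] * b[k][j];
            }
        }
    }
    out
}

// Apply a homogeneous transform to a 3D point (implicit w = 1).
fn apply(t: &Mat4, p: [f64; 3]) -> [f64; 3] {
    let mut out = [0.0; 3];
    for i in 0..3 {
        out[i] = t[i][0] * p[0] + t[i][1] * p[1] + t[i][2] * p[2] + t[i][3];
    }
    out
}

fn main() {
    let t_ab = transform(std::f64::consts::FRAC_PI_2, 1.0, 0.0, 0.0);
    let t_bc = transform(0.0, 0.5, 0.0, 0.0);
    let t_ac = matmul(&t_ab, &t_bc);
    let p = [0.0, 0.2, 0.0];
    // Composed transform equals the two applied in sequence.
    let via_chain = apply(&t_ab, apply(&t_bc, p));
    let via_composed = apply(&t_ac, p);
    for i in 0..3 {
        assert!((via_chain[i] - via_composed[i]).abs() < 1e-12);
    }
    println!("composition verified");
}
```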
The FrameTree
A robot system has many coordinate frames: world, base_link, each joint, each link, camera, tool tip. The FrameTree manages these relationships. It is conceptually equivalent to ROS2 TF2 but has no ROS dependency.
#![allow(unused)]
fn main() {
use kinetic_core::{FrameTree, Pose};
let tree = FrameTree::new();
// Register transforms between frames
tree.set_transform("world", "base_link", base_pose.isometry().clone(), 0.0);
tree.set_transform("base_link", "camera", camera_cal.isometry().clone(), 0.0);
// Look up any transform -- chains automatically
let cam_in_world = tree.lookup_transform("world", "camera")?;
// Inverse works too
let world_in_cam = tree.lookup_transform("camera", "world")?;
}
Key features of FrameTree:
- Chain resolution: looking up “world” to “camera” automatically chains through “base_link” using BFS.
- Automatic inversion: if only B-to-A is stored, looking up A-to-B returns the inverse.
- Static transforms: calibration transforms (sensor mounts) that never change. Marked with set_static_transform() and preserved when dynamic transforms are cleared.
- FK integration: update_from_fk() populates link poses from forward kinematics results, skipping static transforms.
- Thread safety: concurrent reads via RwLock. Multiple threads can look up transforms simultaneously.
#![allow(unused)]
fn main() {
// Static calibration (never overwritten by FK)
tree.set_static_transform("base_link", "camera", camera_cal);
// Update from FK results
tree.update_from_fk(&link_poses, timestamp);
// Clear dynamic transforms (static survive)
tree.clear_dynamic();
// List all known frames
let frames = tree.list_frames();
}
Pose distance metrics
Comparing two poses requires measuring both position difference and orientation difference. Kinetic provides both:
#![allow(unused)]
fn main() {
let a = Pose::from_xyz_rpy(1.0, 0.0, 0.0, 0.0, 0.0, 0.0);
let b = Pose::from_xyz_rpy(1.0, 0.5, 0.0, 0.0, 0.0, 1.57);
// Euclidean distance between translations (meters)
let d_pos = a.translation_distance(&b); // 0.5
// Angular distance between orientations (radians)
let d_rot = a.rotation_distance(&b); // ~1.57
}
IK solvers use these metrics to define convergence: a solution is accepted when position error is below position_tolerance (default 0.1 mm) and orientation error is below orientation_tolerance (default 1 mrad).
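The orientation half of this metric can be computed from the quaternion dot product: theta = 2*acos(|q1 . q2|). A std-only sketch (not kinetic's implementation), assuming the (qx, qy, qz, qw) convention used throughout this chapter:

```rust
// Angular distance (radians) between two unit quaternions (qx, qy, qz, qw).
// The absolute value handles the double cover: q and -q are the same rotation.
fn rotation_distance(q1: [f64; 4], q2: [f64; 4]) -> f64 {
    let dot: f64 = (0..4).map(|i| q1[i] * q2[i]).sum();
    2.0 * dot.abs().clamp(0.0, 1.0).acos()
}

fn main() {
    let identity = [0.0, 0.0, 0.0, 1.0];
    // 90-degree rotation about Z: (0, 0, sin(45 deg), cos(45 deg))
    let half = std::f64::consts::FRAC_PI_4;
    let yaw90 = [0.0, 0.0, half.sin(), half.cos()];
    let d = rotation_distance(identity, yaw90);
    println!("{:.4} rad", d); // ~1.5708
}
```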
Comparison with ROS2 TF2
If you are familiar with ROS2, kinetic’s frame management is the equivalent of tf2. The key differences:
| Feature | ROS2 TF2 | Kinetic FrameTree |
|---|---|---|
| Transport | Published over DDS topics | In-process, no middleware |
| Time queries | Buffer of timestamped transforms | Single latest transform per pair |
| Thread safety | Callback-based | RwLock for concurrent reads |
| Dependencies | Requires ROS2 runtime | Standalone, no external deps |
| Static transforms | Separate /tf_static topic | set_static_transform() method |
The FrameTree intentionally keeps one transform per frame pair (the latest) rather than buffering a time history. This matches the needs of real-time motion planning, where you always want the current transform, not one from 200 ms ago. If you need time-history queries, publish transforms to a horus topic and maintain your own buffer.
See Also
- Glossary — definitions of SE(3), Isometry3, quaternion, Pose, and FrameTree
- Forward Kinematics — how FK computes link poses that populate the FrameTree
- Robots and URDF — how URDF joint origins define the static transform tree
- Inverse Kinematics — how IK uses Pose as the target specification
Forward Kinematics
Forward kinematics (FK) answers the most fundamental question in robotics: given a set of joint values, where is the end-effector? It transforms a vector of joint angles (and/or displacements) into a 6D pose in Cartesian space.
graph LR
Q["Joint values<br/>[q1, q2, ..., qN]"] --> FK["Chain multiply<br/>T1 · T2 · ... · TN"]
FK --> Pose["EE Pose (4×4)<br/>position + orientation"]
FK --> AllPoses["All link poses<br/>for collision checking"]
Q --> Jac["Jacobian (6×N)<br/>∂pose/∂joints"]
Jac --> Manip["Manipulability<br/>√det(J·Jᵀ)"]
style Q fill:#4a9eff,color:#fff
style Pose fill:#3ddc84,color:#000
style Manip fill:#ffd93d,color:#000
What FK computes
A robot arm is a chain of rigid links connected by joints. Each joint has a fixed origin (where it sits relative to its parent link) and a variable motion (rotation or translation). FK multiplies these transforms from the base link to the tip link, accumulating the total transform.
For a chain with N joints:
T_ee = T_origin_1 * T_motion_1 * T_origin_2 * T_motion_2 * ... * T_origin_N * T_motion_N
Each T_origin_i is a fixed SE(3) transform (from the URDF <origin> element). Each T_motion_i depends on the joint type and current value:
- Revolute / Continuous: rotation around the joint axis by q_i radians.
- Prismatic: translation along the joint axis by q_i meters.
- Fixed: identity (no motion).
The result T_ee is the end-effector pose in the base frame – a Pose in kinetic.
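The chain-multiplication rule is easy to verify on a planar 2-link arm. This is a std-only sketch for intuition, not kinetic's FK implementation: each joint contributes a rotation by q_i, each link a fixed translation along the rotated X axis:

```rust
// FK for a planar 2-link arm: accumulate rotation (joint motion) and
// translation (link origin) from base to tip.
fn fk_planar(q: [f64; 2], lengths: [f64; 2]) -> (f64, f64) {
    let (mut x, mut y, mut theta) = (0.0, 0.0, 0.0);
    for i in 0..2 {
        theta += q[i];                 // joint motion: rotate by q_i
        x += lengths[i] * theta.cos(); // link origin: translate along link
        y += lengths[i] * theta.sin();
    }
    (x, y)
}

fn main() {
    // Both joints at zero: arm fully extended along X.
    let (x, y) = fk_planar([0.0, 0.0], [0.4, 0.3]);
    println!("{:.3} {:.3}", x, y); // 0.700 0.000
    // Elbow bent 90 degrees: second link points along Y.
    let (x, y) = fk_planar([0.0, std::f64::consts::FRAC_PI_2], [0.4, 0.3]);
    println!("{:.3} {:.3}", x, y); // 0.400 0.300
}
```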
The kinematic chain
FK requires knowing which joints to traverse. A KinematicChain extracts the ordered set of joints from a base link to a tip link. You can extract one manually or let kinetic auto-detect it from planning groups:
#![allow(unused)]
fn main() {
use kinetic_kinematics::KinematicChain;
use kinetic_robot::Robot;
let robot = Robot::from_name("ur5e")?;
// Explicit chain extraction
let chain = KinematicChain::extract(&robot, "base_link", "ee_link")?;
println!("DOF: {}", chain.dof); // 6
println!("All joints: {}", chain.all_joints.len()); // includes fixed
// Auto-detect from planning groups
let chain = KinematicChain::auto_detect(&robot)?;
}
The chain distinguishes between all_joints (every joint in the path, including fixed) and active_joints (only joints that move). The dof is the count of active joints, which matches the length of the joint-value vector FK expects.
Computing FK
The primary FK function takes a robot, a chain, and joint values:
#![allow(unused)]
fn main() {
use kinetic_kinematics::forward_kinematics;
let robot = Robot::from_name("ur5e")?;
let chain = KinematicChain::extract(&robot, "base_link", "ee_link")?;
// Compute end-effector pose
let joints = [0.0, -1.5708, 1.5708, 0.0, 1.5708, 0.0];
let ee_pose = forward_kinematics(&robot, &chain, &joints)?;
println!("Position: {:?}", ee_pose.translation());
println!("Orientation (RPY): {:?}", ee_pose.rpy());
}
The convenience trait RobotKinematics adds FK directly on Robot, auto-detecting the chain:
#![allow(unused)]
fn main() {
use kinetic_kinematics::RobotKinematics;
let robot = Robot::from_name("franka_panda")?;
let pose = robot.fk(&[0.0, -0.7854, 0.0, -2.3562, 0.0, 1.5708, 0.7854])?;
}
All-links FK
Sometimes you need every link’s pose, not just the end-effector. forward_kinematics_all returns a Vec<Pose> with one entry per link in the chain (including the base):
#![allow(unused)]
fn main() {
use kinetic_kinematics::forward_kinematics_all;
let poses = forward_kinematics_all(&robot, &chain, &joints)?;
// poses[0] = base link (identity if root)
// poses[1] = after first joint
// ...
// poses[N] = end-effector (same as forward_kinematics result)
}
This is used by collision checking (each link needs a pose for its collision geometry), visualization (rendering every link), and Jacobian computation (needs joint-frame positions).
Batch FK
When evaluating many configurations (e.g., during sampling-based planning), fk_batch avoids per-call overhead:
#![allow(unused)]
fn main() {
use kinetic_kinematics::fk_batch;
// 3 configurations for a 6-DOF arm = 18 values
let configs = vec![
0.0, 0.0, 0.0, 0.0, 0.0, 0.0, // config 0
0.5, -0.3, 0.8, 0.0, 1.0, -0.5, // config 1
1.0, 1.0, -1.0, 0.5, 0.0, 0.5, // config 2
];
let poses = fk_batch(&robot, &chain, &configs, 3)?;
// poses[0], poses[1], poses[2] = EE pose for each config
}
The Jacobian
The Jacobian is a 6-by-DOF matrix that maps joint velocities to end-effector spatial velocity:
[v_x] [ ] [q_dot_1]
[v_y] [ ] [q_dot_2]
[v_z] = [ J ] * [ ... ]
[w_x] [ ] [q_dot_N]
[w_y] [ ]
[w_z] [ ]
Rows 0-2 are linear velocity (m/s). Rows 3-5 are angular velocity (rad/s). Each column corresponds to one active joint.
For a revolute joint at position p_j with axis z_j:
- Linear column: z_j x (p_ee - p_j) – the tangential velocity at the EE due to rotation
- Angular column: z_j – the rotation axis itself
For a prismatic joint with axis z_j:
- Linear column: z_j – translation direction
- Angular column: zero – prismatic motion does not rotate
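Those column formulas can be sketched directly. A std-only illustration (not kinetic's Jacobian code): the cross product z_j x (p_ee - p_j) gives the linear-velocity column of a revolute joint.

```rust
fn cross(a: [f64; 3], b: [f64; 3]) -> [f64; 3] {
    [
        a[1] * b[2] - a[2] * b[1],
        a[2] * b[0] - a[0] * b[2],
        a[0] * b[1] - a[1] * b[0],
    ]
}

// Linear part (rows 0-2) of a revolute joint's Jacobian column:
// z_j x (p_ee - p_j), the tangential EE velocity per unit joint velocity.
fn revolute_linear_column(z_j: [f64; 3], p_j: [f64; 3], p_ee: [f64; 3]) -> [f64; 3] {
    let r = [p_ee[0] - p_j[0], p_ee[1] - p_j[1], p_ee[2] - p_j[2]];
    cross(z_j, r)
}

fn main() {
    // Joint at the origin rotating about Z, EE 0.5 m along X:
    // unit joint velocity sweeps the EE at 0.5 m/s along +Y.
    let col = revolute_linear_column([0.0, 0.0, 1.0], [0.0; 3], [0.5, 0.0, 0.0]);
    println!("{:?}", col); // [0.0, 0.5, 0.0]
}
```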
#![allow(unused)]
fn main() {
use kinetic_kinematics::jacobian;
let j = jacobian(&robot, &chain, &joints)?;
println!("Jacobian shape: {}x{}", j.nrows(), j.ncols()); // 6x6 for UR5e
}
The Jacobian is central to robotics. IK solvers invert it (approximately) to find joint updates. Velocity control uses it to convert desired end-effector velocities to joint commands. Singularity analysis examines when it loses rank.
Manipulability
The manipulability index quantifies how dexterous the robot is at a given configuration. It uses Yoshikawa’s measure:
w = sqrt(det(J * J^T))
- w > 0: the robot can move in all directions. Higher is better.
- w = 0: the robot is at a singularity – it has lost the ability to move in at least one direction.
- w near 0: approaching singularity – motion in some direction requires very high joint velocities.
For under-actuated chains (DOF < 6), kinetic computes the product of singular values instead, which handles rank deficiency gracefully.
#![allow(unused)]
fn main() {
use kinetic_kinematics::manipulability;
let m = manipulability(&robot, &chain, &joints)?;
if m < 0.001 {
println!("Warning: near singularity (manipulability = {})", m);
}
}
Manipulability is used as a null-space objective in IK (maximize dexterity while reaching the target) and as a cost in planning (prefer paths that stay away from singularities).
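To make Yoshikawa's measure concrete, here is a minimal sketch for a planar 2-link arm, where J is square and w = sqrt(det(J * J^T)) reduces to |det J| = l1 * l2 * |sin(q2)| (illustration only, not the kinetic implementation):

```rust
// Sketch: Yoshikawa manipulability for a planar 2-link arm with link lengths l1, l2.
fn manipulability_2link(l1: f64, l2: f64, q1: f64, q2: f64) -> f64 {
    let (s1, c1) = q1.sin_cos();
    let (s12, c12) = (q1 + q2).sin_cos();
    // 2x2 position Jacobian of the planar arm
    let j = [
        [-l1 * s1 - l2 * s12, -l2 * s12],
        [ l1 * c1 + l2 * c12,  l2 * c12],
    ];
    // square J: w = sqrt(det(J J^T)) = |det J|
    (j[0][0] * j[1][1] - j[0][1] * j[1][0]).abs()
}

fn main() {
    // Elbow bent 90 degrees: maximum dexterity for unit links (w = 1).
    println!("{}", manipulability_2link(1.0, 1.0, 0.3, std::f64::consts::FRAC_PI_2));
    // Arm fully extended (q2 = 0): elbow singularity, w = 0.
    println!("{}", manipulability_2link(1.0, 1.0, 0.3, 0.0));
}
```

The second call shows the elbow singularity from the section below: with the arm straightened, no joint velocity can move the end-effector radially outward.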
Singularities
A singularity occurs when the Jacobian loses rank (manipulability drops to zero). At a singularity, the robot cannot generate velocity in some Cartesian direction, IK becomes ill-conditioned, and solutions may jump discontinuously. Common types: shoulder (wrist center on base axis), elbow (arm fully extended), and wrist (two wrist axes align). Kinetic’s IK solvers handle singularities through damping (DLS) or avoidance (null-space objectives). The IKSolution::condition_number field reports Jacobian conditioning at the solution.
How joint types affect FK
Each joint type contributes a different transform to the chain. Revolute/Continuous joints rotate around their axis by the joint value in radians. Prismatic joints translate along their axis by the joint value in meters. Fixed joints contribute their origin transform but no motion. A chain with mixed joint types works identically – FK multiplies each transform in sequence:
#![allow(unused)]
fn main() {
// Mixed chain: revolute + prismatic
let chain = KinematicChain::extract(&robot, "base", "slider")?;
// joint[0] = rotation (radians), joint[1] = extension (meters)
let pose = forward_kinematics(&robot, &chain, &[1.57, 0.15])?;
}
See Also
- Glossary — definitions of FK, Jacobian, manipulability, singularity, and DOF
- Coordinate Frames — the SE(3) transforms that FK chains together
- Robots and URDF — where joint origins and axes come from
- Inverse Kinematics — the reverse problem: pose to joints
- Trajectory Generation — timing the paths that FK validates
Inverse Kinematics
Inverse kinematics (IK) answers the question: given a desired end-effector pose, what joint values place the tool there? It is the reverse of forward kinematics, and it is fundamentally harder.
Why IK is hard
Forward kinematics is a function: one set of joint values produces exactly one pose. IK is not a function – it is a relation with three complications:
- Multiple solutions: A 6-DOF arm typically has up to 8 (or even 16) configurations that all place the end-effector at the same pose. Think of an elbow-up vs. elbow-down posture reaching the same point.
- No solution: If the target pose is outside the robot's workspace (too far, behind the base, or requiring an orientation the kinematics cannot produce), no joint values exist.
- Singularities: At certain configurations, the Jacobian loses rank. Near singularities, tiny changes in target pose require enormous joint-velocity changes, and iterative solvers become unstable.
These properties make IK one of the most studied problems in robotics. Kinetic provides multiple solver strategies, each suited to different robot geometries and use cases.
graph TD
Target[Target Pose] --> Auto{Auto-select solver}
Auto -->|6-DOF spherical wrist| OPW[OPW<br/>closed-form, 8 solutions<br/>< 50us]
Auto -->|6-DOF general| SP[Paden-Kahan<br/>decomposition, 16 solutions]
Auto -->|7-DOF| SP7[Subproblem7DOF<br/>36 sweep samples]
Auto -->|Fallback| DLS[DLS<br/>Levenberg-Marquardt<br/>adaptive damping]
OPW --> Best[Select best solution<br/>nearest to seed]
SP --> Best
SP7 --> Best
DLS --> Check{Converged?}
Check -->|No| Restart[Random restart<br/>up to 10 seeds]
Restart --> DLS
Check -->|Yes| Best
Best --> Result[IK Solution<br/>joints + error + converged]
style OPW fill:#3ddc84,color:#000
style SP fill:#3ddc84,color:#000
style DLS fill:#4a9eff,color:#fff
style Result fill:#ffd93d,color:#000
Analytical solvers
Analytical solvers exploit the specific geometry of a robot to compute closed-form solutions. They are fast (microseconds), exact (no iteration, no convergence tolerance), and return all solutions.
OPW: spherical wrist robots
The OPW (Ortho-Parallel-Wrist) solver handles the most common industrial robot geometry: 6 revolute joints where the last three axes intersect at a point (the wrist center). This structure decouples position from orientation:
- Wrist center is computed from the target pose and a pre-extracted tool offset.
- Position IK (joints 1-3) is solved using arm-plane geometry and the law of cosines.
- Orientation IK (joints 4-6) is solved via Euler decomposition of the remaining rotation.
This yields up to 2 x 2 x 2 = 8 solutions. The solver selects the one closest to the seed configuration.
Performance: under 50 microseconds, typically under 10 microseconds. No iteration.
#![allow(unused)]
fn main() {
use kinetic_kinematics::{solve_ik, IKConfig, IKSolver};
let config = IKConfig {
solver: IKSolver::OPW,
..Default::default()
};
let solution = solve_ik(&robot, &chain, &target_pose, &config)?;
// solution.joints: the best of up to 8 analytical solutions
}
OPW covers UR (UR3e through UR30), ABB, KUKA KR, Fanuc, xArm6, and most other 6-DOF industrial arms with a spherical wrist.
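The decoupling step is simple enough to sketch directly: given the target rotation matrix R and position p, the wrist center lies a fixed distance d6 behind the flange along the tool z-axis, independent of joints 4-6 (illustrative math, not the kinetic OPW solver):

```rust
// Sketch: wrist-center computation for a spherical-wrist robot.
// R is the target rotation (column-major meaning: r[row][col]), p the target
// position, d6 the wrist-center-to-flange offset along the tool z-axis.
fn wrist_center(r: [[f64; 3]; 3], p: [f64; 3], d6: f64) -> [f64; 3] {
    // tool z-axis expressed in the base frame = third column of R
    let z_tool = [r[0][2], r[1][2], r[2][2]];
    [
        p[0] - d6 * z_tool[0],
        p[1] - d6 * z_tool[1],
        p[2] - d6 * z_tool[2],
    ]
}

fn main() {
    // Identity orientation: tool z = world z, so the wrist center sits d6 below the target.
    let identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]];
    let wc = wrist_center(identity, [0.4, 0.1, 0.6], 0.1);
    println!("{:?}", wc); // [0.4, 0.1, 0.5]
}
```

Position IK then only has to place the wrist center, and orientation IK only has to produce the remaining rotation.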
Paden-Kahan subproblem decomposition
For 6-DOF robots whose geometry does not match OPW assumptions but whose joints can be decomposed into canonical geometric subproblems, kinetic uses Paden-Kahan decomposition:
- Subproblem 1: rotation around a single axis to align two points.
- Subproblem 2: rotation around two intersecting axes.
- Subproblem 3: rotation to reach a specified distance.
The solver decomposes the full IK problem into a sequence of subproblems, each with a closed-form solution. Up to 16 solutions are returned.
#![allow(unused)]
fn main() {
let config = IKConfig {
solver: IKSolver::Subproblem,
..Default::default()
};
}
7-DOF subproblem decomposition
For 7-DOF arms (e.g., Franka Panda, KUKA iiwa), one joint is redundant. The 7-DOF subproblem solver sweeps the redundant joint across its range, solving the resulting 6-DOF problem analytically at each sample point. The num_samples parameter controls sweep resolution (default 36, meaning every 10 degrees).
#![allow(unused)]
fn main() {
let config = IKConfig {
solver: IKSolver::Subproblem7DOF { num_samples: 72 },
..Default::default()
};
}
Iterative solvers
When the robot’s geometry does not admit an analytical solution, or when you need to handle constraints and secondary objectives, iterative solvers converge numerically.
DLS (Damped Least Squares)
DLS is kinetic’s default iterative solver. It uses a Levenberg-Marquardt approach: at each iteration, compute the 6D pose error, then update joint values using the damped pseudo-inverse of the Jacobian:
delta_q = J^T * (J * J^T + lambda^2 * I)^-1 * error
The damping factor lambda prevents instability near singularities: higher damping is more stable but converges more slowly. Kinetic adapts the damping automatically, increasing it as the Jacobian becomes ill-conditioned.
#![allow(unused)]
fn main() {
let config = IKConfig {
solver: IKSolver::DLS { damping: 0.05 },
max_iterations: 100,
position_tolerance: 1e-4, // 0.1 mm
orientation_tolerance: 1e-3, // ~0.06 degrees
..Default::default()
};
let solution = solve_ik(&robot, &chain, &target, &config)?;
}
DLS converges well for most configurations. Near singularities, it sacrifices precision for stability – the IKSolution::degraded flag indicates when the solver fell back to a less accurate transpose method.
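The update rule can be sketched end-to-end on a planar 2-link arm, where J*J^T is 2x2 and can be inverted analytically (a toy position-only solver, not kinetic's DLS):

```rust
// Sketch: damped-least-squares IK for a planar 2-link arm with unit links,
// applying delta_q = J^T * (J*J^T + lambda^2*I)^-1 * error each iteration.
fn fk(q: [f64; 2]) -> [f64; 2] {
    [q[0].cos() + (q[0] + q[1]).cos(), q[0].sin() + (q[0] + q[1]).sin()]
}

fn dls_step(q: [f64; 2], target: [f64; 2], lambda: f64) -> [f64; 2] {
    let (s1, c1) = q[0].sin_cos();
    let (s12, c12) = (q[0] + q[1]).sin_cos();
    let j = [[-s1 - s12, -s12], [c1 + c12, c12]]; // 2x2 position Jacobian
    let p = fk(q);
    let e = [target[0] - p[0], target[1] - p[1]]; // position error
    // a = J*J^T + lambda^2*I
    let a = [
        [j[0][0] * j[0][0] + j[0][1] * j[0][1] + lambda * lambda,
         j[0][0] * j[1][0] + j[0][1] * j[1][1]],
        [j[1][0] * j[0][0] + j[1][1] * j[0][1],
         j[1][0] * j[1][0] + j[1][1] * j[1][1] + lambda * lambda],
    ];
    let det = a[0][0] * a[1][1] - a[0][1] * a[1][0];
    // y = a^-1 * e (2x2 analytic inverse)
    let y = [(a[1][1] * e[0] - a[0][1] * e[1]) / det,
             (a[0][0] * e[1] - a[1][0] * e[0]) / det];
    // q_next = q + J^T * y
    [q[0] + j[0][0] * y[0] + j[1][0] * y[1],
     q[1] + j[0][1] * y[0] + j[1][1] * y[1]]
}

fn solve(target: [f64; 2]) -> [f64; 2] {
    let mut q = [0.5, 0.5]; // seed configuration
    for _ in 0..200 {
        q = dls_step(q, target, 0.05);
    }
    q
}

fn main() {
    let q = solve([1.0, 1.0]);
    let p = fk(q);
    println!("reached ({:.4}, {:.4})", p[0], p[1]); // converges close to (1, 1)
}
```

Note how the damping term only appears on the diagonal of J*J^T: at a singularity, where J*J^T alone would be uninvertible, lambda^2*I keeps the system solvable at the cost of a slightly shortened step.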
FABRIK (Forward And Backward Reaching IK)
FABRIK is a geometric IK solver that does not use the Jacobian at all. Instead, it alternates between:
- Forward pass: starting from the end-effector, adjust each joint position to reach toward the target.
- Backward pass: starting from the base, adjust each joint position to satisfy link-length constraints.
FABRIK is naturally position-focused. It converges quickly for position-only IK and handles long chains well. Orientation is refined after position convergence.
#![allow(unused)]
fn main() {
let config = IKConfig {
solver: IKSolver::FABRIK,
max_iterations: 200,
..Default::default()
};
}
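The two passes are easy to see in a position-only sketch on a planar chain of point joints (a toy illustration, not kinetic's FABRIK):

```rust
// Sketch: position-only FABRIK in 2D. The forward pass pins the end-effector
// to the target and drags joints toward it; the backward pass re-anchors the
// base and restores every link length.
fn place(anchor: [f64; 2], toward: [f64; 2], len: f64) -> [f64; 2] {
    let dx = toward[0] - anchor[0];
    let dy = toward[1] - anchor[1];
    let d = (dx * dx + dy * dy).sqrt();
    [anchor[0] + dx / d * len, anchor[1] + dy / d * len]
}

fn fabrik(lengths: &[f64], target: [f64; 2], iters: usize) -> Vec<[f64; 2]> {
    let n = lengths.len();
    let base = [0.0, 0.0];
    // start stretched along +x from the base
    let mut p: Vec<[f64; 2]> = vec![base];
    for i in 0..n {
        let last = p[i];
        p.push([last[0] + lengths[i], last[1]]);
    }
    for _ in 0..iters {
        // forward pass: pin the end-effector to the target
        p[n] = target;
        for i in (0..n).rev() {
            p[i] = place(p[i + 1], p[i], lengths[i]);
        }
        // backward pass: pin the base, restore link lengths
        p[0] = base;
        for i in 0..n {
            p[i + 1] = place(p[i], p[i + 1], lengths[i]);
        }
    }
    p
}

fn main() {
    let p = fabrik(&[1.0, 1.0, 1.0], [1.5, 1.0], 50);
    let ee = p[3];
    println!("EE at ({:.3}, {:.3})", ee[0], ee[1]); // close to (1.5, 1.0)
}
```

Because every operation is a point projection, there is no Jacobian, no matrix inverse, and no damping parameter to tune.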
SQP and Bio-IK
SQP (Sequential Quadratic Programming) formulates IK as a constrained optimization, naturally handling constraints like keeping a cup upright. Bio-IK uses a population-based evolutionary strategy (CMA-ES inspired) that is particularly effective for highly redundant robots (7+ DOF), multi-objective IK, and escaping local minima that trap gradient-based solvers.
Auto-selection
With IKSolver::Auto (the default), kinetic picks the best solver automatically. It first checks the robot’s kinetic.toml preference. If that is not set or not compatible, it tests: OPW (if 6-DOF spherical wrist), then Subproblem (if 6-DOF decomposable), then Subproblem7DOF (if 7-DOF), then falls back to DLS. Built-in robots have preferences pre-configured.
#![allow(unused)]
fn main() {
let ur5 = Robot::from_name("ur5e")?;
let sol = ur5.ik(&target)?; // auto-selects OPW (6-DOF spherical wrist)
let panda = Robot::from_name("franka_panda")?;
let sol = panda.ik(&target)?; // auto-selects Subproblem7DOF or DLS
}
Null-space objectives
For redundant robots (DOF > 6), the null space is the set of joint velocities that produce zero end-effector motion. This internal reconfigurability can be exploited for secondary objectives:
- Manipulability (NullSpace::Manipulability): bias toward configurations with high Yoshikawa manipulability, staying away from singularities.
- Joint centering (NullSpace::JointCentering): bias toward mid-range joint values, avoiding joint limits.
- Minimal displacement (NullSpace::MinimalDisplacement): stay close to the seed configuration, producing the smallest motion.
#![allow(unused)]
fn main() {
use kinetic_kinematics::{IKConfig, NullSpace};
let config = IKConfig {
null_space: Some(NullSpace::JointCentering),
..Default::default()
};
}
Null-space objectives do not affect whether IK converges – they only influence which of the infinite solutions the solver finds.
IKConfig and IKSolution
IKConfig controls solver selection, convergence criteria, and secondary objectives. IKSolution reports the result with diagnostics for hardware safety.
#![allow(unused)]
fn main() {
use kinetic_kinematics::{IKConfig, IKSolver, IKMode, NullSpace};
let config = IKConfig {
solver: IKSolver::Auto, // which solver
mode: IKMode::Full6D, // Full6D, PositionOnly, or PositionFallback
max_iterations: 100, // iterative solver limit
position_tolerance: 1e-4, // 0.1 mm
orientation_tolerance: 1e-3, // ~0.06 degrees
check_limits: true, // enforce joint limits
seed: Some(vec![0.0; 7]), // starting config
null_space: None, // secondary objective
num_restarts: 8, // random restarts to escape local minima
};
let solution = solve_ik(&robot, &chain, &target, &config)?;
// solution.converged — did it meet tolerance?
// solution.degraded — did DLS fall back to transpose near singularity?
// solution.condition_number — Jacobian conditioning (<50 good, >100 marginal, >1000 singular)
}
Three IK modes are available: Full6D (default, matches both position and orientation), PositionOnly (ignores orientation – useful for vacuum grippers), and PositionFallback (tries Full6D, retries with PositionOnly if it fails). Use IKConfig::position_only() or IKConfig::with_fallback() as shorthands.
For batch solving, solve_ik_batch processes multiple targets in one call, returning Vec<Option<IKSolution>>.
See Also
- Glossary — definitions of IK, DLS, FABRIK, OPW, null space, singularity, and workspace
- Forward Kinematics — the forward problem that IK inverts; Jacobian and manipulability
- Coordinate Frames — the Pose type that IK targets
- Robots and URDF — how robot geometry determines which IK solver applies
- Motion Planning — planners that call IK to evaluate candidate configurations
- Trajectory Generation — timing the joint-space paths that IK produces
Collision Detection
A robot arm waving through open air is easy. The hard part is guaranteeing it never crashes into the table, the human operator, or itself. Every motion planner, servo controller, and trajectory validator in kinetic depends on collision detection to answer one question: “is this configuration safe?”
Getting that answer wrong has consequences. A false negative (missed collision) means damaged hardware or injured people. A false positive (phantom collision) means the robot freezes or takes a wasteful detour. Kinetic’s collision system is designed to be fast enough for real-time control loops and conservative enough to never miss a real collision.
graph TB
Q[Is config safe?] --> FK[FK all links]
FK --> Sphere[Update sphere positions]
Sphere --> Self{Self-collision?}
Self -->|Check| ACM[Filter by ACM<br/>skip adjacent pairs]
ACM --> SIMD[SIMD sphere-sphere<br/>AVX2 / NEON / scalar]
Sphere --> Env{Environment?}
Env --> CAPT[CAPT broadphase<br/>O(1) grid lookup]
CAPT -->|Near| LOD{Distance?}
LOD -->|> 10cm| SphereOnly[Sphere check<br/>~500ns]
LOD -->|2-10cm| Convex[Convex hull<br/>~5-20us]
LOD -->|< 2cm| Mesh[Exact mesh<br/>~50us]
SIMD --> Result{Collision?}
SphereOnly --> Result
Convex --> Result
Mesh --> Result
Result -->|No| Safe[Safe]
Result -->|Yes| Blocked[Blocked]
style Safe fill:#3ddc84,color:#000
style Blocked fill:#ff6b6b,color:#fff
style SIMD fill:#ffd93d,color:#000
style CAPT fill:#ffd93d,color:#000
Two Approaches to Collision Geometry
Robot links are complex 3D shapes – cast aluminum housings, cable bundles, custom brackets. There are two fundamental strategies for checking whether these shapes intersect with obstacles.
Exact mesh collision loads the full triangle mesh of each link and tests mesh-mesh intersection. This gives precise answers but is expensive: a single check can take 10-50 microseconds depending on mesh complexity. For a motion planner that evaluates thousands of configurations per millisecond, that cost is prohibitive.
Sphere approximation replaces each link with a set of bounding spheres. Checking whether two spheres overlap is a single distance comparison – trivially fast. The trade-off is conservatism: spheres overestimate the link volume, so the system may report collisions that the exact mesh would not. In practice, this conservatism is a feature, not a bug. A small safety margin prevents the robot from threading needles between obstacles.
Kinetic uses sphere approximation as its primary collision backend, with exact mesh available as a refinement layer when precision matters.
RobotSphereModel and SpheresSoA
RobotSphereModel approximates each link of a robot with bounding spheres,
generated from the URDF collision geometry (boxes, cylinders, meshes):
#![allow(unused)]
fn main() {
use kinetic_collision::{RobotSphereModel, SphereGenConfig};
// Coarse: fewer spheres, faster checks
let model = RobotSphereModel::from_robot(&robot, &SphereGenConfig::coarse());
// Fine: tighter fit, more accurate but slower
let model = RobotSphereModel::from_robot(&robot, &SphereGenConfig::fine());
}
The SphereGenConfig controls the density-accuracy trade-off. Coarse mode
uses up to 8 spheres per geometry primitive with a 10cm max radius. Fine mode
uses up to 64 spheres at 2cm max radius. The default sits in between.
Internally, all sphere data is stored in SpheresSoA – a Structure-of-Arrays
layout where the x, y, z coordinates and radii live in separate contiguous
arrays rather than interleaved structs. This layout is critical for SIMD
vectorization: the CPU can load 4 x-coordinates at once (on AVX2) and
process them in a single instruction.
#![allow(unused)]
fn main() {
// SpheresSoA stores spheres as parallel arrays:
// x: [x0, x1, x2, x3, ...]
// y: [y0, y1, y2, y3, ...]
// z: [z0, z1, z2, z3, ...]
// radius: [r0, r1, r2, r3, ...]
// link_id: [l0, l1, l2, l3, ...]
}
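To show why the layout matters, here is a simplified model of an SoA overlap check (not the kinetic SpheresSoA type): the inner loop touches each coordinate array contiguously, with no sqrt and no pointer chasing, which is exactly the shape that auto-vectorizes well.

```rust
// Sketch: pairwise sphere-overlap test over a Structure-of-Arrays layout.
struct SpheresSoA {
    x: Vec<f64>,
    y: Vec<f64>,
    z: Vec<f64>,
    radius: Vec<f64>,
}

impl SpheresSoA {
    /// True if any sphere in `self` overlaps any sphere in `other`.
    fn any_overlap(&self, other: &SpheresSoA) -> bool {
        for i in 0..self.x.len() {
            for j in 0..other.x.len() {
                let dx = self.x[i] - other.x[j];
                let dy = self.y[i] - other.y[j];
                let dz = self.z[i] - other.z[j];
                let r = self.radius[i] + other.radius[j];
                // compare squared distances: no sqrt needed
                if dx * dx + dy * dy + dz * dz < r * r {
                    return true;
                }
            }
        }
        false
    }
}

fn main() {
    let robot = SpheresSoA {
        x: vec![0.0, 1.0], y: vec![0.0; 2], z: vec![0.0; 2], radius: vec![0.3; 2],
    };
    let obstacle = SpheresSoA {
        x: vec![1.2], y: vec![0.0], z: vec![0.0], radius: vec![0.1],
    };
    // Sphere at x=1.0 (r=0.3) overlaps the obstacle at x=1.2 (r=0.1).
    println!("{}", robot.any_overlap(&obstacle)); // true
}
```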
CAPT: Sub-10ns Environment Queries
The Collision-Affording Point Tree (CAPT) is a 3D voxel grid that pre-computes the answer to “how close is this point to any obstacle?” Each cell stores the minimum clearance – the largest sphere that could be placed at that cell center without intersecting an obstacle.
Checking whether a sphere at position (x, y, z) with radius r collides with the environment reduces to a single grid lookup:
collision = grid[ix][iy][iz] < r
Build cost is O(N * V) where N is the number of obstacles and V is the number of affected voxels. But query cost is O(1) per point – a single array index plus a comparison. For a robot with 100 spheres, the entire environment collision check takes under a microsecond.
#![allow(unused)]
fn main() {
use kinetic_collision::{CollisionEnvironment, CollisionPointTree, AABB};
let env = CollisionEnvironment::build(
obstacle_spheres,
0.05, // 5cm grid resolution
AABB::symmetric(10.0), // 20m x 20m x 20m workspace
);
let colliding = env.check_collision(&robot_spheres);
let clearance = env.min_distance(&robot_spheres);
}
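The core idea fits in a few dozen lines. This toy version (the `ClearanceGrid` type is illustrative; kinetic's CAPT is more sophisticated) precomputes per-cell clearance from cell centers to obstacle sphere surfaces, then answers queries with one index and one comparison:

```rust
// Sketch: a clearance grid in the spirit of CAPT. Build is O(N * V);
// each query is O(1): one array lookup plus a comparison.
struct ClearanceGrid {
    res: f64,             // cell size in meters
    dims: [usize; 3],     // cells per axis
    clearance: Vec<f64>,  // min distance from each cell center to any obstacle surface
}

impl ClearanceGrid {
    /// Build from obstacle spheres (center, radius); workspace corner at the origin.
    fn build(obstacles: &[([f64; 3], f64)], res: f64, dims: [usize; 3]) -> Self {
        let mut clearance = vec![f64::INFINITY; dims[0] * dims[1] * dims[2]];
        for ix in 0..dims[0] {
            for iy in 0..dims[1] {
                for iz in 0..dims[2] {
                    let c = [(ix as f64 + 0.5) * res,
                             (iy as f64 + 0.5) * res,
                             (iz as f64 + 0.5) * res];
                    let idx = (ix * dims[1] + iy) * dims[2] + iz;
                    for (p, r) in obstacles {
                        let d = ((c[0] - p[0]).powi(2)
                               + (c[1] - p[1]).powi(2)
                               + (c[2] - p[2]).powi(2)).sqrt() - r;
                        clearance[idx] = clearance[idx].min(d);
                    }
                }
            }
        }
        ClearanceGrid { res, dims, clearance }
    }

    /// O(1) query: does a sphere at `p` with radius `r` hit any obstacle?
    fn collides(&self, p: [f64; 3], r: f64) -> bool {
        let ix = (p[0] / self.res) as usize;
        let iy = (p[1] / self.res) as usize;
        let iz = (p[2] / self.res) as usize;
        self.clearance[(ix * self.dims[1] + iy) * self.dims[2] + iz] < r
    }
}

fn main() {
    // One 10cm obstacle sphere in a 1m cube, 10cm resolution.
    let grid = ClearanceGrid::build(&[([0.5, 0.5, 0.5], 0.1)], 0.1, [10, 10, 10]);
    println!("{}", grid.collides([0.5, 0.5, 0.5], 0.05)); // true: inside the obstacle
    println!("{}", grid.collides([0.05, 0.05, 0.05], 0.05)); // false: far corner
}
```

The toy version evaluates clearance only at cell centers; a production grid must also account for the distance from a query point to its cell center to stay conservative.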
SIMD Acceleration
Distance calculations between sphere sets are embarrassingly parallel. Kinetic dispatches to the widest available SIMD instruction set at runtime:
| Tier | Width | Throughput | Platform |
|---|---|---|---|
| AVX2 | 256-bit | 4 f64 per cycle | x86_64 |
| NEON | 128-bit | 2 f64 per cycle | aarch64 (always) |
| Scalar | 64-bit | 1 f64 per cycle | All platforms |
Detection happens once at startup. On x86_64, the runtime checks for the
avx2 feature flag. On aarch64, NEON is mandatory and always used. There is
no configuration needed – the fastest available path is selected
automatically.
Self-Collision and the ACM
A robot can collide with itself. Consider a 7-DOF arm folding back on itself – the wrist might strike the shoulder. Self-collision checking tests every pair of robot link spheres against each other.
But adjacent links (connected by a joint) always geometrically overlap at the joint origin. Checking these pairs would report permanent false collisions. The Allowed Collision Matrix (ACM) solves this by listing link pairs that should be skipped:
#![allow(unused)]
fn main() {
use kinetic_collision::AllowedCollisionMatrix;
// Auto-built from URDF: skips all parent-child link pairs
let mut acm = AllowedCollisionMatrix::from_robot(&robot);
// Manually allow additional pairs (e.g., gripper + grasped object)
acm.allow("left_finger", "bolt");
}
The ACM is resolved to index-based pairs (ResolvedACM) for fast runtime
lookups. During self-collision checking, any pair listed in the ACM is
skipped without computing distances.
Two-Tier LOD: Speed When Possible, Precision When Needed
For most configurations, spheres give the right answer quickly. But near
obstacles, the conservatism of sphere approximation can be too aggressive –
rejecting configurations that are actually safe. The TwoTierCollisionChecker
combines both approaches:
- Sphere broadphase (~500ns): Check all sphere pairs using SIMD. If clearance is large, return immediately – no collision.
- Mesh narrowphase (<50us): If sphere distance is within the
refinement_margin, call the exact parry3d-f64 mesh backend for a precise answer.
#![allow(unused)]
fn main() {
use kinetic_collision::{TwoTierCollisionChecker, SphereGenConfig};
let checker = TwoTierCollisionChecker::new(
&robot,
&SphereGenConfig::default(),
0.02, // refinement margin: 2cm triggers exact check
0.01, // safety margin: 1cm added to all distances
);
}
In practice, 99% of queries are resolved by the sphere broadphase. The mesh narrowphase fires only for near-contact configurations, keeping the average check time under 1 microsecond.
Continuous Collision Detection (CCD)
Discrete collision checks test a single configuration. But a robot moving between two safe configurations might pass through an obstacle in between – the “tunneling” problem. CCD checks the entire swept volume between two configurations using conservative advancement:
- Start at t=0. Interpolate configuration, compute FK, check clearance.
- If clearance d > 0, advance time by d / v_max (maximum sphere velocity).
- Repeat until collision is found or t > 1.
The conservative guarantee: the advancement step never exceeds the clearance-to-velocity ratio, so collisions cannot be skipped.
#![allow(unused)]
fn main() {
use kinetic_collision::{ContinuousCollisionDetector, CCDConfig};
let ccd = ContinuousCollisionDetector::new(
&sphere_model, &environment, CCDConfig::default()
);
// Returns Some(t) if collision at time t in [0, 1], None if safe
let collision_time = ccd.check(&q_start, &q_end, &fk_poses_start);
}
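A one-dimensional toy version makes the conservative guarantee tangible (a point sphere sweeping toward a wall; not the kinetic CCD):

```rust
// Sketch: conservative advancement in 1D. The time step d / v_max can
// never jump past a contact, because the sphere cannot cover more than
// its current clearance in that much (normalized) time.
fn ccd_1d(start: f64, end: f64, wall: f64, radius: f64) -> Option<f64> {
    let v_max = (end - start).abs().max(1e-12); // distance covered over t in [0, 1]
    let mut t = 0.0;
    while t <= 1.0 {
        let pos = start + t * (end - start);
        let clearance = (wall - pos).abs() - radius;
        if clearance <= 1e-9 {
            return Some(t); // contact at normalized time t
        }
        t += clearance / v_max; // conservative step
    }
    None // the whole sweep is collision-free
}

fn main() {
    // Sphere of radius 0.1 sweeping from x=0 to x=2 hits a wall at x=1.
    println!("{:?}", ccd_1d(0.0, 2.0, 1.0, 0.1)); // Some(t) with t near 0.45
    // Sweeping from x=0 to x=0.5 stays clear.
    println!("{:?}", ccd_1d(0.0, 0.5, 1.0, 0.1)); // None
}
```

The 3D version replaces `|wall - pos|` with the CAPT clearance of the worst robot sphere and `v_max` with the fastest sphere's speed, but the loop structure is identical.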
See Also
- Motion Planning – how planners use collision checking to find safe paths
- Reactive Control – collision deceleration in the servo loop
- Robots and URDF – where collision geometry comes from
Motion Planning
Motion planning answers a deceptively simple question: given a robot’s current joint configuration and a desired goal, find a sequence of configurations that moves the robot from one to the other without hitting anything. The difficulty is that “without hitting anything” turns a geometric path problem into a search through a high-dimensional space riddled with forbidden regions.
graph LR
Start[Start Config] --> IK{Goal type?}
IK -->|Joints| RRT
IK -->|Pose| SolveIK[Solve IK] --> RRT
IK -->|Named| Lookup[Resolve Name] --> RRT
RRT[Planner<br/>RRT-Connect / RRT* / EST / ...] --> Short[Shortcutting<br/>50-80% reduction]
Short --> Smooth[Smoothing<br/>B-spline / cubic]
Smooth --> Time[Time Parameterization<br/>TOTP / trapezoidal / S-curve]
Time --> Valid{Validate<br/>limits?}
Valid -->|Pass| Traj[Trajectory]
Valid -->|Fail| Err[Error]
style Start fill:#4a9eff,color:#fff
style Traj fill:#3ddc84,color:#000
style Err fill:#ff6b6b,color:#fff
Configuration Space
A robot with n joints has n degrees of freedom. Each joint angle is one dimension, and the full set of joint angles defines a point in configuration space (C-space). For a 6-DOF arm, C-space is a 6-dimensional hypercube where each axis spans that joint’s limits.
An obstacle in the physical workspace (a table, a wall) maps to a complicated forbidden region in C-space. The shape of this region depends on the robot’s geometry and kinematics – a single box in the workspace can produce a tangled, non-convex volume in C-space. This is why planning directly in workspace coordinates does not work: the forbidden regions are too complex to represent analytically. Instead, planners explore C-space by sampling configurations and checking each one for collisions.
Sampling-Based Planning
The dominant approach is sampling-based planning: randomly sample configurations in C-space, check which ones are collision-free, and connect nearby free samples to build a path.
RRT (Rapidly-exploring Random Tree)
RRT grows a tree from the start configuration toward random samples:
- Sample a random configuration q_rand in C-space.
- Find the nearest node q_near in the tree.
- Extend from q_near toward q_rand by a step size.
- If the extension is collision-free, add the new node.
- Repeat until the tree reaches the goal.
RRT is probabilistically complete: given enough time, it will find a path if one exists. But it is not optimal – the path will be jagged and suboptimal because the random sampling produces unnecessary detours.
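The five steps above can be sketched in a 2D configuration space with a box-shaped forbidden region. Everything here is illustrative (including the tiny `Lcg` generator standing in for a real RNG); it is not kinetic's planner:

```rust
// Sketch: minimal RRT in a unit-square C-space with one forbidden box.
struct Lcg(u64);
impl Lcg {
    fn next_f64(&mut self) -> f64 {
        self.0 = self.0.wrapping_mul(6364136223846793005).wrapping_add(1442695040888963407);
        (self.0 >> 11) as f64 / (1u64 << 53) as f64
    }
}

fn dist2(a: [f64; 2], b: [f64; 2]) -> f64 {
    (a[0] - b[0]).powi(2) + (a[1] - b[1]).powi(2)
}

fn in_collision(q: [f64; 2]) -> bool {
    // forbidden box in C-space: the path must go around it (over y = 0.8)
    q[0] > 0.4 && q[0] < 0.6 && q[1] > 0.0 && q[1] < 0.8
}

fn rrt(start: [f64; 2], goal: [f64; 2], step: f64, max_iters: usize) -> Option<Vec<[f64; 2]>> {
    let mut nodes = vec![start];
    let mut parent = vec![0usize];
    let mut rng = Lcg(42);
    for _ in 0..max_iters {
        // 1. sample q_rand (with 10% goal bias, a common practical tweak)
        let q_rand = if rng.next_f64() < 0.1 { goal } else { [rng.next_f64(), rng.next_f64()] };
        // 2. find the nearest tree node q_near
        let near = (0..nodes.len())
            .min_by(|&a, &b| dist2(nodes[a], q_rand).partial_cmp(&dist2(nodes[b], q_rand)).unwrap())
            .unwrap();
        // 3. extend from q_near toward q_rand by `step`
        let d = dist2(nodes[near], q_rand).sqrt();
        if d < 1e-9 { continue; }
        let t = (step / d).min(1.0);
        let q_new = [nodes[near][0] + (q_rand[0] - nodes[near][0]) * t,
                     nodes[near][1] + (q_rand[1] - nodes[near][1]) * t];
        // 4. keep only collision-free extensions
        if in_collision(q_new) { continue; }
        nodes.push(q_new);
        parent.push(near);
        // 5. stop when the tree reaches the goal; walk parents back to the start
        if dist2(q_new, goal).sqrt() < step {
            let mut path = vec![goal];
            let mut i = nodes.len() - 1;
            while i != 0 { path.push(nodes[i]); i = parent[i]; }
            path.push(start);
            path.reverse();
            return Some(path);
        }
    }
    None
}

fn main() {
    let path = rrt([0.1, 0.5], [0.9, 0.5], 0.05, 50_000);
    println!("found: {}", path.is_some()); // a path over the box, with high probability
}
```

Note what this sketch omits: segment-level collision checking between q_near and q_new, joint limits, and any post-processing, which is why raw RRT paths are jagged.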
RRT-Connect (Bidirectional)
RRT-Connect grows two trees simultaneously – one from start, one from goal – and tries to connect them. This dramatically speeds up planning because the trees grow toward each other. In kinetic, RRT-Connect is the default planner and handles the vast majority of planning queries in under 50ms.
Why Plans Vary
Because RRT uses random sampling, running the same planner twice with the same start and goal produces different paths. This is expected behavior, not a bug. The randomness is what allows the planner to explore the space efficiently without exhaustive search. Post-processing (shortcutting and smoothing) reduces the variance by optimizing whichever path was found.
Kinetic’s 14 Planners
| Planner | Strategy | Best For |
|---|---|---|
| RRT-Connect | Bidirectional tree | General-purpose (default) |
| RRT* | Asymptotically optimal | Path cost minimization |
| BiRRT* | Bidirectional + optimal | Faster convergence to optimal |
| BiTRRT | Transition-based | Cost-aware exploration |
| EST | Expansive Space Tree | Narrow passages |
| KPIECE | Cell decomposition | High-dimensional spaces |
| PRM | Probabilistic Roadmap | Multi-query same environment |
| GCS | Graphs of Convex Sets | Globally optimal (pre-computed regions) |
| CHOMP | Gradient descent on cost | Smooth trajectories near initial guess |
| STOMP | Stochastic optimization | Derivative-free, handles discontinuities |
| Cartesian | Straight-line in task space | Linear end-effector motion |
| Constrained | RRT with manifold projection | Orientation/position constraints |
| Dual-arm | Coordinated two-arm | Bimanual manipulation |
| IRIS | Convex decomposition | Region pre-computation for GCS |
Post-Processing: Shortcutting and Smoothing
Raw RRT paths are collision-free but unnecessarily long. Kinetic applies two post-processing steps:
Shortcutting picks two random waypoints on the path and checks if the straight-line segment between them is collision-free. If it is, all intermediate waypoints are removed. Repeated shortcutting iterations progressively straighten the path.
Smoothing fits a B-spline or cubic spline through the remaining waypoints, producing a C2-continuous path without sharp corners. Smoothing is important for trajectory generation – sharp corners cause infinite acceleration which no physical robot can execute.
#![allow(unused)]
fn main() {
use kinetic_planning::{shortcut, smooth_bspline};
let shortened = shortcut(&path, 100, &collision_checker);
let smooth = smooth_bspline(&shortened, 0.01);
}
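The shortcutting loop itself is only a few lines. This sketch (an algorithm illustration, not kinetic's `shortcut()`) takes the segment-validity check as a closure and splices out every detour it can:

```rust
// Sketch: random shortcutting. Pick waypoints i < j; if the straight segment
// between them is collision-free, remove everything in between.
fn shortcut<F>(path: &mut Vec<[f64; 2]>, iters: usize, segment_free: F, seed: &mut u64)
where
    F: Fn([f64; 2], [f64; 2]) -> bool,
{
    for _ in 0..iters {
        if path.len() < 3 {
            return; // nothing left to remove
        }
        // two pseudo-random indices with j >= i + 2
        *seed = seed.wrapping_mul(6364136223846793005).wrapping_add(1);
        let i = (*seed >> 33) as usize % (path.len() - 2);
        *seed = seed.wrapping_mul(6364136223846793005).wrapping_add(1);
        let j = i + 2 + (*seed >> 33) as usize % (path.len() - i - 2);
        if segment_free(path[i], path[j]) {
            path.drain(i + 1..j); // splice out the detour
        }
    }
}

fn main() {
    // A jagged detour in an empty workspace collapses toward a straight line.
    let mut path = vec![[0.0, 0.0], [0.2, 0.5], [0.4, -0.3], [0.6, 0.4], [1.0, 0.0]];
    let mut seed = 7u64;
    shortcut(&mut path, 50, |_, _| true, &mut seed); // no obstacles: every segment is free
    println!("{} waypoints", path.len()); // reduced to 2: start and goal
}
```

With real obstacles, `segment_free` would discretize the segment and collision-check each sample, so only shortcuts that stay in free space are accepted.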
Optimization-Based Planners
Sampling-based planners find a path. Optimization-based planners find a better path by minimizing a cost function over the entire trajectory.
CHOMP (Covariant Hamiltonian Optimization for Motion Planning) treats the trajectory as a curve and runs gradient descent to minimize a sum of smoothness cost and obstacle cost. It needs a good initial guess (typically a straight line in C-space) and can get stuck in local minima.
STOMP (Stochastic Trajectory Optimization for Motion Planning) is derivative-free: it generates noisy trajectory samples, evaluates their cost, and updates toward lower-cost regions. STOMP handles cost functions that are not differentiable (e.g., binary collision checks).
GCS (Graphs of Convex Sets) takes a different approach entirely. It decomposes the free C-space into convex regions (using IRIS), builds a graph where nodes are regions and edges connect overlapping regions, then solves a convex optimization to find the globally optimal path through the graph. GCS is the only planner that provides global optimality guarantees, but it requires an expensive pre-computation step.
The Planner Facade
Kinetic provides a unified Planner API that handles algorithm selection,
collision setup, IK resolution, and post-processing behind a clean interface:
#![allow(unused)]
fn main() {
use kinetic_planning::{Planner, PlannerType};
use kinetic_core::{Goal, PlannerConfig};
let robot = Robot::from_name("ur5e")?;
let planner = Planner::new(&robot)?
.with_scene(&scene)
.with_planner_type(PlannerType::RRTConnect);
let result = planner.plan(&start_joints, &Goal::Joints(goal_joints))?;
println!("Found path with {} waypoints in {:?}",
result.num_waypoints(), result.planning_time);
}
For the simplest case, the one-liner plan() function handles everything:
#![allow(unused)]
fn main() {
use kinetic_planning::plan;
let result = plan("ur5e", &start_joints, &Goal::Joints(goal_joints))?;
}
Goal types are resolved automatically:
- Goal::Joints – plan directly in C-space
- Goal::Pose – solve IK first, then plan to the IK solution
- Goal::Named – look up a named configuration, then plan
- Goal::Relative – compute FK, apply offset, solve IK, then plan
PlannerConfig Presets
Three presets cover common use cases:
#![allow(unused)]
fn main() {
use kinetic_core::PlannerConfig;
// Default: 50ms timeout, 100 shortcut passes, smoothing enabled
let config = PlannerConfig::default();
// Real-time: 10ms timeout, 20 shortcut passes, no smoothing
let config = PlannerConfig::realtime();
// Offline: 500ms timeout, 500 shortcut passes, smoothing enabled
let config = PlannerConfig::offline();
}
The real-time preset sacrifices path quality for speed – useful when the planner runs inside a reactive loop that re-plans every cycle. The offline preset is for one-shot plans where execution quality matters more than planning speed.
See Also
- Collision Detection – how the planner checks configurations for safety
- Trajectory Generation – converting the geometric path to a timed trajectory
- Inverse Kinematics – how pose goals are resolved to joint configurations
- Reactive Control – real-time alternative when full planning is too slow
Trajectory Generation
A motion planner produces a path – a sequence of joint configurations with no notion of time. A real robot cannot execute a path directly. It needs to know when to be at each configuration: how fast to move, when to accelerate, and when to stop. Trajectory generation converts a geometric path into a trajectory – the same configurations with timestamps, velocities, and accelerations at each point.
graph LR
Path["Geometric Path<br/>[q0, q1, ..., qN]"] --> Profile{Profile}
Profile -->|Fastest| TOTP["TOTP<br/>time-optimal<br/>phase-plane"]
Profile -->|Industrial| Trap["Trapezoidal<br/>accel → cruise → decel"]
Profile -->|Smooth| Scurve["S-curve<br/>7-phase jerk-limited"]
Profile -->|Continuous| Spline["Cubic Spline<br/>C2 continuous"]
TOTP --> Timed
Trap --> Timed
Scurve --> Timed
Spline --> Timed
Timed["Timed Trajectory<br/>q(t), qd(t), qdd(t)"] --> Validate{Validate}
Validate -->|OK| Execute[Execute at 500Hz]
Validate -->|Violation| Fix[Slow down / replan]
style Path fill:#4a9eff,color:#fff
style Timed fill:#3ddc84,color:#000
style Fix fill:#ff6b6b,color:#fff
Why Timing Matters
Every robot joint has physical limits. Motors have maximum speeds. Gearboxes have maximum torques, which constrain acceleration. Violating these limits causes the motor controller to reject the command or the robot to fault.
A segment that moves a joint by 2 radians needs more time than one that moves it by 0.1 radians at the same velocity limit. Trajectory generation computes the correct timing for each segment so no joint ever exceeds its limits.
Trapezoidal Velocity Profile
The simplest industrial profile. Each segment accelerates at maximum acceleration, cruises at maximum velocity, then decelerates to rest:
velocity
^
|        ___________
|       /           \
|      /             \
|     /               \
+----/--------+--------\----> time
   accel    cruise    decel
If the segment is too short to reach maximum velocity, the cruise phase vanishes and the profile becomes triangular. The slowest joint determines the segment duration; all other joints are scaled to preserve the path shape.
#![allow(unused)]
fn main() {
use kinetic_trajectory::trapezoidal;
let path = vec![start_joints, via_point, goal_joints];
let traj = trapezoidal(&path, 1.5, 3.0)?; // max_vel=1.5 rad/s, max_accel=3.0 rad/s^2
}
Trapezoidal profiles are fast to compute and widely used in industrial automation. Their limitation is discontinuous acceleration at phase transitions – velocity is continuous but acceleration jumps instantaneously.
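The per-segment timing math, including the triangular fallback, is compact enough to sketch (per-joint timing only; kinetic's profile additionally scales all joints to the slowest one):

```rust
// Sketch: minimum time for one trapezoidal segment of length `dist`,
// falling back to a triangular profile when max velocity is never reached.
fn trapezoid_time(dist: f64, v_max: f64, a_max: f64) -> f64 {
    let d = dist.abs();
    // distance consumed by the accel and decel ramps together: v^2/(2a) each
    let d_ramp = v_max * v_max / a_max;
    if d >= d_ramp {
        // trapezoid: two ramps of v_max/a_max each, plus a cruise phase
        2.0 * v_max / a_max + (d - d_ramp) / v_max
    } else {
        // triangle: peak velocity sqrt(a_max * d) is reached instead of v_max
        2.0 * (d / a_max).sqrt()
    }
}

fn main() {
    // Long move: 2 rad at v_max = 1.5 rad/s, a_max = 3 rad/s^2.
    // Ramps take 1 s total; the remaining 1.25 rad cruises at 1.5 rad/s.
    println!("{:.4} s", trapezoid_time(2.0, 1.5, 3.0)); // 1.8333 s
    // Short move: 0.1 rad never reaches cruise speed (triangular profile).
    println!("{:.4} s", trapezoid_time(0.1, 1.5, 3.0)); // 0.3651 s
}
```

For a multi-joint segment, this time is computed per joint and the maximum wins; the other joints are then slowed so all of them arrive together, preserving the path shape.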
TOTP: Time-Optimal Time Parameterization
TOTP finds the fastest possible trajectory along a geometric path subject to per-joint velocity and acceleration limits. It uses phase-plane analysis to compute switching points between maximum acceleration and maximum deceleration for each joint independently.
The result is a trajectory where at least one joint is always at its limit. This is equivalent to MoveIt2’s TOTG.
#![allow(unused)]
fn main() {
use kinetic_trajectory::totp;
let traj = totp(
&path,
&velocity_limits, // per-joint max velocity (rad/s)
&acceleration_limits, // per-joint max acceleration (rad/s^2)
0.001, // path resolution
)?;
}
TOTP is the default choice when execution speed matters. For delicate tasks, prefer jerk-limited profiles.
Jerk-Limited S-Curve
Trapezoidal profiles produce infinite jerk (rate of change of acceleration) at phase boundaries. This causes vibration and wear. The S-curve profile constrains jerk to a finite maximum, producing seven phases:
Phase 1: Jerk+ (acceleration ramps up)
Phase 2: Const accel (acceleration holds at max)
Phase 3: Jerk- (acceleration ramps down to zero)
Phase 4: Cruise (constant velocity, zero acceleration)
Phase 5: Jerk- (deceleration ramps up)
Phase 6: Const decel (deceleration holds at max)
Phase 7: Jerk+ (deceleration ramps down to zero)
For short segments, some phases may vanish. The result is smoother motion with continuous acceleration – critical for painting, polishing, and CNC.
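For a long move, the phase durations follow directly from the limits. A sketch of the acceleration-ramp timing (illustrative helper; assumes both a_max and v_max are actually reached):

```rust
// Duration of the acceleration ramp (phases 1-3) for an S-curve that
// reaches both a_max and v_max: two jerk phases of t_j = a_max / j_max
// each, plus a constant-acceleration phase in between.
fn scurve_accel_time(v_max: f64, a_max: f64, j_max: f64) -> f64 {
    let t_j = a_max / j_max;       // time for acceleration to ramp up (or down)
    let t_a = v_max / a_max - t_j; // constant-acceleration time
    assert!(t_a >= 0.0, "a_max not reachable; profile degenerates");
    2.0 * t_j + t_a
}

fn main() {
    // v_max = 2 rad/s, a_max = 5 rad/s^2, j_max = 50 rad/s^3:
    // t_j = 0.1 s, t_a = 0.3 s, total ramp = 0.5 s.
    println!("{:.3} s", scurve_accel_time(2.0, 5.0, 50.0)); // 0.500 s
}
```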
#![allow(unused)]
fn main() {
use kinetic_trajectory::jerk_limited;
let traj = jerk_limited(
&path,
2.0, // max velocity (rad/s)
5.0, // max acceleration (rad/s^2)
50.0, // max jerk (rad/s^3)
)?;
}
Cubic Spline Interpolation
Cubic splines fit cubic polynomials through waypoints, producing a
C2-continuous trajectory. Time is distributed proportionally to segment
arc-length. Unlike profile-based methods, splines do not enforce limits
during fitting – validate the result with TrajectoryValidator.
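The arc-length-proportional timing is easy to sketch. The helper below (illustrative, not the kinetic API) splits a total duration across segments by joint-space chord length:

```rust
// Distribute a total duration across spline segments in proportion to
// joint-space chord length between consecutive waypoints.
fn segment_times(path: &[Vec<f64>], total: f64) -> Vec<f64> {
    let lengths: Vec<f64> = path.windows(2)
        .map(|w| w[0].iter().zip(&w[1]).map(|(a, b)| (a - b).powi(2)).sum::<f64>().sqrt())
        .collect();
    let sum: f64 = lengths.iter().sum();
    lengths.iter().map(|l| total * l / sum).collect()
}

fn main() {
    let path = vec![vec![0.0, 0.0], vec![1.0, 0.0], vec![1.0, 3.0]];
    // Segment lengths 1.0 and 3.0, so a 4 s trajectory splits 1 s / 3 s.
    println!("{:?}", segment_times(&path, 4.0)); // [1.0, 3.0]
}
```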
#![allow(unused)]
fn main() {
use kinetic_trajectory::cubic_spline_time;
// Auto-compute duration from path length
let traj = cubic_spline_time(&path, None, Some(&velocity_limits))?;
// Or specify total duration explicitly
let traj = cubic_spline_time(&path, Some(3.0), None)?;
}
Trajectory Blending
Robots rarely execute a single motion. A pick-and-place cycle chains multiple trajectories. Without blending, the robot stops at each boundary. Blending creates smooth parabolic transitions so the robot never pauses:
#![allow(unused)]
fn main() {
use kinetic_trajectory::{blend, blend_sequence};
// Blend two trajectories with a 0.2s transition
let combined = blend(&traj_approach, &traj_grasp, 0.2)?;
// Or blend an entire sequence
let full_cycle = blend_sequence(&[traj1, traj2, traj3, traj4], 0.15)?;
}
The blend region spans blend_duration seconds centered on the connection
point. Within this region, position is interpolated parabolically to
ensure continuous velocity and acceleration.
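The blend math reduces to constant acceleration across the window. A scalar sketch (illustrative, not the kinetic API) of how a parabolic blend joins an incoming velocity v1 to an outgoing velocity v2:

```rust
// Within the blend window [0, tau], apply the constant acceleration
// (v2 - v1) / tau so velocity is continuous at both ends.
fn blend_position(p0: f64, v1: f64, v2: f64, tau: f64, t: f64) -> f64 {
    p0 + v1 * t + 0.5 * (v2 - v1) / tau * t * t
}

fn blend_velocity(v1: f64, v2: f64, tau: f64, t: f64) -> f64 {
    v1 + (v2 - v1) * t / tau
}

fn main() {
    let (v1, v2, tau) = (1.0, -0.5, 0.2);
    println!("{:.2}", blend_velocity(v1, v2, tau, 0.0)); // 1.00: enters at v1
    println!("{:.2}", blend_velocity(v1, v2, tau, tau)); // -0.50: exits at v2
}
```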
Sampling a Trajectory
TimedTrajectory supports interpolated sampling at arbitrary times via
sample_at(). A real-time controller calls this at its control rate:
#![allow(unused)]
fn main() {
use std::time::Duration;
let traj = trapezoidal(&path, 1.5, 3.0)?;
let dt = Duration::from_secs_f64(0.002); // 500Hz
let mut t = Duration::ZERO;
while t <= traj.duration() {
let wp = traj.sample_at(t);
send_to_robot(&wp.positions);
t += dt;
}
}
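Under the hood, sampling amounts to locating the bracketing waypoints and interpolating between them. A scalar sketch (linear interpolation; a real implementation may use higher-order blending):

```rust
// Sample a timed trajectory at time `t` by linear interpolation between
// the two bracketing waypoints, clamping outside the time range.
fn sample(times: &[f64], positions: &[f64], t: f64) -> f64 {
    if t <= times[0] { return positions[0]; }
    if t >= *times.last().unwrap() { return *positions.last().unwrap(); }
    let i = times.windows(2).position(|w| t >= w[0] && t < w[1]).unwrap();
    let alpha = (t - times[i]) / (times[i + 1] - times[i]);
    positions[i] + alpha * (positions[i + 1] - positions[i])
}

fn main() {
    let times = [0.0, 1.0, 2.0];
    let pos = [0.0, 0.5, 2.0];
    // Halfway through the second segment: 0.5 + 0.5 * (2.0 - 0.5).
    println!("{}", sample(&times, &pos, 1.5)); // 1.25
}
```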
Validation
Before sending a trajectory to real hardware, TrajectoryValidator checks
every waypoint against position limits, velocity limits, acceleration limits,
and optionally jerk bounds. A 5% safety factor (configurable) is applied
by default so that limits are not hit exactly at their maximum:
#![allow(unused)]
fn main() {
use kinetic_trajectory::{TrajectoryValidator, ValidationConfig};
let validator = TrajectoryValidator::new(
&position_lower,
&position_upper,
&velocity_limits,
&acceleration_limits,
ValidationConfig {
safety_factor: 1.05, // 5% margin
max_position_jump: 0.5, // max 0.5 rad between waypoints
max_jerk: Some(100.0), // optional jerk bound (rad/s^3)
},
);
match validator.validate(&traj) {
Ok(()) => { /* safe to execute */ }
Err(violations) => {
for v in &violations {
eprintln!("Joint {} at waypoint {}: {:?} (actual={:.3}, limit={:.3})",
v.joint_index, v.waypoint_index,
v.violation_type, v.actual_value, v.limit_value);
}
}
}
}
Running validation before execution is the last line of defense against sending a dangerous trajectory to hardware.
See Also
- Motion Planning – how geometric paths are generated
- Reactive Control – real-time control that bypasses trajectory generation
- Forward Kinematics – computing end-effector motion from joint trajectories
Reactive Control
Motion planning computes a complete path before execution begins. If a new obstacle appears mid-motion, the robot does not react. For teleoperation, visual servoing, and dynamic environments, this is unacceptable.
Reactive control operates in a tight loop: read state, compute a small joint-space correction, apply it, repeat. The controller generates commands at 500Hz, fast enough that each correction is small and motion appears smooth.
graph LR
subgraph "RMP (Riemannian Motion Policies)"
P1[Reach Target<br/>gain=10] --> Blend
P2[Avoid Obstacles<br/>gain=25] --> Blend
P3[Joint Limits<br/>gain=15] --> Blend
P4[Singularity<br/>gain=5] --> Blend
P5[Self-Collision<br/>gain=20] --> Blend
P6[Damping<br/>coeff=0.5] --> Blend
Blend["Metric-weighted blend<br/>a = (ΣMi)⁻¹ Σ(Mi·ai)"]
end
Input[Twist / Jog / Pose] --> Jacobian[J⁻¹ pseudoinverse]
Jacobian --> Blend
Blend --> Clamp[Velocity / accel limits]
Clamp --> Filter[EMA / Butterworth]
Filter --> Cmd["Joint Command<br/>q, qd, qdd"]
Cmd -->|2ms| Loop((500Hz loop))
Loop --> Input
style Blend fill:#4a9eff,color:#fff
style Cmd fill:#3ddc84,color:#000
When to Use Reactive Control
Use planning when start, goal, and environment are known in advance. Use reactive control when:
- Teleoperation: a human sends velocity commands via joystick or spacemouse.
- Dynamic obstacles: obstacles move during execution (people, other robots).
- Visual servoing: a camera provides a target pose that updates every frame.
- Force-guided tasks: assembly or insertion where contact forces modify motion.
The Servo Loop
Servo converts high-level commands into safe joint velocities through a
pipeline: Jacobian inverse, velocity/acceleration limits, collision
deceleration. Three input modes are supported:
- Twist: Cartesian velocity (6D twist mapped to joints via Jacobian).
- JointJog: Direct velocity command for a single joint.
- PoseTracking: Proportional control toward a moving target pose.
#![allow(unused)]
fn main() {
use kinetic_reactive::{Servo, ServoConfig, InputType};
let servo = Servo::new(&robot, &scene, ServoConfig::default())?;
// Teleoperation: send a Cartesian twist
let cmd = servo.send_twist(&twist)?;
// cmd.positions, cmd.velocities, cmd.accelerations
// Or jog a single joint
let cmd = servo.send_joint_jog(2, 0.5)?; // joint 2 at 0.5 rad/s
// Or track a moving target
let cmd = servo.track_pose(&target_pose)?;
}
Singularity Handling
At a singularity, the Jacobian loses rank and some Cartesian directions become unachievable. The naive pseudoinverse produces infinite joint velocities. Kinetic uses the damped pseudoinverse:
J_damped = J^T (J J^T + lambda^2 I)^{-1}
The damping factor lambda prevents blow-up. When manipulability drops below
the threshold (default: 0.02), damping increases and tracking accuracy is
sacrificed for safety. ServoState reports proximity:
#![allow(unused)]
fn main() {
let state = servo.state();
if state.is_near_singularity {
warn!("Manipulability: {:.4} -- approaching singularity", state.manipulability);
}
}
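A scalar sketch makes the effect of the damping term concrete (illustrative only; the real computation is a matrix one):

```rust
// 1-DOF analogue of the damped pseudoinverse: j / (j^2 + lambda^2)
// stays bounded as j -> 0, while the plain inverse 1/j blows up.
fn damped_inverse(j: f64, lambda: f64) -> f64 {
    j / (j * j + lambda * lambda)
}

fn main() {
    let lambda = 0.05;
    // Far from singularity: damped inverse is close to the plain inverse.
    println!("{:.3}", damped_inverse(1.0, lambda));   // 0.998
    // Near singularity: plain inverse would be 1000; damped stays small.
    println!("{:.3}", damped_inverse(0.001, lambda)); // 0.400
}
```

The price of boundedness is tracking error: near the singularity the commanded velocity no longer fully achieves the requested Cartesian motion, which is exactly the accuracy-for-safety trade described above.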
Collision Deceleration
The servo checks obstacle proximity at a configurable rate (default: 100Hz). When the nearest obstacle is close, the controller intervenes:
- Slowdown zone (default: 15cm): scale down commanded velocities proportionally. At 15cm, full speed. At 3cm, nearly stopped.
- Emergency stop (default: 3cm): zero all velocities immediately.
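The slowdown zone is a linear ramp between the two distances. A sketch of the scaling rule (illustrative helper; the actual logic lives inside Servo):

```rust
// Velocity scale factor: 1.0 at the slowdown distance, 0.0 at the stop
// distance, linear in between.
fn velocity_scale(dist: f64, slowdown: f64, stop: f64) -> f64 {
    ((dist - stop) / (slowdown - stop)).clamp(0.0, 1.0)
}

fn main() {
    println!("{:.2}", velocity_scale(0.15, 0.15, 0.03)); // 1.00: edge of slowdown zone
    println!("{:.2}", velocity_scale(0.09, 0.15, 0.03)); // 0.50: halfway
    println!("{:.2}", velocity_scale(0.03, 0.15, 0.03)); // 0.00: emergency stop
}
```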
The servo returns a ServoError::EmergencyStop so the caller can distinguish a stop from a normal command.
These distances are configurable via ServoConfig:
#![allow(unused)]
fn main() {
let config = ServoConfig {
slowdown_distance: 0.15, // start slowing at 15cm
stop_distance: 0.03, // emergency stop at 3cm
collision_check_hz: 100.0, // check rate
..ServoConfig::default()
};
}
RMP: Riemannian Motion Policies
Real tasks require balancing multiple competing objectives simultaneously: reach the target, avoid obstacles, stay away from joint limits, avoid singularities. RMP provides a principled framework for combining these. Each objective is a policy that produces:
- A desired acceleration in its own task space.
- A Riemannian metric tensor that weights how important that acceleration is.
Policies are pulled back to joint space via the Jacobian, then combined via metric-weighted averaging:
a_combined = (Sum M_i)^{-1} Sum(M_i * a_i)
The metric tensors encode directional importance. An obstacle avoidance policy has a high metric toward the obstacle and near-zero elsewhere, so it strongly prevents motion into the obstacle without interfering with motion parallel to its surface.
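With scalar metrics the blend rule reduces to a weighted average, which makes the behavior easy to see (illustrative sketch, not the kinetic API):

```rust
// Scalar version of a = (sum M_i)^-1 * sum(M_i * a_i): each policy
// contributes (metric, desired acceleration), and higher-metric
// policies dominate the blend.
fn blend(policies: &[(f64, f64)]) -> f64 {
    let num: f64 = policies.iter().map(|(m, a)| m * a).sum();
    let den: f64 = policies.iter().map(|(m, _)| *m).sum();
    num / den
}

fn main() {
    // Reach policy wants +2.0 with metric 10; avoidance wants -1.0 with
    // metric 30. The avoidance policy wins: (10*2 + 30*-1) / 40 = -0.25.
    println!("{}", blend(&[(10.0, 2.0), (30.0, -1.0)])); // -0.25
}
```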
#![allow(unused)]
fn main() {
use kinetic_reactive::{RMP, PolicyType};
let mut rmp = RMP::new(&robot)?;
// Attract end-effector toward target
rmp.add(PolicyType::ReachTarget {
target_pose: goal_pose,
gain: 10.0,
});
// Repel from scene obstacles
rmp.add(PolicyType::AvoidObstacles {
scene: scene.clone(),
influence_distance: 0.3,
gain: 20.0,
});
// Stay away from joint limits
rmp.add(PolicyType::JointLimitAvoidance {
margin: 0.1, // radians from limit
gain: 5.0,
});
// Avoid singular configurations
rmp.add(PolicyType::SingularityAvoidance {
threshold: 0.05,
gain: 15.0,
});
// Velocity damping for stability
rmp.add(PolicyType::Damping { coefficient: 0.5 });
// Compute combined command at 500Hz
let dt = 0.002;
let cmd = rmp.compute(&joint_positions, &joint_velocities, dt)?;
// cmd.positions, cmd.velocities, cmd.accelerations
}
ServoConfig Presets
Three presets cover common scenarios:
| Preset | Rate | Input | Slowdown | Stop | Use Case |
|---|---|---|---|---|---|
| teleop() | 500Hz | Twist | 15cm | 3cm | Joystick, spacemouse |
| tracking() | 500Hz | PoseTracking | 10cm | 3cm | Following a target |
| precise() | 500Hz | Twist | 8cm | 1.5cm | Assembly, insertion |
#![allow(unused)]
fn main() {
// Teleop (default)
let config = ServoConfig::teleop();
// Pose tracking with tighter collision checking
let config = ServoConfig::tracking();
// Precise manipulation with small movements per tick
let config = ServoConfig::precise();
}
The precise preset limits movement to 0.005 rad per tick with tighter
singularity damping and a 1.5cm stop distance – appropriate for tasks
where sub-millimeter positioning matters.
The 500Hz Control Pattern
#![allow(unused)]
fn main() {
let mut servo = Servo::new(&robot, &scene, ServoConfig::teleop())?;
let dt = Duration::from_secs_f64(0.002); // 500Hz
loop {
let tick_start = Instant::now();
let twist = read_twist_command();
match servo.send_twist(&twist) {
Ok(cmd) => send_to_robot(&cmd.positions),
Err(ServoError::EmergencyStop { distance, .. }) => {
warn!("Emergency stop at {distance:.3}m");
}
Err(e) => return Err(e.into()),
}
let elapsed = tick_start.elapsed();
if elapsed < dt { std::thread::sleep(dt - elapsed); }
}
}
See Also
- Motion Planning – offline planning as an alternative to reactive control
- Collision Detection – how collision deceleration detects nearby obstacles
- Inverse Kinematics – the Jacobian and singularity concepts underlying servo control
- Trajectory Generation – converting planned paths to timed trajectories
FK and IK Roundtrip
What You’ll Learn
- Load a robot from a URDF string
- Extract a kinematic chain between two links
- Compute forward kinematics (FK) to get end-effector pose
- Solve inverse kinematics (IK) to recover joint angles
- Verify FK/IK roundtrip accuracy
Prerequisites
Overview
Forward kinematics maps joint angles to an end-effector pose. Inverse kinematics does the reverse: given a desired pose, find joint angles that reach it. This tutorial walks through both directions using a 7-DOF Panda-like arm, showing the complete FK-to-IK roundtrip and verifying sub-millimeter accuracy.
Step 1: Load the Robot
use kinetic::prelude::*;
fn main() -> kinetic::core::Result<()> {
let robot = Robot::from_urdf_string(PANDA_URDF)?;
println!("Loaded '{}' — {} DOF", robot.name, robot.dof);
What this does: Parses a URDF XML string into a Robot struct containing links, joints, and joint limits.
Why: Robot is the central data structure in kinetic. It holds all the geometric and kinematic information needed for FK and IK. In production, you would use Robot::from_urdf("path/to/robot.urdf") or Robot::from_name("franka_panda") to load from a file or the built-in robot library.
Step 2: Extract the Kinematic Chain
#![allow(unused)]
fn main() {
let chain = KinematicChain::extract(&robot, "panda_link0", "panda_link8")?;
}
What this does: Walks the robot’s link tree from panda_link0 (base) to panda_link8 (end-effector flange) and extracts the ordered list of active joints along that path.
Why: A robot may have multiple chains (e.g., a dual-arm robot). KinematicChain isolates the joints relevant to one specific base-to-tip path, so FK and IK operate on exactly the right degrees of freedom.
Step 3: Compute Forward Kinematics
#![allow(unused)]
fn main() {
let q = vec![0.3, -0.5, 0.2, -1.5, 0.1, 1.0, 0.5];
let start = std::time::Instant::now();
let pose = forward_kinematics(&robot, &chain, &q)?;
let fk_time = start.elapsed();
let t = pose.translation();
println!(
"FK → position: ({:.4}, {:.4}, {:.4}) [{:?}]",
t.x, t.y, t.z, fk_time
);
}
What this does: Multiplies the chain of homogeneous transforms (one per joint) to compute the 6D pose (position + orientation) of the end-effector. The result is a Pose (an Isometry3<f64>) in the base frame.
Why: FK is the foundation of robotics — it tells you where the end-effector is given the current joint state. Kinetic’s FK runs in microseconds, making it fast enough for real-time control loops.
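A planar 2-link toy model shows the same transform-chain idea in a dozen lines (illustrative only; kinetic composes full 3-D isometries):

```rust
// Planar 2-link FK: compose each joint's rotation with its link
// translation to get the tip position in the base frame.
fn fk_2link(l1: f64, l2: f64, q1: f64, q2: f64) -> (f64, f64) {
    let x = l1 * q1.cos() + l2 * (q1 + q2).cos();
    let y = l1 * q1.sin() + l2 * (q1 + q2).sin();
    (x, y)
}

fn main() {
    use std::f64::consts::FRAC_PI_2;
    // Shoulder up 90 deg, elbow back 90 deg: tip at (1, 1).
    let (x, y) = fk_2link(1.0, 1.0, FRAC_PI_2, -FRAC_PI_2);
    println!("({x:.3}, {y:.3})"); // (1.000, 1.000)
}
```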
Step 4: Solve Inverse Kinematics
#![allow(unused)]
fn main() {
let config = IKConfig::dls()
.with_seed(robot.mid_configuration().to_vec())
.with_max_iterations(300);
let start = std::time::Instant::now();
let solution = solve_ik(&robot, &chain, &pose, &config)?;
let ik_time = start.elapsed();
println!(
"IK → converged in {} iters, pos_err={:.2e}, orient_err={:.2e} [{:?}]",
solution.iterations, solution.position_error, solution.orientation_error, ik_time
);
}
What this does: Uses Damped Least Squares (DLS) to iteratively find joint angles that place the end-effector at the target pose. The seed — mid_configuration() — gives the solver a starting guess at the midpoint of each joint’s range.
Why: DLS is the general-purpose workhorse IK solver. It handles arbitrary DOF robots and converges reliably with appropriate damping. The IKSolution struct reports convergence status, iteration count, and residual errors so you can decide whether to trust the result before commanding hardware.
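The DLS update itself fits in a short sketch. The toy solver below runs the damped step dq = J^T (J J^T + lambda^2 I)^-1 e on a planar 2-link arm with unit links (illustrative only; kinetic's solver handles full 6-D poses and joint limits):

```rust
// Planar 2-link FK with unit link lengths.
fn fk(q: [f64; 2]) -> (f64, f64) {
    (q[0].cos() + (q[0] + q[1]).cos(), q[0].sin() + (q[0] + q[1]).sin())
}

// Iterate the damped least-squares step toward a 2-D position target.
fn dls_solve(target: (f64, f64), mut q: [f64; 2]) -> ([f64; 2], f64) {
    let lambda2 = 0.01; // damping factor squared
    for _ in 0..300 {
        let (x, y) = fk(q);
        let (ex, ey) = (target.0 - x, target.1 - y); // task-space error
        let (s1, c1) = q[0].sin_cos();
        let (s12, c12) = (q[0] + q[1]).sin_cos();
        // Jacobian of the planar arm.
        let j = [[-s1 - s12, -s12], [c1 + c12, c12]];
        // Solve (J J^T + lambda^2 I) z = e, then dq = J^T z.
        let a = j[0][0] * j[0][0] + j[0][1] * j[0][1] + lambda2;
        let b = j[0][0] * j[1][0] + j[0][1] * j[1][1];
        let d = j[1][0] * j[1][0] + j[1][1] * j[1][1] + lambda2;
        let det = a * d - b * b;
        let z = [(d * ex - b * ey) / det, (a * ey - b * ex) / det];
        q[0] += j[0][0] * z[0] + j[1][0] * z[1];
        q[1] += j[0][1] * z[0] + j[1][1] * z[1];
    }
    let (x, y) = fk(q);
    let err = ((target.0 - x).powi(2) + (target.1 - y).powi(2)).sqrt();
    (q, err)
}

fn main() {
    // Reachable target, seed away from singularities.
    let (_q, err) = dls_solve((1.2, 0.5), [0.3, 0.8]);
    println!("residual: {err:.2e}"); // tiny for a well-seeded, reachable target
}
```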
Step 5: Verify the Roundtrip
#![allow(unused)]
fn main() {
let recovered_pose = forward_kinematics(&robot, &chain, &solution.joints)?;
let rt = recovered_pose.translation();
println!("Roundtrip FK → ({:.4}, {:.4}, {:.4})", rt.x, rt.y, rt.z);
let pos_diff = (t - rt).norm();
println!("Position roundtrip error: {:.2e} m", pos_diff);
Ok(())
}
}
What this does: Runs FK on the IK solution to get a second pose, then computes the Euclidean distance between the original pose and the recovered pose.
Why: The roundtrip test is the gold standard for IK validation. A position error below 1e-4 meters (0.1 mm) confirms the solver converged to a valid solution. Always verify before sending commands to real hardware.
Complete Code
use kinetic::prelude::*;
const PANDA_URDF: &str = r#"<?xml version="1.0"?>
<robot name="panda_like">
<!-- 7 revolute joints from panda_link0 to panda_link8 -->
<!-- (full URDF omitted for brevity — see examples/hello_fk_ik.rs) -->
</robot>
"#;
fn main() -> kinetic::core::Result<()> {
// 1. Load robot
let robot = Robot::from_urdf_string(PANDA_URDF)?;
println!("Loaded '{}' — {} DOF", robot.name, robot.dof);
// 2. Extract kinematic chain
let chain = KinematicChain::extract(&robot, "panda_link0", "panda_link8")?;
// 3. FK: joint angles → end-effector pose
let q = vec![0.3, -0.5, 0.2, -1.5, 0.1, 1.0, 0.5];
let pose = forward_kinematics(&robot, &chain, &q)?;
let t = pose.translation();
println!("FK → ({:.4}, {:.4}, {:.4})", t.x, t.y, t.z);
// 4. IK: end-effector pose → joint angles
let config = IKConfig::dls()
.with_seed(robot.mid_configuration().to_vec())
.with_max_iterations(300);
let solution = solve_ik(&robot, &chain, &pose, &config)?;
println!("IK → {} iters, err={:.2e}", solution.iterations, solution.position_error);
// 5. Verify roundtrip
let recovered = forward_kinematics(&robot, &chain, &solution.joints)?;
let pos_diff = (t - recovered.translation()).norm();
println!("Roundtrip error: {:.2e} m", pos_diff);
Ok(())
}
What You Learned
- Robot::from_urdf_string() parses a URDF into kinetic's internal representation
- KinematicChain::extract() isolates a base-to-tip joint chain
- forward_kinematics() computes end-effector pose in microseconds
- IKConfig::dls() configures the Damped Least Squares solver with seed and iteration limits
- solve_ik() returns an IKSolution with convergence info and residual errors
- FK/IK roundtrip error should be below 1e-4 m for a valid solution
Try This
- Change the joint angles q and observe how the FK pose changes
- Try IKConfig::fabrik() instead of IKConfig::dls() and compare convergence speed
- Use robot.mid_configuration() vs vec![0.0; 7] as the seed and see how it affects iteration count
- Load a built-in robot with Robot::from_name("ur5e") instead of inline URDF
Next
- IK Solver Selection — choosing the right solver for your robot
- Multiple IK Solutions — finding diverse solutions with random seeds
Planning Basics
What You’ll Learn
- Plan a joint-space motion with the plan() one-liner
- Use the Planner builder for fine-grained control
- Plan to a Cartesian pose goal instead of joint angles
- Interpret planning results (waypoints, path length, timing)
Prerequisites
Overview
Kinetic offers three ways to plan a motion, each trading convenience for control.
The plan() free function is a one-liner for quick prototyping. The Planner
builder lets you configure the algorithm, attach a collision scene, and set
real-time constraints. Pose goals let you specify a Cartesian target and let the
planner handle IK internally. This tutorial demonstrates all three using a UR5e.
Step 1: One-Liner Planning
use kinetic::prelude::*;
fn main() -> kinetic::core::Result<()> {
let start = vec![0.0, -1.0, 0.8, 0.0, 0.0, 0.0];
let goal = Goal::joints([1.0, -0.5, 0.3, 0.2, -0.3, 0.5]);
let result = plan("ur5e", &start, &goal)?;
println!(
"One-liner: {} waypoints, {:.1?}, path length {:.3}",
result.num_waypoints(),
result.planning_time,
result.path_length(),
);
What this does: Loads the UR5e by name, constructs a planner with default settings, and finds a collision-free path from start to goal in joint space.
Why: The plan() free function is the fastest way to get a result. It handles robot loading, planner construction, and solving in a single call. Use it for scripts, tests, and prototyping.
Step 2: Planner Builder
#![allow(unused)]
fn main() {
let robot = Robot::from_name("ur5e")?;
let planner = Planner::new(&robot)?.with_config(PlannerConfig::realtime());
let result = planner.plan(&start, &goal)?;
println!(
"Builder: {} waypoints, {:.1?}, path length {:.3}",
result.num_waypoints(),
result.planning_time,
result.path_length(),
);
}
What this does: Creates a Planner explicitly, applying PlannerConfig::realtime() which tunes the RRT algorithm for fast (sub-10ms) planning at the cost of path optimality.
Why: The builder pattern gives you control over the planning algorithm, timeout, scene, and quality trade-offs. PlannerConfig::realtime() uses fewer iterations and a shorter planning horizon, suitable for replanning in servo loops. The default config balances quality and speed.
Step 3: Pose Goal (Cartesian Target)
#![allow(unused)]
fn main() {
let target_joints = vec![0.5, -0.8, 0.5, 0.1, -0.1, 0.3];
let target_pose = planner.fk(&target_joints)?;
let t = target_pose.translation();
println!(
"Planning to Cartesian pose: ({:.3}, {:.3}, {:.3})",
t.x, t.y, t.z
);
let result = planner.plan(&start, &Goal::Pose(target_pose))?;
println!(
"Pose goal: {} waypoints, {:.1?}",
result.num_waypoints(),
result.planning_time,
);
Ok(())
}
}
What this does: Computes a Cartesian target from known joint values (via FK), then plans to that pose using Goal::Pose(...). The planner internally solves IK to convert the pose into a joint-space goal before running the RRT.
Why: In real applications, goals come from perception (camera-detected object poses) or task specifications (place the tool at position X). Goal::Pose lets you plan directly to a world-frame target without manually solving IK first.
Complete Code
use kinetic::prelude::*;
fn main() -> kinetic::core::Result<()> {
println!("=== KINETIC Simple Planning ===\n");
// --- Method 1: One-liner ---
let start = vec![0.0, -1.0, 0.8, 0.0, 0.0, 0.0];
let goal = Goal::joints([1.0, -0.5, 0.3, 0.2, -0.3, 0.5]);
let result = plan("ur5e", &start, &goal)?;
println!(
"One-liner: {} waypoints, {:.1?}, path length {:.3}",
result.num_waypoints(), result.planning_time, result.path_length(),
);
// --- Method 2: Planner builder ---
let robot = Robot::from_name("ur5e")?;
let planner = Planner::new(&robot)?.with_config(PlannerConfig::realtime());
let result = planner.plan(&start, &goal)?;
println!(
"Builder: {} waypoints, {:.1?}, path length {:.3}",
result.num_waypoints(), result.planning_time, result.path_length(),
);
// --- Method 3: Pose goal ---
let target_joints = vec![0.5, -0.8, 0.5, 0.1, -0.1, 0.3];
let target_pose = planner.fk(&target_joints)?;
let t = target_pose.translation();
println!("\nPlanning to Cartesian pose: ({:.3}, {:.3}, {:.3})", t.x, t.y, t.z);
let result = planner.plan(&start, &Goal::Pose(target_pose))?;
println!("Pose goal: {} waypoints, {:.1?}", result.num_waypoints(), result.planning_time);
Ok(())
}
What You Learned
- plan("robot_name", &start, &goal) is the fastest way to get a path
- Planner::new(&robot)?.with_config(...) gives full control over algorithm parameters
- PlannerConfig::realtime() optimizes for low-latency planning
- Goal::joints(...) targets a specific joint configuration
- Goal::Pose(...) targets a Cartesian pose, with IK solved internally
- PlanResult reports waypoint count, planning time, and path length
Try This
- Compare PlannerConfig::default() vs PlannerConfig::realtime() and measure waypoint count and planning time
- Plan the same start/goal multiple times and observe how RRT's randomness produces different paths
- Try Goal::Named("home".into()) if the robot has named configurations in its URDF
- Use result.waypoints to iterate over the planned joint-space path
Next
- Planning with Obstacles — adding collision objects to the scene
- Pick and Place — full planning-to-execution pipeline
Planning with Obstacles
What You’ll Learn
- Create a collision Scene with box and cylinder obstacles
- Plan collision-free paths around scene objects
- Apply trapezoidal time parameterization to produce executable trajectories
- Inspect timed waypoints with timestamps and velocities
Prerequisites
Overview
Real-world robots operate among tables, walls, and objects. Kinetic’s Scene
holds geometric obstacles that the planner avoids during path search. After
planning, a raw waypoint path needs time parameterization to become a trajectory
with timestamps, velocities, and accelerations that a controller can execute.
This tutorial builds a scene with three obstacles, plans around them, and applies
a trapezoidal velocity profile.
Step 1: Load Robot and Create Scene
use kinetic::prelude::*;
fn main() -> kinetic::core::Result<()> {
let robot = Robot::from_name("ur5e")?;
println!("Loaded '{}' ({} DOF)", robot.name, robot.dof);
let mut scene = Scene::new(&robot)?;
// Table surface
scene.add_box("table", [0.4, 0.3, 0.01], [0.5, 0.0, 0.0]);
// Box obstacle on the table
scene.add_box("box", [0.05, 0.05, 0.1], [0.4, 0.0, 0.11]);
// Cylindrical obstacle
scene.add_cylinder("pipe", 0.03, 0.15, [0.3, 0.1, 0.15]);
println!("Scene: {} obstacles", scene.num_objects());
What this does: Creates a Scene bound to the UR5e, then adds three geometric primitives. add_box takes half-extents [x, y, z] and a center position. add_cylinder takes radius, half-height, and center.
Why: The scene is the planner’s view of the world. Every obstacle is checked during RRT expansion — any candidate configuration that puts the robot in collision with a scene object is rejected. Accurate scene geometry is critical for safe motion planning.
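The collision queries themselves are simple geometry. Point-versus-primitive sketches using the same conventions as the scene calls above (half-extents and center for boxes; radius, half-height, and center for cylinders with the axis along z) are shown below. This is illustrative only: real checking tests the robot's sphere model against inflated geometry, not single points.

```rust
// Point inside an axis-aligned box given half-extents and center.
fn in_box(p: [f64; 3], half: [f64; 3], center: [f64; 3]) -> bool {
    (0..3).all(|i| (p[i] - center[i]).abs() <= half[i])
}

// Point inside a z-axis cylinder given radius, half-height, and center.
fn in_cylinder(p: [f64; 3], radius: f64, half_h: f64, center: [f64; 3]) -> bool {
    let (dx, dy, dz) = (p[0] - center[0], p[1] - center[1], p[2] - center[2]);
    dx * dx + dy * dy <= radius * radius && dz.abs() <= half_h
}

fn main() {
    // A point 5 cm above the table surface misses the thin table box...
    println!("{}", in_box([0.5, 0.0, 0.05], [0.4, 0.3, 0.01], [0.5, 0.0, 0.0])); // false
    // ...but a point at the same height hits the box obstacle on the table.
    println!("{}", in_box([0.4, 0.0, 0.15], [0.05, 0.05, 0.1], [0.4, 0.0, 0.11])); // true
    println!("{}", in_cylinder([0.31, 0.1, 0.2], 0.03, 0.15, [0.3, 0.1, 0.15])); // true
}
```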
Step 2: Plan Around Obstacles
#![allow(unused)]
fn main() {
let planner = Planner::new(&robot)?
.with_scene(&scene)
.with_config(PlannerConfig::default());
let start = vec![0.0, -1.0, 0.8, 0.0, 0.0, 0.0];
let goal = Goal::joints([1.0, -0.5, 0.3, 0.2, -0.3, 0.5]);
let result = planner.plan(&start, &goal)?;
println!(
"Path found: {} waypoints in {:.1?}",
result.num_waypoints(),
result.planning_time,
);
println!("Path length: {:.3} rad", result.path_length());
}
What this does: Attaches the scene to the planner via .with_scene(&scene), then plans a joint-space path from start to goal. The RRT samples random configurations, checks each for collisions against both the robot’s self-collision model and the scene, and grows a tree until it connects start to goal.
Why: Without .with_scene(), the planner only checks self-collision. With a scene, every candidate node in the RRT tree is validated against all obstacles. The planner automatically inflates collision geometry by a safety margin to account for real-world uncertainty.
Step 3: Time-Parameterize the Path
#![allow(unused)]
fn main() {
let max_vel = 2.0; // rad/s
let max_acc = 5.0; // rad/s^2
let timed = trapezoidal(&result.waypoints, max_vel, max_acc)
.map_err(KineticError::Other)?;
println!("Trajectory duration: {:.3?}", timed.duration());
println!("Timed waypoints: {}", timed.waypoints.len());
}
What this does: Applies a trapezoidal velocity profile to the geometric path. Each segment accelerates to max_vel, cruises, then decelerates — producing the classic trapezoidal velocity shape. The result is a TimedTrajectory where each waypoint has a timestamp.
Why: A raw path is just a sequence of joint configurations with no timing information. A controller needs to know when to reach each waypoint and how fast to move. Trapezoidal parameterization is the simplest profile that respects velocity and acceleration limits. For smoother motion, kinetic also provides cubic_spline and time_optimal parameterizations.
Step 4: Inspect the Trajectory
#![allow(unused)]
fn main() {
if let Some(first) = timed.waypoints.first() {
println!(" t=0.000s: {:6.3?}", &first.positions[..3]);
}
if let Some(last) = timed.waypoints.last() {
println!(" t={:.3}s: {:6.3?}", last.time, &last.positions[..3]);
}
Ok(())
}
}
What this does: Prints the first three joint values at the start and end of the trajectory, along with their timestamps.
Why: Inspecting boundary waypoints confirms the trajectory starts at start and ends at goal. In production, you would iterate over timed.waypoints to send position commands to a real controller at the prescribed rate.
Complete Code
use kinetic::prelude::*;
fn main() -> kinetic::core::Result<()> {
// 1. Load robot
let robot = Robot::from_name("ur5e")?;
// 2. Create scene with obstacles
let mut scene = Scene::new(&robot)?;
scene.add_box("table", [0.4, 0.3, 0.01], [0.5, 0.0, 0.0]);
scene.add_box("box", [0.05, 0.05, 0.1], [0.4, 0.0, 0.11]);
scene.add_cylinder("pipe", 0.03, 0.15, [0.3, 0.1, 0.15]);
// 3. Plan around obstacles
let planner = Planner::new(&robot)?
.with_scene(&scene)
.with_config(PlannerConfig::default());
let start = vec![0.0, -1.0, 0.8, 0.0, 0.0, 0.0];
let goal = Goal::joints([1.0, -0.5, 0.3, 0.2, -0.3, 0.5]);
let result = planner.plan(&start, &goal)?;
println!("Path: {} waypoints in {:.1?}", result.num_waypoints(), result.planning_time);
// 4. Time-parameterize
let timed = trapezoidal(&result.waypoints, 2.0, 5.0).map_err(KineticError::Other)?;
println!("Trajectory: {:.3?}, {} waypoints", timed.duration(), timed.waypoints.len());
// 5. Inspect endpoints
if let Some(first) = timed.waypoints.first() {
println!(" t=0.000s: {:6.3?}", &first.positions[..3]);
}
if let Some(last) = timed.waypoints.last() {
println!(" t={:.3}s: {:6.3?}", last.time, &last.positions[..3]);
}
Ok(())
}
What You Learned
- Scene::new(&robot) creates a collision environment bound to a robot
- add_box and add_cylinder place geometric primitives at world-frame positions
- .with_scene(&scene) makes the planner collision-aware
- trapezoidal() converts a geometric path into a timed trajectory with velocity/acceleration limits
- TimedTrajectory waypoints have .time, .positions, and .velocities fields
Try This
- Move the box obstacle directly between start and goal to force a longer detour
- Compare trapezoidal() with a lower max_vel (0.5 rad/s) and observe the longer trajectory duration
- Add more obstacles with scene.add_sphere("ball", radius, position) and see how planning time changes
- Use scene.remove("pipe") to remove an obstacle and observe the shorter path
Next
- Collision Checking — low-level sphere-model collision detection
- Pick and Place — full workflow from planning through execution
IK Solver Selection
What You’ll Learn
- Understand when to use each of kinetic’s five IK solvers
- Apply the decision flowchart based on robot DOF and wrist geometry
- Configure OPW, Subproblem, Subproblem7DOF, FABRIK, and DLS solvers
- Use IKSolver::Auto for automatic solver selection
Prerequisites
Overview
Kinetic ships five IK solvers, each optimized for different robot geometries.
Choosing the right solver can mean the difference between a 2-microsecond
analytical solution and a 200-microsecond iterative one. This tutorial explains
the decision flowchart and shows code for each solver. In most cases,
IKSolver::Auto makes the right choice, but understanding the options helps
when you need to override.
Decision Flowchart
Is it a 6-DOF robot with a spherical wrist?
├── YES: Does it match OPW kinematic parameters?
│ ├── YES → OPW (closed-form, ~2 µs, up to 8 solutions)
│ └── NO → Subproblem (analytical, ~10 µs, up to 16 solutions)
└── NO:
Is it a 7-DOF robot with intersecting wrist axes?
├── YES → Subproblem7DOF (sweep + analytical, ~50 µs)
└── NO:
Do you only need position (not orientation)?
├── YES → FABRIK (fast, position-focused)
└── NO → DLS (general-purpose, any DOF)
IKSolver::Auto implements this exact logic. Override it only when you know the
geometry better than the auto-detector.
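The flowchart can be written out as a plain selection function. A sketch of the logic (kinetic's auto-detector inspects the actual kinematic structure rather than taking booleans):

```rust
// Encode the decision flowchart: DOF and wrist geometry pick the solver.
fn select_solver(dof: usize, spherical_wrist: bool, opw_match: bool,
                 intersecting_wrist_7dof: bool, position_only: bool) -> &'static str {
    if dof == 6 && spherical_wrist {
        if opw_match { "OPW" } else { "Subproblem" }
    } else if dof == 7 && intersecting_wrist_7dof {
        "Subproblem7DOF"
    } else if position_only {
        "FABRIK"
    } else {
        "DLS" // general-purpose fallback for any geometry
    }
}

fn main() {
    println!("{}", select_solver(6, true, true, false, false));   // OPW
    println!("{}", select_solver(7, false, false, true, false));  // Subproblem7DOF
    println!("{}", select_solver(5, false, false, false, false)); // DLS
}
```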
Step 1: OPW — 6-DOF Spherical Wrist Robots
#![allow(unused)]
fn main() {
use kinetic::prelude::*;
// OPW works for UR5e, ABB IRB, KUKA KR, Fanuc — standard industrial arms
let robot = Robot::from_name("ur5e")?;
let chain = KinematicChain::from_robot(&robot)?;
let config = IKConfig::opw();
let solution = solve_ik(&robot, &chain, &target_pose, &config)?;
// Closed-form: converges in 1 iteration, ~2 µs
}
What this does: Uses the Ortho-Parallel-Wrist decomposition to compute a closed-form solution. No iterations needed.
Why: OPW is the fastest solver in kinetic. It exploits the fact that most 6-DOF industrial arms have three intersecting wrist axes, which decouples the position and orientation sub-problems. Use it for any robot whose wrist axes meet at a single point (UR, ABB, KUKA, Fanuc, etc.).
Step 2: Subproblem — General 6-DOF Robots
#![allow(unused)]
fn main() {
let config = IKConfig {
solver: IKSolver::Subproblem,
..Default::default()
};
let solution = solve_ik(&robot, &chain, &target_pose, &config)?;
// Analytical: returns up to 16 solutions
}
What this does: Decomposes the IK problem into Paden-Kahan subproblems (rotations about known axes). Returns all analytical solutions, ranked by proximity to the seed.
Why: Use Subproblem when your 6-DOF robot has intersecting wrist axes but doesn’t match OPW’s stricter kinematic parameter constraints. It is still analytical (no iterative convergence) and returns multiple solutions.
Step 3: Subproblem7DOF — 7-DOF Redundant Robots
#![allow(unused)]
fn main() {
let config = IKConfig {
solver: IKSolver::Subproblem7DOF { num_samples: 36 },
..Default::default()
};
let solution = solve_ik(&robot, &chain, &target_pose, &config)?;
// Sweeps 36 values of the redundant joint, solves analytically at each
}
What this does: Sweeps the redundant (7th) joint across its range in num_samples steps. At each sample, it locks that joint and solves the remaining 6-DOF system analytically via subproblem decomposition.
Why: A 7-DOF robot like the Franka Panda has infinite IK solutions for most targets (one extra degree of freedom). This solver discretizes the redundancy and finds the best analytical solution at each sample. Increase num_samples for finer resolution at the cost of compute time.
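The sweep-and-select pattern can be sketched independently of the library. This is an illustrative stand-in, not kinetic's implementation: `solve_locked` represents the analytical 6-DOF solve performed with the redundant joint fixed, returning a cost (e.g., joint-space distance to the seed) when a solution exists.

```rust
// Sketch of the sweep-and-select pattern behind Subproblem7DOF (illustrative,
// not the library's implementation). Assumes num_samples >= 2.
fn sweep_redundant_joint(
    lo: f64,
    hi: f64,
    num_samples: usize,
    solve_locked: impl Fn(f64) -> Option<f64>, // cost of best solution at phi, if any
) -> Option<(f64, f64)> {
    let mut best: Option<(f64, f64)> = None;
    for i in 0..num_samples {
        // Evenly spaced samples of the redundant joint across [lo, hi].
        let phi = lo + (hi - lo) * i as f64 / (num_samples - 1) as f64;
        if let Some(cost) = solve_locked(phi) {
            // Keep the sample whose analytical solution has the lowest cost.
            if best.map_or(true, |(_, c)| cost < c) {
                best = Some((phi, cost));
            }
        }
    }
    best
}
```

Increasing `num_samples` tightens the grid over the redundant joint, trading compute time for a better minimum.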
Step 4: FABRIK — Position-Only IK
#![allow(unused)]
fn main() {
let config = IKConfig::fabrik()
.with_mode(IKMode::PositionOnly)
.with_max_iterations(100);
let solution = solve_ik(&robot, &chain, &target_pose, &config)?;
// Fast convergence for position, ignores orientation
}
What this does: FABRIK (Forward And Backward Reaching Inverse Kinematics) alternates two passes: a forward pass that reaches the chain toward the target, and a backward pass that pulls it back to the anchored base. It converges naturally to position targets.
Why: FABRIK is geometrically intuitive and fast for position-only problems (e.g., moving a tooltip to a point without caring about wrist orientation). It struggles with full 6D pose constraints because orientation is not part of its core algorithm.
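The two passes are easy to see on a planar chain of points with fixed segment lengths. This is a minimal illustrative sketch, not kinetic's implementation:

```rust
// Minimal planar FABRIK sketch: a chain of 2D joints with fixed segment
// lengths reaches for a position target.
fn dist(a: [f64; 2], b: [f64; 2]) -> f64 {
    ((a[0] - b[0]).powi(2) + (a[1] - b[1]).powi(2)).sqrt()
}

// Place `q` at distance `len` from `p`, along the direction p -> q.
fn place(p: [f64; 2], q: [f64; 2], len: f64) -> [f64; 2] {
    let d = dist(p, q).max(1e-12);
    [p[0] + (q[0] - p[0]) * len / d, p[1] + (q[1] - p[1]) * len / d]
}

fn fabrik_2d(points: &mut Vec<[f64; 2]>, target: [f64; 2], iters: usize, tol: f64) {
    let n = points.len();
    let lengths: Vec<f64> = (0..n - 1).map(|i| dist(points[i], points[i + 1])).collect();
    let base = points[0];
    for _ in 0..iters {
        // Forward pass: snap the end-effector to the target, then pull each
        // preceding joint onto a segment of the correct length.
        points[n - 1] = target;
        for i in (0..n - 1).rev() {
            points[i] = place(points[i + 1], points[i], lengths[i]);
        }
        // Backward pass: re-anchor the base, then push joints outward again.
        points[0] = base;
        for i in 0..n - 1 {
            points[i + 1] = place(points[i], points[i + 1], lengths[i]);
        }
        if dist(points[n - 1], target) < tol {
            break;
        }
    }
}
```

Note that orientation never appears: each pass only moves points along lines, which is why full 6D pose constraints require extensions to the core algorithm.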
Step 5: DLS — General-Purpose Solver
#![allow(unused)]
fn main() {
let config = IKConfig::dls()
.with_seed(vec![0.0; robot.dof])
.with_max_iterations(300);
let solution = solve_ik(&robot, &chain, &target_pose, &config)?;
// Iterative: works for any DOF, any geometry
}
What this does: Damped Least Squares computes the Jacobian pseudo-inverse with a damping term, stepping toward the target each iteration. The damping factor prevents instability near singularities.
Why: DLS is the fallback solver that works for any robot — 3-DOF, 6-DOF, 7-DOF, or more. It handles full 6D pose targets on arbitrary kinematic structures. The trade-off is speed (iterative, ~100-500 µs) and the risk of local minima. Use with_restarts(5) to improve convergence on difficult targets.
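The DLS update itself fits in a few lines. The following is an illustrative sketch on a 2-link planar arm (unit link lengths), not kinetic's solver; it shows the damped update dq = Jᵀ(JJᵀ + λ²I)⁻¹e that the paragraph describes:

```rust
// End-effector position of a 2-link planar arm with unit link lengths.
fn fk_2link(q: [f64; 2]) -> [f64; 2] {
    [
        q[0].cos() + (q[0] + q[1]).cos(),
        q[0].sin() + (q[0] + q[1]).sin(),
    ]
}

// One damped-least-squares step toward `target`.
fn dls_step(q: [f64; 2], target: [f64; 2], lambda: f64) -> [f64; 2] {
    let (s1, c1) = q[0].sin_cos();
    let (s12, c12) = (q[0] + q[1]).sin_cos();
    // 2x2 position Jacobian.
    let j = [[-s1 - s12, -s12], [c1 + c12, c12]];
    let p = fk_2link(q);
    let e = [target[0] - p[0], target[1] - p[1]];
    // A = J J^T + lambda^2 I (symmetric 2x2); damping keeps A invertible
    // near singularities where J J^T alone is rank-deficient.
    let a00 = j[0][0] * j[0][0] + j[0][1] * j[0][1] + lambda * lambda;
    let a01 = j[0][0] * j[1][0] + j[0][1] * j[1][1];
    let a11 = j[1][0] * j[1][0] + j[1][1] * j[1][1] + lambda * lambda;
    let det = a00 * a11 - a01 * a01;
    // Solve A w = e, then dq = J^T w.
    let w = [(a11 * e[0] - a01 * e[1]) / det, (a00 * e[1] - a01 * e[0]) / det];
    [
        q[0] + j[0][0] * w[0] + j[1][0] * w[1],
        q[1] + j[0][1] * w[0] + j[1][1] * w[1],
    ]
}
```

A larger λ slows convergence but tames the step size near singular configurations, which is exactly the trade-off the damping experiment in "Try This" explores.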
Step 6: Auto Selection
#![allow(unused)]
fn main() {
// Let kinetic choose the best solver
let config = IKConfig::default(); // solver: IKSolver::Auto
let solution = solve_ik(&robot, &chain, &target_pose, &config)?;
}
What this does: IKSolver::Auto checks the robot’s DOF, wrist geometry, and optional ik_preference field in the robot config. It selects OPW > Subproblem > Subproblem7DOF > DLS in order of preference.
Why: Auto selection is the recommended default. It picks the fastest applicable solver and falls back to DLS when no analytical solver matches. Override it only when profiling reveals a better choice for your specific use case.
Complete Code
use kinetic::prelude::*;
fn main() -> kinetic::core::Result<()> {
let robot = Robot::from_name("ur5e")?;
let chain = KinematicChain::from_robot(&robot)?;
let target = forward_kinematics(&robot, &chain, &[0.5, -0.8, 0.5, 0.1, -0.1, 0.3])?;
// Compare solvers
for (name, config) in [
("Auto", IKConfig::default()),
("OPW", IKConfig::opw()),
("DLS", IKConfig::dls().with_max_iterations(300)),
("FABRIK", IKConfig::fabrik().with_max_iterations(200)),
] {
let start = std::time::Instant::now();
match solve_ik(&robot, &chain, &target, &config) {
Ok(sol) => println!(
"{:10}: {} iters, err={:.2e}, {:?}",
name, sol.iterations, sol.position_error, start.elapsed()
),
Err(e) => println!("{:10}: failed — {}", name, e),
}
}
Ok(())
}
What You Learned
- OPW is fastest for 6-DOF spherical-wrist robots (closed-form, ~2 us)
- Subproblem handles general 6-DOF with intersecting wrist axes (up to 16 solutions)
- Subproblem7DOF sweeps the redundant joint for 7-DOF robots
- FABRIK excels at position-only targets but struggles with full 6D pose
- DLS is the reliable general-purpose fallback for any robot
- IKSolver::Auto selects the best solver automatically
Try This
- Load a Franka Panda (Robot::from_name("franka_panda")) and compare Auto vs DLS timing
- Benchmark OPW vs DLS on a UR5e over 10,000 solves to measure the speed difference
- Try IKConfig::with_fallback(), which attempts Full6D and then falls back to PositionOnly
- Set IKConfig { solver: IKSolver::DLS { damping: 0.001 }, .. } vs damping: 0.1 and observe convergence behavior near singularities
Next
- Multiple IK Solutions — finding diverse configurations
- Servo Control — real-time IK in a control loop
Multiple IK Solutions
What You’ll Learn
- Why a single IK target can have multiple valid joint configurations
- Use different seed configurations to discover diverse solutions
- Compare solutions by joint-space distance
- Choose the best solution for your application
Prerequisites
Overview
A 7-DOF robot has infinite IK solutions for most reachable targets due to its redundant degree of freedom. Even 6-DOF robots often have 2-8 distinct solutions (elbow-up vs elbow-down, shoulder-left vs shoulder-right). By seeding the IK solver with different starting configurations, you can discover multiple distinct joint configurations that all reach the same end-effector pose. This tutorial shows how to collect, compare, and choose among them.
Step 1: Establish a Target Pose
use kinetic::prelude::*;
const PANDA_URDF: &str = include_str!("panda_urdf.txt");
fn main() -> kinetic::core::Result<()> {
let robot = Robot::from_urdf_string(PANDA_URDF)?;
let chain = KinematicChain::extract(&robot, "panda_link0", "panda_link8")?;
// Create a target by FK from known joints
let q_target = vec![0.3, -0.5, 0.2, -1.5, 0.1, 1.0, 0.5];
let target = forward_kinematics(&robot, &chain, &q_target)?;
let t = target.translation();
println!("Target pose: ({:.4}, {:.4}, {:.4})", t.x, t.y, t.z);
What this does: Computes a target pose from known joint values using FK. This guarantees the target is reachable, which is important when testing IK — an unreachable target would cause all seeds to fail.
Why: In production, targets come from perception or task planning. Here we use FK to guarantee reachability so we can focus on demonstrating the multi-solution aspect.
Step 2: Define Diverse Seeds
#![allow(unused)]
fn main() {
let seeds = [
robot.mid_configuration().to_vec(), // mid-range
vec![0.0, 0.0, 0.0, -1.5, 0.0, 1.5, 0.0], // near-zero
vec![1.0, -1.0, 1.0, -2.0, 0.5, 2.0, -0.5], // far-positive
vec![-0.5, 0.5, -0.3, -1.0, -0.5, 0.5, 1.0], // mixed
];
}
What this does: Defines four starting joint configurations spread across the robot’s workspace. Each seed puts the IK solver in a different region of configuration space.
Why: DLS is a gradient-based solver that converges to the nearest local minimum. Different seeds lead to different minima, which correspond to different physical arm configurations (e.g., elbow-up vs elbow-down). The further apart the seeds, the more likely you are to discover genuinely different solutions.
Step 3: Solve from Each Seed
#![allow(unused)]
fn main() {
let mut solutions = Vec::new();
for (i, seed) in seeds.iter().enumerate() {
let config = IKConfig::dls()
.with_seed(seed.clone())
.with_max_iterations(300);
match solve_ik(&robot, &chain, &target, &config) {
Ok(sol) => {
println!(
"Solution #{}: {} iters, pos_err={:.2e}",
i + 1, sol.iterations, sol.position_error
);
print!(" joints: [");
for (j, &v) in sol.joints.iter().enumerate() {
if j > 0 { print!(", "); }
print!("{:.3}", v);
}
println!("]");
solutions.push(sol);
}
Err(e) => {
println!("Solution #{}: failed — {}", i + 1, e);
}
}
}
}
What this does: Runs IK four times with different seeds. Each successful solve is stored. Failed solves are logged but do not stop the search — not every seed converges.
Why: Collecting multiple solutions lets you choose the one that best fits your constraints (shortest travel distance, avoids singularity, maintains elbow clearance, etc.). Failed seeds often mean the solver started too far from any valid solution, which is normal.
Step 4: Compare Solutions by Distance
#![allow(unused)]
fn main() {
if solutions.len() >= 2 {
println!("\n--- Solution distances (L2 in joint space) ---");
for i in 0..solutions.len() {
for j in (i + 1)..solutions.len() {
let dist: f64 = solutions[i].joints.iter()
.zip(solutions[j].joints.iter())
.map(|(a, b)| (a - b).powi(2))
.sum::<f64>()
.sqrt();
println!(" #{} <-> #{}: {:.4} rad", i + 1, j + 1, dist);
}
}
}
Ok(())
}
}
What this does: Computes pairwise L2 distances between all solution joint vectors. Large distances (> 1.0 rad) indicate genuinely different arm configurations.
Why: Two solutions with distance < 0.01 rad are essentially the same configuration found from different starting points. Solutions with distance > 1.0 rad represent distinct physical poses. In pick-and-place, you would pick the solution closest to the current joint state to minimize travel time.
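That threshold test extends naturally to deduplication: keep one representative per cluster of near-identical solutions. A sketch (using plain joint vectors rather than kinetic's IKSolution type):

```rust
// L2 distance between two joint vectors of equal length.
fn joint_distance(a: &[f64], b: &[f64]) -> f64 {
    a.iter().zip(b).map(|(x, y)| (x - y).powi(2)).sum::<f64>().sqrt()
}

// Greedy clustering: a solution counts as "new" only if it is farther than
// `threshold` from every representative found so far.
fn distinct_solutions(solutions: &[Vec<f64>], threshold: f64) -> Vec<Vec<f64>> {
    let mut reps: Vec<Vec<f64>> = Vec::new();
    for s in solutions {
        if reps.iter().all(|r| joint_distance(r, s) > threshold) {
            reps.push(s.clone());
        }
    }
    reps
}
```

With a threshold around 0.5 rad, `distinct_solutions(...).len()` counts genuinely different arm configurations, which is what the "Try This" exercise below asks for.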
Complete Code
use kinetic::prelude::*;
const PANDA_URDF: &str = include_str!("panda_urdf.txt");
fn main() -> kinetic::core::Result<()> {
let robot = Robot::from_urdf_string(PANDA_URDF)?;
let chain = KinematicChain::extract(&robot, "panda_link0", "panda_link8")?;
// Target pose from known joints
let q_target = vec![0.3, -0.5, 0.2, -1.5, 0.1, 1.0, 0.5];
let target = forward_kinematics(&robot, &chain, &q_target)?;
// Solve from four different seeds
let seeds = [
robot.mid_configuration().to_vec(),
vec![0.0, 0.0, 0.0, -1.5, 0.0, 1.5, 0.0],
vec![1.0, -1.0, 1.0, -2.0, 0.5, 2.0, -0.5],
vec![-0.5, 0.5, -0.3, -1.0, -0.5, 0.5, 1.0],
];
let mut solutions = Vec::new();
for (i, seed) in seeds.iter().enumerate() {
let config = IKConfig::dls().with_seed(seed.clone()).with_max_iterations(300);
match solve_ik(&robot, &chain, &target, &config) {
Ok(sol) => { solutions.push(sol); }
Err(e) => { println!("Seed #{} failed: {}", i + 1, e); }
}
}
// Compare pairwise distances
for i in 0..solutions.len() {
for j in (i + 1)..solutions.len() {
let dist: f64 = solutions[i].joints.iter()
.zip(solutions[j].joints.iter())
.map(|(a, b)| (a - b).powi(2))
.sum::<f64>().sqrt();
println!("#{} <-> #{}: {:.4} rad", i + 1, j + 1, dist);
}
}
Ok(())
}
What You Learned
- The same target pose can have multiple valid joint configurations
- Different seeds guide the iterative solver to different local minima
- robot.mid_configuration() provides a safe default seed at the center of joint ranges
- L2 distance in joint space measures how different two solutions are
- Distances > 1.0 rad indicate genuinely distinct arm configurations
Try This
- Add more seeds (8-10) and count how many unique solutions you find (distance > 0.5 rad)
- Use IKConfig::dls().with_restarts(10) instead of manual seeds for automatic random-restart exploration
- Pick the solution closest to the current joint state using min_by on joint distance
- Try a 6-DOF robot (UR5e) and observe that it typically has fewer distinct solutions than a 7-DOF robot
- Use solution.condition_number to filter out near-singular solutions
Next
- Collision Checking — verifying solutions are collision-free
- Planning Basics — using the planner instead of raw IK
Collision Checking
What You’ll Learn
- Build a RobotSphereModel from URDF collision geometry
- Update sphere positions from FK poses
- Check self-collision (with adjacent link filtering)
- Check environment collision against obstacle spheres
- Detect the available SIMD tier and benchmark collision speed
Prerequisites
Overview
Kinetic uses bounding sphere models for fast collision detection. Each robot link is approximated by a set of spheres generated from its URDF collision geometry. After each FK update, the sphere positions are transformed to world coordinates, and collision is checked as sphere-sphere overlap — a simple distance comparison. SIMD acceleration makes this fast enough for real-time planning loops. This tutorial builds a sphere model, tests several configurations, and benchmarks collision speed.
Step 1: Load Robot and Create Sphere Model
use kinetic::collision::{adjacent_link_pairs, RobotSphereModel, SphereGenConfig, SpheresSoA};
use kinetic::prelude::*;
fn main() -> kinetic::core::Result<()> {
let robot = Robot::from_urdf_string(THREE_DOF_URDF)?;
let chain = KinematicChain::extract(&robot, "base_link", "ee_link")?;
let sphere_config = SphereGenConfig::default();
let sphere_model = RobotSphereModel::from_robot(&robot, &sphere_config);
let mut spheres = sphere_model.create_runtime();
println!("Sphere model: {} total spheres", spheres.world.len());
What this does: RobotSphereModel::from_robot reads each link’s <collision> geometry from the URDF and fits bounding spheres to approximate the shape. create_runtime() produces a mutable SpheresSoA (Structure of Arrays) that can be updated with FK poses at runtime.
Why: Sphere-based collision is orders of magnitude faster than mesh-based GJK/EPA. The SoA layout enables SIMD vectorization — four or eight sphere-sphere tests execute in a single CPU instruction. The trade-off is approximation quality, controlled by SphereGenConfig (more spheres = tighter fit, slower checks).
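The core check behind the SoA layout is just a squared-distance comparison over parallel arrays. A minimal illustrative sketch (kinetic's SpheresSoA adds SIMD dispatch, group IDs, and skip lists on top of this idea):

```rust
// A minimal Structure-of-Arrays sphere set. Keeping coordinates in separate
// contiguous arrays is what lets a vectorizing compiler process several
// spheres per instruction.
struct Soa {
    xs: Vec<f64>,
    ys: Vec<f64>,
    zs: Vec<f64>,
    rs: Vec<f64>,
}

impl Soa {
    // Two spheres overlap when center distance < sum of radii; comparing
    // squared distances avoids a sqrt in the inner loop.
    fn collides_with(&self, other: &Soa) -> bool {
        for i in 0..self.xs.len() {
            for j in 0..other.xs.len() {
                let dx = self.xs[i] - other.xs[j];
                let dy = self.ys[i] - other.ys[j];
                let dz = self.zs[i] - other.zs[j];
                let r = self.rs[i] + other.rs[j];
                if dx * dx + dy * dy + dz * dz < r * r {
                    return true;
                }
            }
        }
        false
    }
}
```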
Step 2: Update Spheres from FK and Check Self-Collision
#![allow(unused)]
fn main() {
let q = vec![0.0, 1.0, -0.5];
let link_poses = forward_kinematics_all(&robot, &chain, &q)?;
spheres.update(&link_poses);
let skip_pairs = adjacent_link_pairs(&robot);
let self_collision = spheres.self_collision(&skip_pairs);
println!("Self-collision: {}", self_collision);
}
What this does: forward_kinematics_all computes the pose of every link in the chain (not just the end-effector). spheres.update() transforms each sphere from link-local to world coordinates. self_collision() checks all sphere pairs, skipping adjacent links that always overlap at the joint.
Why: Adjacent links share a joint and their collision geometries overlap by design — checking them would produce false positives. adjacent_link_pairs builds the skip list from the URDF’s parent-child relationships. Self-collision is critical for robots that can fold back on themselves (7-DOF arms, humanoids).
Step 3: Check Environment Collision
#![allow(unused)]
fn main() {
let mut obstacles = SpheresSoA::new();
obstacles.push(0.5, 0.0, 0.3, 0.1, 0); // x, y, z, radius, group_id
let env_collision = spheres.collides_with(&obstacles);
println!("Environment collision: {}", env_collision);
}
What this does: Creates a set of obstacle spheres (here, a single sphere at position (0.5, 0, 0.3) with radius 0.1m) and checks whether any robot sphere overlaps any obstacle sphere.
Why: Environment collision is the core check used by motion planners. During RRT expansion, every candidate configuration is tested against the scene’s obstacle spheres. Keeping the check fast (sub-microsecond) is what makes real-time planning possible.
Step 4: Test Multiple Configurations
#![allow(unused)]
fn main() {
let configs = [
("Zero config", vec![0.0, 0.0, 0.0]),
("Bent forward", vec![0.0, 1.0, -0.5]),
("Max bend", vec![0.0, 1.5, -1.5]),
];
for (name, q) in &configs {
let link_poses = forward_kinematics_all(&robot, &chain, q)?;
spheres.update(&link_poses);
let self_col = spheres.self_collision(&skip_pairs);
let env_col = spheres.collides_with(&obstacles);
let ee = forward_kinematics(&robot, &chain, q)?;
let t = ee.translation();
println!(
" {}: EE=({:.3}, {:.3}, {:.3}) self={} env={}",
name, t.x, t.y, t.z, self_col, env_col
);
}
}
What this does: Iterates through several joint configurations, updating the sphere model and checking both self and environment collision for each.
Why: This pattern — FK update, sphere update, collision check — is the inner loop of every sampling-based planner. Understanding it helps you debug unexpected planning failures (often caused by unexpected collisions at intermediate configurations).
Step 5: SIMD Tier and Benchmarking
#![allow(unused)]
fn main() {
let tier = kinetic::collision::simd::detect_simd_tier();
println!("SIMD tier: {:?}", tier);
let start = std::time::Instant::now();
let iters = 10_000;
for _ in 0..iters {
std::hint::black_box(spheres.collides_with(&obstacles));
}
let per_check = start.elapsed() / iters as u32;
println!("Collision check: {:?}/check", per_check);
Ok(())
}
}
What this does: Detects the CPU’s SIMD capabilities (SSE4.1, AVX2, AVX-512, NEON) and benchmarks collision checking speed over 10,000 iterations. black_box prevents the compiler from optimizing away the computation.
Why: Kinetic automatically selects the best SIMD tier at runtime. AVX2 checks 8 sphere pairs per instruction, AVX-512 checks 16. Knowing your tier helps set performance expectations: on AVX2 hardware, a 50-sphere robot vs 100-sphere environment typically takes 200-500 nanoseconds per check.
Complete Code
use kinetic::collision::{adjacent_link_pairs, RobotSphereModel, SphereGenConfig, SpheresSoA};
use kinetic::prelude::*;
fn main() -> kinetic::core::Result<()> {
let robot = Robot::from_urdf_string(THREE_DOF_URDF)?;
let chain = KinematicChain::extract(&robot, "base_link", "ee_link")?;
// Build sphere model
let sphere_model = RobotSphereModel::from_robot(&robot, &SphereGenConfig::default());
let mut spheres = sphere_model.create_runtime();
// Test a configuration
let q = vec![0.0, 1.0, -0.5];
let link_poses = forward_kinematics_all(&robot, &chain, &q)?;
spheres.update(&link_poses);
let skip_pairs = adjacent_link_pairs(&robot);
println!("Self-collision: {}", spheres.self_collision(&skip_pairs));
let mut obstacles = SpheresSoA::new();
obstacles.push(0.5, 0.0, 0.3, 0.1, 0);
println!("Env collision: {}", spheres.collides_with(&obstacles));
// Benchmark
let tier = kinetic::collision::simd::detect_simd_tier();
println!("SIMD: {:?}", tier);
let start = std::time::Instant::now();
for _ in 0..10_000 {
std::hint::black_box(spheres.collides_with(&obstacles));
}
println!("{:?}/check", start.elapsed() / 10_000);
Ok(())
}
What You Learned
- RobotSphereModel::from_robot generates bounding spheres from URDF collision geometry
- spheres.update(&link_poses) transforms spheres to world coordinates after FK
- adjacent_link_pairs prevents false-positive self-collision at joints
- SpheresSoA stores obstacles in a cache-friendly Structure of Arrays layout
- SIMD tier is auto-detected; AVX2/AVX-512 provide significant speedups
- A single collision check typically takes 100-500 nanoseconds
Try This
- Increase the SphereGenConfig sphere count and observe tighter coverage vs slower checks
- Add 100 obstacle spheres and verify that check time scales roughly linearly
- Compare self_collision results between a straight and a folded arm configuration
- Use spheres.min_distance(&obstacles) instead of collides_with to get the closest approach distance
Next
- Planning with Obstacles — using collision checking in a planner
- Servo Control — real-time collision avoidance in control loops
Servo Control
What You’ll Learn
- Create a Servo controller for real-time Cartesian velocity control
- Send twist (linear + angular velocity) commands at 100 Hz
- Monitor manipulability and singularity proximity
- Execute multi-phase motions with collision awareness
Prerequisites
Overview
Servo control lets you command a robot’s end-effector velocity in real time,
useful for teleoperation, visual servoing, and force-guided assembly. Kinetic’s
Servo controller converts Cartesian twist commands into joint velocity commands
at each timestep, while monitoring singularity proximity and collision distances.
This tutorial runs a simulated 2-second servo loop with two motion phases:
forward along X, then sideways along Y.
Step 1: Load Robot and Create Scene
use std::sync::Arc;
use kinetic::core::Twist;
use kinetic::prelude::*;
use kinetic::reactive::servo::{Servo, ServoConfig};
fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
let robot = Arc::new(Robot::from_urdf_string(ARM_URDF)?);
let mut scene = Scene::new(&robot)?;
scene.add_box("table", [0.4, 0.4, 0.01], [0.6, 0.0, 0.3]);
let scene = Arc::new(scene);
What this does: Loads a 6-DOF arm and creates a scene with a table obstacle. Both are wrapped in Arc because Servo needs shared ownership for internal threading.
Why: Servo control runs at high frequency (100+ Hz) and needs fast access to robot kinematics and collision data. Arc enables safe shared access without copying the robot model or scene at every timestep.
Step 2: Configure and Initialize Servo
#![allow(unused)]
fn main() {
let config = ServoConfig {
rate_hz: 100.0,
singularity_threshold: 0.005,
..Default::default()
};
let mut servo = Servo::new(&robot, &scene, config)?;
let initial_joints = vec![0.0, -0.8, 1.0, 0.0, -0.5, 0.0];
let initial_vel = vec![0.0; robot.dof];
servo.set_state(&initial_joints, &initial_vel)?;
}
What this does: Creates a Servo controller at 100 Hz. singularity_threshold controls when the controller starts scaling down velocities near singular configurations. set_state initializes the joint positions and velocities.
Why: The rate determines the timestep for integration (dt = 1/rate_hz = 10 ms). A higher rate gives smoother motion but requires faster computation. The singularity threshold is a manipulability value below which the controller degrades gracefully instead of producing large joint velocities.
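The graceful degradation can be sketched as a simple scaling rule. This is an illustration of the idea behind singularity_threshold, not kinetic's exact policy:

```rust
// Velocity scale factor near a singularity (illustrative): at or above the
// manipulability threshold the commanded twist passes through unchanged;
// below it, speed is scaled down proportionally instead of letting the
// Jacobian inverse produce huge joint velocities.
fn singularity_scale(manipulability: f64, threshold: f64) -> f64 {
    if manipulability >= threshold {
        1.0
    } else {
        (manipulability / threshold).max(0.0)
    }
}
```

The effect is that motion slows smoothly as the arm approaches a singular configuration rather than stopping abruptly or jerking.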
Step 3: Phase 1 — Forward Motion
#![allow(unused)]
fn main() {
let forward_twist = Twist::new(
Vector3::new(0.05, 0.0, 0.0), // 5 cm/s forward
Vector3::zeros(), // no rotation
);
for step in 0..100 {
let cmd = servo.send_twist(&forward_twist)?;
let state = servo.state();
if step % 20 == 0 {
print!(" t={:.2}s joints=[", step as f64 / 100.0);
for (i, p) in cmd.positions.iter().enumerate() {
if i > 0 { print!(", "); }
print!("{:.3}", p);
}
println!("]");
println!(
" manip={:.4} near_singularity={} near_collision={}",
state.manipulability, state.is_near_singularity, state.is_near_collision,
);
}
}
}
What this does: Sends 100 twist commands (1 second at 100 Hz), each requesting 5 cm/s forward motion in the X direction. send_twist returns a JointCommand with the computed joint positions. The state reports manipulability (a scalar measuring how “dexterous” the current configuration is) and collision/singularity proximity flags.
Why: Each send_twist call does: (1) compute Jacobian at current configuration, (2) invert it to get joint velocities from Cartesian velocity, (3) integrate joint positions forward by dt, (4) check for singularity and collision. The returned cmd.positions are what you would send to a real robot controller.
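The four-step pipeline can be shown end to end on a 2-link planar arm with unit link lengths. This is an illustrative sketch of the Jacobian-invert-integrate loop, not kinetic's Servo internals (a real controller would also damp the inverse near det → 0 and run the collision/singularity checks):

```rust
// End-effector position of a 2-link planar arm with unit link lengths.
fn fk(q: [f64; 2]) -> [f64; 2] {
    [q[0].cos() + (q[0] + q[1]).cos(), q[0].sin() + (q[0] + q[1]).sin()]
}

// One servo timestep: Jacobian -> inverse -> joint velocities -> Euler step.
// `v` is the desired end-effector velocity, `dt` the integration timestep.
fn servo_step(q: [f64; 2], v: [f64; 2], dt: f64) -> [f64; 2] {
    let (s1, c1) = q[0].sin_cos();
    let (s12, c12) = (q[0] + q[1]).sin_cos();
    let j = [[-s1 - s12, -s12], [c1 + c12, c12]];
    // det(J) = sin(q2) for this arm; it vanishes when the elbow straightens.
    let det = j[0][0] * j[1][1] - j[0][1] * j[1][0];
    // qdot = J^-1 v via the 2x2 inverse.
    let qd = [
        (j[1][1] * v[0] - j[0][1] * v[1]) / det,
        (-j[1][0] * v[0] + j[0][0] * v[1]) / det,
    ];
    [q[0] + qd[0] * dt, q[1] + qd[1] * dt]
}
```

Running 100 steps at dt = 0.01 with v = (0.05, 0) moves the end-effector 5 cm along X, mirroring Phase 1 above.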
Step 4: Phase 2 — Sideways Motion
#![allow(unused)]
fn main() {
let sideways_twist = Twist::new(
Vector3::new(0.0, 0.05, 0.0), // 5 cm/s sideways
Vector3::zeros(),
);
for step in 0..100 {
let cmd = servo.send_twist(&sideways_twist)?;
// ... same logging pattern
}
}
What this does: Changes the twist direction to the Y axis and runs another 100 steps. The motion seamlessly transitions from forward to sideways without stopping.
Why: Phase-based motion demonstrates how to compose servo commands in sequence. In teleoperation, the twist comes from a joystick or spacemouse. In visual servoing, it comes from an image-based controller. The servo loop does not care about the source — it converts whatever twist you provide.
Step 5: Inspect Final State
#![allow(unused)]
fn main() {
let final_state = servo.state();
let ee = final_state.ee_pose.translation.vector;
println!("Final EE position: ({:.4}, {:.4}, {:.4})", ee.x, ee.y, ee.z);
println!("Manipulability: {:.4}", final_state.manipulability);
println!("Near singularity: {}", final_state.is_near_singularity);
println!("Near collision: {}", final_state.is_near_collision);
println!("Min obstacle distance: {:.4} m", final_state.min_obstacle_distance);
Ok(())
}
}
What this does: Reads the final servo state after both motion phases. The state includes the end-effector pose, manipulability, singularity and collision flags, and the minimum distance to any obstacle.
Why: min_obstacle_distance is critical for safety monitoring. If it drops below your safety threshold, you should reduce speed or stop. manipulability trending toward zero warns of an approaching singularity where the robot loses a degree of freedom and small Cartesian motions require large joint velocities.
Complete Code
use std::sync::Arc;
use kinetic::core::Twist;
use kinetic::prelude::*;
use kinetic::reactive::servo::{Servo, ServoConfig};
fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
let robot = Arc::new(Robot::from_urdf_string(ARM_URDF)?);
let mut scene = Scene::new(&robot)?;
scene.add_box("table", [0.4, 0.4, 0.01], [0.6, 0.0, 0.3]);
let scene = Arc::new(scene);
let config = ServoConfig { rate_hz: 100.0, singularity_threshold: 0.005, ..Default::default() };
let mut servo = Servo::new(&robot, &scene, config)?;
servo.set_state(&[0.0, -0.8, 1.0, 0.0, -0.5, 0.0], &vec![0.0; robot.dof])?;
// Phase 1: Forward (X axis, 1 second)
let fwd = Twist::new(Vector3::new(0.05, 0.0, 0.0), Vector3::zeros());
for _ in 0..100 { servo.send_twist(&fwd)?; }
// Phase 2: Sideways (Y axis, 1 second)
let side = Twist::new(Vector3::new(0.0, 0.05, 0.0), Vector3::zeros());
for _ in 0..100 { servo.send_twist(&side)?; }
let state = servo.state();
let ee = state.ee_pose.translation.vector;
println!("EE: ({:.4}, {:.4}, {:.4}), manip={:.4}", ee.x, ee.y, ee.z, state.manipulability);
Ok(())
}
What You Learned
- Servo::new(&robot, &scene, config) creates a real-time Cartesian velocity controller
- Twist::new(linear, angular) specifies the desired end-effector velocity
- send_twist() converts Cartesian velocity to joint commands at each timestep
- Servo state reports manipulability, is_near_singularity, is_near_collision, and min_obstacle_distance
- Phase-based motion is achieved by changing the twist command between loop iterations
- ServoConfig::rate_hz determines the integration timestep (dt = 1/rate)
Try This
- Add angular velocity to the twist:
Twist::new(linear, Vector3::new(0.0, 0.0, 0.1))for rotation about Z - Reduce
singularity_thresholdto 0.001 and drive the arm toward a singular configuration to observe the scaling behavior - Move the table obstacle closer to the arm and observe
is_near_collisionandmin_obstacle_distance - Increase
rate_hzto 500 and compare the smoothness of the end-effector trajectory
Next
- Grasp Planning — generating grasp candidates for objects
- Pick and Place — combining servo with task planning
Grasp Planning
What You’ll Learn
- Create a parallel jaw gripper model with GripperType::parallel
- Generate grasp candidates for cylinders, cuboids, and spheres
- Rank grasps by force closure quality, distance from center of mass, or approach angle
- Filter and select the best grasp for execution
Prerequisites
Overview
Before a robot can pick up an object, it needs to know how to grasp it.
Kinetic’s GraspGenerator produces ranked grasp candidates from object geometry.
Given a shape (cylinder, cuboid, sphere) and a gripper model, it generates
antipodal, top-down, and side grasps, then ranks them by quality metrics like
force closure. This tutorial generates grasps for three common object shapes and
compares ranking strategies.
Step 1: Create a Gripper
use kinetic::grasp::{
GraspConfig, GraspError, GraspGenerator, GraspMetric, GraspType, GripperType,
};
use kinetic::prelude::*;
fn main() -> kinetic::core::Result<()> {
let gripper = GripperType::parallel(0.08, 0.03);
let gen = GraspGenerator::new(gripper);
What this does: Creates a parallel jaw gripper model with 80 mm maximum opening and 30 mm finger depth. The GraspGenerator uses these dimensions to determine which grasps are geometrically feasible.
Why: The gripper model constrains which grasps are possible. An object wider than max_opening cannot be grasped. finger_depth determines how far the fingers wrap around the object, affecting grasp stability. Modeling the gripper accurately prevents generating grasps that would fail on real hardware.
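The geometric feasibility rules described here are easy to sketch. The struct and method names below are illustrative stand-ins, not kinetic's API:

```rust
// Illustrative parallel-jaw gripper model (dimensions in meters).
struct ParallelGripper {
    max_opening: f64,  // maximum jaw separation
    finger_depth: f64, // how far the fingers extend along the object
}

impl ParallelGripper {
    // An object fits between the jaws only if its width across the grasp
    // axis is no larger than the maximum opening.
    fn can_grasp_width(&self, object_width: f64) -> bool {
        object_width <= self.max_opening
    }

    // Fraction of the object's depth the fingers can cover; deeper wrap
    // generally means a more stable grasp.
    fn wrap_fraction(&self, object_depth: f64) -> f64 {
        (self.finger_depth / object_depth).min(1.0)
    }
}
```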
Step 2: Define Objects and Generate Grasps
#![allow(unused)]
fn main() {
let objects: Vec<(&str, Shape, Isometry3<f64>)> = vec![
(
"Bottle (cylinder)",
Shape::Cylinder(0.03, 0.08), // radius 3cm, half-height 8cm
Isometry3::translation(0.4, 0.0, 0.08),
),
(
"Box (cuboid)",
Shape::Cuboid(0.04, 0.03, 0.02), // half-extents: 4x3x2 cm
Isometry3::translation(0.4, 0.2, 0.02),
),
(
"Ball (sphere)",
Shape::Sphere(0.025), // radius 2.5 cm
Isometry3::translation(0.4, -0.2, 0.025),
),
];
for (name, shape, pose) in &objects {
let config = GraspConfig {
num_candidates: 50,
rank_by: GraspMetric::ForceClosureQuality,
..Default::default()
};
match gen.from_shape(shape, pose, config) {
Ok(grasps) => {
println!("{}: {} grasps", name, grasps.len());
// Show top 3
for (i, g) in grasps.iter().take(3).enumerate() {
println!(
" #{}: type={:?}, quality={:.3}, approach=({:.2}, {:.2}, {:.2})",
i + 1, g.grasp_type, g.quality,
g.approach_direction.x, g.approach_direction.y, g.approach_direction.z,
);
}
}
Err(GraspError::NoGraspsFound) => {
println!("{}: no valid grasps (object too large for gripper)", name);
}
Err(e) => println!("{}: error — {}", name, e),
}
}
}
What this does: For each object, calls gen.from_shape() to generate up to 50 grasp candidates ranked by force closure quality. Each grasp has a grasp_type (Antipodal, TopDown, or SideGrasp), a quality score (0.0 to 1.0), a grasp_pose (the gripper TCP pose), and an approach_direction.
Why: Different shapes produce different grasp distributions. Cylinders favor antipodal grasps around the circumference. Cuboids produce grasps aligned with each face. Spheres are hardest — only small spheres that fit within the gripper opening are graspable. Force closure quality measures how well the grasp resists arbitrary external wrenches.
Step 3: Count Grasps by Type
#![allow(unused)]
fn main() {
let antipodal = grasps.iter()
.filter(|g| g.grasp_type == GraspType::Antipodal).count();
let topdown = grasps.iter()
.filter(|g| g.grasp_type == GraspType::TopDown).count();
let side = grasps.iter()
.filter(|g| g.grasp_type == GraspType::SideGrasp).count();
println!(" Types: {} antipodal, {} top-down, {} side", antipodal, topdown, side);
}
What this does: Categorizes the generated grasps. Antipodal grasps apply opposing forces through the object’s center. Top-down grasps approach from above. Side grasps approach from the side.
Why: Grasp type selection depends on the scene. If there is a table below the object, top-down grasps require enough clearance above. If there are objects on both sides, antipodal grasps along the free axis are preferred. Knowing the distribution helps you filter for the right approach direction.
Step 4: Compare Ranking Metrics
#![allow(unused)]
fn main() {
let bottle = Shape::Cylinder(0.03, 0.08);
let bottle_pose = Isometry3::translation(0.4, 0.0, 0.08);
let metrics = [
("Force Closure", GraspMetric::ForceClosureQuality),
("Distance from CoM", GraspMetric::DistanceFromCenterOfMass),
("Approach Angle", GraspMetric::ApproachAngle),
];
for (metric_name, metric) in &metrics {
let config = GraspConfig {
num_candidates: 20,
rank_by: *metric,
..Default::default()
};
if let Ok(grasps) = gen.from_shape(&bottle, &bottle_pose, config) {
let best = &grasps[0];
println!(
" {}: best quality={:.3}, type={:?}",
metric_name, best.quality, best.grasp_type,
);
}
}
Ok(())
}
}
What this does: Generates grasps for the same bottle using three different ranking metrics and compares which grasp type ranks first under each.
Why: ForceClosureQuality maximizes resistance to disturbances — best for heavy or slippery objects. DistanceFromCenterOfMass minimizes torque during lifting — best for tall objects that might tip. ApproachAngle prefers grasps with approach directions aligned with a preferred axis (typically vertical) — best when the scene constrains the approach direction.
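Swapping the ranking metric amounts to re-sorting the same candidate set under a different score. A sketch with a simplified stand-in for the grasp type (field and metric names here are illustrative, not kinetic's):

```rust
// Simplified grasp candidate with one value per ranking criterion.
#[derive(Clone)]
struct Candidate {
    force_closure: f64, // higher is better
    com_offset: f64,    // meters from center of mass, lower is better
    approach_tilt: f64, // radians from vertical, lower is better
}

// Sort descending by an arbitrary score so the best candidate comes first.
// "Lower is better" metrics are handled by negating them in the closure.
fn rank_by<F: Fn(&Candidate) -> f64>(mut cands: Vec<Candidate>, score: F) -> Vec<Candidate> {
    cands.sort_by(|a, b| score(b).partial_cmp(&score(a)).unwrap());
    cands
}
```

The same candidates can then lead with a different "best" grasp depending on the metric, which is exactly what Step 4 observes.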
Complete Code
See examples/grasp_planning.rs for the full listing combining all steps above.
What You Learned
- GripperType::parallel(max_opening, finger_depth) defines the gripper geometry
- GraspGenerator::new(gripper) creates a generator bound to the gripper model
- gen.from_shape(&shape, &pose, config) generates ranked grasp candidates
- Three grasp types: Antipodal, TopDown, SideGrasp
- Three ranking metrics: ForceClosureQuality, DistanceFromCenterOfMass, ApproachAngle
- Grasps are returned sorted by quality (best first)
Try This
- Increase num_candidates to 200 and observe more diverse grasp orientations
- Try a shape wider than max_opening (e.g., Shape::Sphere(0.05) with a 0.08 opening) and handle GraspError::NoGraspsFound
- Use GraspConfig { check_collision: Some(scene.clone()), .. } to filter out grasps that collide with the scene
- Combine with IK: for each grasp candidate, solve IK to check whether the arm can reach the grasp pose
Next
- Pick and Place — using grasps in a full pick-place pipeline
- Task Sequencing — composing grasps into multi-stage tasks
Pick and Place
What You’ll Learn
- Build a complete plan-to-execute pipeline for a UR5e
- Create a collision scene with obstacles
- Plan, time-parameterize, and execute a trajectory
- Verify joint limits and export the trajectory to JSON
- Use `PlanExecuteLoop` for a one-liner workflow
Prerequisites
Overview
This tutorial walks through kinetic’s full workflow from loading a robot to
exporting a trajectory. It covers scene construction, collision-aware planning,
per-joint trapezoidal time parameterization using the robot’s actual velocity and
acceleration limits, execution with a logging executor, joint limit validation,
frame tracking, and JSON export. Finally, it shows PlanExecuteLoop — a
one-liner that handles planning and execution together.
Step 1: Load Robot and Build Scene
use kinetic::prelude::*;
fn main() -> Result<()> {
let robot = Robot::from_name("ur5e")?;
println!("Loaded {} ({} DOF)", robot.name, robot.dof);
let mut scene = Scene::new(&robot)?;
scene.add_box("table", [0.4, 0.3, 0.02], [0.5, 0.0, -0.02]);
println!("Scene: {} objects", scene.num_objects());
What this does: Loads the UR5e from the built-in robot library and creates a scene with a table surface. The table is a thin box positioned just below the XY plane.
Why: Robot::from_name is the quickest way to load a supported robot — it bundles URDF, joint limits, and collision meshes. The scene ensures the planner avoids the table during path search.
Step 2: Plan a Collision-Free Path
#![allow(unused)]
fn main() {
let start = vec![0.0, -1.57, 0.0, -1.57, 0.0, 0.0];
let goal_joints = vec![1.0, -1.0, 0.5, -1.0, -0.5, 0.3];
let planner = Planner::new(&robot)?.with_scene(&scene);
let plan_result = planner.plan(&start, &Goal::joints(goal_joints.clone()))?;
println!(
"Planned: {} waypoints in {:.1}ms",
plan_result.waypoints.len(),
plan_result.planning_time.as_secs_f64() * 1000.0
);
}
What this does: Plans a joint-space path from start to goal_joints with scene awareness. The planner runs RRT-Connect to find a collision-free path.
Why: The planning result is a geometric path — a sequence of waypoint joint configurations. It has no timing information yet. The waypoint count depends on the scene complexity and path length; the planner inserts waypoints at collision-critical points.
Step 3: Time-Parameterize with Per-Joint Limits
#![allow(unused)]
fn main() {
let vel_limits = robot.velocity_limits();
let accel_limits = robot.acceleration_limits();
let timed = kinetic::trajectory::trapezoidal_per_joint(
&plan_result.waypoints,
&vel_limits,
&accel_limits,
).map_err(|e| KineticError::PlanningFailed(e.to_string()))?;
println!(
"Trajectory: {:.3}s, {} waypoints",
timed.duration.as_secs_f64(),
timed.waypoints.len()
);
}
What this does: Applies trapezoidal velocity profiling with per-joint limits. Each joint accelerates to its individual maximum velocity, cruises, then decelerates — respecting the robot’s actual hardware limits from the URDF.
Why: Unlike the simpler trapezoidal() which uses a single velocity limit for all joints, trapezoidal_per_joint uses the robot’s real limits. Shoulder joints are typically slower (2.175 rad/s on UR5e) while wrist joints are faster (2.61 rad/s). Using real limits produces trajectories that run at full speed without exceeding hardware constraints.
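The per-joint timing logic can be sketched independently of kinetic: compute each joint's minimum trapezoidal (or triangular) time for the segment displacement, then let the slowest joint pace the segment so all joints arrive together. The velocity values below follow the UR5e numbers quoted above; the acceleration limit is illustrative:

```python
import math

def min_time(d, vmax, amax):
    # minimum time to travel displacement d under velocity/acceleration limits
    d = abs(d)
    if d < vmax * vmax / amax:       # triangular profile: never reaches vmax
        return 2.0 * math.sqrt(d / amax)
    return d / vmax + vmax / amax    # trapezoid: accel + cruise + decel

# UR5e-like limits: shoulder joints slower, wrist joints faster (accel made up)
vel = [2.175, 2.175, 3.14, 3.14, 2.61, 2.61]
acc = [5.0] * 6
start = [0.0, -1.57, 0.0, -1.57, 0.0, 0.0]
goal  = [1.0, -1.0, 0.5, -1.0, -0.5, 0.3]

times = [min_time(g - s, v, a) for s, g, v, a in zip(start, goal, vel, acc)]
segment_time = max(times)            # the slowest joint paces the whole segment
print(f"segment time: {segment_time:.3f}s")
```

Here the shoulder-pan joint (1.0 rad at 2.175 rad/s) dominates; with a single shared limit every joint would be held to the slowest joint's velocity even on short moves.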
Step 4: Execute and Log
#![allow(unused)]
fn main() {
let mut executor = LogExecutor::new(ExecutionConfig {
rate_hz: 100.0,
..Default::default()
});
let exec_result = executor.execute_and_log(&timed)
.map_err(|e| KineticError::PlanningFailed(e.to_string()))?;
println!(
"Executed: {:?}, {} commands, {:.3}s",
exec_result.state, exec_result.commands_sent,
exec_result.actual_duration.as_secs_f64()
);
}
What this does: Creates a LogExecutor that records every joint command at 100 Hz. execute_and_log walks the trajectory, interpolating positions at each timestep, and stores all commands for later analysis.
Why: LogExecutor is a simulation executor for testing and validation. It implements the same Executor trait as hardware executors, so you can develop and test your pipeline without a real robot, then swap in a hardware executor for deployment.
Step 5: Validate Joint Limits
#![allow(unused)]
fn main() {
let commands = executor.commands();
let mut all_valid = true;
for cmd in commands {
for (j, &pos) in cmd.positions.iter().enumerate() {
if pos < robot.joint_limits[j].lower - 0.001
|| pos > robot.joint_limits[j].upper + 0.001
{
println!("WARNING: joint {} at {:.4} outside limits", j, pos);
all_valid = false;
}
}
}
if all_valid {
println!("All {} commands within joint limits", commands.len());
}
}
What this does: Iterates every logged command and verifies each joint position falls within the robot’s limits (with 1 mm tolerance for floating-point rounding).
Why: Joint limit validation is a safety check that catches bugs in planning or time parameterization. On real hardware, exceeding joint limits can trigger emergency stops or cause mechanical damage. Always validate before deploying to production.
Step 6: Export and Frame Tracking
#![allow(unused)]
fn main() {
// Track coordinate frames (like ROS tf2)
let mut tree = FrameTree::new();
tree.set_static_transform("base_link", "camera", Isometry3::translation(0.0, 0.0, 0.5));
// Verify final end-effector pose
let final_pose = kinetic::kinematics::forward_kinematics(
&robot, &planner.chain(), &exec_result.final_positions,
)?;
println!("Final EE: [{:.4}, {:.4}, {:.4}]",
final_pose.translation().x, final_pose.translation().y, final_pose.translation().z);
// Export trajectory to JSON
let json = trajectory_to_json(&timed);
println!("Exported: {} bytes JSON", json.len());
}
What this does: FrameTree manages parent-child transform relationships (like ROS tf2). trajectory_to_json serializes for visualization or interop.
Why: Frame tracking keeps sensor-to-robot calibrations organized. JSON export enables replay, analysis in external tools, and archival.
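Frame composition itself is simple to sketch. The translation-only example below is illustrative and does not use kinetic's `FrameTree` API (which composes full SE(3) transforms):

```python
# Static transforms keyed by (parent, child); translation-only for brevity.
transforms = {("base_link", "camera"): (0.0, 0.0, 0.5)}

def compose(a, b):
    # chain two translations; a full implementation composes rotations too
    return tuple(x + y for x, y in zip(a, b))

# An object detected 0.3 m in front of the camera, expressed in base_link:
cam_to_obj = (0.3, 0.0, 0.0)
base_to_obj = compose(transforms[("base_link", "camera")], cam_to_obj)
print(base_to_obj)  # (0.3, 0.0, 0.5)
```

Keeping the camera calibration in one tree means sensor detections convert to robot-frame goals with a single lookup instead of hand-threaded offsets.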
Step 7: PlanExecuteLoop One-Liner
#![allow(unused)]
fn main() {
let planner2 = Planner::new(&robot)?;
let sim_executor = Box::new(SimExecutor::default());
let mut pel = PlanExecuteLoop::new(planner2, sim_executor);
let pel_result = pel.move_to(&start, &Goal::joints(goal_joints))?;
println!(
"PlanExecuteLoop: {:.1}ms, {} replans",
pel_result.total_duration.as_secs_f64() * 1000.0,
pel_result.replans
);
Ok(())
}
}
What this does: PlanExecuteLoop wraps a planner and executor into a single object. move_to plans, time-parameterizes, and executes in one call, with automatic replanning if the first attempt fails or the environment changes.
Why: For applications that do not need fine-grained control over each stage, PlanExecuteLoop reduces the pipeline to a single method call. It handles retries, replanning on collision, and trajectory validation internally.
Complete Code
See examples/plan_and_execute.rs for the full listing combining all steps above.
What You Learned
- `Robot::from_name("ur5e")` loads a built-in robot with correct limits
- `trapezoidal_per_joint` respects each joint’s individual velocity/acceleration limits
- `LogExecutor` records commands; `FrameTree` tracks frames; `trajectory_to_json` exports
- `PlanExecuteLoop` wraps the full pipeline in a single `move_to` call
Try This
- Replace `LogExecutor` with `SimExecutor` and compare the execution results
- Add a second motion from `goal` back to `start` and concatenate the trajectories
- Use `TrajectoryValidator` to check velocity and acceleration limits before execution
- Call `planner.fk(&exec_result.final_positions)` to verify the arm reached the goal pose
Next
- Task Sequencing — multi-stage pick-place sequences
- GPU Optimization — GPU-accelerated trajectory optimization
Task Sequencing
What You’ll Learn
- Compose multi-stage manipulation sequences with `Task::sequence`
- Use `Task::move_to`, `Task::gripper`, `Task::pick`, and `Task::place`
- Plan an entire pick-and-place sequence as a single task
- Inspect per-stage trajectories and scene modifications
- Validate the complete solution before execution
Prerequisites
Overview
Real manipulation tasks involve multiple stages: move to approach pose, open
gripper, descend, close gripper, retreat, move to place location, release. Rather
than planning each stage manually and stitching them together, kinetic’s Task
enum lets you declare the sequence declaratively. The task planner handles stage
transitions, joint continuity between stages, and scene modifications (attaching
and detaching objects). This tutorial builds a multi-stage pick-place task from
primitives.
Step 1: Set Up Robot and Scene
use std::sync::Arc;
use nalgebra::{Isometry3, Vector3};
use kinetic::prelude::*;
use kinetic_task::{Task, PickConfig, PlaceConfig, Approach};
fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
let robot = Arc::new(Robot::from_name("ur5e")?);
let mut scene = Scene::new(&robot)?;
scene.add_box("table", [0.6, 0.4, 0.02], [0.5, 0.0, -0.02]);
scene.add_box("cup", [0.03, 0.03, 0.05], [0.4, 0.1, 0.05]);
let scene = Arc::new(scene);
What this does: Creates a UR5e robot and a scene with a table and a cup object. Both are wrapped in Arc because Task variants hold shared references for recursive planning.
Why: The scene contains both obstacles (table) and manipulable objects (cup). During pick, the cup will be “attached” to the gripper link in the scene model; during place, it will be “detached” to its target pose. Arc enables this shared state across task stages.
Step 2: Build a Task Sequence
#![allow(unused)]
fn main() {
let home_goal = Goal::joints([0.0, -1.57, 0.0, -1.57, 0.0, 0.0]);
let task = Task::sequence(vec![
// Stage 1: Move to home position
Task::move_to(&robot, home_goal.clone()),
// Stage 2: Open gripper
Task::gripper(0.08),
// Stage 3: Pick the cup
Task::pick(&robot, &scene, PickConfig {
object: "cup".into(),
grasp_poses: vec![Isometry3::translation(0.4, 0.1, 0.05)],
approach: Approach::linear(-Vector3::z(), 0.10),
retreat: Approach::linear(Vector3::z(), 0.05),
gripper_open: 0.08,
gripper_close: 0.04,
}),
// Stage 4: Move to place location
Task::move_to(&robot, Goal::joints([0.8, -1.0, 0.5, -1.0, -0.5, 0.2])),
// Stage 5: Place the cup
Task::place(&robot, &scene, PlaceConfig {
object: "cup".into(),
target_pose: Isometry3::translation(0.5, -0.2, 0.05),
approach: Approach::linear(-Vector3::z(), 0.08),
retreat: Approach::linear(Vector3::z(), 0.10),
gripper_open: 0.08,
}),
// Stage 6: Return home
Task::move_to(&robot, home_goal),
]);
}
What this does: Declares six stages chained so each starts where the previous ended. Task::pick internally generates sub-stages (approach, gripper close, retreat) and handles scene attachment.
Why: Declarative composition separates what from how. You describe intent; the planner handles path planning, IK, and collision checking per sub-stage.
Step 3: Plan the Complete Task
#![allow(unused)]
fn main() {
let start_joints = [0.0, -1.57, 0.0, -1.57, 0.0, 0.0];
let solution = task.plan(&start_joints)?;
println!("Task planned in {:.1}ms", solution.total_planning_time.as_secs_f64() * 1000.0);
println!("Total duration: {:.3}s", solution.total_duration.as_secs_f64());
println!("Stages: {}", solution.stages.len());
}
What this does: Recursively plans each sub-task, threading the final joint configuration of each stage as the start of the next.
Why: Single-call planning ensures joint continuity — stage N ends where stage N+1 begins, preventing discontinuities on real hardware.
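The continuity-threading idea can be sketched with a stand-in planner. The names and return shape below are hypothetical, not the `kinetic_task` internals:

```python
# Stand-in planner: records start/end; a real stage would plan a trajectory.
def plan_stage(name, start, goal):
    return {"name": name, "start": start, "end": goal}

stages = [("approach", [0.2] * 6), ("grasp", [0.4] * 6), ("retreat", [0.1] * 6)]
current = [0.0] * 6          # the caller's start_joints
solution = []
for name, goal in stages:
    stage = plan_stage(name, current, goal)
    solution.append(stage)
    current = stage["end"]   # continuity: the next stage starts exactly here

# every stage boundary matches, so there are no joint-space jumps
print(all(a["end"] == b["start"] for a, b in zip(solution, solution[1:])))  # True
```

Planning stages independently (each from a fixed home pose, say) would break this invariant and produce discontinuous commands at stage boundaries.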
Step 4: Inspect Per-Stage Results
#![allow(unused)]
fn main() {
for stage in &solution.stages {
let duration = stage.trajectory.as_ref()
.map(|t| format!("{:.3}s", t.duration.as_secs_f64()))
.unwrap_or_else(|| "instant".to_string());
let info = if let Some(width) = stage.gripper_command {
format!("gripper → {:.0}mm", width * 1000.0)
} else if let Some(diff) = &stage.scene_diff {
format!("{:?}", diff)
} else {
format!("{} waypoints", stage.trajectory.as_ref()
.map(|t| t.waypoints.len()).unwrap_or(0))
};
println!(" {}: {} — {}", stage.name, duration, info);
}
}
What this does: Prints each stage’s name, duration, and details (gripper width, waypoint count, or scene modification).
Why: Verify the plan before execution — check durations, gripper timing, and scene modification order.
Step 5: Validate and Get Final Configuration
#![allow(unused)]
fn main() {
// Get the final joint configuration
if let Some(final_joints) = solution.final_joints() {
println!("Final joints: {:?}", &final_joints[..3]);
}
// Validate all trajectories against joint limits
let validator = TrajectoryValidator::from_robot(&robot);
let violations = solution.validate_trajectories(&validator);
if violations.is_empty() {
println!("All stages pass joint limit validation");
} else {
for (stage_name, viols) in &violations {
println!("WARNING: {} has {} violations", stage_name, viols.len());
}
}
Ok(())
}
}
What this does: Extracts the final joint state and validates all stages against joint/velocity/acceleration limits.
Why: final_joints() is useful for chaining tasks. validate_trajectories catches limit violations across all stages before execution.
Complete Code
The code above (Steps 1-5) forms the complete listing. Copy them into a single main() function. No separate example file exists for this tutorial — it is composed from the kinetic_task API.
What You Learned
- `Task::sequence(vec![...])` chains multiple task stages in order
- `Task::move_to` plans a joint-space motion via RRT
- `Task::gripper(width)` inserts a gripper open/close command
- `Task::pick` and `Task::place` handle approach, grasp/release, and retreat sub-stages
- `Approach::linear(direction, distance)` specifies approach/retreat motion direction and distance
- `task.plan(&start)` plans all stages with automatic joint continuity
- `solution.validate_trajectories()` checks all stages against hardware limits
Try This
- Add a `Task::cartesian_move` stage for a precise linear approach instead of joint-space planning
- Use `Task::pick_auto` to auto-generate grasps from the object’s shape instead of specifying `grasp_poses` manually
- Nest sequences: `Task::sequence(vec![pick_sequence, place_sequence])` for modular task composition
- Apply `apply_scene_diffs(&mut scene, &solution)` to update the scene after execution and verify the cup moved
Next
- GPU Optimization — GPU-accelerated trajectory optimization
- Grasp Planning — generating grasp candidates automatically
GPU Optimization
What You’ll Learn
- Configure GPU-accelerated trajectory optimization with `GpuConfig` presets
- Build an obstacle environment from sphere primitives
- Run parallel-seed optimization on the GPU via `GpuOptimizer`
- Handle GPU unavailability with graceful CPU fallback
- Interpret optimization results (goal error, trajectory cost, timing)
Prerequisites
Overview
Kinetic’s GPU trajectory optimizer uses wgpu compute shaders to evaluate many trajectory candidates simultaneously. Instead of the sequential sample-and-extend approach of RRT, it initializes dozens to hundreds of trajectory seeds, then applies parallel gradient descent to minimize a cost function that balances collision avoidance, smoothness, and goal reaching. This tutorial configures the optimizer, builds an obstacle environment, runs optimization, and inspects the result.
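The cost function described above can be sketched in a few lines. The weights match the configuration used in Step 3, but the `clearance` stand-in and the exact term shapes are illustrative, not kinetic's shader code:

```python
# clearance(q): stand-in for "minimum signed distance of the robot at
# configuration q"; negative means penetration.
def clearance(q):
    return min(q) - 0.1

def trajectory_cost(waypoints, sdf, goal, w_col=100.0, w_smooth=1.0, w_goal=50.0):
    collision = sum(max(0.0, -sdf(q)) for q in waypoints)        # penetration depth
    smooth = sum(sum((a - b) ** 2 for a, b in zip(q1, q2))
                 for q1, q2 in zip(waypoints, waypoints[1:]))    # squared step sizes
    goal_err = sum((a - b) ** 2 for a, b in zip(waypoints[-1], goal))
    return w_col * collision + w_smooth * smooth + w_goal * goal_err

goal = [1.0, 1.0]
straight = [[0.0, 0.0], [0.5, 0.5], [1.0, 1.0]]
detour   = [[0.0, 0.0], [0.2, 0.9], [1.0, 1.0]]
# with comparable clearance, the smoother candidate scores lower
print(trajectory_cost(straight, clearance, goal) < trajectory_cost(detour, clearance, goal))
```

Gradient descent on this scalar per seed, run for all seeds in parallel, is the core of the optimizer; the best-scoring seed becomes the returned trajectory.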
Step 1: Load Robot and Define Start/Goal
use kinetic::collision::SpheresSoA;
use kinetic::gpu::{GpuConfig, GpuOptimizer};
use kinetic::prelude::*;
fn main() -> kinetic::core::Result<()> {
let robot = Robot::from_name("franka_panda")?;
println!("Robot: {} ({} DOF)", robot.name, robot.dof);
let start: Vec<f64> = robot.joint_limits.iter()
.map(|l| (l.lower + l.upper) / 2.0)
.collect();
let goal: Vec<f64> = robot.joint_limits.iter()
.map(|l| l.lower + (l.upper - l.lower) * 0.75)
.collect();
What this does: Loads a 7-DOF Franka Panda and defines start (mid-range of each joint) and goal (75th percentile of each joint’s range). Both configurations are guaranteed to be within joint limits.
Why: GPU optimization works on any robot kinetic supports. The Panda’s 7-DOF redundancy makes it a good test case because there are many possible trajectories between any two configurations. Starting at mid-range ensures a well-conditioned initial configuration.
Step 2: Build the Obstacle Environment
#![allow(unused)]
fn main() {
let mut obstacles = SpheresSoA::new();
// Table surface: grid of spheres at z=0.4
for i in 0..10 {
for j in 0..10 {
let x = 0.2 + i as f64 * 0.06;
let y = -0.3 + j as f64 * 0.06;
obstacles.push(x, y, 0.4, 0.03, 0);
}
}
// Pillar obstacle
for k in 0..5 {
obstacles.push(0.4, 0.2, 0.5 + k as f64 * 0.05, 0.04, 1);
}
println!("Environment: {} obstacle spheres", obstacles.len());
}
What this does: Builds a table surface from a 10x10 grid of small spheres at z=0.4, plus a vertical pillar of 5 spheres. Each push call adds a sphere with (x, y, z, radius, group_id).
Why: The GPU optimizer uses a Signed Distance Field (SDF) constructed from obstacle spheres. The SDF is voxelized on the GPU at the resolution specified by GpuConfig::sdf_resolution. A denser sphere environment creates a more detailed SDF, producing smoother collision gradients for the optimizer to follow.
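What the voxelized SDF approximates is a signed distance query against the sphere set, which is easy to sketch on the CPU (illustrative, not kinetic's GPU code):

```python
import math

# Two of the pillar spheres from the environment above: (x, y, z, radius)
spheres = [(0.4, 0.2, 0.55, 0.04), (0.4, 0.2, 0.60, 0.04)]

def sdf(p):
    # distance to the nearest sphere surface; negative inside a sphere
    return min(math.dist(p, s[:3]) - s[3] for s in spheres)

print(round(sdf((0.4, 0.2, 0.7)), 3))  # 0.06: 6 cm clearance above the top sphere
print(sdf((0.4, 0.2, 0.6)) < 0)        # True: inside a sphere
```

The GPU precomputes this value on a voxel grid so each shader invocation gets distance (and, by finite differences, a gradient) with a single texture fetch instead of looping over all spheres.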
Step 3: Configure the GPU Optimizer
#![allow(unused)]
fn main() {
let config = GpuConfig {
num_seeds: 64, // parallel trajectory candidates
timesteps: 32, // waypoints per trajectory
iterations: 50, // gradient descent iterations
collision_weight: 100.0,
smoothness_weight: 1.0,
goal_weight: 50.0,
step_size: 0.01,
sdf_resolution: 0.03,
workspace_bounds: [-1.0, -1.0, -0.5, 1.0, 1.0, 1.5],
seed_perturbation: 0.3,
warm_start: None,
};
}
What this does: Configures the optimizer to run 64 trajectory seeds in parallel, each with 32 waypoints, for 50 gradient descent iterations. The cost function weights collision avoidance at 100x, smoothness at 1x, and goal reaching at 50x.
Why: The trade-offs are:
- `num_seeds`: More seeds explore more of configuration space but use more GPU memory. 64 is a good balance; use 512 for quality-critical offline planning.
- `timesteps`: More waypoints produce smoother trajectories. 32 is standard; 48 for complex environments.
- `iterations`: More iterations improve convergence. 50 is fast; 200 for high quality.
- `collision_weight`: Higher values push harder away from obstacles. Too high causes the optimizer to “flee” obstacles rather than navigate around them.
- `sdf_resolution`: Finer resolution (0.01) is more accurate but uses more GPU memory. Coarser (0.05) is faster.
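How `sdf_resolution` and `workspace_bounds` interact is plain grid arithmetic. The sketch below assumes one `f32` per voxel, which is a guess about the storage layout:

```python
# Voxel grid size for an SDF over the workspace (one f32 per voxel assumed).
def voxel_count(bounds, res):
    xmin, ymin, zmin, xmax, ymax, zmax = bounds
    return (round((xmax - xmin) / res)
            * round((ymax - ymin) / res)
            * round((zmax - zmin) / res))

bounds = [-1.0, -1.0, -0.5, 1.0, 1.0, 1.5]   # the tutorial's workspace_bounds
for res in (0.05, 0.03, 0.01):
    n = voxel_count(bounds, res)
    print(f"res={res}: {n} voxels, ~{n * 4 / 1e6:.1f} MB as f32")
```

Halving the resolution multiplies voxel count by roughly eight, which is why dropping from 0.03 to 0.01 is a memory decision, not just an accuracy one.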
Step 4: Presets and Initialization
Kinetic provides three presets: GpuConfig::balanced() (default), GpuConfig::speed() (32 seeds, 24 steps, 30 iters — for real-time replanning), and GpuConfig::quality() (512 seeds, 48 steps, 200 iters — for offline planning). Start with a preset and adjust individual parameters only if needed.
#![allow(unused)]
fn main() {
let optimizer = match GpuOptimizer::new(config) {
Ok(opt) => opt,
Err(e) => {
eprintln!("GPU init failed: {} (needs Vulkan/Metal/DX12)", e);
return Ok(()); // exit here; in production, fall back to CPU-based RRT planning
}
};
let trajectory = optimizer.optimize(&robot, &obstacles, &start, &goal)?;
}
What this does: Initializes wgpu, compiles compute shaders, and runs optimization. Both can fail if no GPU is available.
Why: Always handle GPU failure gracefully — CI servers and SSH sessions lack GPU access. Fall back to CPU-based RRT planning for portability.
Step 5: Inspect Results
#![allow(unused)]
fn main() {
println!("Trajectory waypoints: {}", trajectory.len());
if trajectory.len() >= 2 {
let first = trajectory.waypoint(0);
let last = trajectory.waypoint(trajectory.len() - 1);
// Check goal reaching accuracy
let goal_err: f64 = last.positions.0.iter()
.zip(goal.iter())
.map(|(a, b)| (a - b).powi(2))
.sum::<f64>()
.sqrt();
println!("Goal error (joint-space L2): {:.4} rad", goal_err);
}
Ok(())
}
}
What this does: Prints the number of trajectory waypoints and computes the L2 distance between the final waypoint and the goal configuration. A goal error below 0.01 rad indicates successful convergence.
Why: Unlike RRT which guarantees the exact goal configuration is reached (within tolerance), GPU optimization produces a trajectory that approaches the goal as determined by the goal_weight. Higher goal_weight forces tighter goal reaching at the possible cost of less smooth trajectories. Always check goal_err to verify the trajectory is usable.
Complete Code
See examples/gpu_optimize.rs for the full listing combining all steps above.
What You Learned
- `GpuConfig` controls seeds, timesteps, iterations, cost weights, and SDF resolution
- Three presets: `balanced()` (default), `speed()` (real-time), `quality()` (offline)
- `GpuOptimizer::new(config)` initializes wgpu and compiles compute shaders
- `optimizer.optimize(&robot, &obstacles, &start, &goal)` runs parallel optimization
- Always handle GPU initialization failure for portability
- Check the goal error — GPU optimization approaches the goal rather than hitting it exactly
- Obstacle environments are built from `SpheresSoA`, which becomes a GPU-side SDF
Try This
- Compare `GpuConfig::speed()` vs `GpuConfig::quality()` on the same problem — measure planning time and goal error
- Increase `num_seeds` to 256 and observe whether goal error decreases
- Set `warm_start: Some(initial_trajectory)` from a previous RRT solution to seed the optimizer with a known-good path
- Remove all obstacles and verify the optimizer produces a near-straight-line trajectory
- Try `GpuConfig { collision_weight: 0.0, .. }` to disable collision checking and see the difference in trajectory shape
Next
- Planning Basics — CPU-based RRT planning for comparison
- Pick and Place — using optimized trajectories in a full pipeline
Python Quickstart
Get up and running with kinetic from Python in under 5 minutes: load a robot, compute FK/IK, and plan your first motion – all with numpy arrays.
Install
pip install kinetic numpy
kinetic ships as a native extension (PyO3). No Rust toolchain needed.
The standard import
import kinetic
import numpy as np
All functions that accept or return arrays use numpy.ndarray.
Load a robot
kinetic ships 54 built-in robots. Load one by name:
robot = kinetic.Robot("ur5e")
print(robot.name) # "ur5e"
print(robot.dof) # 6
print(robot.joint_names) # ['shoulder_pan', 'shoulder_lift', ...]
Other built-ins: franka_panda, kuka_iiwa7, kinova_gen3. Custom URDFs:
robot = kinetic.Robot.from_urdf("/path/to/my_robot.urdf")
Forward kinematics
Pass joint angles as a numpy array, get a 4x4 SE(3) matrix back:
joints = np.array([0.0, -1.57, 0.0, -1.57, 0.0, 0.0])
pose = robot.fk(joints)
print(pose.shape) # (4, 4)
print(pose[:3, 3]) # XYZ position of the end-effector
J = robot.jacobian(joints)
print(J.shape) # (6, 6)
Inverse kinematics
IK finds joint angles that reach a target pose (DLS, 8 random restarts):
target_pose = robot.fk(np.array([0.5, -1.0, 0.5, -0.5, 0.5, 0.0]))
solution = robot.ik(target_pose)
print(solution) # array of 6 joint values
# With a seed for deterministic results
solution = robot.ik(target_pose, seed=np.zeros(6))
For full control, ik_config returns a dict with convergence info:
result = robot.ik_config(
target_pose, solver="auto", max_iterations=300, num_restarts=10,
)
print(result["converged"]) # True
print(result["joints"]) # numpy array
print(result["position_error"]) # meters
Named poses and manipulability
print(robot.named_poses) # ['home', 'ready', 'zero', ...]
home = robot.named_pose("home")
m = robot.manipulability(joints)
print(f"manipulability = {m:.4f}") # 0 = singular
Batch operations
Process many configurations at once – much faster than a Python loop:
configs = np.random.uniform(-1.5, 1.5, size=(100, robot.dof))
poses = robot.batch_fk(configs) # 100 FK calls in one shot
print(len(poses), poses[0].shape) # 100, (4, 4)
targets = [robot.fk(c) for c in configs[:10]]
results = robot.batch_ik(targets, solver="auto")
for r in results:
if r is not None:
print(f"converged={r['converged']}, error={r['position_error']:.6f}")
Plan a motion
The fastest way to plan – one function call:
start = np.array([0.0, -1.57, 0.0, -1.57, 0.0, 0.0])
goal = kinetic.Goal.joints(np.array([0.5, -1.0, 0.5, -0.5, 0.5, 0.0]))
traj = kinetic.plan("ur5e", start, goal)
print(f"{traj.num_waypoints} waypoints, {traj.duration:.2f}s")
For repeated planning, create a Planner once and reuse it:
planner = kinetic.Planner(robot)
traj = planner.plan(start, goal)
Choose from 8 planner algorithms:
planner = kinetic.Planner(robot, planner_type="rrt_star") # asymptotically optimal
# Options: rrt_connect (default), rrt_star, bi_rrt_star, bitrrt, est, kpiece, prm, gcs
Goals can be joint-space, Cartesian, or named:
goal_pose = kinetic.Goal.pose(target_pose) # 4x4 numpy matrix
goal_named = kinetic.Goal.named("home") # named configuration
Sample the trajectory
Trajectories are time-parameterized. Sample at any time:
for t in np.linspace(0, traj.duration, 50):
joints_at_t = traj.sample(t)
print(f"t={t:.3f}s joints={joints_at_t}")
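`sample()` interpolates between stored waypoints and clamps out-of-range times. The sketch below assumes linear interpolation, which is an assumption about the exact scheme kinetic uses:

```python
import bisect

# Toy 2-DOF trajectory: waypoint times and positions
times = [0.0, 0.5, 1.0]
positions = [[0.0, 0.0], [0.5, 1.0], [1.0, 1.0]]

def sample(t):
    t = min(max(t, times[0]), times[-1])            # clamp out-of-range times
    i = max(bisect.bisect_right(times, t) - 1, 0)   # segment containing t
    if i >= len(times) - 1:
        return positions[-1]
    u = (t - times[i]) / (times[i + 1] - times[i])  # fraction through the segment
    return [a + u * (b - a) for a, b in zip(positions[i], positions[i + 1])]

print(sample(0.25))  # [0.25, 0.5]
print(sample(2.0))   # clamped to the final waypoint: [1.0, 1.0]
```

Because sampling is decoupled from waypoint spacing, you can drive a controller at any rate regardless of how many waypoints the planner produced.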
Using help()
Every class has docstrings with text_signature, so built-in help works:
help(kinetic.Robot)
help(kinetic.Robot.ik)
help(kinetic.Planner.plan)
Next
- Planning in Python – scenes, constraints, Cartesian planning
- Trajectories and NumPy – export, visualize, validate
- Dynamics – gravity compensation, inverse/forward dynamics
- GPU Acceleration – parallel trajectory optimization
- Hardware Execution – deploy to real robots
Planning in Python
Collision-aware planning, constrained motion, Cartesian straight-line paths, and error handling.
Setup
import kinetic
import numpy as np
robot = kinetic.Robot("ur5e")
start = np.array([0.0, -1.57, 0.0, -1.57, 0.0, 0.0])
Choose a planner
kinetic ships 8 planning algorithms. Pass planner_type to Planner:
# Default: RRT-Connect (fastest for most problems)
planner = kinetic.Planner(robot)
# Asymptotically optimal (better paths, slower)
planner = kinetic.Planner(robot, planner_type="rrt_star")
# All options: rrt_connect, rrt_star, bi_rrt_star, bitrrt, est, kpiece, prm, gcs
Not sure which to use? See the Planner Selection Guide.
Build a scene with obstacles
A Scene holds collision objects. Each has a Shape and a 4x4 pose matrix:
scene = kinetic.Scene(robot)
# Table (1m x 1m x 2cm)
table_pose = np.eye(4)
table_pose[2, 3] = 0.4
scene.add("table", kinetic.Shape.cuboid(0.5, 0.5, 0.01), table_pose)
# Mug (cylinder r=4cm, h=12cm)
mug_pose = np.eye(4)
mug_pose[0, 3], mug_pose[2, 3] = 0.3, 0.46
scene.add("mug", kinetic.Shape.cylinder(0.04, 0.06), mug_pose)
# Ball (sphere r=5cm)
ball_pose = np.eye(4)
ball_pose[:3, 3] = [0.5, 0.2, 0.5]
scene.add("ball", kinetic.Shape.sphere(0.05), ball_pose)
print(scene.num_objects) # 3
Collision queries
print(f"In collision: {scene.check_collision(start)}")
print(f"Clearance: {scene.min_distance(start):.4f} m")
Plan with a scene
Pass the scene to Planner – it avoids all obstacles automatically:
planner = kinetic.Planner(robot, scene=scene, timeout=5.0)
goal = kinetic.Goal.joints(np.array([1.0, -1.2, 0.8, -0.5, 0.7, 0.0]))
traj = planner.plan(start, goal)
print(f"{traj.num_waypoints} waypoints, {traj.duration:.2f}s")
Scenes are mutable: scene.remove("ball"), scene.clear().
Constrained planning
Constraints restrict the motion during planning:
# Keep EE level (Z-axis up), 0.1 rad tolerance
keep_level = kinetic.Constraint.orientation(
link="tool0", axis=[0.0, 0.0, 1.0], tolerance=0.1,
)
# Restrict EE height to [0.45m, 1.2m]
stay_above = kinetic.Constraint.position_bound(
link="tool0", axis="z", min=0.45, max=1.2,
)
goal = kinetic.Goal.joints(np.array([0.8, -1.0, 0.5, -1.0, 0.8, 0.0]))
traj = planner.plan_constrained(
start, goal, constraints=[keep_level, stay_above],
)
print(f"Constrained: {traj.num_waypoints} waypoints")
Other constraint types:
kinetic.Constraint.joint(joint_index=2, min=-0.5, max=0.5)
kinetic.Constraint.visibility(
sensor_link="camera_link", target=[0.5, 0.0, 0.4], cone_angle=0.3,
)
Cartesian planning
Move the end-effector in a straight line through task space:
config = kinetic.CartesianConfig(
max_step=0.005, # 5mm interpolation step
jump_threshold=1.4, # reject large joint jumps
avoid_collisions=True,
collision_margin=0.02, # 2cm safety margin
)
target = robot.fk(np.array([0.6, -1.1, 0.6, -0.8, 0.6, 0.0]))
result = planner.plan_cartesian(start, kinetic.Goal.pose(target), config=config)
print(f"Achieved {result.fraction * 100:.1f}% of the path")
print(f"{result.trajectory.num_waypoints} waypoints")
fraction is 1.0 when the full straight line is achievable. Less than 1.0
means the robot hit a singularity or collision before reaching the goal.
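The fraction semantics can be sketched as stepping along the straight line until a waypoint fails its feasibility check (illustrative logic, not kinetic's implementation):

```python
# Step along the straight line; stop at the first infeasible waypoint.
def plan_line(num_steps, feasible):
    achieved = 0
    for i in range(1, num_steps + 1):
        if not feasible(i / num_steps):   # feasible = IK solved + collision-free
            break
        achieved = i
    return achieved / num_steps

print(plan_line(10, lambda s: True))      # 1.0: the full path is achievable
print(plan_line(10, lambda s: s < 0.65))  # 0.6: blocked about 60% of the way
```

A partial result is still usable: `result.trajectory` covers the achieved prefix, so you can execute it and replan from where the line was cut off.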
Allowed collisions
scene.allow_collision("robot_base", "table")
# plan as usual -- base-table contact is ignored
scene.disallow_collision("robot_base", "table")
Error handling
try:
traj = planner.plan(start, goal)
except RuntimeError as e:
print(f"Planning failed: {e}")
try:
solution = robot.ik(target)
except ValueError as e:
print(f"IK failed: {e}")
Next
- Trajectories and NumPy – export, time-parameterize, plot
- Servo Control – real-time reactive control
Trajectories and NumPy
Export trajectories to numpy arrays, apply time parameterizations, validate against joint limits, and plot with matplotlib.
Setup
import kinetic
import numpy as np
robot = kinetic.Robot("ur5e")
start = np.array([0.0, -1.57, 0.0, -1.57, 0.0, 0.0])
goal = kinetic.Goal.joints(np.array([1.0, -0.8, 0.6, -0.5, 0.8, 0.3]))
planner = kinetic.Planner(robot)
traj = planner.plan(start, goal)
Export to numpy arrays
to_numpy() returns three arrays – times, positions, and velocities:
times, positions, velocities = traj.to_numpy()
print(times.shape) # (N,) -- timestamps in seconds
print(positions.shape) # (N, 6) -- joint positions per waypoint
print(velocities.shape) # (N, 6) -- joint velocities per waypoint
Sample at arbitrary times
sample() interpolates at any time. Out-of-range values are clamped:
dt = 0.01
t = 0.0
while t <= traj.duration:
joints = traj.sample(t)
print(f"t={t:.3f} q={np.round(joints, 4)}")
t += dt
Time parameterization
Assign velocities and accelerations that respect physical limits. Four profiles are available:
vel_limits = np.array(robot.velocity_limits)
acc_limits = np.array(robot.acceleration_limits)
# Trapezoidal (bang-coast-bang)
traj_trap = traj.time_parameterize("trapezoidal", vel_limits, acc_limits)
print(f"Trapezoidal: {traj_trap.duration:.3f}s")
# Time-Optimal (TOTP -- fastest possible)
traj_totp = traj.time_parameterize("totp", vel_limits, acc_limits)
print(f"TOTP: {traj_totp.duration:.3f}s")
# Jerk-limited (smooth acceleration)
jerk_limits = acc_limits * 10.0
traj_smooth = traj.time_parameterize(
"jerk_limited", vel_limits, acc_limits, jerk_limits=jerk_limits,
)
print(f"Jerk-limited: {traj_smooth.duration:.3f}s")
# Cubic spline (C2 continuous)
traj_spline = traj.time_parameterize("cubic_spline", vel_limits, acc_limits)
print(f"Cubic spline: {traj_spline.duration:.3f}s")
Validate against limits
Returns a list of violation dicts – empty means valid:
pos_lower = np.full(6, -6.28)
pos_upper = np.full(6, 6.28)
violations = traj_trap.validate(pos_lower, pos_upper, vel_limits, acc_limits)
if not violations:
print("Trajectory is valid")
else:
for v in violations:
print(f"Violation: waypoint {v['waypoint']}, {v['type']}, "
f"joint {v['joint']}, value {v['value']:.4f}")
Blend two trajectories
Smooth transition between consecutive motions:
goal2 = kinetic.Goal.joints(start)
traj2 = planner.plan(np.array([1.0, -0.8, 0.6, -0.5, 0.8, 0.3]), goal2)
blended = traj.blend(traj2, blend_duration=0.2)
print(f"Blended: {blended.duration:.3f}s, {blended.num_waypoints} waypoints")
Plot with matplotlib
import matplotlib.pyplot as plt
times, positions, velocities = traj_trap.to_numpy()
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(10, 6), sharex=True)
for j in range(robot.dof):
ax1.plot(times, positions[:, j], label=robot.joint_names[j])
ax1.set_ylabel("Position (rad)")
ax1.legend(fontsize=8)
ax1.set_title("UR5e Joint Positions")
ax1.grid(True, alpha=0.3)
for j in range(robot.dof):
ax2.plot(times, velocities[:, j], label=robot.joint_names[j])
ax2.set_ylabel("Velocity (rad/s)")
ax2.set_xlabel("Time (s)")
ax2.legend(fontsize=8)
ax2.set_title("UR5e Joint Velocities")
ax2.grid(True, alpha=0.3)
plt.tight_layout()
plt.savefig("trajectory_plot.png", dpi=150)
plt.show()
Export and import (JSON / CSV)
Save trajectories to files and load them back:
# Export
json_str = traj_trap.to_json()
csv_str = traj_trap.to_csv()
# Write to file
with open("trajectory.json", "w") as f:
f.write(json_str)
with open("trajectory.csv", "w") as f:
f.write(csv_str)
# Import
traj_loaded = kinetic.Trajectory.from_json(json_str)
traj_loaded = kinetic.Trajectory.from_csv(csv_str)
print(f"Loaded: {traj_loaded.num_waypoints} wp, {traj_loaded.duration:.3f}s")
Waypoint access as lists
If you need plain Python lists instead of numpy:
waypoints = traj.positions() # list[list[float]]
for i, wp in enumerate(waypoints[:3]):
print(f"Waypoint {i}: {wp}")
Next
- Servo Control – real-time reactive control
- Pick and Place – full manipulation workflow
Servo Control in Python
High-frequency Cartesian twist and joint-jog commands for teleoperation and
reactive manipulation. Covers the Servo controller with collision avoidance
and RMP for multi-policy reactive control at 500 Hz.
Setup
import kinetic
import numpy as np
robot = kinetic.Robot("franka_panda")
scene = kinetic.Scene(robot)
table_pose = np.eye(4)
table_pose[2, 3] = 0.3
scene.add("table", kinetic.Shape.cuboid(0.5, 0.5, 0.01), table_pose)
Create a servo controller
servo = kinetic.Servo(robot, scene, rate_hz=500.0)
Send twist commands
send_twist takes a 6D array [vx, vy, vz, wx, wy, wz]:
twist = np.array([0.05, 0.0, 0.0, 0.0, 0.0, 0.0]) # 5 cm/s in +X
cmd = servo.send_twist(twist)
print(cmd["positions"]) # new joint positions (numpy array)
print(cmd["velocities"]) # new joint velocities (numpy array)
500 Hz servo loop
dt = 1.0 / servo.rate_hz
for i in range(1000): # 2 seconds
t = i * dt
vx = 0.03 * np.cos(2 * np.pi * t)
vy = 0.03 * np.sin(2 * np.pi * t)
cmd = servo.send_twist(np.array([vx, vy, 0, 0, 0, 0]))
if i % 100 == 0:
state = servo.state()
pos = state["ee_pose"][:3, 3]
print(f"t={t:.2f}s EE=[{pos[0]:.3f}, {pos[1]:.3f}, {pos[2]:.3f}]")
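Under the hood, a twist command maps to joint velocities through the Jacobian. A standalone sketch of the damped least-squares step on a toy 2-link planar Jacobian (illustrative only; Kinetic's servo additionally handles limits and collision scaling):

```python
import numpy as np

def twist_to_joint_velocities(J, twist, damping=0.05):
    """Damped least squares: qd = J^T (J J^T + lambda^2 I)^-1 twist.

    The damping term bounds joint velocities near singularities,
    at the cost of a small tracking error.
    """
    m = J.shape[0]
    return J.T @ np.linalg.solve(J @ J.T + damping**2 * np.eye(m), twist)

# Toy planar 2-link Jacobian (unit link lengths) at q = [0, pi/2]
q1, q2 = 0.0, np.pi / 2
J = np.array([[-np.sin(q1) - np.sin(q1 + q2), -np.sin(q1 + q2)],
              [ np.cos(q1) + np.cos(q1 + q2),  np.cos(q1 + q2)]])
twist = np.array([0.05, 0.0])   # 5 cm/s in +X
qd = twist_to_joint_velocities(J, twist)
print(qd)
```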
Joint jogging and state
cmd = servo.send_joint_jog(joint_index=3, velocity=0.1)
state = servo.state()
print(state["joint_positions"]) # numpy (dof,)
print(state["ee_pose"]) # numpy (4, 4)
print(state["manipulability"]) # float
print(state["near_singularity"]) # bool
print(state["near_collision"]) # bool
Set initial state
home = robot.named_pose("home")
if home is not None:
servo.set_state(positions=home, velocities=np.zeros(robot.dof))
RMP: multi-policy reactive control
Riemannian Motion Policies combine multiple objectives into one consistent joint-space command via metric-weighted averaging:
rmp = kinetic.RMP(robot)
target = robot.fk(np.array([0.3, -0.5, 0.2, -1.5, 0.1, 1.0, 0.0]))
rmp.add(kinetic.Policy.reach_target(target, gain=10.0))
rmp.add(kinetic.Policy.avoid_obstacles(scene, influence_distance=0.15, gain=25.0))
rmp.add(kinetic.Policy.joint_limit_avoidance(margin=0.1, gain=15.0))
rmp.add(kinetic.Policy.singularity_avoidance(threshold=0.02, gain=5.0))
rmp.add(kinetic.Policy.avoid_self_collision(gain=20.0))
rmp.add(kinetic.Policy.damping(coefficient=0.5))
print(f"RMP: {rmp.num_policies} policies, {rmp.dof} DOF")
RMP control loop
dt = 0.002 # 500 Hz
q = np.zeros(robot.dof)
qd = np.zeros(robot.dof)
for step in range(1000):
cmd = rmp.compute(q, qd, dt)
q, qd = cmd["positions"], cmd["velocities"]
if step % 100 == 0:
ee = robot.fk(q)[:3, 3]
m = robot.manipulability(q)
print(f"step {step:4d} EE=[{ee[0]:.3f}, {ee[1]:.3f}, {ee[2]:.3f}] "
f"manip={m:.4f}")
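The combination step is a metric-weighted average: each policy proposes an acceleration a_i with a metric (importance matrix) A_i, and the blended command is a = (sum A_i)^+ (sum A_i a_i). A minimal numpy sketch of that blend:

```python
import numpy as np

def combine_policies(accels, metrics):
    """Metric-weighted average: a = pinv(sum A_i) @ sum(A_i @ a_i)."""
    A_sum = sum(metrics)
    b = sum(A @ a for A, a in zip(metrics, accels))
    return np.linalg.pinv(A_sum) @ b

# Two 2-DOF policies: a strongly weighted reach and a weak damping term
a_reach = np.array([1.0, 0.0])
A_reach = 10.0 * np.eye(2)      # high metric: reach dominates
a_damp = np.array([0.0, -1.0])
A_damp = 1.0 * np.eye(2)        # low metric
a = combine_policies([a_reach, a_damp], [A_reach, A_damp])
print(a)                        # ~[0.909, -0.091]: weighted toward reach
```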
Next
- Pick and Place – full manipulation workflow
- Trajectories and NumPy – export and visualize paths
Pick and Place in Python
Full pick-and-place workflow: scene, grasps, attach/detach, and Task composition.
graph LR
Home[Home] -->|Plan| Pre[Pre-grasp<br/>above object]
Pre -->|Cartesian| Grasp[Grasp pose]
Grasp -->|Close gripper| Attach[Attach object]
Attach -->|Plan| Transport[Transport<br/>to place]
Transport -->|Cartesian| Place[Place pose]
Place -->|Open gripper| Detach[Detach object]
Detach -->|Plan| Home2[Home]
style Home fill:#4a9eff,color:#fff
style Grasp fill:#ff6b6b,color:#fff
style Place fill:#3ddc84,color:#000
style Home2 fill:#4a9eff,color:#fff
Setup
import kinetic
import numpy as np
robot = kinetic.Robot("ur5e")
home = np.array([0.0, -1.57, 0.0, -1.57, 0.0, 0.0])
Build the workspace
scene = kinetic.Scene(robot)
table_pose = np.eye(4)
table_pose[2, 3] = 0.4
scene.add("table", kinetic.Shape.cuboid(0.5, 0.5, 0.01), table_pose)
# Object to pick: 5cm x 5cm x 8cm box on the table
box_half = [0.025, 0.025, 0.04]
box_pose = np.eye(4)
box_pose[0, 3], box_pose[2, 3] = 0.4, 0.45
scene.add("box", kinetic.Shape.cuboid(*box_half), box_pose)
# Place location: 30cm to the right
place_pose = np.eye(4)
place_pose[:3, 3] = [0.4, 0.3, 0.45]
Generate grasp candidates
gripper = kinetic.GripperType.parallel(max_opening=0.08, finger_depth=0.03)
generator = kinetic.GraspGenerator(gripper)
grasps = generator.from_shape(
shape_type="cuboid",
dimensions=box_half,
object_pose=box_pose,
num_candidates=100,
)
print(f"{len(grasps)} grasp candidates")
grasps.sort(key=lambda g: g.quality_score, reverse=True)
best = grasps[0]
print(f"Best quality: {best.quality_score:.3f}")
print(f"Grasp pose:\n{best.pose()}")
Plan approach
planner = kinetic.Planner(robot, scene=scene, timeout=5.0)
# Pre-grasp: 10cm above
pre_grasp = best.pose().copy()
pre_grasp[2, 3] += 0.10
traj_approach = planner.plan(home, kinetic.Goal.pose(pre_grasp))
print(f"Approach: {traj_approach.num_waypoints} wp, {traj_approach.duration:.2f}s")
# Straight-line descent to grasp
approach_end = traj_approach.sample(traj_approach.duration)
config = kinetic.CartesianConfig(max_step=0.002, collision_margin=0.01)
descent = planner.plan_cartesian(
approach_end, kinetic.Goal.pose(best.pose()), config=config,
)
print(f"Descent: {descent.fraction * 100:.0f}% achieved")
Pick: attach the object
After the gripper closes, attach the object to the robot link:
grasp_tf = np.eye(4)
grasp_tf[2, 3] = -0.04 # object center 4cm below flange
scene.attach("box", kinetic.Shape.cuboid(*box_half), grasp_tf, link_name="tool0")
print(f"Attached: {scene.num_attached}")
Transport
The planner includes the attached object in collision checks:
grasp_joints = descent.trajectory.sample(descent.trajectory.duration)
traj_transport = planner.plan(grasp_joints, kinetic.Goal.pose(place_pose))
print(f"Transport: {traj_transport.duration:.2f}s")
Place: detach the object
scene.detach("box", place_pose)
print(f"Attached: {scene.num_attached}") # 0
Return home
place_joints = traj_transport.sample(traj_transport.duration)
traj_return = planner.plan(place_joints, kinetic.Goal.joints(home))
print(f"Return: {traj_return.duration:.2f}s")
Compose with Task
For cleaner code, use Task to define each stage. Task.pick and Task.place
handle approach/retreat/gripper automatically:
approach = kinetic.Approach([0, 0, -1], 0.10) # 10cm straight down
retreat = kinetic.Approach([0, 0, 1], 0.05) # 5cm straight up
# Pick: approach, close gripper, retreat
pick = kinetic.Task.pick(
robot, scene, "box",
grasp_poses=[best.pose()],
approach=approach, retreat=retreat,
gripper_open=0.08, gripper_close=0.0,
)
# Place: approach, open gripper, retreat
place = kinetic.Task.place(
robot, scene, "box",
target_pose=place_pose,
approach=approach, retreat=retreat,
gripper_open=0.08,
)
# Compose into a full sequence
seq = kinetic.Task.sequence([
kinetic.Task.move_to(robot, kinetic.Goal.pose(pre_grasp)),
pick,
kinetic.Task.move_to(robot, kinetic.Goal.pose(place_pose)),
place,
kinetic.Task.move_to(robot, kinetic.Goal.named("home")),
])
sol = seq.plan(home)
print(f"Full sequence: {sol.num_stages} stages, {sol.total_duration:.2f}s")
print(f"Stages: {sol.stage_names}")
Individual tasks also work standalone:
t1 = kinetic.Task.move_to(robot, kinetic.Goal.named("home"))
t2 = kinetic.Task.gripper(width=0.08)
sol = t1.plan(home)
print(f"{sol.num_stages} stages, {sol.total_duration:.2f}s")
Verify with LogExecutor
executor = kinetic.LogExecutor(rate_hz=500.0)
executor.execute(traj_approach)
print(f"{executor.num_commands} commands logged")
executor.clear()
Suction gripper
suction = kinetic.GripperType.suction(cup_radius=0.02)
gen = kinetic.GraspGenerator(suction)
grasps = gen.from_shape("cuboid", box_half, box_pose, num_candidates=50)
Next
- Python Quickstart – review the basics
- Planning in Python – constraints and Cartesian paths
Dynamics in Python
Gravity compensation, inverse dynamics, forward dynamics, and mass matrix computation via the Featherstone articulated-body algorithm bridge.
Setup
import kinetic
import numpy as np
robot = kinetic.Robot("franka_panda")
dyn = kinetic.Dynamics(robot)
Gravity compensation
Torques needed to hold the robot stationary at a given configuration (countering gravity):
q = np.array([0.0, -0.3, 0.0, -1.5, 0.0, 1.2, 0.0])
tau_gravity = dyn.gravity_compensation(q)
print("Gravity torques (Nm):")
for name, t in zip(robot.joint_names, tau_gravity):
print(f" {name}: {t:+.3f}")
Shoulder joints carry more weight – you’ll see larger torques there.
Inverse dynamics
Given position, velocity, and acceleration, compute the required torques:
q = np.array([0.0, -0.3, 0.0, -1.5, 0.0, 1.2, 0.0])
qd = np.zeros(7) # zero velocity (static)
qdd = np.zeros(7) # zero acceleration
tau = dyn.inverse_dynamics(q, qd, qdd)
# With zero velocity and acceleration, this equals gravity compensation
print(f"ID torques: {np.round(tau, 3)}")
With non-zero acceleration (e.g., a step response):
qdd = np.array([0, 0, 0, 1.0, 0, 0, 0]) # accelerate joint 4
tau = dyn.inverse_dynamics(q, qd, qdd)
print(f"Torques with acceleration: {np.round(tau, 3)}")
Forward dynamics
Given position, velocity, and applied torques, compute the resulting accelerations:
tau_applied = dyn.gravity_compensation(q) # exactly counteract gravity
qdd = dyn.forward_dynamics(q, qd, tau_applied)
print(f"Accelerations: {np.round(qdd, 6)}")
# Should be near-zero (gravity compensated, no external forces)
Apply extra torque to see the robot accelerate:
tau_extra = dyn.gravity_compensation(q)
tau_extra[3] += 5.0 # 5 Nm extra on joint 4
qdd = dyn.forward_dynamics(q, qd, tau_extra)
print(f"Joint 4 acceleration: {qdd[3]:.3f} rad/s^2")
Mass matrix
The joint-space inertia matrix M(q) – symmetric positive-definite:
M = dyn.mass_matrix(q)
print(f"Mass matrix shape: {M.shape}") # (7, 7) for Panda
print(f"Diagonal (inertias): {np.diag(M).round(3)}")
print(f"Symmetric: {np.allclose(M, M.T)}")
print(f"Positive definite: {np.all(np.linalg.eigvalsh(M) > 0)}")
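These quantities are tied together by the manipulator equation tau = M(q) qdd + C(q, qd) qd + g(q), which also means forward and inverse dynamics are inverses of each other. A toy 1-DOF point-mass pendulum makes that round trip easy to verify (illustrative only, not the Featherstone bridge):

```python
import numpy as np

m, l, g = 1.0, 0.5, 9.81   # point mass (kg), rod length (m), gravity (m/s^2)

def inverse_dynamics_1dof(q, qd, qdd):
    """tau = I*qdd + g(q). A single joint has no Coriolis coupling."""
    inertia = m * l**2
    return inertia * qdd + m * g * l * np.sin(q)

def forward_dynamics_1dof(q, qd, tau):
    """qdd = (tau - g(q)) / I -- the inverse of the function above."""
    inertia = m * l**2
    return (tau - m * g * l * np.sin(q)) / inertia

q, qd = 0.7, 0.2
tau_g = inverse_dynamics_1dof(q, 0.0, 0.0)   # gravity-compensation torque
print(forward_dynamics_1dof(q, qd, tau_g))   # 0.0: gravity exactly cancelled
tau = inverse_dynamics_1dof(q, qd, 2.5)
print(forward_dynamics_1dof(q, qd, tau))     # 2.5: ID/FD round trip
```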
Trajectory feasibility check
Before executing on real hardware, verify that the required torques stay within the robot’s effort limits:
# Plan and time-parameterize
planner = kinetic.Planner(robot)
start = np.zeros(7)
goal = kinetic.Goal.joints(np.array([0.5, -0.5, 0.3, -1.0, 0.2, 1.0, 0.3]))
traj = planner.plan(start, goal)
# Check torques at each waypoint
times, positions, velocities = traj.to_numpy()
max_torques = np.zeros(robot.dof)
for i in range(len(times)):
q = positions[i]
qd = velocities[i]
# Approximate acceleration from finite differences
if i == 0 or i == len(times) - 1:
qdd = np.zeros(robot.dof)
else:
dt = times[i+1] - times[i-1]
qdd = (velocities[i+1] - velocities[i-1]) / dt if dt > 0 else np.zeros(robot.dof)
tau = dyn.inverse_dynamics(q, qd, qdd)
max_torques = np.maximum(max_torques, np.abs(tau))
print("Max torques per joint (Nm):")
for name, t in zip(robot.joint_names, max_torques):
print(f" {name}: {t:.2f}")
Next
- GPU Acceleration – parallel trajectory optimization
- Hardware Execution – deploy to real robots
- Servo Control – reactive control at 500 Hz
GPU Acceleration in Python
Parallel-seed trajectory optimization and batch collision checking using wgpu compute shaders. Runs on NVIDIA (Vulkan), AMD (Vulkan), Intel (Vulkan), and Apple Silicon (Metal). Automatically falls back to CPU if no GPU is found.
Check GPU availability
import kinetic
import numpy as np
print(f"GPU available: {kinetic.GpuOptimizer.gpu_available()}")
GPU trajectory optimizer
The optimizer runs hundreds of trajectory seeds in parallel and returns the best one (cuRobo-style):
robot = kinetic.Robot("ur5e")
# Create optimizer (auto-detects GPU, falls back to CPU)
opt = kinetic.GpuOptimizer(robot, preset="balanced")
print(f"Backend: {'GPU' if opt.is_gpu else 'CPU'}")
Three presets trade off speed vs quality:
| Preset | Seeds | Iterations | SDF Resolution | Best for |
|---|---|---|---|---|
| speed | 32 | 30 | 5cm | Real-time replanning |
| balanced | 128 | 100 | 2cm | General use |
| quality | 512 | 200 | 1cm | Offline, quality matters |
opt_fast = kinetic.GpuOptimizer(robot, preset="speed")
opt_quality = kinetic.GpuOptimizer(robot, preset="quality")
# Custom: override specific parameters
opt_custom = kinetic.GpuOptimizer(robot, num_seeds=256, iterations=150)
Optimize a trajectory
start = np.array([0.0, -1.57, 0.0, -1.57, 0.0, 0.0])
goal = np.array([1.0, -0.8, 0.6, -0.5, 0.8, 0.3])
traj = opt.optimize(start, goal)
print(f"{traj.num_waypoints} waypoints")
Optimize with obstacles
Pass a scene or raw obstacle spheres:
# Option 1: from a Scene
scene = kinetic.Scene(robot)
table_pose = np.eye(4)
table_pose[2, 3] = 0.4
scene.add("table", kinetic.Shape.cuboid(0.5, 0.5, 0.01), table_pose)
traj = opt.optimize(start, goal, scene=scene)
# Option 2: raw obstacle spheres (N, 4) array of [x, y, z, radius]
obstacles = np.array([
[0.4, 0.0, 0.5, 0.05], # sphere at (0.4, 0, 0.5) radius 0.05
[0.3, 0.2, 0.6, 0.08],
])
traj = opt.optimize(start, goal, obstacle_spheres=obstacles)
Time-parameterize the result
The optimizer returns a geometric path. Apply time parameterization before execution:
vel_limits = np.array(robot.velocity_limits)
acc_limits = np.array(robot.acceleration_limits)
traj_timed = traj.time_parameterize("totp", vel_limits, acc_limits)
print(f"Duration: {traj_timed.duration:.3f}s")
GPU batch collision checking
Check hundreds of configurations at once – much faster than checking one at a time:
checker = kinetic.GpuCollisionChecker(robot, scene)
print(f"Backend: {'GPU' if checker.is_gpu else 'CPU'}")
# Generate random configurations
configs = np.random.uniform(-1.5, 1.5, size=(1000, robot.dof))
results = checker.check_batch(configs)
n_collisions = sum(results["in_collision"])
print(f"{n_collisions}/{len(configs)} configs in collision")
print(f"Min clearance: {min(results['min_distances']):.4f} m")
Single-configuration check:
colliding, distance = checker.check_single(start)
print(f"In collision: {colliding}, clearance: {distance:.4f} m")
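The per-configuration work reduces to sphere-sphere distances, which vectorize well – that is what makes batching effective. A plain-numpy sketch of the core computation (illustrative only, not the GPU kernel):

```python
import numpy as np

def batch_sphere_check(robot_spheres, obstacles, margin=0.0):
    """robot_spheres: (B, R, 4) per-config [x, y, z, r]; obstacles: (O, 4).

    Returns (in_collision (B,), min_distance (B,)), where distance is
    surface-to-surface: ||c1 - c2|| - r1 - r2.
    """
    centers_r = robot_spheres[:, :, None, :3]    # (B, R, 1, 3)
    centers_o = obstacles[None, None, :, :3]     # (1, 1, O, 3)
    dist = np.linalg.norm(centers_r - centers_o, axis=-1)
    dist -= robot_spheres[:, :, None, 3] + obstacles[None, None, :, 3]
    min_dist = dist.min(axis=(1, 2))             # tightest pair per config
    return min_dist < margin, min_dist

# 2 configs, each approximated by one robot sphere; one obstacle sphere
robot = np.array([[[0.0, 0.0, 0.0, 0.1]], [[1.0, 0.0, 0.0, 0.1]]])
obs = np.array([[0.15, 0.0, 0.0, 0.1]])
hit, d = batch_sphere_check(robot, obs)
print(hit, d)   # config 0 overlaps (-0.05 m), config 1 is clear (0.65 m)
```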
Combining GPU optimization with planning
Use GPU optimization to refine an RRT path:
# Step 1: fast RRT-Connect for initial path
planner = kinetic.Planner(robot, scene=scene)
rrt_traj = planner.plan(start, kinetic.Goal.joints(goal), time_parameterize=False)
# Step 2: GPU refinement with warm-start (future API)
# For now, just compare quality
gpu_traj = opt.optimize(start, goal, scene=scene)
print(f"RRT: {rrt_traj.num_waypoints} waypoints")
print(f"GPU: {gpu_traj.num_waypoints} waypoints")
Next
- Hardware Execution – deploy trajectories to real robots
- Dynamics – torque feasibility checking
- Trajectories and NumPy – export and visualize
Hardware Execution in Python
Deploy trajectories to real robots using RealTimeExecutor. Streams
joint commands to your hardware driver at precise rates (up to 500 Hz)
via a Python callback, with optional feedback for deviation monitoring.
The pattern
sequenceDiagram
participant P as Python
participant E as RealTimeExecutor
participant H as Your Hardware
P->>E: execute(traj, send_cmd, feedback)
Note over E: Release GIL, start timing loop
loop Every 2ms (500Hz)
E->>E: Interpolate trajectory at t
E->>P: send_cmd(positions, velocities)
P->>H: set_joints(positions)
opt Feedback enabled
E->>P: feedback()
P->>H: get_positions()
H-->>P: actual positions
P-->>E: [q1, q2, ...]
E->>E: Check deviation < tolerance
end
E->>E: Sleep until next tick
end
E-->>P: result dict
Your callback receives numpy arrays of positions and velocities. Kinetic handles timing, interpolation, and safety monitoring.
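The executor's loop is essentially "interpolate, send, sleep until the next absolute tick". A simplified pure-Python sketch of that scheduling pattern (the real executor runs it in Rust with the GIL released):

```python
import time

def run_fixed_rate(duration_s, rate_hz, send_cmd):
    """Call send_cmd(t) at a fixed rate, scheduled against absolute time.

    Sleeping until `start + (i+1)*dt` (rather than a fixed dt each tick)
    prevents drift when send_cmd itself takes variable time.
    """
    dt = 1.0 / rate_hz
    start = time.perf_counter()
    n_ticks = int(duration_s * rate_hz)
    for i in range(n_ticks):
        send_cmd(i * dt)   # the real executor interpolates the trajectory here
        remaining = start + (i + 1) * dt - time.perf_counter()
        if remaining > 0:
            time.sleep(remaining)
    return n_ticks

sent = []
ticks = run_fixed_rate(0.05, 100.0, sent.append)   # 50 ms at 100 Hz
print(ticks, len(sent))                            # 5 5
```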
Basic execution
Define a callback that sends commands to your robot:
import kinetic
import numpy as np
robot = kinetic.Robot("ur5e")
# Your hardware driver callback
def send_command(positions, velocities):
# positions: numpy (6,) -- joint positions in radians
# velocities: numpy (6,) -- joint velocities in rad/s
print(f" cmd: {np.round(positions, 3)}")
# In real code: my_driver.set_joints(positions, velocities)
# Plan a trajectory
start = np.array([0.0, -1.57, 0.0, -1.57, 0.0, 0.0])
goal = kinetic.Goal.joints(np.array([0.5, -1.0, 0.5, -0.5, 0.5, 0.0]))
traj = kinetic.plan("ur5e", start, goal)
# Execute at 100 Hz
executor = kinetic.RealTimeExecutor(rate_hz=100)
result = executor.execute(traj, send_command)
print(f"State: {result['state']}")
print(f"Duration: {result['actual_duration']:.3f}s")
print(f"Commands sent: {result['commands_sent']}")
Safe execution with feedback
For real robot deployments, use RealTimeExecutor.safe() which enables
joint limit validation and requires a feedback source:
def send_command(positions, velocities):
my_driver.set_joints(positions, velocities)
def read_feedback():
"""Return current joint positions, or None if unavailable."""
pos = my_driver.get_joint_positions()
return list(pos) if pos is not None else None
executor = kinetic.RealTimeExecutor.safe(robot, rate_hz=500)
result = executor.execute(traj, send_command, feedback=read_feedback)
if result["max_deviation"] is not None:
print(f"Max tracking error: {result['max_deviation']:.4f} rad")
safe() automatically:
- Validates all waypoints against the robot’s joint limits before execution
- Requires the feedback callback
- Sets a tight command timeout (50ms)
- Aborts if position deviation exceeds 0.1 rad
Configuration
executor = kinetic.RealTimeExecutor(
rate_hz=500, # Command frequency (Hz)
position_tolerance=0.05, # Max allowed deviation (rad)
command_timeout_ms=50, # Abort if callback takes >50ms
timeout_factor=2.0, # Abort if execution takes >2x expected
require_feedback=True, # Feedback callback is mandatory
)
Error handling
The executor raises RuntimeError on failure. Common errors:
try:
result = executor.execute(traj, send_command, feedback=read_feedback)
except RuntimeError as e:
error_msg = str(e)
if "DeviationExceeded" in error_msg:
print("Robot drifted too far from planned path -- check hardware")
elif "CommandFailed" in error_msg:
print("Hardware driver returned an error")
elif "Timeout" in error_msg:
print("Execution took too long -- check for mechanical binding")
elif "empty trajectory" in error_msg:
print("Trajectory has no waypoints")
else:
print(f"Execution error: {e}")
Result dict
The execute() method returns a dict:
| Key | Type | Description |
|---|---|---|
| state | str | "Completed", "Error", etc. |
| actual_duration | float | Wall-clock execution time (seconds) |
| expected_duration | float | Trajectory duration (seconds) |
| commands_sent | int | Number of commands streamed |
| final_positions | numpy | Last commanded joint positions |
| max_deviation | float or None | Largest position error (if feedback provided) |
Comparison: three executors
| Executor | Timing | Use case |
|---|---|---|
| SimExecutor | Instant (no sleep) | Unit tests, validation |
| LogExecutor | Instant + records commands | Debugging, replay analysis |
| RealTimeExecutor | Real wall-clock timing | Actual robot hardware |
# Validation first
sim = kinetic.SimExecutor()
sim_result = sim.execute(traj)
assert sim_result["state"] == "Completed"
# Inspect commands
log = kinetic.LogExecutor(rate_hz=500)
log.execute(traj)
commands = log.commands()
print(f"Would send {len(commands)} commands")
# Then deploy
executor = kinetic.RealTimeExecutor.safe(robot)
result = executor.execute(traj, send_command, feedback=read_feedback)
UR robot example
A complete example for Universal Robots (using ur_rtde or similar):
import kinetic
import numpy as np
# import rtde_control # your UR driver
robot = kinetic.Robot("ur5e")
# Plan
start = np.array([0.0, -1.57, 0.0, -1.57, 0.0, 0.0])
goal = kinetic.Goal.named("home")
traj = kinetic.plan("ur5e", start, goal)
# Validate before deploying
vel = np.array(robot.velocity_limits)
acc = np.array(robot.acceleration_limits)
violations = traj.validate(
np.full(6, -6.28), np.full(6, 6.28), vel, acc,
)
assert not violations, f"Trajectory has violations: {violations}"
# Check dynamics feasibility
dyn = kinetic.Dynamics(robot)
times, positions, velocities = traj.to_numpy()
for i in range(len(times)):
tau = dyn.inverse_dynamics(positions[i], velocities[i], np.zeros(6))
# Check against motor limits...
# Deploy
# rtde = rtde_control.RTDEControlInterface("192.168.1.100")
# def send(pos, vel):
# rtde.servoJ(pos.tolist(), vel.tolist(), 0.002, 0.1, 500)
# def feedback():
# return rtde.getActualQ()
# executor = kinetic.RealTimeExecutor.safe(robot, rate_hz=500)
# result = executor.execute(traj, send, feedback=feedback)
Next
- Dynamics – torque feasibility checking
- GPU Acceleration – faster trajectory optimization
- Pick and Place – full manipulation workflow
Planner Selection
How to pick the right planning algorithm for your problem.
Decision Flowchart
Start at the top of this text-based flowchart with your planning scenario and follow the branches down to the recommended planner.
START: What does your problem look like?
|
+-- Is the workspace mostly open (few obstacles)?
| YES --> Use RRT-Connect (fastest general-purpose planner)
|
+-- Are there narrow passages or tight spaces?
| YES --> Use EST or KPIECE (designed for constrained spaces)
|
+-- Do you need the shortest/optimal path?
| YES --> Do you have time (>100ms)?
| YES --> Use RRT* or BiRRT*
| NO --> Use RRT-Connect + shortcutting (default)
|
+-- Do you want cost-aware planning (avoid joint limits, singularities)?
| YES --> Use BiTRRT (cost-guided transition test)
|
+-- Will you solve many queries in the same environment?
| YES --> Use PRM (build roadmap once, query many times)
|
+-- Do you need globally optimal trajectories?
| YES --> Use GCS (pre-compute convex decomposition, then solve)
|
+-- Do you need straight-line Cartesian motion?
| YES --> Use Cartesian planner (LERP position, SLERP orientation)
|
+-- Are you planning for two arms simultaneously?
YES --> Use DualArmPlanner (combined C-space, inter-arm avoidance)
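The flowchart collapses to a first-match-wins rule list. A hypothetical helper in that spirit (the function name and flags are illustrative, not part of the Kinetic API):

```python
def recommend_planner(open_workspace=False, narrow_passages=False,
                      need_optimal=False, time_budget_ms=50.0,
                      cost_aware=False, multi_query=False,
                      globally_optimal=False, cartesian=False,
                      dual_arm=False):
    """First matching rule wins, in the same order as the flowchart."""
    if open_workspace:
        return "RRT-Connect"
    if narrow_passages:
        return "EST or KPIECE"
    if need_optimal:
        return "RRT* or BiRRT*" if time_budget_ms > 100 else \
               "RRT-Connect + shortcutting"
    if cost_aware:
        return "BiTRRT"
    if multi_query:
        return "PRM"
    if globally_optimal:
        return "GCS"
    if cartesian:
        return "Cartesian"
    if dual_arm:
        return "DualArmPlanner"
    return "RRT-Connect"   # sensible general-purpose default

print(recommend_planner(narrow_passages=True))
print(recommend_planner(need_optimal=True, time_budget_ms=500))
```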
Planner Comparison Table
| Planner | Speed | Optimality | Best For |
|---|---|---|---|
| RRT-Connect | Fast (<50ms) | Feasible | General pick-and-place, open spaces |
| RRT* | Slow (100ms+) | Asymptotically optimal | Offline planning, quality matters |
| BiRRT* | Medium (50-200ms) | Asymptotically optimal | Faster RRT* convergence |
| BiTRRT | Medium (50-200ms) | Cost-aware | Avoiding singularities, joint limits |
| EST | Medium | Feasible | Narrow passages, constrained spaces |
| KPIECE | Medium | Feasible | High-DOF, narrow passages |
| PRM | Build: slow / Query: fast | Feasible | Multi-query same environment |
| GCS | Build: slow / Query: fast | Globally optimal | Repeated queries, convex environments |
| Cartesian | Fast (<10ms) | N/A (follows path) | Linear/arc EE motion |
| DualArmPlanner | Slow (200ms+) | Feasible | Bimanual coordination |
| ConstrainedRRT | Medium-slow | Feasible | Orientation/position constraints |
| CHOMP | Medium | Locally optimal | Trajectory optimization |
| STOMP | Medium | Locally optimal | Stochastic optimization |
| GpuOptimizer | Fast (GPU) | Locally optimal | Parallel seed optimization |
Setting the Planner Type
#![allow(unused)]
fn main() {
use kinetic::prelude::*;
let robot = Robot::from_name("ur5e")?;
// Default (Auto selects RRT-Connect)
let planner = Planner::new(&robot)?;
// Explicit planner selection
let planner = Planner::new(&robot)?
.with_planner_type(PlannerType::RRTStar);
// With custom config
let planner = Planner::new(&robot)?
.with_planner_type(PlannerType::BiTRRT)
.with_config(PlannerConfig::offline());
}
import kinetic
robot = kinetic.Robot("ur5e")
planner = kinetic.Planner(robot)
# Python uses the default (Auto/RRT-Connect) planner
“My Plan Is Slow” Diagnosis
If planning is taking too long, work through these steps in order.
1. Check if start or goal is in collision.
The planner spends its entire time budget searching before returning
StartInCollision or GoalInCollision. Validate both before calling plan().
#![allow(unused)]
fn main() {
if planner.is_in_collision(&start) {
// Move the robot or adjust the start configuration
}
}
2. Reduce the iteration count.
Default is 10,000 iterations. For real-time use, try PlannerConfig::realtime()
which uses 2,000 iterations and a 10ms timeout.
3. Disable post-processing. Shortcutting and smoothing add time after the path is found. Disable them for latency-critical paths.
#![allow(unused)]
fn main() {
let config = PlannerConfig {
shortcut_iterations: 0,
smooth: false,
..PlannerConfig::default()
};
}
4. Narrow the workspace bounds. Setting workspace bounds eliminates exploration of unreachable regions.
5. Use GPU optimization for complex environments.
When the environment has many obstacles, GpuOptimizer runs hundreds of
parallel seeds on the GPU and can find solutions faster than tree-based planners.
6. Switch to PRM for repeated queries. If you plan many paths in the same environment, build a PRM roadmap once and query it repeatedly. Amortized query time drops significantly.
7. Profile with the benchmark suite.
Run cargo bench --bench planning_benchmarks to compare planners on your
specific robot and environment.
When to Use GPU Optimization
The GpuOptimizer is not a replacement for sampling-based planners.
Use it when:
- The environment has many obstacles (SDF is more efficient than sphere checks)
- You need smooth trajectories (optimizer minimizes jerk natively)
- You can warm-start from an RRT solution
- A Vulkan/Metal-capable GPU is available
Do not use GPU optimization when:
- The environment is simple (RRT-Connect will be faster)
- You need strict completeness guarantees
- No GPU is available (CPU fallback exists but is slower than RRT)
Planner Combinations
Planners can be composed for better results:
- RRT-Connect + GPU warm-start: find a feasible path fast, then optimize it on the GPU for smoothness.
- PRM + Cartesian: use PRM for free-space transit, switch to Cartesian for approach/retreat motions.
- PlanExecuteLoop: wraps planning with automatic replanning and fallback chains when execution deviates from the plan.
Performance Tuning
Diagnosing and fixing slow planning, IK, or collision checks.
PlannerConfig Presets
Kinetic ships with three presets covering the most common scenarios. Start with a preset and adjust individual fields as needed.
| Preset | Timeout | Iterations | Margin | Shortcut | Smooth |
|---|---|---|---|---|---|
| default() | 50ms | 10,000 | 0.02m | 100 | Yes |
| realtime() | 10ms | 2,000 | 0.01m | 20 | No |
| offline() | 500ms | 100,000 | 0.02m | 500 | Yes |
#![allow(unused)]
fn main() {
use kinetic::prelude::*;
use std::time::Duration;
// Use a preset
let planner = Planner::new(&robot)?
.with_config(PlannerConfig::realtime());
// Or customize individual fields
let config = PlannerConfig {
timeout: Duration::from_millis(30),
max_iterations: 5_000,
collision_margin: 0.015,
shortcut_iterations: 50,
smooth: false,
workspace_bounds: Some([-1.0, -1.0, 0.0, 1.0, 1.0, 2.0]),
};
}
IKConfig Tuning
IK performance depends on the solver choice and restart count.
Solver selection guidelines:
| Solver | Speed | Best For |
|---|---|---|
| OPW | <5 us | 6-DOF spherical-wrist robots (UR, KUKA KR, ABB IRB) |
| Subproblem | <10 us | 6-DOF with intersecting wrist axes |
| Subproblem7DOF | ~50 us | 7-DOF (Panda, KUKA iiwa) |
| DLS | 100-500 us | General purpose, any DOF |
| FABRIK | 50-200 us | Position-only tasks |
Tuning parameters:
#![allow(unused)]
fn main() {
use kinetic::prelude::*;
// Fast: analytical solver, no restarts
let config = IKConfig::opw();
// Robust: iterative solver with restarts
let config = IKConfig::dls()
.with_restarts(8)
.with_max_iterations(200)
.with_position_tolerance(1e-3);
// Position-only (3-DOF task, ignore orientation)
let config = IKConfig::position_only();
// Try full 6D, fall back to position-only
let config = IKConfig::with_fallback();
}
Key tuning knobs:
- num_restarts: Each restart randomizes the seed. More restarts improve success rate but cost time. Use 0 for real-time, 4-8 for planning.
- max_iterations: Increase for hard-to-reach poses. Decrease for real-time.
- position_tolerance: Loosen (1e-3) for speed, tighten (1e-5) for precision.
- seed: Providing a good seed (e.g., current joint state) dramatically reduces convergence time.
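The effect of restarts is easiest to see on a toy problem: each restart reseeds an iterative solve and the best result wins. A standalone 2-link planar position-IK sketch using damped least squares (illustrative only, not Kinetic's solver):

```python
import numpy as np

L1, L2 = 1.0, 1.0   # link lengths of a planar 2-link arm

def fk(q):
    """End-effector position of the 2-link arm."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def jacobian(q):
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def solve_ik(target, seed_q, iters=100, damping=0.1):
    """Damped least-squares iteration from a single seed."""
    q = seed_q.astype(float).copy()
    for _ in range(iters):
        e = target - fk(q)
        if np.linalg.norm(e) < 1e-10:
            break
        step_e = np.clip(e, -0.3, 0.3)   # cap step size for stability
        J = jacobian(q)
        q += J.T @ np.linalg.solve(J @ J.T + damping**2 * np.eye(2), step_e)
    return q

def solve_ik_restarts(target, num_restarts=8, seed=0):
    """Keep the best solution over randomized seeds: more restarts,
    higher success rate, more time."""
    rng = np.random.default_rng(seed)
    best_q, best_err = None, np.inf
    for _ in range(num_restarts + 1):
        q = solve_ik(target, rng.uniform(-np.pi, np.pi, 2))
        err = np.linalg.norm(target - fk(q))
        if err < best_err:
            best_q, best_err = q, err
    return best_q, best_err

q_sol, err = solve_ik_restarts(np.array([1.2, 0.5]))
print(f"best error: {err:.2e}")
```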
SIMD Tier Detection
Kinetic auto-detects SIMD capabilities at compile time:
| Tier | Instructions | Collision Speedup |
|---|---|---|
| AVX-512 | 16-wide f32 | ~8x |
| AVX2 | 8-wide f32 | ~4x |
| SSE4.1 | 4-wide f32 | ~2x |
| Scalar | Fallback | 1x |
SIMD is used automatically in collision checking. Compile with native target features to get the best performance:
RUSTFLAGS="-C target-cpu=native" cargo build --release
Batch Operations
When you need FK or IK for many configurations, use the batch APIs to amortize overhead and enable parallelism.
#![allow(unused)]
fn main() {
use kinetic::prelude::*;
use kinetic::kinematics::solve_ik_batch;
let robot = Robot::from_name("ur5e")?;
let chain = KinematicChain::extract(&robot, "base_link", "tool0")?;
// Batch FK
let configs: Vec<Vec<f64>> = vec![
vec![0.0; 6],
vec![0.1, -0.5, 0.3, 0.0, 0.0, 0.0],
vec![0.5, -1.0, 0.8, 0.2, -0.3, 0.1],
];
let poses: Vec<Pose> = configs.iter()
.map(|q| forward_kinematics(&robot, &chain, q).unwrap())
.collect();
// Batch IK
let solutions = solve_ik_batch(&robot, &chain, &poses, &IKConfig::default());
// Returns Vec<Option<IKSolution>> -- None for failed targets
}
For GPU-accelerated batch FK:
#![allow(unused)]
fn main() {
use kinetic::gpu::batch_fk_gpu;
let results = batch_fk_gpu(&robot, &configs)?;
}
Collision Resolution vs Speed Trade-off
The collision_margin parameter controls the trade-off between safety and planning speed. Larger margins keep the robot farther from obstacles, but they also reject more configurations, making paths through tight spaces harder to find.
| collision_margin | Effect |
|---|---|
| 0.00m | Exact contact only (fast, risky) |
| 0.01m | Tight clearance (production minimum) |
| 0.02m | Default (good balance) |
| 0.05m | Conservative (cluttered environments) |
Sphere resolution also affects speed. The SphereGenConfig controls
how many spheres approximate each link:
#![allow(unused)]
fn main() {
use kinetic::collision::SphereGenConfig;
SphereGenConfig::coarse(); // ~5 spheres/link (fast, less precise)
SphereGenConfig::default(); // ~10 spheres/link (balanced)
SphereGenConfig::fine(); // ~20 spheres/link (slow, precise)
}
GPU: When to Use
Use the GPU optimizer when:
- Environment has 50+ obstacles (SDF lookup is O(1) vs O(n) sphere checks)
- You need smooth trajectories without post-processing
- Planning latency budget is 20-100ms with a discrete GPU available
#![allow(unused)]
fn main() {
use kinetic::gpu::{GpuOptimizer, GpuConfig};
// Speed preset: 32 seeds, 30 iterations
let opt = GpuOptimizer::new(GpuConfig::speed())?;
// Quality preset: 512 seeds, 200 iterations
let opt = GpuOptimizer::new(GpuConfig::quality())?;
// Balanced (default): 128 seeds, 100 iterations
let opt = GpuOptimizer::new(GpuConfig::balanced())?;
}
The CpuOptimizer provides the same API without requiring a GPU,
useful for testing or environments without GPU access.
Profiling Checklist
- Run cargo bench to establish baselines
- Identify the bottleneck: IK, collision, or tree expansion
- For IK: try analytical solvers (OPW/Subproblem) before DLS
- For collision: reduce sphere count or loosen margin
- For planning: reduce iterations, disable smoothing, add workspace bounds
- For trajectory: use trapezoidal (fast) instead of totp (optimal)
Custom Robots
Loading your own URDF and creating kinetic configurations.
Overview
Kinetic ships with 54 built-in robot configurations, but you can add any robot that has a URDF file. This guide walks through the process step by step.
Step 1: Get a URDF
You need a valid URDF file describing your robot’s kinematics. Common sources:
- Robot manufacturer (most provide URDF files)
- ROS robot description packages (e.g., `ur_description`, `franka_description`)
- Export from CAD software (SolidWorks, Fusion 360)
- Create manually for simple robots
The URDF must include joints with type, axis, origin, and limit tags.
Collision geometry is optional but improves planning quality.
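As a reference for the required tags, here is a minimal single-joint URDF sketch; the link and joint names are placeholders, not tied to any shipped configuration:

```xml
<?xml version="1.0"?>
<robot name="my_robot">
  <link name="base_link"/>
  <link name="link1"/>
  <joint name="joint1" type="revolute">
    <parent link="base_link"/>
    <child link="link1"/>
    <origin xyz="0 0 0.1" rpy="0 0 0"/>
    <axis xyz="0 0 1"/>
    <limit lower="-3.14" upper="3.14" velocity="3.0" effort="100.0"/>
  </joint>
</robot>
```

A real robot adds one `link`/`joint` pair per axis, plus `collision` and `inertial` elements as needed.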
Step 2: Create a Configuration Directory
Create a directory under robot_configs/ named after your robot:
robot_configs/
my_robot/
kinetic.toml # Configuration file
my_robot.urdf # Robot description
If you are working outside the kinetic source tree, you can place the
directory anywhere and load it with Robot::from_path().
Step 3: Write kinetic.toml
The configuration file defines planning groups, IK solver preferences, named poses, and collision settings.
[robot]
name = "my_robot"
urdf = "my_robot.urdf"
dof = 6
# Planning group: defines the kinematic chain
[planning_group.arm]
chain = ["base_link", "tool0"]
# End effector definition (optional)
[end_effector.tool]
parent_link = "tool0"
parent_group = "arm"
tcp_xyz = [0.0, 0.0, 0.05] # Tool center point offset
tcp_rpy = [0.0, 0.0, 0.0] # Tool orientation offset (optional)
# IK solver preference
[ik]
solver = "opw" # "opw" for 6-DOF spherical wrist, "dls" for general
damping = 0.05 # Only used with "dls" solver
# Named joint configurations
[named_poses]
home = [0.0, -1.5708, 0.0, -1.5708, 0.0, 0.0]
zero = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
# Collision settings
[collision]
self_collision_pairs = "auto" # Auto-detect from URDF geometry
padding = 0.01 # Extra padding in meters
skip_pairs = [ # Disable collision between known-safe pairs
["link1", "link3"],
]
Section Reference
[robot] – Required. Robot identity.
| Field | Type | Description |
|---|---|---|
| name | string | Robot name used in Robot::from_name() |
| urdf | string | Path to URDF file (relative to config directory) |
| dof | integer | Degrees of freedom |
[planning_group.<name>] – At least one required. Defines kinematic chains.
| Field | Type | Description |
|---|---|---|
| chain | [string, string] | [base_link, tip_link] |
[ik] – Optional. IK solver configuration.
| Field | Type | Description |
|---|---|---|
| solver | string | "opw", "dls", "fabrik", "subproblem", "subproblem7dof" |
| damping | float | DLS damping factor (default: 0.05) |
[named_poses] – Optional. Named joint configurations.
Each key is a pose name, value is an array of joint values matching the DOF.
[collision] – Optional. Collision checking settings.
| Field | Type | Description |
|---|---|---|
| self_collision_pairs | string | "auto" or "none" |
| padding | float | Extra clearance in meters |
| skip_pairs | [[string, string]] | Link pairs to ignore |
Step 4: Load and Verify
#![allow(unused)]
fn main() {
use kinetic::prelude::*;
// Load from built-in configs (after adding to robot_configs/)
let robot = Robot::from_name("my_robot")?;
// Or load from a path
let robot = Robot::from_path("path/to/my_robot/")?;
// Or load directly from URDF string (no config file)
let robot = Robot::from_urdf_string(urdf_string)?;
// Verify basic info
println!("DOF: {}", robot.dof);
println!("Links: {}", robot.links.len());
println!("Joints: {}", robot.joints.len());
}
Step 5: Verify FK/IK
#![allow(unused)]
fn main() {
let chain = KinematicChain::extract(&robot, "base_link", "tool0")?;
// Forward kinematics
let home = vec![0.0, -1.5708, 0.0, -1.5708, 0.0, 0.0];
let pose = forward_kinematics(&robot, &chain, &home)?;
println!("EE position: {:?}", pose.translation());
// Inverse kinematics
let config = IKConfig::default().with_restarts(8);
let solution = solve_ik(&robot, &chain, &pose, &config)?;
assert!(solution.converged);
println!("IK residual: {:.6}m", solution.position_error);
}
Step 6: Test Planning
#![allow(unused)]
fn main() {
let planner = Planner::new(&robot)?;
let start = vec![0.0; 6];
let goal = Goal::Named("home".into());
let result = planner.plan(&start, &goal)?;
println!("Path: {} waypoints in {:?}",
result.num_waypoints(), result.planning_time);
}
Choosing the Right IK Solver
| Robot Type | DOF | Recommended Solver |
|---|---|---|
| Spherical wrist (UR, ABB IRB, KUKA KR) | 6 | opw |
| Intersecting wrist axes | 6 | subproblem |
| Redundant arms (Panda, iiwa, xArm7) | 7 | subproblem7dof or dls |
| Mobile manipulators (Fetch, TIAGo) | 7-8 | dls |
| Low-DOF (4-5 joints) | 4-5 | dls |
| Any robot (fallback) | Any | dls |
Tips
- Start with `dls` if unsure. It works for all robots.
- Set `skip_pairs` for links that are physically close but cannot collide (reduces false positives in collision checking).
- Use FK to verify your URDF before planning. Wrong link lengths or joint axes cause silent errors.
- Named poses must be within joint limits. The planner validates them.
Production Deployment
Checklist and best practices for deploying kinetic on real robots.
Pre-Deployment Checklist
Complete all 10 items before running kinetic on physical hardware.
1. Validate URDF accuracy. Run FK at known joint configurations and compare the computed EE pose against physical measurements. Position error should be under 1mm.
2. Verify joint limits. Confirm that the URDF joint limits match the actual hardware limits. Wrong limits cause collisions or unreachable goals.
3. Test IK convergence. Run IK for 100+ random reachable poses and confirm the convergence rate is above 95%. Check `solution.condition_number` – values above 100 indicate near-singularity configurations.
4. Validate the collision model. Add all fixed obstacles (tables, walls, mounting plates) to the Scene. Run the planner in simulation to verify collision avoidance before hardware.
5. Use `ExecutionConfig::safe()`. This auto-populates joint limits, enables feedback monitoring, and configures a safety watchdog.
6. Set velocity scaling. Start at 10% speed (`velocity_scale: 0.1`) and increase gradually. Never run a new trajectory at full speed.
7. Implement CommandSink with a hardware timeout. Your `send_command` callback should time out and return `Err` if the hardware does not respond within the configured `command_timeout_ms`.
8. Enable the safety watchdog. Configure `WatchdogConfig` to fire `ZeroVelocity` if the control loop stalls for more than 50ms.
9. Test emergency stop. Verify that the robot stops within one control cycle when the watchdog fires or when `send_command` returns an error.
10. Log everything. Use `LogExecutor` in parallel to record all commanded positions for post-incident analysis.
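The velocity scaling in item 6 amounts to pure retiming: scaling every velocity by `s` stretches every timestamp by `1/s` while positions stay unchanged. The sketch below uses a simplified `Waypoint` stand-in rather than kinetic's actual trajectory type:

```rust
/// Simplified stand-in for a timed waypoint (not kinetic's TimedTrajectory).
#[derive(Clone, Debug)]
struct Waypoint {
    time: f64,
    positions: Vec<f64>,
    velocities: Vec<f64>,
}

/// Uniform velocity scaling: positions are unchanged, every velocity is
/// multiplied by `scale`, and every timestamp is stretched by 1/scale.
fn scale_velocity(traj: &[Waypoint], scale: f64) -> Vec<Waypoint> {
    assert!(scale > 0.0 && scale <= 1.0, "scale must be in (0, 1]");
    traj.iter()
        .map(|w| Waypoint {
            time: w.time / scale,
            positions: w.positions.clone(),
            velocities: w.velocities.iter().map(|v| v * scale).collect(),
        })
        .collect()
}

fn main() {
    let traj = vec![
        Waypoint { time: 0.0, positions: vec![0.0], velocities: vec![0.0] },
        Waypoint { time: 1.0, positions: vec![0.5], velocities: vec![1.0] },
        Waypoint { time: 2.0, positions: vec![1.0], velocities: vec![0.0] },
    ];
    // 10% speed: the move takes 10x longer, peak velocity drops 10x.
    let slow = scale_velocity(&traj, 0.1);
    println!("duration {:.0}s, peak vel {:.2} rad/s",
             slow.last().unwrap().time, slow[1].velocities[0]);
}
```

Because accelerations scale by `s²` under this retiming, any trajectory that satisfied the limits at full speed still satisfies them when slowed down.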
Error Handling Patterns
All kinetic errors are returned through KineticError. Use the built-in
classification methods to decide how to respond.
#![allow(unused)]
fn main() {
use kinetic::prelude::*;
match planner.plan(&start, &goal) {
Ok(result) => execute(result),
Err(e) if e.is_retryable() => {
// PlanningTimeout, IKNotConverged, CartesianPathIncomplete
// Try again with relaxed parameters
let config = PlannerConfig::offline();
let result = planner.plan_with_config(&start, &goal, config)?;
execute(result);
}
Err(e) if e.is_input_error() => {
// Bad URDF, wrong DOF, unreachable goal
// Fix the input, do not retry
log::error!("Input error: {e}");
}
Err(e) => {
log::error!("Unexpected error: {e}");
}
}
}
Trajectory Validation
Always validate trajectories before executing on hardware.
#![allow(unused)]
fn main() {
use kinetic::trajectory::{TrajectoryValidator, ValidationConfig};
let validator = TrajectoryValidator::new(ValidationConfig {
max_velocity: robot.velocity_limits(),
max_acceleration: robot.acceleration_limits(),
joint_limits: robot.joint_limits.clone(),
..Default::default()
});
let violations = validator.validate(&timed_trajectory);
if !violations.is_empty() {
for v in &violations {
log::warn!("Trajectory violation: {:?}", v);
}
// Do NOT execute. Replan or adjust time parameterization.
}
}
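The core of such a check is simple enough to cross-verify by hand: finite-difference velocities between consecutive waypoints must stay under per-joint limits. The function below works on plain slices and is a sketch of the idea, not kinetic's validator:

```rust
/// Finite-difference velocity check over timestamped joint positions.
/// Returns (waypoint index, joint index) pairs where the implied velocity
/// exceeds the per-joint limit. Plain-slice sketch, not kinetic's validator.
fn velocity_violations(
    times: &[f64],
    positions: &[Vec<f64>],
    max_velocity: &[f64],
) -> Vec<(usize, usize)> {
    let mut violations = Vec::new();
    for i in 1..times.len() {
        let dt = times[i] - times[i - 1];
        for j in 0..max_velocity.len() {
            let v = (positions[i][j] - positions[i - 1][j]).abs() / dt;
            if v > max_velocity[j] {
                violations.push((i, j));
            }
        }
    }
    violations
}

fn main() {
    // Joint 0 moves 0.5 rad in 0.1 s => 5 rad/s, above the 3 rad/s limit.
    let v = velocity_violations(&[0.0, 0.1], &[vec![0.0], vec![0.5]], &[3.0]);
    println!("violations: {v:?}");
}
```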
Monitoring Manipulability
Track manipulability during execution to detect approaching singularities.
#![allow(unused)]
fn main() {
use kinetic::kinematics::manipulability;
let m = manipulability(&robot, &chain, &current_joints)?;
if m < 0.02 {
log::warn!("Low manipulability ({m:.4}), near singularity");
// Consider replanning away from this configuration
}
}
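The measure here is presumably Yoshikawa's manipulability, m = sqrt(det(J·Jᵀ)), which drops to zero at singularities. A self-contained sketch for a 2-link planar arm, where the measure reduces to l1·l2·|sin(q2)| (zero when the arm is fully extended):

```rust
/// Yoshikawa manipulability m = sqrt(det(J * J^T)) for a 2-link planar arm.
/// With a square Jacobian this reduces to |det J| = l1 * l2 * |sin(q2)|.
fn planar_2link_manipulability(l1: f64, l2: f64, q: [f64; 2]) -> f64 {
    let (s1, c1) = q[0].sin_cos();
    let (s12, c12) = (q[0] + q[1]).sin_cos();
    // End-effector position Jacobian.
    let j = [
        [-l1 * s1 - l2 * s12, -l2 * s12],
        [l1 * c1 + l2 * c12, l2 * c12],
    ];
    (j[0][0] * j[1][1] - j[0][1] * j[1][0]).abs()
}

fn main() {
    // Elbow at 90 degrees: well-conditioned. Elbow straight: singular.
    let bent = planar_2link_manipulability(1.0, 1.0, [0.3, std::f64::consts::FRAC_PI_2]);
    let extended = planar_2link_manipulability(1.0, 1.0, [0.3, 0.0]);
    println!("bent: {bent:.3}, extended: {extended:.6}");
}
```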
The degraded Flag
IK solutions include a degraded flag indicating whether the solver fell
back to a less accurate method (e.g., scaled transpose instead of full
pseudoinverse near a singularity).
#![allow(unused)]
fn main() {
let solution = solve_ik(&robot, &chain, &target, &config)?;
if solution.degraded {
log::warn!(
"IK solution is degraded (condition={:.0}), verify before executing",
solution.condition_number
);
}
}
Recovery Strategies
Replan on Failure
#![allow(unused)]
fn main() {
use kinetic::prelude::*;
let loop_config = PlanExecuteConfig {
max_replans: 3,
replan_strategy: ReplanStrategy::FromCurrent,
recovery: RecoveryStrategy::RetryWithRelaxedConfig,
..Default::default()
};
let pel = PlanExecuteLoop::new(&robot, loop_config);
let result = pel.run(&start, &goal, &scene)?;
}
Graceful Degradation
When the primary planner fails, fall back to simpler strategies:
- Retry with `PlannerConfig::offline()` (more iterations)
- Try a different planner type (`PlannerType::EST` for narrow passages)
- Use waypoint decomposition (break the path into segments)
- Alert the operator
Execution Monitoring
#![allow(unused)]
fn main() {
use kinetic::trajectory::{ExecutionMonitor, MonitorConfig, DeviationLevel};
let monitor = ExecutionMonitor::new(MonitorConfig::default());
// During execution, feed actual vs commanded joint positions
let level = monitor.check_deviation(&commanded, &actual);
match level {
DeviationLevel::Normal => {}
DeviationLevel::Warning => {
log::warn!("Position deviation detected");
}
DeviationLevel::Critical => {
log::error!("Critical deviation, emergency stop");
}
}
}
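The thresholding a monitor like this performs can be sketched in a few lines. The enum and the 0.02/0.10 rad thresholds below are illustrative assumptions, not kinetic's `MonitorConfig` defaults:

```rust
/// Illustrative deviation levels; kinetic's DeviationLevel is analogous.
#[derive(Debug, PartialEq)]
enum Level {
    Normal,
    Warning,
    Critical,
}

/// Classify the worst per-joint tracking error. The 0.02 / 0.10 rad
/// thresholds are assumptions for illustration, not MonitorConfig defaults.
fn classify(commanded: &[f64], actual: &[f64]) -> Level {
    let worst = commanded
        .iter()
        .zip(actual)
        .map(|(c, a)| (c - a).abs())
        .fold(0.0_f64, f64::max);
    if worst >= 0.10 {
        Level::Critical
    } else if worst >= 0.02 {
        Level::Warning
    } else {
        Level::Normal
    }
}

fn main() {
    println!("{:?}", classify(&[0.0, 1.0], &[0.001, 1.003]));
    println!("{:?}", classify(&[0.0, 1.0], &[0.05, 1.0]));
    println!("{:?}", classify(&[0.0, 1.0], &[0.2, 1.0]));
}
```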
Safe Execution Configuration
#![allow(unused)]
fn main() {
use kinetic::execution::{ExecutionConfig, RealTimeExecutor};
// Auto-configures from robot model:
// rate_hz: 500
// position_tolerance: 0.05 rad
// velocity_tolerance: 0.3 rad/s
// joint_limits: from URDF
// command_timeout_ms: 50
// require_feedback: true
// watchdog: 50ms timeout, ZeroVelocity action
let config = ExecutionConfig::safe(&robot);
let executor = RealTimeExecutor::new(config);
}
Production Architecture
Sensors --> SceneNode --> Scene
|
Operator --> Goal -----> Planner --> Trajectory --> Executor --> Robot
| |
PlanExecuteLoop SafetyWatchdog
Every component has a failure mode and a recovery path. No single failure should cause uncontrolled robot motion.
Hardware Integration
Connecting kinetic trajectories to real robot hardware.
Trajectory Format
Kinetic produces TimedTrajectory objects containing timestamped waypoints
with positions, velocities, and optional accelerations.
#![allow(unused)]
fn main() {
use kinetic::trajectory::{TimedTrajectory, TimedWaypoint};
// TimedWaypoint structure:
// - time: f64 (seconds from trajectory start)
// - positions: Vec<f64> (joint angles in radians)
// - velocities: Vec<f64> (joint velocities in rad/s)
// - accelerations: Option<Vec<f64>> (joint accelerations in rad/s^2)
}
Time parameterization converts a geometric path (waypoints without timing) into a timed trajectory that respects velocity and acceleration limits:
#![allow(unused)]
fn main() {
use kinetic::prelude::*;
let result = planner.plan(&start, &goal)?;
let vel_limits = robot.velocity_limits();
let accel_limits = robot.acceleration_limits();
// Trapezoidal (fast, per-joint limits)
let timed = trapezoidal(&result.waypoints, &vel_limits, &accel_limits)?;
// TOTP (time-optimal, tighter trajectory)
let timed = totp(&result.waypoints, &vel_limits, &accel_limits)?;
}
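The timing a trapezoidal profile assigns can be understood from the single-joint, rest-to-rest case: accelerate at the limit, cruise at the velocity limit, then brake. The math below is the standard textbook formula, shown for intuition rather than as kinetic's implementation:

```rust
/// Minimum time for a single joint to move `dist` radians from rest to
/// rest under velocity limit `v_max` and acceleration limit `a_max`.
/// Standard trapezoidal-profile math, shown for illustration.
fn trapezoid_duration(dist: f64, v_max: f64, a_max: f64) -> f64 {
    let d = dist.abs();
    // Distance consumed by accelerating to v_max and braking back to zero.
    let d_ramp = v_max * v_max / a_max;
    if d < d_ramp {
        // Triangle profile: v_max is never reached.
        2.0 * (d / a_max).sqrt()
    } else {
        // Two ramps (v_max / a_max each) plus a constant-velocity cruise.
        2.0 * v_max / a_max + (d - d_ramp) / v_max
    }
}

fn main() {
    // 2 rad at 1 rad/s, 1 rad/s^2: 1 s ramp up, 1 s cruise, 1 s ramp down.
    println!("{:.1} s", trapezoid_duration(2.0, 1.0, 1.0));
    // Short move: triangle profile, v_max never reached.
    println!("{:.1} s", trapezoid_duration(0.25, 10.0, 1.0));
}
```

TOTP instead solves for the time-optimal speed along the whole path, which is why it produces tighter timing than per-joint trapezoids.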
The CommandSink Trait
CommandSink is the interface between kinetic and your robot hardware.
Implement it once for each robot driver.
#![allow(unused)]
fn main() {
use kinetic::execution::CommandSink;
struct MyRobotDriver {
connection: TcpStream,
}
impl CommandSink for MyRobotDriver {
fn send_command(
&mut self,
positions: &[f64],
velocities: &[f64],
) -> Result<(), String> {
// Format and send to your controller
// Return Err("...") on hardware fault
Ok(())
}
}
}
The executor calls send_command at the configured rate (default: 500 Hz),
interpolating between trajectory waypoints.
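That interpolation step amounts to sampling the timed trajectory at each 2 ms tick. A simplified linear-interpolation sampler over plain slices (the real executor presumably also uses the stored velocities):

```rust
/// Sample joint positions at time `t` by linear interpolation between
/// timed waypoints (times strictly increasing). A simplified version of
/// what an executor does each tick; velocities are ignored here.
fn sample(times: &[f64], positions: &[Vec<f64>], t: f64) -> Vec<f64> {
    if t <= times[0] {
        return positions[0].clone();
    }
    if t >= *times.last().unwrap() {
        return positions.last().unwrap().clone();
    }
    // Index of the first waypoint whose time is after t.
    let i = times.partition_point(|&x| x <= t);
    let alpha = (t - times[i - 1]) / (times[i] - times[i - 1]);
    positions[i - 1]
        .iter()
        .zip(&positions[i])
        .map(|(p0, p1)| p0 + alpha * (p1 - p0))
        .collect()
}

fn main() {
    let times = [0.0, 1.0, 2.0];
    let pos = [vec![0.0], vec![1.0], vec![3.0]];
    // A 500 Hz loop would call this every 2 ms with the current clock time.
    println!("{:?}", sample(&times, &pos, 1.5));
}
```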
500 Hz Control Loop Pattern
The RealTimeExecutor handles the timing loop. You provide the CommandSink,
it handles interpolation, timing, and error detection.
#![allow(unused)]
fn main() {
use kinetic::execution::{RealTimeExecutor, ExecutionConfig};
let config = ExecutionConfig {
rate_hz: 500.0,
position_tolerance: 0.05,
velocity_tolerance: 0.3,
..ExecutionConfig::default()
};
let executor = RealTimeExecutor::new(config);
let result = executor.execute(&timed_trajectory, &mut my_driver)?;
println!("Executed in {:.2}s", result.actual_duration.as_secs_f64());
println!("Max deviation: {:.4} rad", result.max_deviation);
}
For feedback-based monitoring, implement FeedbackSource:
#![allow(unused)]
fn main() {
use kinetic::execution::FeedbackSource;
impl FeedbackSource for MyRobotDriver {
fn read_positions(&self) -> Option<Vec<f64>> {
// Read actual joint positions from encoders
Some(self.read_encoders())
}
}
}
Connecting to Universal Robots via URScript
UR robots accept URScript commands over TCP port 30002 (secondary interface) or 30003 (real-time interface at 500 Hz).
#![allow(unused)]
fn main() {
use std::io::Write;
use std::net::TcpStream;
use kinetic::execution::CommandSink;
struct URDriver {
stream: TcpStream,
}
impl URDriver {
fn new(ip: &str) -> std::io::Result<Self> {
let stream = TcpStream::connect(format!("{ip}:30003"))?;
Ok(Self { stream })
}
}
impl CommandSink for URDriver {
fn send_command(
&mut self,
positions: &[f64],
velocities: &[f64],
) -> Result<(), String> {
// URScript servoj command
let cmd = format!(
"servoj([{:.6},{:.6},{:.6},{:.6},{:.6},{:.6}], t=0.002)\n",
positions[0], positions[1], positions[2],
positions[3], positions[4], positions[5],
);
self.stream.write_all(cmd.as_bytes())
.map_err(|e| format!("UR send failed: {e}"))
}
}
}
General Driver Pattern
For any robot, follow this pattern:
1. Connect to the robot controller (TCP, serial, shared memory, etc.)
2. Implement `CommandSink` with your protocol’s command format
3. Optionally implement `FeedbackSource` for deviation monitoring
4. Create an `ExecutionConfig` appropriate for your robot
5. Execute with `RealTimeExecutor`
#![allow(unused)]
fn main() {
use kinetic::prelude::*;
use kinetic::execution::*;
// 1. Plan
let robot = Robot::from_name("ur5e")?;
let planner = Planner::new(&robot)?;
let result = planner.plan(&start, &goal)?;
// 2. Time-parameterize
let timed = trapezoidal(
&result.waypoints,
&robot.velocity_limits(),
&robot.acceleration_limits(),
)?;
// 3. Validate
let validator = kinetic::trajectory::TrajectoryValidator::new(Default::default());
let violations = validator.validate(&timed);
assert!(violations.is_empty(), "Trajectory has violations");
// 4. Execute
let config = ExecutionConfig::safe(&robot);
let executor = RealTimeExecutor::new(config);
let mut driver = MyRobotDriver::connect("192.168.1.100")?;
let exec_result = executor.execute(&timed, &mut driver)?;
}
Trajectory Export
For robots that accept trajectory files instead of streaming commands,
use trajectory_to_csv_file or trajectory_to_json_file. Import with
trajectory_from_csv and trajectory_from_json.
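For a sense of what such an export contains, here is a sketch of one plausible CSV layout: a header row, then the timestamp and one position column per joint. The actual schema used by trajectory_to_csv_file may differ:

```rust
/// Write timed waypoints as CSV: a header row, then one line per waypoint
/// with the timestamp and one position column per joint. Illustrative
/// layout only; trajectory_to_csv_file may use a different schema.
fn to_csv(times: &[f64], positions: &[Vec<f64>]) -> String {
    let dof = positions[0].len();
    let mut out = String::from("time");
    for j in 0..dof {
        out.push_str(&format!(",q{j}"));
    }
    out.push('\n');
    for (t, q) in times.iter().zip(positions) {
        out.push_str(&format!("{t:.6}"));
        for v in q {
            out.push_str(&format!(",{v:.6}"));
        }
        out.push('\n');
    }
    out
}

fn main() {
    let csv = to_csv(&[0.0, 1.0], &[vec![1.0, 2.0], vec![3.0, 4.0]]);
    print!("{csv}");
}
```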
See the Production Deployment guide for safety watchdog configuration.
HORUS Integration
Using kinetic with the HORUS robotics framework.
Overview
The horus-kinetic crate bridges kinetic’s motion planning stack with
HORUS’s zero-copy shared-memory IPC. It provides three node types that
can run as HORUS nodes or standalone with channel-based communication.
Node Architecture
HORUS IPC (shared memory topics)
================================
| | |
plan_request --> [PlannerNode] --> trajectory
twist_cmd --> [ServoNode] --> joint_cmd
pointcloud --> [SceneNode] --> scene_update
PlannerNode
Receives planning requests, computes collision-free paths, publishes timed trajectories.
#![allow(unused)]
fn main() {
use horus_kinetic::PlannerNode;
let mut planner = PlannerNode::new("ur5e")?;
// With HORUS IPC: subscribes to "plan_request", publishes to "trajectory"
// Without HORUS: use channel API
}
Topics:
| Topic | Direction | Message | Rate |
|---|---|---|---|
| plan_request | Subscribe | PlanRequest | On demand |
| trajectory | Publish | TimedTrajectory | On demand |
ServoNode
Real-time reactive control. Receives twist or joint jog commands, outputs joint commands at 500 Hz.
#![allow(unused)]
fn main() {
use horus_kinetic::ServoNode;
let mut servo = ServoNode::new("ur5e")?;
// Subscribes to "twist_cmd" or "joint_jog"
// Publishes "joint_cmd" at 500 Hz
}
Topics:
| Topic | Direction | Message | Rate |
|---|---|---|---|
| twist_cmd | Subscribe | Twist | Up to 500 Hz |
| joint_jog | Subscribe | JointJog | Up to 500 Hz |
| joint_cmd | Publish | JointCommand | 500 Hz |
SceneNode
Manages the collision environment. Ingests point clouds, depth images, and explicit obstacle descriptions.
#![allow(unused)]
fn main() {
use horus_kinetic::SceneNode;
let mut scene = SceneNode::new("ur5e")?;
// Subscribes to "pointcloud", "depth_image"
// Publishes "scene_update" on change
}
Topics:
| Topic | Direction | Message | Rate |
|---|---|---|---|
| pointcloud | Subscribe | PointCloud2 | Sensor rate |
| depth_image | Subscribe | DepthImage | Sensor rate |
| scene_update | Publish | SceneUpdate | On change |
With vs Without HORUS
The horus-kinetic crate works in two modes:
With HORUS IPC (feature horus-ipc enabled):
Full HORUS Node trait implementations with zero-copy shared-memory
topics. Nodes are discovered and connected by the HORUS runtime.
Without HORUS (default): Channel-based API for standalone use or integration with other transports.
#![allow(unused)]
fn main() {
use horus_kinetic::PlannerNode;
// Standalone mode (no HORUS dependency)
let mut planner = PlannerNode::new("ur5e")?;
// Send a request through the channel API
let request = PlanRequest {
start: vec![0.0; 6],
goal: Goal::Named("home".into()),
..Default::default()
};
let trajectory = planner.plan(request)?;
}
This design means kinetic works identically in simulation, on standalone robots, and in full HORUS-networked systems.
Point Cloud Perception Pipeline
The SceneNode processes raw sensor data into a collision world:
Camera/LiDAR --> PointCloud
|
[Downsampling] -- VoxelGrid or Random
|
[Outlier Removal] -- Statistical or Radius
|
[Octree Build] -- Spatial indexing
|
[Sphere Approx] -- Collision spheres
|
CollisionEnvironment
Configure the pipeline:
#![allow(unused)]
fn main() {
use kinetic::scene::{PointCloudConfig, OutlierConfig};
let config = PointCloudConfig {
voxel_size: 0.01, // 1cm voxels
max_points: 50_000,
outlier: OutlierConfig::Statistical {
neighbors: 20,
std_ratio: 2.0,
},
..Default::default()
};
scene.add_pointcloud("camera_0", &points, config);
}
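The downsampling stage can be sketched as bucketing points into cubic cells of edge `voxel_size` and keeping one representative per cell. This illustrates the technique only; kinetic's filter likely keeps each cell's centroid rather than the first point:

```rust
use std::collections::HashSet;

/// Voxel-grid downsampling: bucket points into cubic cells of edge
/// `voxel_size` and keep the first point seen per cell. (A production
/// filter would more likely keep each cell's centroid.)
fn voxel_downsample(points: &[[f64; 3]], voxel_size: f64) -> Vec<[f64; 3]> {
    let mut occupied = HashSet::new();
    let mut kept = Vec::new();
    for p in points {
        let key = [
            (p[0] / voxel_size).floor() as i64,
            (p[1] / voxel_size).floor() as i64,
            (p[2] / voxel_size).floor() as i64,
        ];
        if occupied.insert(key) {
            kept.push(*p);
        }
    }
    kept
}

fn main() {
    // Two points share a 1 cm voxel; the third lands elsewhere.
    let pts = [[0.001, 0.0, 0.0], [0.002, 0.0, 0.0], [0.5, 0.0, 0.0]];
    println!("{} -> {}", pts.len(), voxel_downsample(&pts, 0.01).len());
}
```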
Building
# Without HORUS (standalone, channel API)
cargo build -p horus-kinetic
# With HORUS IPC (requires horus to be installed)
cargo build -p horus-kinetic --features horus-ipc
The horus-ipc feature adds a compile-time dependency on HORUS.
Without it, the crate is fully self-contained and has no HORUS
dependency at all.
3D Visualization
The kinetic-viewer crate provides an interactive 3D viewer for robot
visualization, motion planning, and trajectory playback. It includes both a
windowed GUI (with egui panels) and a headless renderer for CI pipelines and
screenshot generation.
Quick Start
Launch the viewer for a built-in robot:
cargo run -p kinetic-viewer --features cli -- --robot ur5e
List all 54 supported robots:
cargo run -p kinetic-viewer --features cli -- --list-robots
Load a custom URDF file:
cargo run -p kinetic-viewer --features cli -- --robot path/to/robot.urdf
Features
Interactive GUI
The viewer opens a wgpu-rendered window with egui side panels:
- Joint sliders – drag individual joints to see the robot move in real time
- Planning panel – set start/goal configurations and run RRT, PRM, or other planners
- Servo panel – real-time velocity-based end-effector control
- Constraint panel – visualize joint limits and workspace constraints
- Scene objects – add/remove obstacles and inspection targets
- Trajectory playback – animate planned paths with speed control and looping
- Collision debug – toggle sphere model visualization
Rendering
- Hardware-accelerated via wgpu (Vulkan, Metal, DX12)
- Grid floor, coordinate axes, and wireframe overlays
- Per-link coloring and transparency for ghost robots
- Trajectory trail visualization
Headless Rendering
The HeadlessRenderer renders to an offscreen texture without a window. It
works with software Vulkan (lavapipe) for CI environments and with NVIDIA/AMD
GPUs for production screenshots.
Keyboard Shortcuts
| Key | Action |
|---|---|
| G | Toggle grid |
| A | Toggle coordinate axes |
| T | Toggle trajectory trail |
| C | Toggle collision debug spheres |
| W | Toggle collision wireframe |
| V | Toggle voxel display |
| P | Toggle point cloud |
| R | Reset camera to default position |
| F | Focus camera on goal marker |
| Z | Undo last interaction |
| Space | Play/pause trajectory |
| F1 | Show/hide keyboard shortcuts |
| F3 | Show/hide stats overlay |
| Esc | Quit |
Mouse controls: left-drag to orbit, right-drag to pan, scroll to zoom.
Screenshots
Headless screenshots of robots at their zero joint configuration, rendered using collision geometry primitives with per-link color gradients.
| Robot | Screenshot |
|---|---|
| UR5e | ![]() |
| Franka Panda | ![]() |
| KUKA iiwa14 | ![]() |
| xArm7 | ![]() |
To regenerate screenshots:
VK_ICD_FILENAMES=/usr/share/vulkan/icd.d/nvidia_icd.json \
cargo run -p kinetic-viewer --example capture_screenshots --features visual
API Usage
Embedding in Custom Applications
Use kinetic-viewer as a library to render robots in your own application:
#![allow(unused)]
fn main() {
use kinetic_viewer::{robot_scene_graph, Camera, MeshRegistry, ViewerSettings};
use kinetic_viewer::app::{run_viewer, ViewerConfig};
use kinetic_robot::Robot;
let robot = Robot::from_name("ur5e")?;
let camera = Camera::perspective([1.5, 1.0, 1.5], [0.0, 0.3, 0.0], 45.0);
let config = ViewerConfig {
title: "My Robot Viewer".into(),
..Default::default()
};
run_viewer(config, robot, camera)?;
}
Headless Rendering (Screenshots, CI)
Render to PNG without a window:
#![allow(unused)]
fn main() {
use kinetic_viewer::test_utils::HeadlessRenderer;
use kinetic_viewer::{robot_scene_graph, Camera, MeshRegistry, ViewerSettings};
use kinetic_robot::Robot;
let robot = Robot::from_name("ur5e").unwrap();
let mut meshes = MeshRegistry::new();
let scene = robot_scene_graph(&robot, &mut meshes);
let camera = Camera::perspective([1.5, 1.0, 1.5], [0.0, 0.3, 0.0], 45.0);
let settings = ViewerSettings::default();
let mut renderer = HeadlessRenderer::new(1280, 720).unwrap();
renderer.upload_meshes(&meshes);
let pixels = renderer.render(&scene, &camera, &settings);
// Save as PNG (requires `image` crate)
image::save_buffer(
"screenshot.png", &pixels, 1280, 720,
image::ColorType::Rgba8,
).unwrap();
}
Visual Regression Testing
Compare two renders for pixel-level differences:
#![allow(unused)]
fn main() {
use kinetic_viewer::test_utils::{compare_screenshots, diff_image};
let result = compare_screenshots(&actual, &expected, 1280, 720, 2);
assert!(result.rmse < 5.0, "Visual regression: RMSE={}", result.rmse);
// Generate a red/green diff image for inspection
let diff = diff_image(&actual, &expected, 1280, 720);
}
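For reference, a metric like the RMSE threshold above can be computed over per-channel byte differences. This is an illustrative reimplementation of that kind of metric, not the library's code:

```rust
/// Root-mean-square error between two equal-length RGBA byte buffers,
/// computed per channel. An illustrative reimplementation of the kind of
/// metric compare_screenshots reports, not the library's code.
fn rmse(a: &[u8], b: &[u8]) -> f64 {
    assert_eq!(a.len(), b.len(), "buffers must match in size");
    let sum_sq: f64 = a
        .iter()
        .zip(b)
        .map(|(&x, &y)| {
            let d = f64::from(x) - f64::from(y);
            d * d
        })
        .sum();
    (sum_sq / a.len() as f64).sqrt()
}

fn main() {
    // One channel differs by 4 over 4 samples: sqrt(16 / 4) = 2.
    println!("{:.3}", rmse(&[0, 4, 0, 0], &[0, 0, 0, 0]));
}
```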
Crate Features
| Feature | Description |
|---|---|
| visual | GPU rendering, mesh loading, image export |
| cli | Command-line interface (--robot, --list-robots) |
The visual feature is enabled by default. Use --no-default-features for
data-model-only usage (scene graph, camera math, render commands) without GPU
dependencies.
Architecture
kinetic-viewer
+-- lib.rs Scene graph, Camera, MeshData, ViewerSettings, RenderCommand
+-- app.rs Windowed viewer (winit + wgpu + egui)
+-- gpu_context.rs wgpu device/surface setup
+-- pipeline.rs Mesh, wireframe, and line render pipelines
+-- gpu_buffers.rs GPU buffer management (instances, lines, uniforms)
+-- test_utils.rs HeadlessRenderer + screenshot comparison
+-- egui_ui.rs Joint sliders, planning panel, servo panel
+-- gizmo.rs 3D gizmo interaction (translate/rotate handles)
+-- interaction.rs Markers, selection, undo
+-- collision_viz.rs Collision sphere debug overlays
+-- trajectory_viz.rs Trajectory playback + ghost robots
+-- perception_viz.rs Point cloud + voxel display
+-- web_export.rs Scene export for web viewers
Troubleshooting
Systematic diagnosis of common errors and failures. Every KineticError
variant is listed with its message, common cause, and fix.
IK Failures
IKNotConverged
Message: IK did not converge after {iterations} iterations (residual: {residual})
Common causes:
- Target pose is at or near the workspace boundary
- Seed configuration is far from the solution
- Solver damping is too high (DLS)
Fix:
- Increase `num_restarts` to 8 or higher
- Provide a better seed via `IKConfig::with_seed(current_joints)`
- Lower DLS damping from 0.05 to 0.01
- Try `IKConfig::with_fallback()` for position-only fallback
- Verify the target is reachable by checking workspace bounds
NoIKSolution
Message: No valid IK solution found for target pose
Common causes:
- Target is physically outside the robot’s workspace
- Joint limits prevent reaching the target orientation
- Wrong kinematic chain (base_link or tip_link mismatch)
Fix:
- Verify the target is within the robot’s reachable workspace
- Check that `base_link` and `tip_link` in the config match your URDF
- Use `IKMode::PositionOnly` if orientation does not matter
SingularityLockup
Message: Singularity lockup: pseudoinverse failed {n} consecutive times
Common causes:
- Robot is at a singular configuration (e.g., arm fully extended)
- Servo mode is driving through a singularity
Fix:
- Move the robot away from the singular configuration manually
- In servo mode, increase `singularity_damping` (e.g., 0.1)
- Check `solution.condition_number` – values above 100 indicate trouble
Planning Failures
PlanningTimeout
Message: Planning timed out after {elapsed} ({iterations} iterations)
Common causes:
- Environment is highly constrained (narrow passages)
- Timeout is too short for the problem complexity
- Start or goal is near an obstacle
Fix:
- Increase the timeout: `PlannerConfig::offline()` uses 500ms
- Try EST or KPIECE for narrow passages
- Verify start and goal are not in collision before planning
- Reduce `collision_margin` if the robot is being too conservative
StartInCollision
Message: Start configuration is in collision
Common causes:
- Robot’s current joint state touches a scene obstacle
- Self-collision due to tight collision model
- Wrong joint values (e.g., degrees instead of radians)
Fix:
- Check `planner.is_in_collision(&start)` before planning
- Verify joint values are in radians
- Add offending link pairs to `skip_pairs` if they are false positives
- Reduce `collision_margin`
GoalInCollision / GoalUnreachable
GoalInCollision: Target joints collide with the scene. Verify the goal is collision-free and adjust if needed.
GoalUnreachable: Goal joints violate URDF limits or pose has no IK
solution. Check that values are within robot.joint_limits.
PlanningFailed
Message: Planning failed: {reason}
Common causes:
- No collision-free path exists between start and goal
- Planner exhausted all iterations without connecting trees
Fix:
- Increase iterations and timeout
- Try a different planner (EST, KPIECE for constrained spaces)
- Decompose the problem into intermediate waypoints
Collision Issues
CollisionDetected
Message: Collision detected at waypoint {index}
Common causes:
- Post-planning validation found a collision (shortcutting artifact)
- Scene changed between planning and validation
Fix:
- Replan with the current scene
- Increase `collision_margin` to build in a safety buffer
- Disable shortcutting if it is creating invalid shortcuts
PlannerOutputInvalid
Message: Planner output invalid at waypoint {waypoint}: {reason}
Common causes:
- A waypoint violates joint limits (safety gate 1)
- End-effector position exceeds workspace bounds (safety gate 2)
Fix:
- If joint limits: check URDF limit accuracy
- If workspace bounds: widen the bounds or remove the `workspace_bounds` config
- This error indicates a planner bug – report it if reproducible
Trajectory Violations
TrajectoryLimitExceeded
Message: Trajectory limit exceeded at waypoint {index}: {detail}
Common causes:
- Velocity or acceleration exceeds robot limits
- Time parameterization produced infeasible timing
Fix:
- Use `trapezoidal_per_joint`, which respects per-joint limits
- Lower `velocity_scale` in the execution config
- Increase the time parameterization’s speed limits
Robot Loading Errors
UrdfParse
Message: URDF parse error: {detail}
Common causes:
- Malformed XML in the URDF file
- Missing required tags (`joint`, `link`, `origin`)
- Encoding issues (non-UTF-8 characters)
Fix:
- Validate your URDF with `check_urdf` (from ROS) or an XML validator
- Ensure all joints have `type`, `parent`, `child`, and `limit` tags
RobotConfigNotFound
Message: Robot config not found: {name}
Common causes:
- Typo in the robot name
- Config directory not in `robot_configs/`
- Missing `kinetic.toml` in the config directory
Fix:
- Check spelling: `Robot::from_name("ur5e")` (not `UR5e` or `ur-5e`)
- Verify `robot_configs/{name}/kinetic.toml` exists
- Use `Robot::from_path()` for custom locations
DimensionMismatch
Message: Dimension mismatch in {context}: expected {expected}, got {got}
Common causes:
- Wrong number of joint values for the robot
- Mixing full robot DOF with chain DOF
- Array indexing error in user code
Fix:
- Check `robot.dof` and `chain.dof` and match your arrays accordingly
- For mobile manipulators (Fetch, TIAGo), the chain DOF may be less than the robot DOF
LinkNotFound / JointNotFound: Typo in link or joint name.
NamedConfigNotFound: Pose not in kinetic.toml [named_poses].
ChainExtraction: Links not connected by joints in the URDF.
Extending Kinetic
Adding custom planners, IK solvers, robot configurations, tests, and benchmarks.
Adding a Custom Planner
Step 1: Implement the PlannerPlugin Trait
The PlannerPlugin trait is the extension point for the planning pipeline.
#![allow(unused)]
fn main() {
use kinetic_planning::pipeline::{PlannerPlugin, PlanningRequest};
use kinetic_core::Trajectory;
pub struct MyPlanner {
// Your planner state
}
impl PlannerPlugin for MyPlanner {
fn plan(&self, request: &PlanningRequest) -> Option<Trajectory> {
// Implement your planning algorithm
// Return None if no solution found within timeout
let mut traj = Trajectory::with_dof(request.start.len());
traj.push_waypoint(&request.start);
// ... add waypoints ...
Some(traj)
}
fn id(&self) -> &str {
"my_planner"
}
fn description(&self) -> &str {
"My custom planner: does something special"
}
}
}
Step 2: Register in the Pipeline
#![allow(unused)]
fn main() {
use kinetic_planning::pipeline::PlanningPipeline;
let mut pipeline = PlanningPipeline::new();
pipeline.add_planner(Box::new(MyPlanner::new()));
pipeline.default_planner = "my_planner".into();
}
Step 3: Use via the Facade (Optional)
To make your planner available through the Planner facade, add a
variant to PlannerType and handle it in the dispatch match:
#![allow(unused)]
fn main() {
// In kinetic-planning/src/facade.rs
pub enum PlannerType {
// ... existing variants ...
MyPlanner,
}
// In Planner::plan_with_config, add the dispatch:
PlannerType::MyPlanner => {
let planner = MyPlanner::new(/* params */);
let r = planner.plan(&chain_start, &goal_resolved)?;
(r.waypoints, r.planning_time, r.iterations, r.tree_size)
}
}
Adding a Custom IK Solver
Step 1: Implement the Solver Function
Follow the pattern in kinetic-kinematics/src/dls.rs:
#![allow(unused)]
fn main() {
use kinetic_core::{Pose, Result};
use kinetic_kinematics::{KinematicChain, IKConfig, IKSolution, IKMode};
use kinetic_robot::Robot;
pub fn solve_my_ik(
robot: &Robot,
chain: &KinematicChain,
target: &Pose,
seed: &[f64],
config: &IKConfig,
mode: IKMode,
) -> Result<IKSolution> {
// Your IK algorithm here
// Must return IKSolution with:
// - joints: Vec<f64>
// - position_error: f64
// - orientation_error: f64
// - converged: bool
// - iterations: usize
// - mode_used: IKMode
// - degraded: bool
// - condition_number: f64
todo!()
}
}
Step 2: Register in the Solver Dispatch
Add a variant to IKSolver in kinetic-kinematics/src/ik.rs and
handle it in solve_once:
#![allow(unused)]
fn main() {
pub enum IKSolver {
// ... existing variants ...
MySolver { param: f64 },
}
// In solve_once:
IKSolver::MySolver { param } => {
my_solver::solve_my_ik(robot, chain, target, seed, config, mode)
}
}
Adding a Robot Configuration
- Create robot_configs/<name>/ with kinetic.toml and the URDF file
- Follow the structure from an existing config (see robot_configs/ur5e/)
- Add a test to verify FK/IK:
#![allow(unused)]
fn main() {
#[test]
fn my_robot_fk_ik_roundtrip() {
let robot = Robot::from_name("my_robot").unwrap();
let planner = Planner::new(&robot).unwrap();
let home = robot.named_pose("home").unwrap();
let pose = planner.fk(&home).unwrap();
let ik_joints = planner.ik(&pose).unwrap();
let recovered = planner.fk(&ik_joints).unwrap();
let err = (pose.translation() - recovered.translation()).norm();
assert!(err < 0.001, "FK/IK roundtrip error: {err}");
}
}
Adding Tests
Acceptance Test Template
Every new planner or solver should have an acceptance test that verifies correctness on a real robot model.
#![allow(unused)]
fn main() {
#[cfg(test)]
mod tests {
use super::*;
use kinetic_core::Goal;
use kinetic_robot::Robot;
#[test]
fn planner_finds_path_ur5e() {
let robot = Robot::from_name("ur5e").unwrap();
let planner = Planner::new(&robot).unwrap()
.with_planner_type(PlannerType::MyPlanner);
let start = vec![0.0, -1.0, 0.8, 0.0, 0.0, 0.0];
let goal_joints = vec![0.5, -0.5, 0.3, 0.1, -0.2, 0.4];
let goal = Goal::Joints(JointValues(goal_joints.clone()));
let result = planner.plan(&start, &goal).unwrap();
assert!(result.num_waypoints() >= 2);
// Verify the trajectory starts at `start` and ends at the goal
let first = result.start().unwrap();
let last = result.end().unwrap();
for i in 0..6 {
assert!((first[i] - start[i]).abs() < 1e-6);
assert!((last[i] - goal_joints[i]).abs() < 1e-6);
}
}
}
}
Run tests with cargo test (or cargo test -p kinetic-planning for a
specific crate). Tests that exercise visualization features need a display; in headless environments run them under xvfb-run cargo test.
Adding Benchmarks
Use Criterion for micro-benchmarks. Add to benches/ in the relevant crate.
Run with cargo bench --bench planning_benchmarks.
Code Conventions
- All pub items must have /// doc comments
- Use kinetic_core::Result<T> for fallible functions and thiserror for error types
- Run cargo fmt and cargo clippy -- -D warnings before submitting
- Target 85% test coverage for new code
From MoveIt2
Step-by-step migration guide for MoveIt2 users.
API Mapping Table
| MoveIt2 Concept | Kinetic Equivalent |
|---|---|
MoveGroupInterface | Planner |
PlanningSceneInterface | Scene |
MoveIt Servo | Servo (kinetic-reactive) |
RobotModel | Robot |
RobotState | JointValues + KinematicChain |
PlanningScene | Scene + CollisionEnvironment |
MoveItCpp | kinetic::prelude::* |
Task Constructor | kinetic::task::Task |
collision_detection::World | CollisionEnvironment |
kinematics::KinematicsBase | IKSolver enum |
robot_trajectory::RobotTrajectory | TimedTrajectory |
planning_interface::MotionPlanRequest | Goal + PlannerConfig |
AllowedCollisionMatrix | AllowedCollisionMatrix |
Config Translation: MoveIt YAML to Kinetic TOML
MoveIt2 uses YAML files (generated by MoveIt Setup Assistant).
Kinetic uses a single kinetic.toml file per robot.
MoveIt2 YAML (joint_limits.yaml):
joint_limits:
shoulder_pan_joint:
has_velocity_limits: true
max_velocity: 3.15
has_acceleration_limits: true
max_acceleration: 5.0
Kinetic: Joint limits come directly from the URDF. No separate file needed.
Kinetic reads <limit> tags from your URDF automatically.
MoveIt2 YAML (kinematics.yaml):
arm:
kinematics_solver: kdl_kinematics_plugin/KDLKinematicsPlugin
kinematics_solver_search_resolution: 0.005
kinematics_solver_timeout: 0.05
Kinetic TOML:
[ik]
solver = "dls" # or "opw", "fabrik", "subproblem"
damping = 0.05
MoveIt2 SRDF (collision disable):
<disable_collisions link1="link5" link2="link7" reason="Never"/>
Kinetic TOML:
[collision]
skip_pairs = [["link5", "link7"]]
MoveIt2 SRDF (named poses):
<group_state name="home" group="arm">
<joint name="joint1" value="0.0"/>
<joint name="joint2" value="-1.5708"/>
</group_state>
Kinetic TOML:
[named_poses]
home = [0.0, -1.5708, 0.0, -1.5708, 0.0, 0.0]
8-Step Migration Checklist
1. Copy your URDF. Place it in robot_configs/<name>/. Kinetic parses the same URDF format as MoveIt2.
2. Create kinetic.toml. Translate your SRDF groups, named poses, and collision pairs. See the config translation above.
3. Replace MoveGroupInterface with Planner. Planner::new(&robot) replaces MoveGroupInterface(node, "arm").
4. Replace PlanningSceneInterface with Scene. Scene::new(&robot) replaces PlanningSceneInterface().
5. Replace MoveIt Servo with Kinetic's Servo. Servo::new(&robot, &scene, ServoConfig::default()) replaces moveit_servo::Servo(node, params).
6. Update goal types. MoveIt2's setJointValueTarget becomes Goal::Joints(...); setPoseTarget becomes Goal::Pose(...).
7. Replace trajectory execution. MoveIt2's execute() goes through move_group_->execute() and ROS action servers. Kinetic uses RealTimeExecutor with a CommandSink callback.
8. Remove ROS dependencies. Kinetic has zero ROS/ROS2 dependencies. Remove rclcpp, moveit_ros_planning_interface, and related packages.
Before/After Code Examples
Planning a Joint-Space Goal
MoveIt2 (C++):
auto move_group = MoveGroupInterface(node, "arm");
move_group.setJointValueTarget({0.5, -0.5, 0.3, 0.0, 0.0, 0.0});
auto [success, plan] = move_group.plan();
if (success) move_group.execute(plan);
Kinetic (Rust):
#![allow(unused)]
fn main() {
let robot = Robot::from_name("ur5e")?;
let planner = Planner::new(&robot)?;
let start = vec![0.0, -1.5708, 0.0, -1.5708, 0.0, 0.0];
let goal = Goal::Joints(JointValues(vec![0.5, -0.5, 0.3, 0.0, 0.0, 0.0]));
let result = planner.plan(&start, &goal)?;
}
Adding Scene Obstacles
MoveIt2 (C++):
auto scene_interface = PlanningSceneInterface();
moveit_msgs::msg::CollisionObject obj;
obj.id = "table";
obj.primitives.push_back(box_shape);
obj.primitive_poses.push_back(table_pose);
scene_interface.applyCollisionObject(obj);
Kinetic (Rust):
#![allow(unused)]
fn main() {
let mut scene = Scene::new(&robot);
scene.add("table", Shape::Cuboid(0.8, 0.6, 0.02),
Pose::from_xyz(0.5, 0.0, 0.0));
let planner = Planner::new(&robot)?.with_scene(&scene);
}
Servo Teleoperation
MoveIt2 (C++):
auto servo = moveit_servo::Servo(node, servo_params);
servo.start();
// Publish TwistStamped to /servo_node/delta_twist_cmds
Kinetic (Rust):
#![allow(unused)]
fn main() {
use kinetic::reactive::{Servo, ServoConfig};
let servo = Servo::new(&robot, &scene, ServoConfig::teleop());
let cmd = servo.send_twist(&twist)?;
// cmd.positions and cmd.velocities ready for hardware
}
Key Differences from MoveIt2
| Aspect | MoveIt2 | Kinetic |
|---|---|---|
| Language | C++ (Python bindings) | Rust (Python bindings) |
| ROS dependency | Required (ROS2) | None |
| IPC | ROS topics/actions/services | Direct API or HORUS IPC |
| IK solver | Plugin (KDL, Trac-IK) | Built-in (OPW, DLS, FABRIK, Subproblem) |
| Collision | FCL | SIMD sphere model + CAPT broadphase |
| GPU support | None | wgpu trajectory optimizer |
| Configuration | YAML + SRDF + launch files | Single kinetic.toml |
| Thread model | ROS executor | Direct, zero overhead |
From Other Frameworks
Concept mapping for cuRobo, Drake, OMPL, PyBullet, and Pinocchio users.
cuRobo
NVIDIA’s cuRobo provides GPU-accelerated motion generation using parallel
trajectory optimization. Kinetic’s kinetic-gpu crate follows the same
architecture (parallel seeds, SDF collision, gradient-based optimization)
but runs on any GPU via wgpu (Vulkan/Metal/DX12), not just CUDA.
| cuRobo | Kinetic |
|---|---|
MotionGen | GpuOptimizer or Planner |
RobotConfig | Robot + kinetic.toml |
WorldConfig | Scene + CollisionEnvironment |
MotionGenConfig | GpuConfig |
CudaRobotModel | RobotSphereModel |
plan_single() | planner.plan() |
plan_batch() | GpuOptimizer::optimize() |
Key difference: cuRobo requires CUDA (NVIDIA GPUs only). Kinetic runs on any platform with a Vulkan/Metal GPU, or falls back to CPU. cuRobo is Python-first; Kinetic is Rust-first with Python bindings.
Comparison:
# cuRobo
from curobo.wrap.reacher.motion_gen import MotionGen
motion_gen = MotionGen(MotionGenConfig.load_from_robot_config(robot_cfg))
result = motion_gen.plan_single(start, goal)
# Kinetic
import kinetic
planner = kinetic.Planner(kinetic.Robot("ur5e"))
result = planner.plan(start, kinetic.Goal.joints(goal))
Drake
Drake is a C++ toolbox for model-based robotics with a focus on mathematical rigor, optimization, and simulation. Kinetic is a focused motion planning library without a physics simulator.
| Drake | Kinetic |
|---|---|
MultibodyPlant | Robot |
SceneGraph | Scene |
InverseKinematics (program) | solve_ik() |
KinematicTrajectoryOptimization | GpuOptimizer or totp() |
GcsTrajectoryOptimization | GCS planner |
RigidBodyTree | KinematicChain |
CalcJacobian | jacobian() |
Key difference: Drake solves motion planning as a mathematical program (convex optimization, mixed-integer programming). Kinetic uses sampling-based planners (RRT, EST, KPIECE) for speed and GPU optimization for quality. Drake includes a full physics simulator; Kinetic does not.
Comparison:
# Drake
from pydrake.multibody.inverse_kinematics import InverseKinematics
ik = InverseKinematics(plant)
ik.AddPositionConstraint(...)
result = Solve(ik.prog())
# Kinetic
import kinetic
robot = kinetic.Robot("ur5e")
planner = kinetic.Planner(robot)
joints = planner.ik(target_pose)
OMPL
OMPL (Open Motion Planning Library) provides sampling-based planning algorithms. Kinetic implements many of the same algorithms (RRT, RRT*, PRM, EST, KPIECE) with Rust performance and integrated kinematics/collision.
| OMPL | Kinetic |
|---|---|
SimpleSetup | Planner |
SpaceInformation | KinematicChain + CollisionEnvironment |
StateSpace | Joint limits from Robot |
RRTConnect | PlannerType::RRTConnect |
RRTstar | PlannerType::RRTStar |
EST | PlannerType::EST |
KPIECE1 | PlannerType::KPIECE |
PRM | PlannerType::PRM |
StateValidityChecker | planner.is_in_collision() |
Key difference: OMPL is a standalone planning library that requires you to provide collision checking, FK, and IK externally (usually via MoveIt2). Kinetic bundles all of these into a single stack with zero external dependencies.
Comparison:
// OMPL
auto si = std::make_shared<SpaceInformation>(space);
si->setStateValidityChecker(myCollisionChecker);
auto planner = std::make_shared<RRTConnect>(si);
planner->solve(5.0);
// Kinetic
let planner = Planner::new(&robot)?
.with_planner_type(PlannerType::RRTConnect);
let result = planner.plan(&start, &goal)?;
PyBullet
PyBullet is a Python physics simulator commonly used for robotics research and reinforcement learning. Kinetic is not a simulator but can replace PyBullet’s IK and motion planning functionality.
| PyBullet | Kinetic |
|---|---|
loadURDF | Robot::from_urdf("file.urdf") |
calculateInverseKinematics | solve_ik() |
calculateJacobian | jacobian() |
getJointInfo | robot.joints |
getLinkState | forward_kinematics() |
| N/A (no planner) | Planner::plan() |
setCollisionFilter | AllowedCollisionMatrix |
Key difference: PyBullet is a simulator with basic IK (damped least squares). Kinetic is a planning stack with 10 IK solvers, 14 planners (8 surfaced in Python), and GPU optimization. PyBullet runs in Python; Kinetic's core is Rust with Python bindings.
Comparison:
# PyBullet
import pybullet as p
robot_id = p.loadURDF("ur5e.urdf")
ik = p.calculateInverseKinematics(robot_id, ee_link, target_pos)
# Kinetic
import kinetic
robot = kinetic.Robot("ur5e")
planner = kinetic.Planner(robot)
ik = planner.ik(target_pose)
Pinocchio
Pinocchio is a C++ rigid-body dynamics library focused on analytical derivatives and efficient dynamics computations. Kinetic covers kinematics and planning but not dynamics.
| Pinocchio | Kinetic |
|---|---|
Model | Robot |
Data | (computed on the fly) |
forwardKinematics | forward_kinematics() |
computeJointJacobians | jacobian() |
ik (via proxsuite) | solve_ik() |
computeMinverse | N/A (use featherstone crate) |
GeometryModel | RobotSphereModel |
computeCollisions (HPP-FCL) | CollisionEnvironment::check_collision() |
Key difference: Pinocchio focuses on rigid-body dynamics (ABA, RNEA, CRBA)
with analytical derivatives for optimal control. Kinetic focuses on motion
planning with fast collision checking. For dynamics, use the featherstone
crate from the Softmata ecosystem.
Comparison:
# Pinocchio
import pinocchio as pin
model = pin.buildModelFromUrdf("ur5e.urdf")
data = model.createData()
pin.forwardKinematics(model, data, q)
J = pin.computeJointJacobian(model, data, q, joint_id)
# Kinetic
import kinetic
robot = kinetic.Robot("ur5e")
planner = kinetic.Planner(robot)
pose = planner.fk(q)
Summary Comparison
| Feature | kinetic | MoveIt2 | cuRobo | Drake | OMPL | PyBullet | Pinocchio |
|---|---|---|---|---|---|---|---|
| Language | Rust | C++ | Python | C++ | C++ | Python | C++ |
| FK/IK | Built-in | Plugin | Built-in | Built-in | External | Basic | Built-in |
| Planning | 14 (8 in Py) | OMPL | Optimizer | GCS/Opt | Planners | None | None |
| Collision | SIMD spheres | FCL | CUDA SDF | Drake | External | Bullet | HPP-FCL |
| GPU | wgpu | No | CUDA | No | No | No | No |
| Dynamics | No | No | No | Yes | No | Yes | Yes |
| ROS req | No | Yes | No | No | No | No | No |
Supported Robots
All 52 built-in robot configurations, organized by manufacturer.
Universal Robots
| Config Name | Model | DOF | IK Solver | Notes |
|---|---|---|---|---|
ur3e | UR3e | 6 | OPW | Compact tabletop arm |
ur5e | UR5e | 6 | OPW | Most popular UR model |
ur10e | UR10e | 6 | OPW | Long reach (1300mm) |
ur16e | UR16e | 6 | OPW | Heavy payload (16kg) |
ur20 | UR20 | 6 | OPW | Next-gen 20kg payload |
ur30 | UR30 | 6 | OPW | Next-gen 30kg payload |
Franka Emika
| Config Name | Model | DOF | IK Solver | Notes |
|---|---|---|---|---|
franka_panda | Panda | 7 | DLS | Research-grade, torque sensing |
KUKA
| Config Name | Model | DOF | IK Solver | Notes |
|---|---|---|---|---|
kuka_iiwa7 | LBR iiwa 7 | 7 | DLS | 7-DOF collaborative |
kuka_iiwa14 | LBR iiwa 14 | 7 | DLS | 14kg payload variant |
kuka_kr6 | KR 6 | 6 | OPW | Industrial 6kg |
ABB
| Config Name | Model | DOF | IK Solver | Notes |
|---|---|---|---|---|
abb_irb1200 | IRB 1200 | 6 | OPW | Compact industrial |
abb_irb4600 | IRB 4600 | 6 | OPW | High payload industrial |
abb_yumi_left | YuMi (left) | 7 | DLS | Dual-arm collaborative |
abb_yumi_right | YuMi (right) | 7 | DLS | Dual-arm collaborative |
Fanuc
| Config Name | Model | DOF | IK Solver | Notes |
|---|---|---|---|---|
fanuc_crx10ia | CRX-10iA | 6 | OPW | Collaborative |
fanuc_lr_mate_200id | LR Mate 200iD | 6 | OPW | Compact industrial |
Yaskawa/Motoman
| Config Name | Model | DOF | IK Solver | Notes |
|---|---|---|---|---|
yaskawa_gp7 | GP7 | 6 | OPW | General purpose |
yaskawa_hc10 | HC10 | 6 | OPW | Collaborative 10kg |
UFactory (xArm)
| Config Name | Model | DOF | IK Solver | Notes |
|---|---|---|---|---|
xarm5 | xArm 5 | 5 | DLS | 5-DOF variant |
xarm6 | xArm 6 | 6 | OPW | 6-DOF variant |
xarm7 | xArm 7 | 7 | DLS | 7-DOF redundant |
Kinova
| Config Name | Model | DOF | IK Solver | Notes |
|---|---|---|---|---|
kinova_gen3 | Gen3 | 7 | DLS | 7-DOF research arm |
kinova_gen3_lite | Gen3 Lite | 6 | DLS | 6-DOF lightweight |
jaco2_6dof | Jaco2 | 6 | DLS | Assistive robotics |
Trossen Robotics
| Config Name | Model | DOF | IK Solver | Notes |
|---|---|---|---|---|
trossen_px100 | PincherX 100 | 4 | DLS | Budget tabletop |
trossen_rx150 | ReactorX 150 | 5 | DLS | Mid-range |
trossen_wx250s | WidowX 250s | 6 | OPW | Research platform |
viperx_300 | ViperX 300 | 6 | OPW | 6-DOF research |
widowx_250 | WidowX 250 | 6 | OPW | Research platform |
Research / Education
| Config Name | Model | DOF | IK Solver | Notes |
|---|---|---|---|---|
aloha_left | ALOHA (left) | 6 | OPW | Bimanual teleop |
aloha_right | ALOHA (right) | 6 | OPW | Bimanual teleop |
koch_v1 | Koch v1 | 6 | OPW | Open-source arm |
lerobot_so100 | LeRobot SO-100 | 6 | DLS | AI training arm |
so_arm100 | SO-ARM100 | 5 | DLS | Compact research |
open_manipulator_x | OpenMANIPULATOR-X | 4 | DLS | ROBOTIS, 4-DOF |
robotis_open_manipulator_p | OpenMANIPULATOR-P | 6 | OPW | ROBOTIS, 6-DOF |
mycobot_280 | myCobot 280 | 6 | OPW | Elephant Robotics |
Rethink Robotics
| Config Name | Model | DOF | IK Solver | Notes |
|---|---|---|---|---|
baxter_left | Baxter (left) | 7 | DLS | Dual-arm research |
baxter_right | Baxter (right) | 7 | DLS | Dual-arm research |
sawyer | Sawyer | 7 | DLS | Single-arm research |
Other Manufacturers
| Config Name | Model | DOF | IK Solver | Notes |
|---|---|---|---|---|
denso_vs068 | Denso VS068 | 6 | OPW | Compact industrial |
dobot_cr5 | Dobot CR5 | 6 | OPW | Collaborative |
elite_ec66 | Elite EC66 | 6 | OPW | Collaborative |
flexiv_rizon4 | Flexiv Rizon 4 | 7 | DLS | Force-controlled |
meca500 | Mecademic Meca500 | 6 | OPW | Micro industrial |
niryo_ned2 | Niryo Ned2 | 6 | OPW | Education |
staubli_tx260 | Staubli TX2-60 | 6 | OPW | High-speed |
techman_tm5_700 | Techman TM5-700 | 6 | OPW | Collaborative with vision |
Mobile Manipulators
| Config Name | Model | DOF | IK Solver | Notes |
|---|---|---|---|---|
fetch | Fetch | 8 | DLS | Mobile base + 7-DOF arm |
pr2 | PR2 | 8 | DLS | Willow Garage dual-arm |
stretch_re2 | Stretch RE2 | 5 | DLS | Hello Robot, mobile |
tiago | TIAGo | 8 | DLS | PAL Robotics mobile |
Usage
#![allow(unused)]
fn main() {
use kinetic::prelude::*;
// Load any robot by config name
let robot = Robot::from_name("ur5e")?;
// Check available named poses
if let Some(joints) = robot.named_pose("home") {
println!("Home: {:?}", joints);
}
}
Adding Your Own
See the Custom Robots guide for step-by-step instructions on adding a new robot configuration. Contributions are welcome via pull request.
API Cheatsheet
Quick reference grouped by task. Rust and Python shown side by side.
Load Robot
#![allow(unused)]
fn main() {
// Rust
use kinetic::prelude::*;
let robot = Robot::from_name("ur5e")?;
let robot = Robot::from_urdf("path/to/robot.urdf")?;
let robot = Robot::from_urdf_string(urdf_str)?;
println!("DOF: {}", robot.dof);
}
# Python
import kinetic
robot = kinetic.Robot("ur5e")
robot = kinetic.Robot.from_urdf("path/to/robot.urdf")
print(f"DOF: {robot.dof}")
Forward Kinematics
#![allow(unused)]
fn main() {
// Rust
let chain = KinematicChain::extract(&robot, "base_link", "tool0")?;
let pose = forward_kinematics(&robot, &chain, &joints)?;
let all_poses = forward_kinematics_all(&robot, &chain, &joints)?;
println!("Position: {:?}", pose.translation());
}
# Python
pose = robot.fk(joints) # 4x4 numpy
poses = robot.batch_fk(configs) # list of 4x4 (N configs)
Inverse Kinematics
#![allow(unused)]
fn main() {
// Rust
let config = IKConfig::default().with_restarts(8);
let solution = solve_ik(&robot, &chain, &target_pose, &config)?;
println!("Converged: {}, error: {:.6}m", solution.converged, solution.position_error);
let joints = solution.joints;
}
# Python
joints = robot.ik(target_pose)
result = robot.ik_config(target, solver="opw", mode="full6d", num_restarts=10)
# result: {'joints': array, 'converged': bool, 'position_error': float, ...}
Batch IK
#![allow(unused)]
fn main() {
// Rust
let results = solve_ik_batch(&robot, &chain, &targets, &config);
// Vec<Option<IKSolution>> -- None for failed targets
}
# Python
results = robot.batch_ik(target_list, solver="auto")
# list of dicts (or None for failures)
Jacobian and Manipulability
#![allow(unused)]
fn main() {
// Rust
let j = jacobian(&robot, &chain, &joints)?; // 6 x DOF matrix
let m = manipulability(&robot, &chain, &joints)?;
}
# Python
jac = robot.jacobian(joints) # (6, DOF) numpy
m = robot.manipulability(joints) # float
Plan
#![allow(unused)]
fn main() {
// Rust
let planner = Planner::new(&robot)?;
let result = planner.plan(&start, &Goal::Joints(goal))?;
let result = planner.plan(&start, &Goal::Pose(target_pose))?;
let result = planner.plan(&start, &Goal::Named("home".into()))?;
let result = planner.plan(&start, &Goal::Relative(offset_vec))?;
// One-liner
let result = kinetic::plan("ur5e", &start, &goal)?;
}
# Python
planner = kinetic.Planner(robot)
goal = kinetic.Goal.joints(goal_array)
goal = kinetic.Goal.pose(position, quaternion)
goal = kinetic.Goal.named("home")
traj = planner.plan(start, goal)
Cartesian / Dual-Arm Planning
#![allow(unused)]
fn main() {
// Rust
let dual = DualArmPlanner::new(&robot, left_chain, right_chain, DualArmMode::Synchronized)?;
let goal = DualGoal { left: Goal::Named("home".into()), right: Goal::Named("home".into()) };
let result = dual.plan(&left_start, &right_start, &goal)?;
}
# Python
dual = kinetic.DualArmPlanner(robot, "left_arm", "right_arm", mode="synchronized")
result = dual.plan(start_left, start_right, goal_left, goal_right)
# result: {'left_trajectory': Trajectory, 'right_trajectory': Trajectory, ...}
Scene
#![allow(unused)]
fn main() {
// Rust
let mut scene = Scene::new(&robot);
scene.add("table", Shape::Cuboid(0.8, 0.6, 0.02), table_pose);
scene.add("bottle", Shape::Cylinder(0.03, 0.15), bottle_pose);
scene.attach("tool", Shape::Sphere(0.02), grasp_tf, "tool0");
scene.remove("bottle");
}
# Python
scene = kinetic.Scene(robot)
scene.add("table", kinetic.Shape.cuboid(0.8, 0.6, 0.02), pose)
scene.attach("tool", kinetic.Shape.sphere(0.02), grasp_tf, "tool0")
Servo
#![allow(unused)]
fn main() {
// Rust
use std::sync::Arc;
use kinetic::reactive::{Servo, ServoConfig};
let robot = Arc::new(robot);
let scene = Arc::new(scene);
let servo = Servo::new(&robot, &scene, ServoConfig::teleop())?;
let cmd = servo.send_twist(&twist)?;
// cmd.positions, cmd.velocities
}
# Python
servo = kinetic.Servo(robot, scene)
cmd = servo.send_twist(twist)
Trajectory
#![allow(unused)]
fn main() {
// Rust
let timed = trapezoidal(&waypoints, &vel_limits, &accel_limits)?;
let timed = totp(&waypoints, &vel_limits, &accel_limits)?;
trajectory_to_csv_file(&timed, "output.csv")?;
trajectory_to_json_file(&timed, "output.json")?;
let loaded = trajectory_from_csv("output.csv")?;
}
# Python
traj = planner.plan(start, goal)
traj.to_csv("output.csv")
traj.to_json("output.json")
duration = traj.duration
joints_at_t = traj.sample(0.5)
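The trapezoidal profile above reduces to a few lines of arithmetic per joint. This is a self-contained sketch of the underlying math, not the library's implementation:

```python
import math

def trapezoid_duration(d, v_max, a_max):
    """Minimum time to travel distance d with a trapezoidal velocity profile.

    If the distance is too short to reach v_max, the profile degenerates
    into a triangle (accelerate, then immediately decelerate).
    """
    d = abs(d)
    t_acc = v_max / a_max            # time to reach cruise velocity
    d_acc = 0.5 * a_max * t_acc**2   # distance covered while accelerating
    if 2 * d_acc >= d:
        # Triangle profile: peak velocity sqrt(a * d) is below v_max
        return 2.0 * math.sqrt(d / a_max)
    d_cruise = d - 2 * d_acc
    return 2 * t_acc + d_cruise / v_max
```

Per-joint limit enforcement then takes the slowest joint's duration as the trajectory's segment time.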
Task Planning
#![allow(unused)]
fn main() {
// Rust
let pick = Task::pick(&robot, &scene, pick_config);
let solution = Task::sequence(vec![Task::move_to(&robot, goal), pick]).plan(&start)?;
}
# Python
t1 = kinetic.Task.move_to(robot, goal)
t2 = kinetic.Task.pick(robot, scene, "cup", grasp_poses, approach, retreat)
t3 = kinetic.Task.place(robot, scene, "cup", target_pose, approach, retreat)
seq = kinetic.Task.sequence([t1, t2, t3])
solution = seq.plan(start_joints)
Dynamics
#![allow(unused)]
fn main() {
// Rust
let mut body = articulated_body_from_chain(&robot, &chain);
let tau = gravity_compensation(&mut body, &q);
let tau = inverse_dynamics(&mut body, &q, &qd, &qdd);
let qdd = forward_dynamics(&mut body, &q, &qd, &tau);
let m = mass_matrix(&mut body, &q);
}
# Python
dyn = kinetic.Dynamics(robot)
tau = dyn.gravity_compensation(q)
tau = dyn.inverse_dynamics(q, qd, qdd)
qdd = dyn.forward_dynamics(q, qd, tau)
M = dyn.mass_matrix(q) # (DOF, DOF) numpy
GPU Optimization
#![allow(unused)]
fn main() {
// Rust
let opt = GpuOptimizer::new(GpuConfig::balanced())?;
let traj = opt.optimize(&robot, &obstacles, &start, &goal)?;
}
# Python
opt = kinetic.GpuOptimizer(robot, preset="balanced") # auto GPU/CPU
traj = opt.optimize(start, goal, scene=scene)
print(opt.is_gpu) # True if GPU available
checker = kinetic.GpuCollisionChecker(robot, scene)
results = checker.check_batch(configs) # (N, DOF) → {'in_collision', 'min_distances'}
Execution
#![allow(unused)]
fn main() {
// Rust
let executor = RealTimeExecutor::new(ExecutionConfig::safe(&robot));
let result = executor.execute(&timed, &mut my_driver)?;
}
# Python
def send(positions, velocities):
my_driver.set_joints(positions)
executor = kinetic.RealTimeExecutor.safe(robot, rate_hz=500)
result = executor.execute(traj, send, feedback=read_positions)
# result: {'state', 'actual_duration', 'commands_sent', 'max_deviation', ...}
Error Reference
Every KineticError variant with display message, cause, fix, and code example.
All kinetic operations return kinetic_core::Result<T>, which is an alias
for std::result::Result<T, KineticError>. The error type is #[non_exhaustive],
so match arms should include a wildcard.
Error Classification
#![allow(unused)]
fn main() {
let err: KineticError = ...;
// Retryable errors: try again with different parameters
if err.is_retryable() { /* PlanningTimeout, IKNotConverged, CartesianPathIncomplete */ }
// Input errors: fix the input, do not retry
if err.is_input_error() { /* UrdfParse, DimensionMismatch, GoalUnreachable, ... */ }
}
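One common way to use is_retryable in practice is budget escalation: retry a retryable failure with a larger time budget. A sketch of that pattern (plan_fn is a caller-supplied stand-in for a call into the planner; TimeoutError stands in for PlanningTimeout):

```python
def plan_with_escalation(plan_fn, budgets_ms=(10, 50, 500)):
    """Call plan_fn with increasing time budgets until one succeeds.

    plan_fn(budget_ms) should return a trajectory or raise TimeoutError
    (standing in for a retryable PlanningTimeout error).
    """
    last_err = None
    for budget in budgets_ms:
        try:
            return plan_fn(budget)
        except TimeoutError as e:
            last_err = e  # retryable: try again with a larger budget
    raise last_err

# Demo: fails under small budgets, succeeds once the budget is generous
attempts = []
def fake_plan(budget_ms):
    attempts.append(budget_ms)
    if budget_ms < 500:
        raise TimeoutError("PlanningTimeout")
    return "trajectory"

result = plan_with_escalation(fake_plan)
```

Input errors, by contrast, should never be retried; fix the request instead.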
UrdfParse
Display: URDF parse error: {detail}
Cause: The URDF file contains malformed XML, missing required tags, or invalid attribute values.
Fix: Validate the URDF with an XML linter. Ensure every <joint> declares
a type and has <parent> and <child> tags; movable joints also need
<origin>, <axis>, and <limit>.
IKNotConverged
Display: IK did not converge after {iterations} iterations (residual: {residual:.6})
Cause: Iterative solver did not reach tolerance. Common near workspace boundaries or with poor seeds.
Fix: Increase num_restarts, provide a better seed, lower damping,
or use IKConfig::with_fallback().
PlanningTimeout
Display: Planning timed out after {elapsed:?} ({iterations} iterations)
Cause: Planner exhausted time budget. Constrained environments or short timeouts.
Fix: Use PlannerConfig::offline(), try EST/KPIECE, verify start/goal
are not in collision.
StartInCollision / GoalInCollision
StartInCollision: Start joints collide with scene or self.
Check planner.is_in_collision(&start) before planning.
GoalInCollision: Resolved goal collides with scene. Verify goal is collision-free. Try different IK seeds for pose goals.
GoalUnreachable
Display: Goal is unreachable
Cause: Goal joints violate URDF limits, or IK cannot find a valid solution for a pose goal within joint limits.
Fix: Check that goal values are within robot.joint_limits.
NoIKSolution
Display: No valid IK solution found for target pose
Cause: Target is outside the robot’s workspace or no solution exists within limits.
Fix: Verify the target is reachable. Use ReachabilityMap for workspace analysis.
JointLimitViolation
Display: Joint '{name}' value {value} outside limits [{min}, {max}]
Cause: A joint value exceeds URDF-defined limits.
Fix: Clamp or adjust the offending joint value.
RobotConfigNotFound
Display: Robot config not found: {name}
Cause: No config directory for the given robot name.
Fix: Check spelling. Verify robot_configs/{name}/kinetic.toml exists.
CartesianPathIncomplete
Display: Cartesian path only achieved {fraction:.1}% of requested path
Cause: Cartesian planner hit IK failures, collisions, or joint jumps.
Fix: Lower max_step in CartesianConfig or break into shorter segments.
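The reported fraction can be read as "how far along the path we got before the first failing step". A self-contained sketch of that bookkeeping, with 1-D scalars standing in for poses and a stub reachability test standing in for IK and collision checks:

```python
def cartesian_fraction(start, end, step, reachable):
    """Walk from start to end in increments of `step`; return the fraction
    of the path achieved before the first point `reachable` rejects."""
    n = max(1, round(abs(end - start) / step))
    for k in range(1, n + 1):
        x = start + (end - start) * k / n
        if not reachable(x):
            return (k - 1) / n
    return 1.0

# Workspace ends near x = 0.72, so only half of a 0.4..1.0 path succeeds
frac = cartesian_fraction(0.4, 1.0, 0.05, lambda x: x <= 0.72)
```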
CollisionDetected
Display: Collision detected at waypoint {waypoint_index}
Fix: Replan with updated scene. Increase collision_margin.
TrajectoryLimitExceeded
Display: Trajectory limit exceeded at waypoint {waypoint_index}: {detail}
Fix: Re-run time parameterization with correct limits. Lower velocity scaling.
DimensionMismatch
Display: Dimension mismatch in {context}: expected {expected}, got {got}
Cause: Array length does not match the expected DOF. Common when mixing full robot DOF with chain DOF on mobile manipulators.
Fix: Check robot.dof and chain.dof. For mobile manipulators
(Fetch, TIAGo), chain.dof may be less than robot.dof.
#![allow(unused)]
fn main() {
let chain = KinematicChain::extract(&robot, "base_link", "tool0")?;
// Use chain.dof for joint arrays passed to plan/fk/ik
let joints = vec![0.0; chain.dof];
}
SingularityLockup
Display: Singularity lockup: pseudoinverse failed {consecutive_failures} consecutive times
Cause: The servo controller’s pseudoinverse computation failed repeatedly because the robot is at or near a kinematic singularity.
Fix: Move the robot away from the singular configuration. Increase
singularity_damping in ServoConfig. Check condition_number on
IK solutions to detect approaching singularities early.
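To see why condition_number is a useful early-warning signal, here is a self-contained computation on a planar 2-link arm (toy geometry, not the library's Jacobian code): as the elbow angle approaches zero, the Jacobian's condition number explodes.

```python
import math

def jacobian_2link(t1, t2, l1=1.0, l2=1.0):
    """Position Jacobian of a planar 2-link arm (singular at t2 = 0 or pi)."""
    s1, c1 = math.sin(t1), math.cos(t1)
    s12, c12 = math.sin(t1 + t2), math.cos(t1 + t2)
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [ l1 * c1 + l2 * c12,  l2 * c12]]

def condition_number(J):
    """Ratio of largest to smallest singular value, via eigenvalues of J^T J."""
    a = J[0][0]**2 + J[1][0]**2          # |column 0|^2
    b = J[0][0] * J[0][1] + J[1][0] * J[1][1]
    d = J[0][1]**2 + J[1][1]**2          # |column 1|^2
    tr, det = a + d, a * d - b * b
    disc = math.sqrt(max(tr * tr - 4 * det, 0.0))
    lo, hi = (tr - disc) / 2, (tr + disc) / 2
    return math.inf if lo <= 0 else math.sqrt(hi / lo)

well = condition_number(jacobian_2link(0.3, math.pi / 2))  # healthy elbow bend
near = condition_number(jacobian_2link(0.3, 0.01))         # nearly straight
```

Monitoring this value and slowing down before it spikes is what singularity avoidance in the servo loop amounts to.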
PlannerOutputInvalid
Display: Planner output invalid at waypoint {waypoint}: {reason}
Cause: The planner’s output failed internal safety validation. Either a waypoint violates joint limits (safety gate 1) or the end-effector exits the configured workspace bounds (safety gate 2).
Fix: If joint limits are violated, verify URDF limit accuracy. If workspace
bounds are violated, widen the bounds or remove workspace_bounds from the
config. If this error persists, it may indicate a planner bug; please report it.
#![allow(unused)]
fn main() {
match planner.plan(&start, &goal) {
Err(KineticError::PlannerOutputInvalid { waypoint, reason }) => {
eprintln!("Safety gate failed at waypoint {waypoint}: {reason}");
}
_ => {}
}
}
NoLinks / LinkNotFound / JointNotFound
Display: Robot has no links / Link '{name}' not found / Joint '{name}' not found
Cause: Empty URDF, or a typo in a link/joint name string.
Fix: Print robot.links or robot.joints to see available names.
NamedConfigNotFound
Display: Named configuration '{name}' not found
Cause: The named pose does not exist in kinetic.toml.
Fix: Check the [named_poses] section. Available names can be
queried via robot.named_pose_names().
Other
Display: {message}
Cause: Catch-all for errors that do not fit a specific variant.
Fix: Read the message for context. Common cases include non-finite joint values (NaN/Infinity) passed to the planner.
Configuration Reference
All configuration structs, fields, and presets.
PlannerConfig
Controls motion planner behavior. Used by all planners through the Planner
facade.
Fields
| Field | Type | Default | Description |
|---|---|---|---|
timeout | Duration | 50ms | Maximum planning time before timeout |
max_iterations | usize | 10,000 | Upper bound on sampling/expansion iterations |
collision_margin | f64 | 0.02 | Minimum clearance from obstacles (meters) |
shortcut_iterations | usize | 100 | Number of random-shortcut passes |
smooth | bool | true | Apply B-spline smoothing after shortcutting |
workspace_bounds | Option<[f64; 6]> | None | [min_x, min_y, min_z, max_x, max_y, max_z] |
Presets
PlannerConfig::default() – General purpose.
#![allow(unused)]
fn main() {
PlannerConfig {
timeout: Duration::from_millis(50),
max_iterations: 10_000,
collision_margin: 0.02,
shortcut_iterations: 100,
smooth: true,
workspace_bounds: None,
}
}
PlannerConfig::realtime() – Low-latency planning (10ms budget).
#![allow(unused)]
fn main() {
PlannerConfig {
timeout: Duration::from_millis(10),
max_iterations: 2_000,
collision_margin: 0.01,
shortcut_iterations: 20,
smooth: false,
workspace_bounds: None,
}
}
PlannerConfig::offline() – Thorough offline planning (500ms budget).
#![allow(unused)]
fn main() {
PlannerConfig {
timeout: Duration::from_millis(500),
max_iterations: 100_000,
collision_margin: 0.02,
shortcut_iterations: 500,
smooth: true,
workspace_bounds: None,
}
}
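shortcut_iterations controls a random-shortcut post-processing loop. A self-contained sketch of the assumed algorithm (the validity checker and 1-D waypoints are toy stand-ins): pick two random waypoint indices and, if the straight segment between them is valid, drop everything in between.

```python
import random

def shortcut(path, segment_valid, iterations=100, rng=None):
    """Randomly remove detours: if the straight segment between two
    waypoints is valid, splice out the intermediate waypoints."""
    rng = rng or random.Random(0)  # seeded for reproducibility
    path = list(path)
    for _ in range(iterations):
        if len(path) < 3:
            break
        i, j = sorted(rng.sample(range(len(path)), 2))
        if j - i > 1 and segment_valid(path[i], path[j]):
            path = path[:i + 1] + path[j:]
    return path

# With an always-valid checker the path can only get shorter;
# endpoints are always preserved.
p = shortcut([0.0, 0.3, -0.2, 0.8, 1.0], lambda a, b: True)
```

Smoothing (the smooth flag) then fits a B-spline through the shortcut waypoints.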
IKConfig
Controls inverse kinematics solver behavior.
Fields
| Field | Type | Default | Description |
|---|---|---|---|
solver | IKSolver | Auto | Solver selection (Auto, DLS, FABRIK, OPW, Subproblem, Subproblem7DOF) |
mode | IKMode | Full6D | Full6D, PositionOnly, or PositionFallback |
max_iterations | usize | 100 | Max iterations for iterative solvers |
position_tolerance | f64 | 1e-4 | Position convergence tolerance (meters) |
orientation_tolerance | f64 | 1e-3 | Orientation convergence tolerance (radians) |
check_limits | bool | true | Enforce joint limits |
seed | Option<Vec<f64>> | None | Starting configuration (uses mid-config if None) |
null_space | Option<NullSpace> | None | Null-space objective for redundant robots |
num_restarts | usize | 0 | Random restart count for escaping local minima |
Solver Options
| Solver | Constructor | Best For |
|---|---|---|
| Auto | IKConfig::default() | Automatic selection based on robot geometry |
| DLS | IKConfig::dls() | General purpose, any DOF |
| FABRIK | IKConfig::fabrik() | Position-focused tasks |
| OPW | IKConfig::opw() | 6-DOF spherical wrist robots |
| Subproblem | Direct construction | 6-DOF intersecting wrist axes |
| Subproblem7DOF | Direct construction | 7-DOF robots |
Convenience Constructors
#![allow(unused)]
fn main() {
IKConfig::dls() // DLS with default damping 0.05
IKConfig::fabrik() // FABRIK solver
IKConfig::opw() // OPW analytical solver
IKConfig::position_only() // Position-only mode
IKConfig::with_fallback() // Try Full6D, fall back to PositionOnly
}
ServoConfig
Controls the reactive servo controller for teleoperation and tracking.
Fields
| Field | Type | Default | Description |
|---|---|---|---|
| rate_hz | f64 | 500.0 | Control loop rate |
| input_type | InputType | Twist | Twist, JointJog, or PoseTracking |
| collision_check_hz | f64 | 100.0 | Collision check frequency |
| singularity_threshold | f64 | 0.02 | Manipulability threshold |
| slowdown_distance | f64 | 0.15 | Distance to start decelerating (meters) |
| stop_distance | f64 | 0.03 | Emergency stop distance (meters) |
| velocity_limits | Vec<f64> | [] | Per-joint vel limits (empty = use robot) |
| acceleration_limits | Vec<f64> | [] | Per-joint accel limits (empty = defaults) |
| pose_tracking_gain | f64 | 5.0 | Proportional gain for pose tracking |
| singularity_damping | f64 | 0.05 | Damping for singularity-robust pseudoinverse |
| max_delta_per_tick | f64 | 0.02 | Max position change per tick (radians) |
Presets
ServoConfig::teleop() – General teleoperation (joystick, spacemouse).
Uses Twist input, generous collision margins, moderate precision.
ServoConfig::tracking() – Pose tracking (following a moving target).
Higher tracking gain (10.0), tighter collision checking (200 Hz),
PoseTracking input mode.
ServoConfig::precise() – Precise manipulation (assembly, insertion).
Small movements per tick (0.005 rad), tight singularity avoidance,
fine collision checking (250 Hz).
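The `slowdown_distance`/`stop_distance` pair defines a deceleration ramp: full speed beyond `slowdown_distance`, a full stop at or below `stop_distance`. A self-contained sketch of one plausible (linear) ramp; the function is illustrative, not kinetic's actual internal code:

```rust
/// Scale commanded velocity by obstacle clearance: 1.0 beyond `slowdown`,
/// 0.0 at or below `stop`, linear in between. Illustrative only.
fn velocity_scale(distance: f64, slowdown: f64, stop: f64) -> f64 {
    if distance >= slowdown {
        1.0
    } else if distance <= stop {
        0.0
    } else {
        (distance - stop) / (slowdown - stop)
    }
}

fn main() {
    // ServoConfig defaults: slowdown_distance = 0.15, stop_distance = 0.03
    assert_eq!(velocity_scale(0.20, 0.15, 0.03), 1.0); // clear: full speed
    assert_eq!(velocity_scale(0.03, 0.15, 0.03), 0.0); // at stop: halt
    let mid = velocity_scale(0.09, 0.15, 0.03);
    assert!((mid - 0.5).abs() < 1e-9); // halfway through the ramp
}
```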
GpuConfig
Controls GPU-accelerated trajectory optimization.
Fields
| Field | Type | Default | Description |
|---|---|---|---|
| num_seeds | u32 | 128 | Number of parallel trajectory seeds |
| timesteps | u32 | 32 | Waypoints per trajectory |
| iterations | u32 | 100 | Gradient descent iterations |
| collision_weight | f32 | 100.0 | SDF collision cost weight |
| smoothness_weight | f32 | 1.0 | Jerk minimization weight |
| goal_weight | f32 | 50.0 | Goal-reaching cost weight |
| step_size | f32 | 0.01 | Gradient descent step size |
| sdf_resolution | f32 | 0.02 | SDF voxel resolution (meters) |
| workspace_bounds | [f32; 6] | [-1,-1,-0.5,1,1,1.5] | SDF workspace bounds |
| seed_perturbation | f32 | 0.3 | Random perturbation magnitude (radians) |
| warm_start | Option | None | Initial trajectory from RRT |
Presets
GpuConfig::balanced() – Default, good balance of speed and quality.
128 seeds, 32 timesteps, 100 iterations.
GpuConfig::speed() – Fast optimization for real-time replanning.
32 seeds, 24 timesteps, 30 iterations, coarse SDF (0.05m).
GpuConfig::quality() – High-quality offline optimization.
512 seeds, 48 timesteps, 200 iterations, fine SDF (0.01m).
ExecutionConfig
Controls trajectory execution on hardware.
Fields
| Field | Type | Default | Description |
|---|---|---|---|
| rate_hz | f64 | 500.0 | Command streaming rate |
| position_tolerance | f64 | 0.1 | Max position deviation (radians) |
| velocity_tolerance | f64 | 0.5 | Max velocity deviation (rad/s) |
| timeout_factor | f64 | 2.0 | Abort if execution exceeds expected_time * factor |
| joint_limits | Option | None | [(lower, upper)] for pre-execution validation |
| command_timeout_ms | u64 | 100 | Per-command timeout |
| require_feedback | bool | false | Require FeedbackSource for monitoring |
| watchdog | Option | None | Safety watchdog configuration |
Presets
ExecutionConfig::default() – Simulation and testing.
Relaxed tolerances, no feedback required, no watchdog.
ExecutionConfig::safe(&robot) – Production deployment.
Auto-populates joint limits from URDF, enables feedback requirement,
configures safety watchdog (50ms timeout, ZeroVelocity action).
#![allow(unused)]
fn main() {
let config = ExecutionConfig::safe(&robot);
}
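The `timeout_factor` rule is simple wall-clock bookkeeping: abort when elapsed time exceeds the trajectory's expected duration times the factor. A self-contained sketch of that check (illustrative, not kinetic's internals):

```rust
use std::time::Duration;

/// Abort when elapsed > expected_time * timeout_factor.
/// Illustrative stand-in for the rule described in the table above.
fn should_abort(elapsed: Duration, expected: Duration, timeout_factor: f64) -> bool {
    elapsed.as_secs_f64() > expected.as_secs_f64() * timeout_factor
}

fn main() {
    let expected = Duration::from_secs(2);
    // Default timeout_factor = 2.0 gives a 4 s budget for a 2 s trajectory.
    assert!(!should_abort(Duration::from_secs(3), expected, 2.0)); // within budget
    assert!(should_abort(Duration::from_secs(5), expected, 2.0));  // 5 s > 4 s: abort
}
```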
Benchmarks
Performance measurements and comparisons.
Performance Table
All measurements on AMD Ryzen 9 7950X, DDR5-5200, Ubuntu 24.04,
Rust 1.75+ (release mode, -C target-cpu=native).
Core Operations
| Operation | Robot | Time | Notes |
|---|---|---|---|
| FK (6-DOF) | UR5e | <1 us | Single end-effector pose |
| FK all links (6-DOF) | UR5e | ~2 us | All link poses |
| FK (7-DOF) | Panda | ~1.2 us | Single end-effector pose |
| Jacobian (6-DOF) | UR5e | ~1.5 us | 6x6 matrix |
| Jacobian (7-DOF) | Panda | ~2 us | 6x7 matrix |
| Manipulability | UR5e | ~3 us | SVD-based |
Inverse Kinematics
| Solver | Robot | Time | Notes |
|---|---|---|---|
| OPW | UR5e | <5 us | Analytical, closed-form |
| Subproblem | UR5e | <10 us | Analytical, up to 16 solutions |
| Subproblem7DOF | Panda | ~50 us | Sweep + analytical |
| DLS (converge) | UR5e | 100-300 us | 50-100 iterations typical |
| DLS (converge) | Panda | 200-500 us | 7-DOF, more iterations |
| FABRIK | UR5e | 50-150 us | Position-focused |
| Batch IK (100 targets) | UR5e | ~500 us | OPW, amortized |
Collision Checking
| Operation | Spheres | Time | Notes |
|---|---|---|---|
| Self-collision (coarse) | ~30 | <500 ns | SIMD AVX2 |
| Self-collision (fine) | ~120 | ~2 us | SIMD AVX2 |
| Env collision (10 obstacles) | ~30 | ~300 ns | CAPT broadphase |
| Env collision (100 obstacles) | ~30 | ~1 us | CAPT broadphase |
| Env collision (1000 obstacles) | ~30 | ~5 us | CAPT broadphase |
Motion Planning
| Planner | Robot | Environment | Time | Path Quality |
|---|---|---|---|---|
| RRT-Connect | UR5e | Empty | <100 us | Feasible |
| RRT-Connect | UR5e | 10 obstacles | 5-50 ms | Feasible |
| RRT-Connect | Panda | 10 obstacles | 10-80 ms | Feasible |
| RRT* | UR5e | 10 obstacles | 100-500 ms | Near-optimal |
| BiRRT* | UR5e | 10 obstacles | 50-300 ms | Near-optimal |
| EST | UR5e | Narrow passage | 20-100 ms | Feasible |
| Cartesian | UR5e | Empty | <10 ms | Exact path |
Servo (Reactive Control)
| Operation | Robot | Time | Rate |
|---|---|---|---|
| Twist command | UR5e | ~50 us | 500 Hz capable |
| Joint jog | UR5e | ~20 us | 500 Hz capable |
| Pose tracking | Panda | ~100 us | 500 Hz capable |
Trajectory Processing
| Operation | Waypoints | Time | Notes |
|---|---|---|---|
| Trapezoidal | 50 | <100 us | Per-joint limits |
| TOTP | 50 | 1-5 ms | Time-optimal |
| Jerk-limited | 50 | 200-500 us | S-curve profile |
| Cubic spline | 50 | <50 us | Interpolation only |
| Validation | 50 | ~20 us | Limit checking |
Comparison vs MoveIt2
Approximate comparisons based on internal testing; MoveIt2 figures are taken from its published documentation and community benchmarks.
| Operation | Kinetic | MoveIt2 | Speedup |
|---|---|---|---|
| FK (6-DOF) | <1 us | ~5 us | ~5x |
| IK (OPW, 6-DOF) | <5 us | ~50 us (KDL) | ~10x |
| IK (DLS, 7-DOF) | 200-500 us | 500 us-2 ms (KDL) | ~2-4x |
| Collision check | <500 ns | ~5 us (FCL) | ~10x |
| RRT-Connect (simple) | <100 us | 1-10 ms | ~10-100x |
| Servo loop | ~50 us | ~200 us | ~4x |
Speedups come from:
- Rust vs C++ (memory safety without overhead)
- SIMD collision (4-16x throughput per check)
- Analytical IK solvers (OPW, Subproblem) vs iterative KDL
- No ROS middleware overhead
How to Run Benchmarks
Prerequisites
# Install criterion (benchmark framework)
cargo install cargo-criterion # optional, nicer output
# Build in release mode
cargo build --release
Running
# All benchmarks
cargo bench
# Specific benchmark group
cargo bench --bench fk_benchmarks
cargo bench --bench ik_benchmarks
cargo bench --bench collision_benchmarks
cargo bench --bench planning_benchmarks
# Bencher-format output (for CI tools)
cargo bench --bench planning_benchmarks -- --output-format=bencher
# Filter by name
cargo bench -- "ur5e"
Results
Criterion generates HTML reports in target/criterion/. Open
target/criterion/report/index.html for interactive charts.
Writing Custom Benchmarks
#![allow(unused)]
fn main() {
use criterion::{criterion_group, criterion_main, Criterion};
fn my_benchmark(c: &mut Criterion) {
let robot = Robot::from_name("ur5e").unwrap();
let chain = KinematicChain::extract(&robot, "base_link", "tool0").unwrap();
let joints = vec![0.0, -1.0, 0.8, 0.0, 0.0, 0.0];
c.bench_function("fk_ur5e", |b| {
b.iter(|| forward_kinematics(&robot, &chain, &joints))
});
}
criterion_group!(benches, my_benchmark);
criterion_main!(benches);
}
Reproducing Results
For reproducible benchmarks:
- Use `--release` builds with `target-cpu=native`
- Disable CPU frequency scaling: `sudo cpupower frequency-set -g performance`
- Close background applications
- Run each benchmark 3+ times and take the median
- Report CPU model, RAM speed, OS version, and Rust version
Architecture
Crate structure, dependency layers, and design principles.
Crate Dependency Diagram
kinetic (top-level re-exports + prelude)
|
+-------------------+-------------------+
| | |
kinetic-task kinetic-gpu horus-kinetic
| | |
+--------+----------+ (HORUS bridge, optional)
|
kinetic-planning
|
+-----------+-----------+
| | |
kinetic- kinetic- kinetic-
reactive scene trajectory
| | |
+-----------+-----------+
|
kinetic-collision
|
kinetic-kinematics
|
kinetic-robot
|
kinetic-core
Additional crates not shown in the main dependency chain:
kinetic-execution depends on: kinetic-trajectory, kinetic-robot
kinetic-grasp depends on: kinetic-kinematics, kinetic-scene
kinetic-python depends on: all of the above (PyO3 bindings)
kinetic-viewer depends on: kinetic-robot, kinetic-core
kinetic-dynamics depends on: kinetic-core, kinetic-robot
Layer Descriptions
Layer 0: kinetic-core
Foundation types shared by all crates. Zero external dependencies beyond nalgebra, serde, and thiserror.
Key types: Pose, JointValues, Trajectory, Goal, Constraint,
KineticError, PlannerConfig, FrameTree, Twist, Wrench
Layer 1: kinetic-robot
URDF/SRDF/MJCF/SDF parsing and robot model representation. Loads robot descriptions and configuration files.
Key types: Robot, Joint, Link, JointType, RobotConfig
Depends on: kinetic-core, urdf-rs, roxmltree, toml
Layer 2: kinetic-kinematics
Forward and inverse kinematics. Six IK solvers covering analytical (OPW, Subproblem) and iterative (DLS, FABRIK) methods. Jacobian computation and manipulability analysis.
Key types: KinematicChain, IKSolver, IKConfig, IKSolution
Key functions: forward_kinematics, jacobian, solve_ik,
manipulability, solve_ik_batch
Depends on: kinetic-core, kinetic-robot, nalgebra, rand
Layer 3: kinetic-collision
SIMD-vectorized sphere-based collision detection with CAPT broadphase acceleration. Generates sphere approximations of robot links from URDF collision geometry.
Key types: CollisionEnvironment, RobotSphereModel, SpheresSoA,
AllowedCollisionMatrix, ResolvedACM
Depends on: kinetic-core, kinetic-robot, parry3d-f64
Layer 4: kinetic-scene, kinetic-trajectory, kinetic-reactive
Scene management, trajectory processing, and reactive control. These crates sit at the same layer and may depend on each other.
kinetic-scene: Scene, Shape, PointCloudConfig, Octree
kinetic-trajectory: TimedTrajectory, TrajectoryValidator, totp,
trapezoidal, jerk_limited, blend
kinetic-reactive: Servo, ServoConfig, RMP, JointCommand
Layer 5: kinetic-planning
Motion planning algorithms. The Planner facade auto-selects the right
algorithm based on goal type and configuration.
Key types: Planner, PlannerType, PlanningResult, CartesianPlanner,
DualArmPlanner, PlanningPipeline
Depends on: kinetic-core, kinetic-robot, kinetic-kinematics, kinetic-collision, kinetic-scene
Layer 6: kinetic-task, kinetic-gpu, kinetic-execution
High-level task planning, GPU optimization, and hardware execution.
kinetic-task: Task, PickConfig, PlaceConfig, Approach
kinetic-gpu: GpuOptimizer, GpuConfig, CpuOptimizer, SignedDistanceField
kinetic-execution: CommandSink, RealTimeExecutor, ExecutionConfig,
SafetyWatchdog
Top Level: kinetic
Re-exports all sub-crates and provides the prelude module for convenience.
Bridge: horus-kinetic
Optional HORUS integration. Built separately, not part of the main workspace.
Provides PlannerNode, ServoNode, SceneNode.
Bindings: kinetic-python
Python bindings via PyO3. Built separately, not part of the main workspace. Wraps all public APIs with numpy interop.
Design Principles
1. Zero-Copy Where Possible
Joint values, trajectories, and poses are passed by reference through the planning pipeline. Allocations happen during construction, not in hot loops.
2. Fail Fast with Actionable Errors
Every error variant in KineticError tells the caller what went wrong
and whether to retry. The is_retryable() and is_input_error() methods
enable automated recovery strategies.
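A sketch of the recovery pattern this enables. The enum below is a stand-in with hypothetical variants; the real `KineticError` lives in `kinetic-core` and has many more cases:

```rust
// Stand-in error type with hypothetical variants, mirroring the
// is_retryable() contract described above.
#[derive(Debug)]
enum PlanError {
    Timeout,         // planner ran out of budget: worth retrying
    GoalInCollision, // bad input: retrying is pointless
}

impl PlanError {
    fn is_retryable(&self) -> bool {
        matches!(self, PlanError::Timeout)
    }
}

/// Retry retryable failures up to `max_tries`; fail fast on input errors.
fn plan_with_retry(
    mut attempt: impl FnMut() -> Result<(), PlanError>,
    max_tries: usize,
) -> Result<(), PlanError> {
    let mut last = None;
    for _ in 0..max_tries {
        match attempt() {
            Ok(()) => return Ok(()),
            Err(e) if e.is_retryable() => last = Some(e),
            Err(e) => return Err(e), // input errors fail fast
        }
    }
    Err(last.expect("max_tries must be > 0"))
}

fn main() {
    let mut calls = 0;
    let result = plan_with_retry(
        || {
            calls += 1;
            if calls < 3 { Err(PlanError::Timeout) } else { Ok(()) }
        },
        5,
    );
    assert!(result.is_ok());
    assert_eq!(calls, 3); // two retryable failures, then success
}
```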
3. Safety Gates
The planner validates its own output before returning. Two safety gates check every waypoint against joint limits and workspace bounds. This catches planner bugs before they reach hardware.
4. Configuration Defaults That Work
Every config struct has a sensible Default implementation. Named presets
(realtime(), offline(), safe()) cover common scenarios. Users only
need to customize what differs from the defaults.
5. Layered Independence
Each crate is independently testable and usable. You can use
kinetic-kinematics for FK/IK without pulling in the planner.
You can use kinetic-collision without pulling in the scene.
6. Analytical Before Iterative
The IK solver auto-selection prefers analytical solvers (OPW, Subproblem) over iterative ones (DLS, FABRIK). Analytical solvers are faster, more reliable, and return all solutions instead of a single local minimum.
7. Hardware Abstraction via Traits
CommandSink and FeedbackSource are traits, not concrete types.
This allows the same execution code to work with any robot hardware,
simulator, or test double.
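The mechanism is ordinary Rust trait dispatch. A self-contained sketch, assuming a simplified trait shape (the real `CommandSink` in `kinetic-execution` has its own signature):

```rust
/// Illustrative command-sink trait; the real trait differs in signature.
trait CommandSink {
    fn send(&mut self, positions: &[f64]);
}

/// Test double that records every command instead of moving hardware.
struct LogSink {
    history: Vec<Vec<f64>>,
}

impl CommandSink for LogSink {
    fn send(&mut self, positions: &[f64]) {
        self.history.push(positions.to_vec());
    }
}

/// Execution code is generic over the sink, so one streaming loop drives
/// real drivers, simulators, and test doubles alike.
fn stream<S: CommandSink>(sink: &mut S, waypoints: &[Vec<f64>]) {
    for wp in waypoints {
        sink.send(wp);
    }
}

fn main() {
    let mut sink = LogSink { history: Vec::new() };
    stream(&mut sink, &[vec![0.0, -1.0], vec![0.1, -0.9]]);
    assert_eq!(sink.history.len(), 2);
    assert_eq!(sink.history[1], vec![0.1, -0.9]);
}
```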
Design Rationale
Why kinetic is designed the way it is.
Why Rust?
Kinetic is written in Rust for three reasons:
1. Performance without compromise. Rust compiles to native code with zero-cost abstractions. No garbage collector pauses, no interpreter overhead. FK runs in under 1 microsecond, collision checking in under 500 nanoseconds. These numbers matter when planning must complete in 10ms.
2. Memory safety without runtime cost. The borrow checker eliminates data races, use-after-free, and buffer overflows at compile time. Motion planning code is safety-critical – a memory bug in the planner can cause a robot to collide. Rust makes entire classes of bugs impossible.
3. Fearless concurrency. GPU optimization runs compute shaders while the CPU handles IK. Dual-arm planning uses parallel tree expansion. Servo mode runs collision checks asynchronously. Rust’s ownership model makes concurrent code correct by construction.
C++ offers similar performance but without memory safety guarantees. Python is too slow for real-time planning (100-1000x slower for numerical code). Rust provides both.
Why No ROS2 Dependency?
Kinetic deliberately has zero ROS2 dependencies. This is a conscious design choice, not an oversight.
1. ROS2 is middleware, not a requirement. Motion planning is a mathematical problem: compute a collision-free path from A to B. This requires no publish-subscribe system, no node graph, no launch files. Adding ROS2 as a dependency forces every user to install and configure a middleware stack they may not need.
2. Deployment flexibility. Kinetic runs anywhere Rust compiles: embedded Linux, server-grade workstations, CI pipelines, WebAssembly. ROS2 constrains deployment to platforms with DDS and ament support.
3. Testing simplicity. Unit testing a Rust function that computes
IK requires only cargo test. Testing a ROS2 node requires spinning
up executors, creating topics, waiting for discovery. Kinetic’s tests
run in milliseconds, not seconds.
4. Integration is opt-in. The horus-kinetic bridge provides HORUS
IPC integration when needed. A ROS2 bridge (rmw_horus) is planned.
Users who need ROS2 integration can add it; users who do not are not
burdened by it.
Why SIMD Collision Checking?
Traditional collision libraries (FCL, Bullet, HPP-FCL) use mesh-based representations with BVH trees. Kinetic uses a sphere-based model with SIMD-vectorized distance computations.
1. Spheres are fast. Sphere-sphere distance is one subtraction, one dot product, and one comparison. SIMD processes 4-16 sphere pairs per instruction. This is 10-50x faster than GJK/EPA on convex meshes.
2. Spheres are conservative. A sphere approximation always over-estimates the collision volume. This is the safe direction for motion planning – the planner may reject valid configurations but will never accept colliding ones.
3. Resolution is configurable. SphereGenConfig::coarse() uses
~5 spheres per link for maximum speed. SphereGenConfig::fine() uses
~20 for higher accuracy. The user controls the tradeoff.
4. GPU-compatible. Sphere collision maps directly to GPU compute shaders. The same sphere model used for CPU planning is used for GPU trajectory optimization.
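The arithmetic behind point 1, in scalar form; SIMD lanes evaluate 4-16 of these pair tests per instruction:

```rust
/// Scalar sphere-sphere collision test: one subtraction per axis, one
/// dot product, one comparison. Squared distances avoid the sqrt.
fn spheres_collide(a: [f64; 3], ra: f64, b: [f64; 3], rb: f64) -> bool {
    let d = [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
    let dist_sq = d[0] * d[0] + d[1] * d[1] + d[2] * d[2];
    let reach = ra + rb;
    dist_sq <= reach * reach
}

fn main() {
    // Centers 0.15 m apart, combined radius 0.2 m: overlap.
    assert!(spheres_collide([0.0, 0.0, 0.0], 0.1, [0.15, 0.0, 0.0], 0.1));
    // Centers 0.3 m apart, combined radius 0.2 m: clear.
    assert!(!spheres_collide([0.0, 0.0, 0.0], 0.1, [0.3, 0.0, 0.0], 0.1));
}
```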
Why 10 IK Solvers?
Different IK solvers excel at different robot geometries:
OPW (Ortho-Parallel Wrist): Analytical closed-form solution for 6-DOF robots with spherical wrists (UR, ABB IRB, KUKA KR, Fanuc). Returns all 8 solutions in under 5 microseconds. No iterative convergence issues.
Subproblem decomposition: Analytical for 6-DOF robots with intersecting wrist axes. Uses Paden-Kahan subproblems to decompose the 6-DOF IK into a sequence of 1-DOF and 2-DOF sub-problems. Returns up to 16 solutions.
Subproblem 7-DOF: Sweeps the redundant joint and solves analytically at each sample. Handles 7-DOF robots (Panda, KUKA iiwa) without iterative convergence issues.
DLS (Damped Least Squares): General-purpose iterative solver that works for any DOF. Handles singularities via damping. The fallback solver.
FABRIK: Forward And Backward Reaching IK. Fast for position-only tasks. Particularly good for long kinematic chains.
BioIK, IKFast, Cached IK, SQP: Specialized solvers for specific use cases (biomechanical models, code-generated solvers, repeated queries, constrained optimization).
A single IK solver cannot be optimal for all robots. By providing 10 solvers and an auto-selection mechanism, kinetic achieves the best performance for each robot geometry automatically.
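The auto-selection can be pictured as a match on robot geometry. This is a simplified stand-in; kinetic's actual dispatch considers more cases than the three shown here:

```rust
/// Simplified sketch of analytical-before-iterative solver selection.
#[derive(Debug, PartialEq)]
enum Solver {
    Opw,            // analytical: 6-DOF spherical wrist
    Subproblem7Dof, // sweep redundant joint + analytical
    Dls,            // iterative fallback for everything else
}

struct Geometry {
    dof: usize,
    spherical_wrist: bool,
}

fn select_solver(g: &Geometry) -> Solver {
    match (g.dof, g.spherical_wrist) {
        (6, true) => Solver::Opw,
        (7, _) => Solver::Subproblem7Dof,
        _ => Solver::Dls,
    }
}

fn main() {
    let ur5e = Geometry { dof: 6, spherical_wrist: true };
    let panda = Geometry { dof: 7, spherical_wrist: false };
    assert_eq!(select_solver(&ur5e), Solver::Opw);
    assert_eq!(select_solver(&panda), Solver::Subproblem7Dof);
}
```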
Competitive Positioning
vs MoveIt2
MoveIt2 is the established motion planning framework in the ROS ecosystem. Kinetic provides an alternative for users who want:
- No ROS dependency – kinetic works standalone
- Faster performance – 5-100x faster FK, IK, and collision checking
- GPU optimization – cuRobo-style parallel trajectory optimization
- Simpler configuration – single TOML file vs YAML/SRDF/launch
- Rust safety – memory-safe, data-race-free by construction
MoveIt2 has a larger ecosystem (ROS integration, Rviz visualization, Setup Assistant) and more community support. Choose MoveIt2 if you are already invested in ROS2 and need its ecosystem. Choose kinetic if you need standalone performance, GPU optimization, or Rust safety guarantees.
vs cuRobo
NVIDIA’s cuRobo provides GPU-accelerated motion generation. Kinetic’s
kinetic-gpu crate provides similar capabilities with key differences:
- Hardware agnostic – wgpu supports Vulkan, Metal, DX12 (not just CUDA)
- CPU fallback – `CpuOptimizer` works without a GPU
- Full planning stack – sampling-based planners + GPU optimization
- No Isaac Sim dependency – standalone library
cuRobo has deeper NVIDIA integration (Isaac Sim, CUDA kernels, TensorRT). Choose cuRobo if you are fully committed to the NVIDIA ecosystem. Choose kinetic for cross-platform GPU support and a complete planning stack.
vs Drake
Drake is a mathematical robotics toolbox with a focus on optimization and simulation. Kinetic is a focused motion planning library.
- Faster planning – sampling-based planners for real-time use
- Simpler API – `Planner::new(&robot)?.plan(&start, &goal)?`
- More IK solvers – 10 solvers with auto-selection
- No simulation – kinetic does not include a physics engine
Drake excels at optimization-based planning (GCS, trajectory optimization) and includes a full multibody dynamics simulator. Choose Drake for mathematical rigor and simulation. Choose kinetic for fast, practical motion planning on real robots.
vs OMPL
OMPL provides sampling-based planning algorithms without kinematics or collision checking. Kinetic bundles everything into one stack.
- Batteries included – FK, IK, collision, planning, trajectory, execution
- No external deps – OMPL requires you to provide collision checking
- Rust performance – faster than OMPL’s C++ for equivalent algorithms
- Integrated GPU – OMPL has no GPU support
OMPL has a wider variety of planning algorithms and a longer track record. Choose OMPL (via MoveIt2) for algorithm diversity. Choose kinetic for an integrated, zero-dependency planning stack.
API Reference
Links to generated Rust and Python API documentation.
Rust API (cargo doc)
The full Rust API documentation is generated from inline doc comments
using cargo doc.
Generating Locally
# Generate and open in browser
cargo doc --open
# Generate without opening
cargo doc
# Include private items (for development)
cargo doc --document-private-items
# Generate for a specific crate
cargo doc -p kinetic-planning
The generated documentation is written to target/doc/kinetic/index.html.
Crate Index
| Crate | Description | Key Types |
|---|---|---|
| kinetic | Top-level re-exports and prelude | prelude::* |
| kinetic-core | Shared types and errors | Pose, Goal, KineticError, PlannerConfig |
| kinetic-robot | Robot model and URDF parsing | Robot, Joint, Link, RobotConfig |
| kinetic-kinematics | FK, IK, Jacobian | KinematicChain, IKSolver, IKConfig, IKSolution |
| kinetic-collision | SIMD collision detection | CollisionEnvironment, RobotSphereModel, AllowedCollisionMatrix |
| kinetic-scene | Planning scene management | Scene, Shape, PointCloudConfig |
| kinetic-planning | Motion planners | Planner, PlannerType, PlanningResult, CartesianPlanner |
| kinetic-trajectory | Time parameterization | TimedTrajectory, TrajectoryValidator |
| kinetic-reactive | Servo and RMP | Servo, ServoConfig, RMP |
| kinetic-task | Task planning | Task, PickConfig, PlaceConfig |
| kinetic-gpu | GPU optimization | GpuOptimizer, GpuConfig, CpuOptimizer |
| kinetic-execution | Trajectory execution | CommandSink, RealTimeExecutor, ExecutionConfig |
| kinetic-grasp | Grasp generation | GraspGenerator, GraspConfig, GripperType |
Python API
The Python bindings expose kinetic’s core functionality with numpy interop.
Installation
# From source (requires Rust toolchain)
cd crates/kinetic-python
maturin develop --release
# Or with pip (when published)
pip install kinetic
Module Contents
import kinetic
# Robot & Kinematics
kinetic.Robot(name)                     # Load robot (54 built-in configs)
robot.fk(joints) # Forward kinematics → 4x4
robot.ik(target_4x4) # Inverse kinematics → joints
robot.ik_config(target, solver="opw", ...) # IK with full config
robot.jacobian(joints) # 6×DOF Jacobian
robot.manipulability(joints) # Manipulability index
robot.batch_fk(configs_NxDOF) # Batch FK → list of 4x4
robot.batch_ik(target_list) # Batch IK → list of dicts
# Planning (9 planner types)
kinetic.Planner(robot, planner_type="rrt_star") # rrt_connect, rrt_star, bi_rrt_star, bitrrt, est, kpiece, prm, gcs
kinetic.Goal.joints(array) # Joint-space goal
kinetic.Goal.pose(target_4x4) # Cartesian pose goal
kinetic.Goal.named(name) # Named pose goal
kinetic.Constraint.orientation(...) # Orientation constraint
kinetic.CartesianConfig() # Cartesian planning config
kinetic.DualArmPlanner(robot, "left", "right") # Dual-arm planning
kinetic.MoveGroup(urdf_string) # MoveIt-style API
# Trajectory
traj.sample(t) # Interpolate at time t
traj.to_numpy() # Export (times, pos, vel)
traj.time_parameterize("totp", vel, acc) # Time parameterization
traj.blend(other, 0.1) # Smooth blending
traj.validate(lo, hi, vel, acc) # Limit validation
traj.to_json() / traj.to_csv() # Export
Trajectory.from_json(s) / from_csv(s) # Import
# Scene
kinetic.Scene(robot) # Collision scene
kinetic.Shape.cuboid(x, y, z) # Shapes
# Reactive Control
kinetic.Servo(robot, scene) # 500Hz servo
kinetic.RMP(robot) # Multi-policy RMP
kinetic.Policy.reach_target(target, gain) # 6 policy types
# Task Planning
kinetic.Task.move_to(robot, goal) # Move task
kinetic.Task.pick(robot, scene, ...) # Pick task
kinetic.Task.place(robot, scene, ...) # Place task
kinetic.Task.sequence([t1, t2, t3]) # Sequence
kinetic.Task.gripper(width) # Gripper command
# Grasp Generation
kinetic.GraspGenerator(gripper_type) # Grasp candidates
kinetic.GripperType.parallel(0.08, 0.03) # Parallel jaw
# Dynamics
kinetic.Dynamics(robot) # Featherstone bridge
dyn.gravity_compensation(q) # Hold torques
dyn.inverse_dynamics(q, qd, qdd) # ID: τ = M*qdd + C*qd + g
dyn.forward_dynamics(q, qd, tau) # FD: qdd = M⁻¹(τ - C*qd - g)
dyn.mass_matrix(q) # (DOF, DOF) mass matrix
# Execution
kinetic.SimExecutor() # Simulated (instant)
kinetic.LogExecutor() # Records all commands
kinetic.RealTimeExecutor(rate_hz=500) # Hardware execution via callback
kinetic.RealTimeExecutor.safe(robot) # With joint limit validation
kinetic.FrameTree() # TF2-like frame tree
# GPU Acceleration
kinetic.GpuOptimizer(robot, preset="balanced") # Parallel-seed optimization
kinetic.GpuCollisionChecker(robot, scene) # Batch collision checking
GpuOptimizer.gpu_available() # Check GPU availability
# One-liner
kinetic.plan("ur5e", start, goal) # Plan in one call
Type Stubs (.pyi)
Python type stubs are in crates/kinetic-python/kinetic.pyi. These provide
IDE autocompletion and type checking with mypy/pyright.
# Copy stubs to your project for IDE support
cp crates/kinetic-python/kinetic.pyi /path/to/your/project/
Example
import kinetic
import numpy as np
robot = kinetic.Robot("ur5e")
planner = kinetic.Planner(robot, planner_type="rrt_star")
start = np.array([0.0, -1.57, 0.0, -1.57, 0.0, 0.0])
goal = kinetic.Goal.named("home")
traj = planner.plan(start, goal)
# Sample the trajectory
for t in np.linspace(0, traj.duration, 50):
joints = traj.sample(t)
print(f"t={t:.3f}: {joints}")
# Export/import
json_str = traj.to_json()
traj2 = kinetic.Trajectory.from_json(json_str)
# Dynamics
dyn = kinetic.Dynamics(robot)
tau = dyn.gravity_compensation(start)
# GPU optimization
opt = kinetic.GpuOptimizer(robot, preset="speed")
fast_traj = opt.optimize(start, goal.joints)
# Hardware execution
def send(pos, vel):
my_robot_driver.set_joints(pos)
executor = kinetic.RealTimeExecutor.safe(robot)
result = executor.execute(traj, send)
Contributing
How to contribute code, tests, robot configs, and documentation to kinetic.
Bug Reports
File issues on the GitHub repository with:
- Kinetic version (`cargo pkgid kinetic`)
- Rust version (`rustc --version`)
- Robot model (e.g., `ur5e`, `franka_panda`)
- Minimal reproduction – smallest code that triggers the bug
- Expected vs actual behavior
- Error output (the full `KineticError` Display string)
Pull Request Process
- Fork the repository and create a feature branch
- Research first – read existing code in the area you are changing. Understand patterns, conventions, and dependencies before writing code.
- Address tech debt – if you find issues in the code you are touching, fix them in a separate commit before adding new functionality.
- Write tests – every PR must include tests for new functionality. Bug fixes must include a regression test.
- Run the full check suite:
# Format
cargo fmt --check
# Lint (all warnings are errors)
cargo clippy -- -D warnings
# Tests
cargo test
# Visual tests (if applicable)
xvfb-run cargo test
- Open the PR with a clear description of what changed and why
- Respond to review – address all comments before merge
Code Style
Rust Conventions
- Format: `cargo fmt` (rustfmt defaults, no custom config)
- Lint: `cargo clippy -- -D warnings` (zero-warnings policy)
- Coverage target: 85% for new code
- Error handling: Return `kinetic_core::Result<T>`, use `thiserror`
- Documentation: All `pub` items must have `///` doc comments
- Tests: Place unit tests in `#[cfg(test)] mod tests` at the bottom of each file. Integration tests go in `tests/`.
- Naming: `UpperCamelCase` for types, `snake_case` for functions, `SCREAMING_SNAKE_CASE` for constants.
Commit Messages
- Use imperative mood: “Add RRT* planner” not “Added RRT* planner”
- First line under 72 characters
- Reference issues: “Fix #123: IK diverges near singularity”
Adding Robot Configurations
Robot configs are the easiest way to contribute. Every new robot helps the entire community.
Template
Create robot_configs/<name>/ with:
kinetic.toml:
[robot]
name = "<name>"
urdf = "<name>.urdf"
dof = <n>
[planning_group.arm]
chain = ["<base_link>", "<tip_link>"]
[end_effector.tool]
parent_link = "<tip_link>"
parent_group = "arm"
tcp_xyz = [0.0, 0.0, 0.0]
[ik]
solver = "dls" # or "opw" for 6-DOF spherical wrist
[named_poses]
home = [0.0, ...]
zero = [0.0, ...]
[collision]
self_collision_pairs = "auto"
padding = 0.01
skip_pairs = []
Required Verification
Before submitting a robot config PR, verify:
- FK roundtrip: `fk(ik(fk(joints))) == fk(joints)` within 1 mm
- Named poses: All poses are within joint limits
- Planning: `Planner::new(&robot)` succeeds and can plan at least one simple path
- IK solver: The configured solver converges for 95%+ of random reachable poses
Test Template
Include this test in your PR:
#![allow(unused)]
fn main() {
#[test]
fn <name>_fk_ik_roundtrip() {
let robot = Robot::from_name("<name>").unwrap();
let planner = Planner::new(&robot).unwrap();
let home = robot.named_pose("home").unwrap();
let pose = planner.fk(&home).unwrap();
let ik_joints = planner.ik(&pose).unwrap();
let recovered = planner.fk(&ik_joints).unwrap();
let err = (pose.translation() - recovered.translation()).norm();
assert!(err < 0.001, "FK/IK error: {err}");
}
}
Adding Planners
- Create a new file in `crates/kinetic-planning/src/`
- Implement the planner (see `rrt.rs` for reference)
- Add a `PlannerType` variant in `facade.rs`
- Add dispatch in `Planner::plan_with_config`
- Write acceptance tests on at least two robot models (UR5e + Panda)
- Add a benchmark in `benches/`
Adding IK Solvers
- Create a new file in `crates/kinetic-kinematics/src/`
- Implement the solver function returning `Result<IKSolution>`
- Add an `IKSolver` variant in `ik.rs`
- Add dispatch in `solve_once`
- Write FK/IK roundtrip tests
- Document which robot geometries the solver supports
Adding Documentation
Documentation lives in docs-site/src/. Pages are Markdown.
- Guides: Practical how-to content with code examples
- Reference: Exhaustive technical details
- Tutorials: Step-by-step walkthroughs for specific tasks
- Migration: Help users transition from other frameworks
All code examples should compile (or be marked ignore with a comment
explaining why). Test code examples with cargo test --doc.
Development Setup
# Clone
git clone https://github.com/softmata/kinetic.git
cd kinetic
# Build
cargo build
# Test
cargo test
# Benchmarks
cargo bench
# Documentation
cargo doc --open
License
Kinetic is licensed under Apache-2.0. All contributions must be compatible with this license. By submitting a PR, you agree that your contribution is licensed under the same terms.
FAQ
Frequently asked questions about kinetic.
General
Is kinetic compatible with ROS2?
Kinetic has zero ROS2 dependencies and does not require ROS2 to run. However,
the horus-kinetic bridge crate provides HORUS IPC integration, and a
rmw_horus bridge is planned for transparent ROS2 topic interop. You can
also use kinetic standalone and publish results to ROS2 topics via your own
bridge node.
Can I use kinetic from Python only?
Yes. The kinetic Python package (via PyO3) exposes Robot, Planner, Scene,
Servo, Task, and Trajectory classes with numpy interop. Install with
pip install kinetic or build from source with maturin develop.
```python
import kinetic
import numpy as np

robot = kinetic.Robot("ur5e")
planner = kinetic.Planner(robot)

start = np.array([0.0, -1.57, 0.0, 0.0, 0.0, 0.0])
goal = kinetic.Goal.joints(np.array([0.5, -1.0, 0.5, 0.0, 0.5, 0.0]))
traj = planner.plan(start, goal)
```
Is kinetic production-ready?
Kinetic is used in design partner deployments with real robots. It includes trajectory validation, safety watchdogs, execution monitoring, and error recovery mechanisms. See the Production Deployment guide for the full pre-deployment checklist.
Does kinetic require a GPU?
No. The GPU optimizer (`kinetic-gpu`) is optional and provides faster
trajectory optimization when a Vulkan/Metal-capable GPU is available.
All core functionality (FK, IK, planning, collision, trajectory) runs
on CPU. The `CpuOptimizer` provides the same API as `GpuOptimizer`
without GPU hardware.
Does kinetic run on Windows?
Kinetic targets Linux and macOS. Windows support is not actively tested or maintained. The core math and planning algorithms are platform-agnostic, but hardware integration (shared memory, real-time scheduling) is Linux/macOS only.
How do I visualize trajectories?
Kinetic does not include a built-in visualizer. You can:
- Export trajectories to CSV/JSON and visualize in Python (matplotlib)
- Use the `kinetic-viewer` crate for basic 3D rendering
- Connect to HORUS and use `horus-monitor` for real-time visualization
- Use `horus-sim3d` for full 3D simulation with physics
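As a minimal sketch of the first option: write (time, joint positions) samples to CSV, ready for matplotlib. The column layout below is an assumption for illustration, not kinetic's export format, and the samples are synthetic stand-ins.

```python
import csv
import io
import math

# Synthetic stand-in for sampled trajectory data: (t, [q0..q5]) tuples.
samples = [(0.1 * i, [math.sin(0.1 * i + j) for j in range(6)])
           for i in range(50)]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["time"] + [f"j{j}" for j in range(6)])  # header row
for t, q in samples:
    writer.writerow([f"{t:.3f}"] + [f"{v:.6f}" for v in q])

csv_text = buf.getvalue()
# Hand csv_text to pandas/matplotlib, e.g.:
#   df = pd.read_csv(io.StringIO(csv_text)); df.plot(x="time")
print(csv_text.splitlines()[0])  # -> time,j0,j1,j2,j3,j4,j5
```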
What license is kinetic under?
Apache-2.0. See the LICENSE file in the repository root.
Robots
Which robots are supported out of the box?
Kinetic ships with 52 built-in robot configurations covering robots from Universal Robots, Franka, KUKA, ABB, Fanuc, Yaskawa, xArm, Kinova, Trossen, and more. See the Supported Robots reference for the full list.
Can I use a robot not in the built-in list?
Yes. Any robot with a URDF file can be loaded. Create a `kinetic.toml`
config file or load the URDF directly with `Robot::from_urdf()`. See
the Custom Robots guide for step-by-step instructions.
How accurate is the IK for my robot?
Accuracy depends on the solver and the robot geometry:
- OPW (analytical): exact solution for 6-DOF spherical wrist robots
- Subproblem (analytical): exact for 6/7-DOF with intersecting wrist axes
- DLS (iterative): configurable tolerance, typically <0.1mm position error
- FABRIK (iterative): good position accuracy, weaker orientation accuracy
Check `solution.position_error` and `solution.orientation_error` after solving.
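The damped-least-squares iteration behind solvers in the DLS family can be sketched on a toy 2-link planar arm (plain Python, not kinetic's implementation): iterate dq = Jᵀ(JJᵀ + λ²I)⁻¹e until the position error drops below the tolerance.

```python
import math

L1, L2 = 0.4, 0.3  # toy 2-link planar arm

def fk(q):
    q1, q2 = q
    return (L1 * math.cos(q1) + L2 * math.cos(q1 + q2),
            L1 * math.sin(q1) + L2 * math.sin(q1 + q2))

def jacobian(q):
    q1, q2 = q
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    return [[-L1 * s1 - L2 * s12, -L2 * s12],
            [ L1 * c1 + L2 * c12,  L2 * c12]]

def solve_dls(target, q, damping=0.05, tol=1e-4, max_iter=300):
    """Damped least squares: dq = J^T (J J^T + lambda^2 I)^-1 e."""
    for _ in range(max_iter):
        x, y = fk(q)
        ex, ey = target[0] - x, target[1] - y
        if math.hypot(ex, ey) < tol:
            break
        J = jacobian(q)
        # Build J J^T + lambda^2 I (a 2x2 matrix) and invert it directly.
        a = J[0][0] ** 2 + J[0][1] ** 2 + damping ** 2
        b = J[0][0] * J[1][0] + J[0][1] * J[1][1]
        d = J[1][0] ** 2 + J[1][1] ** 2 + damping ** 2
        det = a * d - b * b
        wx = ( d * ex - b * ey) / det
        wy = (-b * ex + a * ey) / det
        # dq = J^T w
        q = [q[0] + J[0][0] * wx + J[1][0] * wy,
             q[1] + J[0][1] * wx + J[1][1] * wy]
    return q

q = solve_dls((0.5, 0.2), [0.3, 1.5])
x, y = fk(q)
assert math.hypot(x - 0.5, y - 0.2) < 1e-4  # position error below tolerance
```

The damping term λ keeps the step bounded near singularities at the cost of slightly slower convergence, which is why DLS tolerance is configurable rather than exact.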
Does kinetic support mobile manipulators?
Yes. Robots like Fetch (8-DOF) and TIAGo (8-DOF) are supported. The
planner auto-extracts the arm chain from the full robot model. Note
that chain.dof may be less than robot.dof for mobile manipulators.
Planning
Which planner should I use?
See the Planner Selection guide for a decision flowchart. Short answer: RRT-Connect (default) for most tasks, EST/KPIECE for narrow passages, RRT* for optimal paths, GPU optimizer for complex environments.
Why does planning fail in cluttered environments?
Sampling-based planners struggle when free space is sparse. Try:
- Increase timeout and iterations (`PlannerConfig::offline()`)
- Use EST or KPIECE (better for narrow passages)
- Reduce collision margin
- Use the GPU optimizer (SDF-based, handles dense environments better)
- Decompose the problem into intermediate waypoints
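The last tip can be shown with a toy 2D point robot and a disc obstacle (purely illustrative, not kinetic's API): the direct straight-line motion collides, but routing through one intermediate waypoint splits the query into two easy collision-free segments.

```python
import math

def segment_hits_disc(p, q, center, radius, steps=100):
    """Densely sample the segment p->q and test each point against the disc."""
    for i in range(steps + 1):
        t = i / steps
        x = p[0] + t * (q[0] - p[0])
        y = p[1] + t * (q[1] - p[1])
        if math.hypot(x - center[0], y - center[1]) < radius:
            return True
    return False

start, goal = (0.0, 0.0), (2.0, 0.0)
obstacle = ((1.0, 0.0), 0.3)  # disc centered on the direct path

# The direct segment collides...
assert segment_hits_disc(start, goal, *obstacle)

# ...but going via a waypoint above the obstacle is collision-free,
# so planning start->waypoint and waypoint->goal separately succeeds.
waypoint = (1.0, 0.6)
assert not segment_hits_disc(start, waypoint, *obstacle)
assert not segment_hits_disc(waypoint, goal, *obstacle)
print("waypoint decomposition ok")
```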
Can I plan for two arms at once?
Yes. `DualArmPlanner` plans in the combined configuration space of both
arms with inter-arm collision avoidance. Three modes are available:
`Independent`, `Synchronized`, and `Coordinated`.
What is the difference between plan() and plan_with_scene()?
`plan()` uses only the robot’s self-collision model (no environment obstacles).
`plan_with_scene()` includes scene obstacles (tables, walls, point clouds)
in collision checking.
Performance
How fast is FK/IK?
Target performance on a modern x86-64 CPU:
| Operation | Time |
|---|---|
| FK (6-DOF) | <1 us |
| Jacobian (6-DOF) | <2 us |
| IK OPW (6-DOF) | <5 us |
| IK DLS (7-DOF) | 100-500 us |
| Collision check (SIMD) | <500 ns |
| RRT-Connect (simple) | <100 us |
Is kinetic real-time capable?
The core planning and IK functions are deterministic-time with configurable
timeouts. `PlannerConfig::realtime()` uses a 10 ms timeout. Servo mode runs
at 500 Hz. Kinetic does not allocate memory in hot paths after initialization.
Integration
How does kinetic compare to MoveIt2?
See the From MoveIt2 migration guide for a detailed comparison. Key differences: kinetic is Rust (not C++), has no ROS dependency, includes GPU optimization, and uses SIMD collision checking instead of FCL.
Can I use kinetic with my own physics simulator?
Yes. Kinetic does not include a physics simulator. Plan trajectories
with kinetic and execute them in your simulator via `CommandSink` or
by exporting to CSV/JSON. The `horus-sim3d` simulator uses kinetic
trajectories natively.
Changelog
All notable changes to kinetic are documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
[Unreleased]
Added
- `PlannerOutputInvalid` error variant for safety gate failures
- `SingularityLockup` error variant for servo singularity detection
- `DimensionMismatch` error variant with context string
- `IKSolution.degraded` flag for solver fallback detection
- `IKSolution.condition_number` for singularity proximity monitoring
- `ExecutionConfig::safe()` constructor with auto-configured safety
- `SafetyWatchdog` for real-time execution monitoring
- `PlanExecuteLoop` for automatic replanning on failure
[0.1.0] - 2026-03-29
Initial release of the kinetic motion planning library.
Added
Core (kinetic-core)
- `Pose`, `JointValues`, `Trajectory`, `Goal`, `Constraint` types
- `KineticError` with 20+ variants and `is_retryable()`/`is_input_error()`
- `PlannerConfig` with `default()`, `realtime()`, `offline()` presets
- `FrameTree` and `StampedTransform` for coordinate frame management
- `Twist` and `Wrench` spatial vector types
Robot (kinetic-robot)
- URDF parser with full joint/link/collision geometry support
- SRDF parser for planning groups and collision pairs
- MJCF and SDF loaders
- 52 built-in robot configurations
- `Robot::from_name()`, `Robot::from_urdf()`, `Robot::from_path()`
- `kinetic.toml` configuration format
Kinematics (kinetic-kinematics)
- Forward kinematics (`forward_kinematics`, `forward_kinematics_all`)
- Jacobian computation (`jacobian`)
- Manipulability index (`manipulability`)
- 6 IK solvers: DLS, FABRIK, OPW, Subproblem, Subproblem7DOF, BioIK
- Cached IK solver for repeated queries
- IKFast codegen support
- Batch IK (`solve_ik_batch`)
- Workspace analysis (`ReachabilityMap`)
Collision (kinetic-collision)
- SIMD-vectorized sphere-sphere collision (AVX-512/AVX2/SSE4.1)
- CAPT broadphase acceleration
- `RobotSphereModel` with `SphereGenConfig` (coarse/default/fine)
- `CollisionEnvironment` with margin-based checking
- `AllowedCollisionMatrix` for self-collision filtering
- Contact point computation
Planning (kinetic-planning)
- 8 sampling-based planners: RRT-Connect, RRT*, BiRRT*, BiTRRT, EST, KPIECE, PRM, GCS
- Constrained RRT with orientation/position constraints
- Cartesian planner (linear and arc paths)
- Dual-arm planner (Independent, Synchronized, Coordinated modes)
- Path shortcutting and B-spline smoothing
- `Planner` facade with auto chain detection
- `PlannerType` enum for algorithm selection
- One-line `plan()` convenience function
- `PlanningPipeline` with pre/post-processors and planner racing
Trajectory (kinetic-trajectory)
- Trapezoidal time parameterization
- TOTP (Time-Optimal Path Parameterization)
- Jerk-limited S-curve profiles
- Cubic spline interpolation
- Trajectory blending
- CSV and JSON import/export
- `TrajectoryValidator` with per-joint limit checking
- `ExecutionMonitor` with deviation detection
Scene (kinetic-scene)
- Collision object management (add, remove, attach, detach)
- Shape primitives: Cuboid, Cylinder, Sphere, HalfSpace
- Point cloud ingestion with voxel downsampling
- Depth image to point cloud conversion
- Octree spatial indexing
- Outlier removal (statistical and radius)
Reactive (kinetic-reactive)
- Servo controller (Twist, JointJog, PoseTracking modes)
- `ServoConfig` presets: teleop, tracking, precise
- Riemannian Motion Policies (RMP)
- Collision deceleration
- Singularity avoidance
- Smoothing filters (EMA, Butterworth)
Task (kinetic-task)
- Pick and Place task planning
- Multi-stage task sequencing
- Grasp generation
- Approach/retreat motion primitives
- Gripper command integration
GPU (kinetic-gpu)
- cuRobo-style parallel trajectory optimization via wgpu
- Batch FK on GPU
- GPU collision checking with SDF
- `GpuConfig` presets: balanced, speed, quality
- `CpuOptimizer` fallback for non-GPU environments
Execution (kinetic-execution)
- `CommandSink` trait for hardware drivers
- `FeedbackSource` trait for encoder reading
- `RealTimeExecutor` with 500 Hz command streaming
- `SimExecutor` for testing without hardware
- `LogExecutor` for recording commanded trajectories
- `SafetyWatchdog` with configurable timeout actions
HORUS Bridge (horus-kinetic)
- `PlannerNode` for motion planning via HORUS IPC
- `ServoNode` for reactive control via HORUS IPC
- `SceneNode` for collision world via HORUS IPC
- Channel-based API for standalone use
Python (kinetic-python)
- PyO3 bindings with numpy interop
- Robot, Planner, Scene, Servo, Task, Trajectory classes
- One-line `kinetic.plan()` function
- Grasp generation and gripper types
- Frame tree management
Supported Robots (52)
ABB IRB1200, ABB IRB4600, ABB YuMi (left/right), ALOHA (left/right), Baxter (left/right), Denso VS068, Dobot CR5, Elite EC66, Fanuc CRX-10iA, Fanuc LR Mate 200iD, Fetch, Flexiv Rizon4, Franka Panda, Jaco2 6DOF, Kinova Gen3, Kinova Gen3 Lite, Koch v1, KUKA iiwa7, KUKA iiwa14, KUKA KR6, LeRobot SO-100, Meca500, myCobot 280, Niryo Ned2, Open Manipulator X, Open Manipulator P, PR2, Sawyer, SO-ARM100, Staubli TX2-60, Stretch RE2, Techman TM5-700, TIAGo, Trossen PX100, Trossen RX150, Trossen WX250s, UR3e, UR5e, UR10e, UR16e, UR20, UR30, ViperX 300, WidowX 250, xArm5, xArm6, xArm7, Yaskawa GP7, Yaskawa HC10.



