Quick Start

The Jacobi vision library provides drivers for simulated and real-world cameras, tools for basic image and point cloud manipulation, and helpers for visualization in Studio. Together with the motion library, it enables collision-free motion planning based on sensor data. This guide will help you set up and use the library.

Installation

The Jacobi vision library is available for Python only and can be installed easily via pip:

pip install jacobi-vision

Setting up the Environment

We recommend using Jacobi Studio to visualize camera data such as images, point clouds, or depth maps. However, the vision library and the camera drivers use standard Python types and can therefore be used flexibly.

To use the vision library, we first need to place a camera in the environment. Let’s use Studio for that: open a new project, click the + button, and add a camera. Alternatively, we could use Studio live to add a camera programmatically, or define the environment locally in code.

[Image: camera details panel in Studio]

You can adjust the camera’s parameters with the details panel to the right of the canvas.

Loading the Project Locally

Once your project is set up to your liking, you can download it and use it in your code. Click the Download button and load the project from a .jacobi-project file, or use the Planner.load_from_studio() method to handle this automatically.

from pathlib import Path
from jacobi import Planner

# Load the downloaded project and retrieve the camera from the environment
planner = Planner.load_from_project_file(Path.home() / 'camera.jacobi-project')
camera = planner.environment.get_camera()
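
As a sketch of the second option, assuming load_from_studio takes the Studio project name as its argument:

from jacobi import Planner

# Load the project directly via Studio live instead of a downloaded file
planner = Planner.load_from_studio('camera')
camera = planner.environment.get_camera()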

The simulated RGB-D camera relies on Studio live, so make sure that feature is enabled in Jacobi Studio. Then you can read the simulated images from Studio in your Python script via:

from jacobi_vision.drivers import SimulatedCameraDriver

driver = SimulatedCameraDriver(camera)
depth_image = driver.get_depth_image()

print(depth_image)

Of course, there is also a get_color_image() method available.
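
For example, reading a color frame works the same way:

color_image = driver.get_color_image()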

Real Hardware

To use real hardware, simply import the desired driver and instantiate it. Here’s an example for an Intel RealSense camera:

from jacobi_vision.drivers import RealsenseCameraDriver

driver = RealsenseCameraDriver(camera)

All camera drivers share a common interface, although there are slight differences in the feature set of each camera.
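
Thanks to the shared interface, swapping cameras is a one-line change. A minimal sketch, using only the two driver classes shown above:

from jacobi_vision.drivers import RealsenseCameraDriver, SimulatedCameraDriver

use_real_hardware = True

# Both drivers expose the same image-reading methods
driver = RealsenseCameraDriver(camera) if use_real_hardware else SimulatedCameraDriver(camera)
depth_image = driver.get_depth_image()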

Visualization in Studio

Real-world data from cameras can then be visualized in Studio (again using Studio live). To do so, create a Studio connection via:

from jacobi import Studio

studio = Studio()

To visualize a color image, simply run:

from jacobi_vision.images import ColorImage

# Wrap the raw frame from the camera, then send the encoded image to Studio
color_image = ColorImage(driver.get_color_image())
studio.set_camera_image_encoded(color_image.encode(), driver.camera)

or a point cloud from a real camera:

from jacobi_vision.images import DepthImage

# Wrap the raw depth data, convert it to a point cloud, and stream it to Studio
depth_image = DepthImage(driver.get_depth_image())
cloud = depth_image.to_point_cloud(driver)
studio.set_camera_point_cloud(cloud.T.flatten())
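
To stream continuously, the same calls can run in a loop. A sketch, assuming each call fetches a fresh frame:

import time

# Push a new point cloud to Studio roughly ten times per second
while True:
    depth_image = DepthImage(driver.get_depth_image())
    cloud = depth_image.to_point_cloud(driver)
    studio.set_camera_point_cloud(cloud.T.flatten())
    time.sleep(0.1)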

Planning with a Depth Map Obstacle

The vision library integrates nicely with our motion library, allowing robots to use real-time vision input to plan their movements. The main connector is the DepthMap obstacle type, which we can update with live data as needed.

First, we create a depth map obstacle by passing in the depths matrix and add it to our usual planner environment. We can easily visualize the depth image in Studio, and then call any plan method as usual.

import numpy as np

from jacobi import DepthMap

# Create sample data
object_map = np.full((20, 20), 2.0)
object_map[1:-1, 5:12] = 1.4
object_map[1:-1, 12:-1] = 1.1

map_obstacle = environment.add_obstacle(DepthMap(object_map, x=0.8, y=0.6), origin=camera.origin)

# Visualize depth map in Studio
studio.set_camera_depth_map(object_map, 0.8, 0.6)

# Plan a motion the usual way
trajectory = planner.plan(start, goal)

This example uses sample data; for live visual input you can simply pass in a depth map from a camera, as sketched below.
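
A sketch of wiring in a live frame, assuming the driver’s depth image converts to a NumPy matrix:

# Use a live camera frame instead of the sample data above
live_depths = np.asarray(driver.get_depth_image())
map_obstacle = environment.add_obstacle(DepthMap(live_depths, x=0.8, y=0.6), origin=camera.origin)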

Tip

For the best computational performance, scale your depth maps down to the lowest resolution that still captures the relevant geometry.
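
For instance, NumPy strided slicing is a quick (if crude) way to downsample; the factor of 8 is arbitrary:

# Keep every 8th pixel in both dimensions
small_map = live_depths[::8, ::8]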

We can update the depth map, and therefore the collision environment of the planner, on the fly by passing in a new depths matrix.

# New sample data
flat_map = np.full((20, 20), 2.0)

map_obstacle.collision.depths = flat_map
environment.update_depth_map(map_obstacle)

studio.set_camera_depth_map(flat_map, 0.8, 0.6)

Planning a new motion should now result in a direct start-to-goal motion, ignoring the previous depth map obstacle.
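
Replanning uses the same call as before:

# Replan against the updated (now flat) depth map
trajectory = planner.plan(start, goal)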