The TriFingerPlatform Class

The TriFingerPlatform class provides access to the simulation of the TriFinger platform using the same interface as for the real robot.

For a detailed explanation of the software interface on the real robot, and the logic behind it, please check the paper.

API Documentation

class trifinger_simulation.TriFingerPlatform(visualization=False, initial_robot_position=None, initial_object_pose=None, enable_cameras=False, time_step_s=0.001, object_type=ObjectType.COLORED_CUBE, camera_delay_steps=90)[source]

Wrapper around the simulation providing the same interface as robot_interfaces::TriFingerPlatformFrontend.

The following methods of the robot_interfaces counterpart are not supported:

  • get_robot_status()

  • wait_until_timeindex()

Initialize.

Parameters:
  • visualization (bool) – Set to true to run visualization.

  • initial_robot_position (Sequence[float] | None) – Initial robot joint angles.

  • initial_object_pose – Initial pose of the manipulation object (only used if object_type == COLORED_CUBE). Can be any object with attributes position (x, y, z) and orientation (x, y, z, w quaternion). If not set, a default pose is used.

  • enable_cameras (bool) – Set to true to enable rendered images in the camera observations. If false, camera observations are still available but their images are not initialized. This is disabled by default because rendering images is computationally expensive, so the cameras should only be enabled if the images are actually used.

  • time_step_s (float) – Simulation time step duration in seconds.

  • object_type (ObjectType) – Which type of object to load. This also influences other aspects: when using the cube, the camera observation additionally contains an attribute object_pose.

  • camera_delay_steps (int) – Number of time steps by which camera observations are held back after they are generated. This simulates the delay of camera observations that occurs on the real system due to processing (mostly the object detection).

Action(torque=None, position=None)

See trifinger_simulation.SimFinger.Action().

append_desired_action(action)[source]

See trifinger_simulation.SimFinger.append_desired_action().

get_current_timeindex()

See trifinger_simulation.SimFinger.get_current_timeindex().

get_timestamp_ms(t)

See trifinger_simulation.SimFinger.get_timestamp_ms().

get_desired_action(t)

See trifinger_simulation.SimFinger.get_desired_action().

get_applied_action(t)

See trifinger_simulation.SimFinger.get_applied_action().

get_robot_observation(t)

Get observation of the robot state (joint angles, torques, etc.). See trifinger_simulation.SimFinger.get_observation().

get_camera_observation(t)[source]

Get camera observation at time step t.

Important

Actual images are only rendered if the class was created with enable_cameras=True in the constructor; otherwise the images in the observation are set to None. Other fields are still valid, though.

Parameters:

t (int) – The time index of the step for which the observation is requested. Only the value returned by the last call of append_desired_action() is valid.

Returns:

Observations of the three cameras. Depending on the object type used, this may also contain the object pose.

Raises:

ValueError – If invalid time index t is passed.

Return type:

TriCameraObservation | TriCameraObjectObservation
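The effect of the camera_delay_steps constructor parameter can be illustrated with a small stand-alone model, independent of the simulation: an observation generated at step t only becomes visible delay_steps later. This buffer is a sketch of the idea, not the actual implementation:

```python
from collections import deque


class DelayedCameraBuffer:
    """Illustrative model of camera_delay_steps: an observation generated at
    step t only becomes visible at step t + delay_steps."""

    def __init__(self, delay_steps):
        self.delay_steps = delay_steps
        self.buffer = deque()

    def push(self, t, observation):
        # Record the observation together with its generation time step.
        self.buffer.append((t, observation))

    def get(self, current_t):
        # Return the newest observation that is at least delay_steps old.
        visible = [obs for (t, obs) in self.buffer
                   if current_t - t >= self.delay_steps]
        return visible[-1] if visible else None


buf = DelayedCameraBuffer(delay_steps=90)
buf.push(0, "frame_at_0")
assert buf.get(10) is None            # too recent: delay not yet elapsed
assert buf.get(90) == "frame_at_0"    # becomes visible 90 steps later
```

With the default settings (time_step_s=0.001, camera_delay_steps=90), this corresponds to a delay of 90 ms between image generation and availability.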

store_action_log(filename)[source]

Store the action log to a JSON file.

Parameters:

filename (str) – Path to the JSON file to which the log shall be written. If the file already exists, it will be overwritten.


class trifinger_simulation.ObjectPose[source]

A pure-python copy of trifinger_object_tracking::ObjectPose.

confidence

Estimate of the confidence for this pose observation.

Type:

float

orientation

Orientation of the object as (x, y, z, w) quaternion.

Type:

array

position

Position (x, y, z) of the object. Units are meters.

Type:

array
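Since the constructor's initial_object_pose parameter accepts any object exposing position and orientation attributes, a minimal stand-in compatible with this ObjectPose layout can be written in plain Python. The resting height below is a made-up example value, not a library constant:

```python
from dataclasses import dataclass
from typing import Sequence


@dataclass
class SimpleObjectPose:
    """Minimal pose object matching the documented ObjectPose fields.
    Units: metres for position, (x, y, z, w) quaternion for orientation."""

    position: Sequence[float] = (0.0, 0.0, 0.0325)       # illustrative height
    orientation: Sequence[float] = (0.0, 0.0, 0.0, 1.0)  # identity rotation
    confidence: float = 1.0


# Shift the object 5 cm along x, keeping the default orientation.
pose = SimpleObjectPose(position=(0.05, 0.0, 0.0325))
```

Such an object could then be passed as initial_object_pose when constructing TriFingerPlatform with the colored cube.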


class trifinger_simulation.CameraObservation[source]

Pure-python copy of trifinger_cameras.camera.CameraObservation.

image

The image.

Type:

array

timestamp

Timestamp when the image was received.

Type:

float


class trifinger_simulation.TriCameraObservation[source]

Python version of trifinger_cameras::TriCameraObservation.

This is a pure-python implementation of trifinger_cameras::TriCameraObservation, so we don’t need to depend on trifinger_cameras here.

cameras

List of observations of cameras “camera60”, “camera180” and “camera300” (in this order).

Type:

list of CameraObservation
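Since the cameras list follows the fixed order documented above, a small helper can map camera names to list indices (a convenience sketch, not part of the library):

```python
# Fixed camera order as documented for TriCameraObservation.cameras.
CAMERA_ORDER = ("camera60", "camera180", "camera300")


def camera_index(name: str) -> int:
    """Return the index of the named camera in the `cameras` list."""
    return CAMERA_ORDER.index(name)
```

For example, camera_index("camera180") yields index 1, so observation.cameras[1] would be the observation of "camera180".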


class trifinger_simulation.TriCameraObjectObservation[source]

Python version of trifinger_object_tracking::TriCameraObjectObservation.

This is a pure-python implementation of trifinger_object_tracking::TriCameraObjectObservation, so we don’t need to depend on trifinger_object_tracking here.

cameras

List of observations of cameras “camera60”, “camera180” and “camera300” (in this order).

Type:

list of CameraObservation

filtered_object_pose

Filtered pose of the object in world coordinates. In simulation, this is the same as the unfiltered object_pose.

Type:

ObjectPose

object_pose

Pose of the object in world coordinates.

Type:

ObjectPose