Module camera
Classes and functions for simulating cameras.
Simple vs Calibrated Camera
There are two different camera implementations: Camera and CalibratedCamera.
Camera only takes the camera's position, orientation and field of view as input and uses PyBullet functions to compute the corresponding transformation/projection from it.
CalibratedCamera expects a full camera matrix and distortion coefficients (as they are acquired through camera calibration). By using the calibration parameters of a real camera here, the rendered images are much closer to the real ones. On the downside, the rendering process is a bit slower, as applying the distortion takes some time.
Loading calibration parameters from a single YAML file can be done using CameraParameters.load(). To load the calibration parameters of all three cameras of a TriFinger platform, load_camera_parameters() can be used.
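As a minimal sketch, a single parameter file could be loaded like this (the file name is hypothetical):

    from trifinger_simulation import camera

    # Load the parameters of one camera from a YAML calibration file.
    with open("camera60.yml") as f:
        params = camera.CameraParameters.load(f)
    print(params.name, params.width, params.height)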
Camera Arrays
CameraArray provides a simple interface to get images from an arbitrary number of cameras.
When specifically simulating the three-camera setup of the TriFinger platform, you can use TriFingerCameras for the simple camera model or create_trifinger_camera_array_from_config() for the model using the full calibration parameters.
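A sketch of the simple-model variant; the get_images() call is an assumption based on the CameraArray description, not a signature documented below:

    import pybullet
    from trifinger_simulation import camera

    pybullet.connect(pybullet.DIRECT)

    cameras = camera.TriFingerCameras()
    # Assumed: get_images() returns one image per camera, in order.
    images = cameras.get_images()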
- class trifinger_simulation.camera.CalibratedCamera(camera_matrix, distortion_coefficients, tf_world_to_camera, image_size, near_plane_distance, far_plane_distance, pybullet_client_id=0)[source]
Simulate a camera based on calibration parameters.
This class renders images from the simulation, using calibration parameters from a real camera. It uses a more accurate projection matrix than Camera and also takes distortion into account.
- Parameters:
camera_matrix –
Camera matrix containing focal length and centre point:
\[\begin{split}\begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}\end{split}\]
distortion_coefficients – Distortion coefficients
(k_1, k_2, p_1, p_2, k_3)
tf_world_to_camera – Homogeneous transformation matrix from world to camera frame.
image_size – Size of the image given as (width, height).
near_plane_distance – Minimum distance to camera for objects to be rendered. Objects that are closer to the camera are clipped.
far_plane_distance – Maximum distance to the camera for objects to be rendered. Objects that are further away are clipped.
pybullet_client_id – Id of the pybullet client (needed when multiple clients are running in parallel).
- distort_image(image)[source]
Distort an image based on the camera's distortion coefficients.
- Parameters:
image – The undistorted image.
- Returns:
The distorted image.
- get_image(renderer=131072)[source]
Get a rendered and distorted image from the camera.
- Parameters:
renderer – Specifies which renderer to use. The default renderer relies on an X server, i.e. the visualization needs to be running so that OpenGL is available. To render without visualization (e.g. when connected in "DIRECT" mode), use pybullet.ER_TINY_RENDERER.
- Returns:
Rendered RGB image from the simulated camera.
- Return type:
array, shape=(height, width, 3)
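A sketch of constructing a CalibratedCamera directly from calibration values, following the parameter list above; all numeric values here are hypothetical placeholders:

    import numpy as np
    import pybullet
    from trifinger_simulation import camera

    pybullet.connect(pybullet.DIRECT)

    # Hypothetical calibration values, for illustration only.
    camera_matrix = np.array([
        [585.0, 0.0, 270.0],
        [0.0, 585.0, 270.0],
        [0.0, 0.0, 1.0],
    ])
    distortion_coefficients = np.zeros(5)  # (k_1, k_2, p_1, p_2, k_3)
    tf_world_to_camera = np.eye(4)

    cam = camera.CalibratedCamera(
        camera_matrix,
        distortion_coefficients,
        tf_world_to_camera,
        image_size=(540, 540),
        near_plane_distance=0.02,
        far_plane_distance=2.0,
    )
    # Software renderer, so no X server/OpenGL context is required.
    image = cam.get_image(renderer=pybullet.ER_TINY_RENDERER)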
- class trifinger_simulation.camera.Camera(camera_position, camera_orientation, image_size=(270, 270), field_of_view=52, near_plane_distance=0.001, far_plane_distance=100.0, pybullet_client_id=0)[source]
Represents a camera in the simulation environment.
Note: This class uses a simplified camera model. For images that better match with the real cameras, use CalibratedCamera.
- Parameters:
camera_position – Position (x, y, z) of the camera w.r.t. the world frame.
camera_orientation – Quaternion (x, y, z, w) representing the orientation of the camera.
image_size – Tuple (width, height) specifying the size of the image.
field_of_view – Field of view of the camera.
near_plane_distance – See OpenGL's documentation for details.
far_plane_distance – See OpenGL's documentation for details.
target_position – Position at which the camera points.
camera_up_vector – The up axis of the camera.
pybullet_client_id – Id of the pybullet client (needed when multiple clients are running in parallel).
- get_image(renderer=131072)[source]
Get a rendered image from the camera.
- Parameters:
renderer – Specifies which renderer to use. The default renderer relies on an X server, i.e. the visualization needs to be running so that OpenGL is available. To render without visualization (e.g. when connected in "DIRECT" mode), use pybullet.ER_TINY_RENDERER.
- Returns:
Rendered RGB image from the simulated camera.
- Return type:
array, shape=(height, width, 3)
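A minimal sketch using the simple model; the position and orientation values are arbitrary examples:

    import pybullet
    from trifinger_simulation import camera

    pybullet.connect(pybullet.DIRECT)

    cam = camera.Camera(
        camera_position=(0.0, 0.0, 1.0),
        camera_orientation=(0.0, 0.0, 0.0, 1.0),
        image_size=(270, 270),
    )
    rgb = cam.get_image(renderer=pybullet.ER_TINY_RENDERER)
    print(rgb.shape)  # (270, 270, 3)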
- class trifinger_simulation.camera.CameraArray(cameras)[source]
Array of an arbitrary number of cameras.
- Parameters:
cameras (Sequence[BaseCamera]) – List of cameras.
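A sketch of combining two cameras into an array; get_images() is assumed here based on the class description and is not part of the signatures shown above:

    import pybullet
    from trifinger_simulation import camera

    pybullet.connect(pybullet.DIRECT)

    cam_a = camera.Camera((0.3, 0.0, 0.3), (0.0, 0.0, 0.0, 1.0))
    cam_b = camera.Camera((-0.3, 0.0, 0.3), (0.0, 0.0, 0.0, 1.0))
    array = camera.CameraArray([cam_a, cam_b])

    # Assumed: one image per camera, in the order the cameras were given.
    images = array.get_images()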
- class trifinger_simulation.camera.CameraParameters(name, width, height, camera_matrix, distortion_coefficients, tf_world_to_camera)[source]
Represents intrinsic and extrinsic parameters of a camera.
See description of properties for the meaning of the constructor arguments.
- Parameters:
name (str)
width (int)
height (int)
camera_matrix (ndarray)
distortion_coefficients (ndarray)
tf_world_to_camera (ndarray)
- dump(stream)[source]
Dump camera parameters in YAML format to the given output stream.
- Parameters:
stream (TextIO) – Output stream.
- classmethod load(stream)[source]
Load camera parameters from a YAML stream.
- Parameters:
stream (TextIO) – Input stream of configuration in YAML format.
- Returns:
Instance of CameraParameters with values set based on the input YAML.
- Return type:
CameraParameters
- camera_matrix: ndarray
Camera projection matrix. Shape = (3, 3)
- distortion_coefficients: ndarray
Distortion coefficients. Shape = (5,)
- height: int
Height of the images.
- name: str
Name of the camera.
- tf_world_to_camera: ndarray
Transformation matrix from world to camera frame. Shape = (4, 4)
- width: int
Width of the images.
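A sketch of a YAML round trip with dump() and load(); the parameter values are placeholders:

    import io

    import numpy as np
    from trifinger_simulation import camera

    params = camera.CameraParameters(
        name="camera60",
        width=540,
        height=540,
        camera_matrix=np.eye(3),
        distortion_coefficients=np.zeros(5),
        tf_world_to_camera=np.eye(4),
    )

    # dump() writes YAML to a text stream; load() reads it back.
    stream = io.StringIO()
    params.dump(stream)
    stream.seek(0)
    restored = camera.CameraParameters.load(stream)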
- class trifinger_simulation.camera.TriFingerCameras(**kwargs)[source]
Simulate the three cameras of the TriFinger platform.
Note
This uses the simple camera model (see Camera). To get images that more closely match those of the real cameras, use create_trifinger_camera_array_from_config() instead.
- Parameters:
cameras – List of cameras.
- trifinger_simulation.camera.calib_data_to_matrix(data)[source]
Extract a matrix from a camera parameter dict (as loaded from YAML).
- Parameters:
data (dict)
- Return type:
ndarray
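A sketch, assuming the OpenCV-style YAML layout where a matrix is stored with rows, cols and a flat data list; this layout is an assumption and is not documented above:

    from trifinger_simulation import camera

    # Assumed entry layout of a matrix in the calibration YAML.
    data = {"rows": 2, "cols": 2, "data": [1.0, 0.0, 0.0, 1.0]}
    matrix = camera.calib_data_to_matrix(data)  # -> 2x2 ndarray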
- trifinger_simulation.camera.create_trifinger_camera_array(camera_parameters, pybullet_client_id=0)[source]
Create a TriFinger camera array using camera calibration parameters.
- Parameters:
camera_parameters (Iterable[CameraParameters]) – List of camera calibration parameters for the three cameras.
pybullet_client_id – Id of the pybullet client (needed when multiple clients are running in parallel).
- Returns:
CameraArray with three cameras.
- Return type:
CameraArray
- trifinger_simulation.camera.create_trifinger_camera_array_from_config(config_dir, calib_filename_pattern='camera{id}.yml', pybullet_client_id=0)[source]
Create a TriFinger camera array using camera calibration files.
Loads camera calibration files from the given directory and uses them to create a CameraArray of CalibratedCamera objects.
- Parameters:
config_dir (Path) – Directory containing the camera calibration files.
calib_filename_pattern – Template for the camera calibration file names. ‘{id}’ will be replaced with the camera id (60, 180, 300).
pybullet_client_id – Id of the pybullet client (needed when multiple clients are running in parallel).
- Returns:
CameraArray with three cameras.
- Return type:
CameraArray
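A sketch of loading a full camera array from calibration files; the directory path is hypothetical and, with the default filename pattern, must contain camera60.yml, camera180.yml and camera300.yml:

    import pathlib

    import pybullet
    from trifinger_simulation import camera

    pybullet.connect(pybullet.DIRECT)

    config_dir = pathlib.Path("path/to/calibration")  # hypothetical path
    tricamera = camera.create_trifinger_camera_array_from_config(config_dir)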
- trifinger_simulation.camera.rbg_to_bayer_bg(image)[source]
Convert an RGB image to a BG Bayer pattern.
This can be used to generate simulated raw camera data in Bayer format. Note that there will be some loss in image quality. It is mostly meant for testing the full software pipeline under the same conditions as on the real robot; it is not optimized for realistic images.
- Parameters:
image (ndarray) – RGB image.
- Returns:
Bayer pattern based on the input image. Height and width are the same as of the input image. The image can be converted using OpenCV’s COLOR_BAYER_BG2*.
- Return type:
ndarray
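A sketch of converting an image to Bayer format and back using OpenCV; the input image here is just a zero-filled placeholder:

    import cv2
    import numpy as np
    from trifinger_simulation import camera

    rgb = np.zeros((270, 270, 3), dtype=np.uint8)  # placeholder image
    bayer = camera.rbg_to_bayer_bg(rgb)

    # Reconstruct a colour image from the Bayer pattern with OpenCV.
    bgr = cv2.cvtColor(bayer, cv2.COLOR_BAYER_BG2BGR)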