Updated: October 28, 2024
The configuration file is used to identify the sensors and cameras to use on your system. You can have multiple sensors and cameras physically connected to the board, but your system uses only the ones identified in this file.
You can configure as few as one camera or sensor. For testing purposes, you can simulate a camera or sensor by specifying a file. The file can contain sensor information, such as lidar data, or compressed or uncompressed video for playback.
In this release, the code that supports recording and playing video in MP4, UCV, or MOV format is contained in a library, libcamapi_video.so.1, that's separate from the Camera library, libcamapi.so.1. If you plan to record or play video in these formats, you must include libcamapi_video.so.1 in your target image. For information on how to generate a target image, see the Building Embedded Systems guide in the QNX Neutrino documentation.
The reference images shipped with Sensor Framework include the libcamapi_video.so.1 library.
Global settings are set using the SENSOR_GLOBAL section of the configuration file. This section is enclosed by begin SENSOR_GLOBAL and end SENSOR_GLOBAL. There can only be one section of this kind in the file. Any parameters defined here are not specific to any sensor but are instead global to the entire system. The following is a valid configuration of SENSOR_GLOBAL:
```
begin SENSOR_GLOBAL
    external_platform_library_path = libsensor_platform_nxp.so
    external_platform_library_variant = PLATFORM_VARIANT_IMX8
end SENSOR_GLOBAL
```
The configuration for a camera or sensor is specified in a section enclosed by begin SENSUNIT and end SENSUNIT, where SENSUNIT is an enumerated value from sensor_unit_t, such as SENSOR_UNIT_1. For more information about these values, see the sensor_unit_t section in the Sensor Library chapter of the Sensor Library Developer's Guide. If you are using the Camera library API, you must map the enumerated value from the camera_unit_t data type to the corresponding value from sensor_unit_t; for example, CAMERA_UNIT_1 maps to SENSOR_UNIT_1.
```
begin SENSOR_UNIT_3
    type = file_camera
    name = left
    address = /accounts/1000/shared/videos/frontviewvideo1.mp4
    default_video_format = rgb8888
    default_video_resolution = 640, 480
end SENSOR_UNIT_3

begin SENSOR_UNIT_1
    type = file_data
    name = vlp-16
    address = /accounts/1000/shared/videos/capture_data.mp4
    playback_group = 1
    direction = 0,0,0
    position = 1000,0,1100
end SENSOR_UNIT_1

begin SENSOR_UNIT_2
    type = radar
    name = front
    direction = 0,0,0
    position = 3760,0,0
    address = /dev/usb/io-usb-otg, -1, -1, -1, -1, 0, delphi_esr
    packet_size = 64
    data_format = SENSOR_FORMAT_RADAR_POLAR
end SENSOR_UNIT_2
```
Type | Format for address | Description
---|---|---
usb_camera | driver_path, bus, device, vendorID, deviceID | This format identifies a USB camera when you have more than one USB camera connected to your system. You can use a wild card (-1) to accept any USB camera found on the system. If you have only one such camera connected, using a wild card value of -1 for all fields works. However, if you have more than one camera, you must specify non-wild card values for the bus/device or vendorID/deviceID parameter pair. For example, if you have more than one camera of the same make or model, you must specify bus/device because specifying vendorID/deviceID doesn't uniquely identify the camera.
sensor_camera | cameraId, input | This format identifies a sensor camera when you have more than one sensor camera connected to the system.
file_camera | /path/to/file/video.mp4 | This format identifies a RAW, LZ4, GZ, MP4 (H.264 encoded video), UCV, or MOV (uncompressed video) file. The prerecorded file can be created using APIs from the Camera or Sensor library.
file_data | /path/to/file/data.lz4 | This format identifies a prerecorded file containing sensor data, which can be raw binary (RAW). It can also be encoded (lossless compression) as a GZIP (GZ) or LZ4 file.
imu | driver_path, bus, device, vendorID, deviceID, model *or* driver_path, model | This format identifies an inertial measurement unit (IMU) when you have more than one IMU connected to the system. You can use a wild card (-1) to accept any IMU found on the system. If you have only one IMU connected, using a wild card value of -1 for all fields works. However, if you have more than one IMU connected, you must specify non-wild card values for the bus/device or vendorID/deviceID parameter pair. For example, if you have more than one IMU of the same make or model, you must specify bus/device because specifying vendorID/deviceID doesn't uniquely identify the IMU.
ip_camera | ip_address, type_of_camera, [serial_num] | This format identifies an IP camera connected to your system. The ip_address segment is the camera's IP address. If you don't know this address, use 0.0.0.0 to enable auto-discovery. The type_of_camera segment specifies the camera type, which can be onvif for an ONVIF-compliant camera or gige_vision for a GigE Vision-compliant camera. The serial_num segment is optional and represents the camera's serial number; it applies only to auto-discovery, where it's matched against the cameras that are found, so you don't need to specify it when you use a valid IP address. If serial_num is left blank when using auto-discovery, the first IP camera that's found on the system is used, which is typical when you have one camera connected to your system.
lidar | ip_address, ip_based_lidar_model *or* serial_driver_path, device_address, serial_based_lidar_model | This format identifies the lidar sensor when you have more than one lidar sensor connected to your system. To configure an IP-based lidar, the ip_address segment must be a valid IP address because the sensor broadcasts its data over UDP. For the ip_based_lidar_model segment, you can use the string velodyne_vlp-16 or velodyne_vlp-16-high-res to refer to an IP-based lidar from Velodyne; the string must match the model that's connected to your target. To configure a serial-based lidar, specify the serial_driver_path segment as /dev/serX (where X is the port and depends on your configuration). For device_address, specify the device address assigned to the lidar. For serial_based_lidar_model, you can use leddartech_vu8 to specify a serial-based lidar from LeddarTech.
radar | driver_path, bus, device, vendorID, deviceID, channelID, model | This format identifies the radar sensor when you have more than one radar sensor connected to your system. You can use a wild card (-1) to accept any radar sensor connected to the system. If you have only one such sensor connected, using a wild card value of -1 for all fields works. However, if you have more than one sensor, you must specify non-wild card values for at least the bus/device parameter pair or the deviceID parameter. For example, if you have more than one radar sensor of the same make or model, you must specify bus/device because specifying deviceID doesn't uniquely identify the sensor.
gps | driver_path, bus, device, vendorID, deviceID, model *or* driver_path, model | This format identifies the GPS sensor when you have more than one GPS sensor on your system. You can use a wild card (-1) to accept any GPS unit found on the system. If you have only one such unit connected, using a wild card value of -1 for all fields works. However, if you have more than one unit connected, you must specify non-wild card values for the bus/device or vendorID/deviceID parameter pair. For example, if you have more than one sensor of the same make or model, you must specify bus/device because specifying vendorID/deviceID doesn't uniquely identify the sensor.
external_camera | driver_path, input | This format identifies the user-provided library that's used for external cameras on the system. The driver_path specifies the path of this library. The library must implement the functions defined in external_camera_api.h. For more information, see Using external camera drivers in the Camera Developer's Guide. The input specifies the value that the Sensor service passes to open_external_camera() as the input parameter to identify the camera to use when your driver supports multiple cameras.
external_sensor | driver_path | This format identifies the user-provided library that's used for external sensors on the system. The driver_path specifies the path of this library. The library must implement the functions defined in external_sensor_api.h. For more information, see Using external sensor drivers in the Sensor Developer's Guide.
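To make a few of the address formats above concrete, here is a hedged sketch of two hypothetical unit sections. The unit numbers, names, and driver path are illustrative assumptions, not values taken from this guide; the address values follow the usb_camera and ip_camera formats described in the table.

```
# Hypothetical examples; unit numbers and names are illustrative.

# usb_camera: all fields wild-carded (-1), which works only when a
# single USB camera is connected to the system.
begin SENSOR_UNIT_4
    type = usb_camera
    name = rear
    address = /dev/usb/io-usb-otg, -1, -1, -1, -1
end SENSOR_UNIT_4

# ip_camera: 0.0.0.0 enables auto-discovery; with serial_num omitted,
# the first ONVIF camera found on the system is used.
begin SENSOR_UNIT_5
    type = ip_camera
    name = gate
    address = 0.0.0.0, onvif
end SENSOR_UNIT_5
```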
The coordinate_system parameter isn't applicable to cameras.
Type | Format |
---|---|
lidar | |
radar | |
gps, imu | |
If the x-axis isn't the direction of travel in your reference coordinate system, then you must adjust the values of the angles for direction accordingly. The coordinate system that you use to specify the direction must be the same as that with which you specify the position.
For more information, see the XSens MTi 100-series GPS documentation on the XSens website (https://www.xsens.com/).
For Velodyne lidar sensors, lidar_fov specifies the horizontal rotation (i.e., the difference between FOV End and FOV Start) that you configured by using the Velodyne webserver user interface. Valid rotations are in the range [1..360].
For LeddarTech lidar sensors, lidar_fov specifies the horizontal rotation that's supported by the sensor. For example, LeddarTech's Vu8 is available in different configurations that can support 20, 48, or 100 degrees.
If the x-axis isn't the direction of travel in your reference coordinate system, then adjust the values of the distances for position accordingly. The coordinate system that you use to specify the position must be the same as that with which you specify the direction.
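As a concrete sketch, the first pair below reuses the front radar values shown earlier, where the x-axis is the direction of travel. The second pair is hypothetical: it assumes the y-axis is the direction of travel and that the third component of direction is a yaw angle in degrees about the z-axis, which is an assumption you should verify against your platform's coordinate-system convention.

```
# x-axis is the direction of travel: radar 3760 mm ahead of the origin,
# facing forward (values from the earlier SENSOR_UNIT_2 example).
direction = 0,0,0
position = 3760,0,0

# Hypothetical: y-axis as the direction of travel; both direction and
# position are rotated accordingly (assumes yaw in degrees about z).
direction = 0,0,90
position = 0,3760,0
```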
```
...
reference_clock = external
reference_clock_library = /usr/lib/libexternal_clock_example.so
...
```
You can configure each sensor to use a different external clock library, or multiple sensors to share one library. You may provide multiple external clock libraries. For more information, see Using external clocks in the Sensor Developer's Guide.
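As a hedged sketch of the shared case, two sensor sections can point at the same external clock library simply by specifying the same reference_clock_library path (the unit numbers are illustrative; the library path is the example path shown above):

```
begin SENSOR_UNIT_1
    ...
    reference_clock = external
    reference_clock_library = /usr/lib/libexternal_clock_example.so
end SENSOR_UNIT_1

begin SENSOR_UNIT_2
    ...
    reference_clock = external
    reference_clock_library = /usr/lib/libexternal_clock_example.so
end SENSOR_UNIT_2
```

To give each sensor its own time base instead, point each section's reference_clock_library at a different library.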