photonlibpy.simulation.simCameraProperties
SimCameraProperties
Calibration and performance values for this camera.
The resolution will affect the accuracy of projected (3d to 2d) target corners and, similarly, the severity of image noise on estimation (2d to 3d).
The camera intrinsics and distortion coefficients describe the results of calibration, and how to map between 3d field points and 2d image points.
The performance values (framerate/exposure time, latency) determine how often results should be updated and with how much latency in simulation. High exposure time causes motion blur which can inhibit target detection while moving. Note that latency estimation does not account for network latency and the latency reported will always be perfect.
PERFECT_90DEG()
classmethod
960x720 resolution, 90 degree FOV, "perfect" lagless camera
__init__()
Default constructor; equivalent to PERFECT_90DEG().
estLatency()
Returns:
Type | Description
---|---
seconds | Noisy estimation of a frame's processing latency
estPixelNoise(points)
Returns these points after applying this camera's estimated noise.
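As an illustrative sketch only (not this class's internals), applying Gaussian pixel noise to projected corner points could look like the following, with an assumed standard deviation of 0.25 px:

```python
import numpy as np

# Hypothetical noise model: perturb each projected corner coordinate with
# zero-mean Gaussian noise. The real method uses this camera's calibrated
# noise characteristics; the 0.25 px value here is an assumption.
rng = np.random.default_rng(0)
pixel_noise_std = 0.25  # px; assumed, not a default of this class

corners = np.array([[100.0, 100.0], [200.0, 100.0],
                    [200.0, 200.0], [100.0, 200.0]])
noisy_corners = corners + rng.normal(0.0, pixel_noise_std, corners.shape)
```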
estSecUntilNextFrame()
Returns:
Type | Description
---|---
seconds | Estimate how long until the next frame should be processed in seconds
getContourAreaPercent(points)
The percentage (0-100) of this camera's resolution that the contour takes up, in pixels of the image.
Parameters:
Name | Type | Description | Default
---|---|---|---
points | ndarray | Points of the contour | required
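A minimal sketch of how such an area percentage can be computed: the shoelace area of the closed contour divided by the image's pixel count. The 960x720 resolution is assumed here (matching PERFECT_90DEG); the real method uses this camera's configured resolution.

```python
def contour_area_percent(points, width=960, height=720):
    """Percent (0-100) of the image area covered by a closed contour."""
    area = 0.0
    n = len(points)
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]
        area += x0 * y1 - x1 * y0  # shoelace formula, signed
    return 100.0 * abs(area) / 2.0 / (width * height)
```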
getCorrectedPixelRot(point)
Gives the yaw and pitch of the line intersecting the camera lens and the given pixel coordinates on the sensor. Yaw is positive left, and pitch positive down.
The pitch traditionally calculated from pixel offsets does not correctly account for non-zero values of yaw because of perspective distortion (not to be confused with lens distortion). For example, the pitch angle is naively calculated as:
pitch = arctan(pixel y offset / focal length y)
However, using focal length as a side of the associated right triangle is not correct when the pixel x value is not 0, because the distance from this pixel (projected on the x-axis) to the camera lens increases. Projecting a line back out of the camera with these naive angles will not intersect the 3d point that was originally projected into this 2d pixel. Instead, this length should be:
focal length y ⟶ (focal length y / cos(arctan(pixel x offset / focal length x)))
Returns:
Type | Description
---|---
Rotation3d | Rotation3d with yaw and pitch of the line projected out of the camera from the given pixel (roll is zero)
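The correction above can be checked numerically. This sketch uses hypothetical intrinsics (fx = fy = 600 px) and a pixel offset of (300, 200) px from the principal point; note that the corrected pitch is smaller than the naive one, since the effective focal length grows with yaw.

```python
import math

# Hypothetical intrinsics and pixel offsets (not values from this class).
fx, fy = 600.0, 600.0   # focal lengths, px
dx, dy = 300.0, 200.0   # offsets from the principal point, px

# Naive pitch ignores the nonzero x offset:
naive_pitch = math.atan(dy / fy)

# Corrected pitch: lengthen the adjacent side by 1/cos(yaw), as above.
yaw = math.atan(dx / fx)
corrected_pitch = math.atan(dy / (fy / math.cos(yaw)))
```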
getPixelPitch(pixelY)
The pitch from the principal point of this camera to the pixel y value. Pitch is positive down.
Note that this angle is naively computed and may be incorrect. See getCorrectedPixelRot(point).
getPixelRot(point)
Finds the yaw and pitch to the given image point. Yaw is positive left, and pitch is positive down.
Note that pitch is naively computed and may be incorrect. See getCorrectedPixelRot(point).
getPixelYaw(pixelX)
The yaw from the principal point of this camera to the pixel x value. Positive values left.
getVisibleLine(camRt, a, b)
Determines where the line segment defined by the two given translations intersects the camera's frustum/field-of-vision, if at all.
The line is parametrized so that any of its points p = t * (b - a) + a. This method returns these values of t, minimum first, defining the region of the line segment which is visible in the frustum. If both ends of the line segment are visible, this simply returns {0, 1}. If, for example, point b is visible while a is not, and half of the line segment is inside the camera frustum, {0.5, 1} would be returned.
Parameters:
Name | Type | Description | Default
---|---|---|---
camRt | RotTrlTransform3d | The change in basis from world coordinates to camera coordinates. See RotTrlTransform3d.makeRelativeTo(Pose3d). | required
a | Translation3d | The initial translation of the line | required
b | Translation3d | The final translation of the line | required
Returns:
Type | Description
---|---
Tuple[float \| None, float \| None] | A pair of optional parametrized values (t), minimum first. (t_min, t_max): the sub-segment of the line visible in the camera frustum. (t, None): a single intersection point; for example, the line only crosses the edge where two adjacent viewplanes meet. (None, None): the line segment is not visible in the camera frustum.
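The returned t-values can be illustrated with a one-plane toy version of this clipping. This sketch (a hypothetical helper, not part of the class) clips a segment against a single plane x = 0, treating x >= 0 as "inside the frustum":

```python
# Toy illustration of the parametrization p = t * (b - a) + a used by
# getVisibleLine, clipping against one hypothetical frustum plane (x >= 0).
def clip_t_against_plane(ax: float, bx: float):
    """Return (t_min, t_max) of the visible sub-segment, or (None, None)."""
    a_visible, b_visible = ax >= 0, bx >= 0
    if a_visible and b_visible:
        return (0.0, 1.0)          # whole segment visible
    if not a_visible and not b_visible:
        return (None, None)        # no part visible
    t = -ax / (bx - ax)            # segment crosses the plane at x = 0
    return (t, 1.0) if b_visible else (0.0, t)
```

With a at x = -1 and b at x = 1, half the segment is inside, giving (0.5, 1.0) just as the {0.5, 1} example above describes.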
setAvgLatency(newAvgLatency)
Parameters:
Name | Type | Description | Default
---|---|---|---
newAvgLatency | seconds | The average latency (from image capture to data published), in seconds, a frame should have | required
setExposureTime(newExposureTime)
Parameters:
Name | Type | Description | Default
---|---|---|---
newExposureTime | seconds | The amount of time the "shutter" is open for one frame. Affects motion blur. The frame period (from the FPS setting) cannot be shorter than this. | required
setFPS(fps)
Parameters:
Name | Type | Description | Default
---|---|---|---
fps | hertz | The average frames per second the camera should process at | required
setLatencyStdDev(newLatencyStdDev)
Parameters:
Name | Type | Description | Default
---|---|---|---
newLatencyStdDev | seconds | The standard deviation, in seconds, of the latency | required