
photonlibpy.simulation.photonCameraSim

PhotonCameraSim

A handle for simulating PhotonCamera values. Processing simulated targets through this class will change the associated PhotonCamera's results.

__init__(camera, props=SimCameraProperties.PERFECT_90DEG(), tagLayout=AprilTagFieldLayout.loadField(AprilTagField.kDefaultField), minTargetAreaPercent=None, maxSightRange=None)

Constructs a handle for simulating PhotonCamera values. Processing simulated targets through this class will change the associated PhotonCamera's results.

By default, this constructor's camera has a 90 deg FOV with no simulated lag if props is not given! By default, the minimum target area is 100 pixels and there is no maximum sight range unless both parameters are passed to override them.

Parameters:

    camera (PhotonCamera, required): The camera to be simulated.
    props (SimCameraProperties, default SimCameraProperties.PERFECT_90DEG()): Properties of this camera such as FOV and FPS.
    minTargetAreaPercent (float | None, default None): The minimum percentage (0 - 100) of the camera's image a detected target must take up to be processed. Match this with your contour filtering settings in the PhotonVision GUI.
    maxSightRange (float | None, default None): Maximum distance, in meters, at which the target is illuminated to your camera. Note that the minimum target area of the image is separate from this.
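For illustration, here is a minimal construction sketch with both filtering overrides passed together; the import paths, camera name, and threshold values are assumptions for this example, not library defaults.

```python
from photonlibpy.photonCamera import PhotonCamera
from photonlibpy.simulation.photonCameraSim import PhotonCameraSim
from photonlibpy.simulation.simCameraProperties import SimCameraProperties

# The PhotonCamera whose results this sim handle will overwrite.
camera = PhotonCamera("frontCam")

# Ideal 90 deg FOV with no simulated lag (the documented default properties).
props = SimCameraProperties.PERFECT_90DEG()

# Pass both overrides together so they take effect (see the note above).
camSim = PhotonCameraSim(
    camera,
    props,
    minTargetAreaPercent=0.1,  # ignore targets under 0.1% of the image
    maxSightRange=4.0,         # ignore targets farther than 4 meters
)
```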

canSeeCorner(points)

Determines if all target points are inside the camera's image.

Parameters:

    points (ndarray, required): The target's 2D image points.

canSeeTargetPose(camPose, target)

Determines if this target's pose should be visible to the camera without considering its projected image points. Does not account for image area.

Parameters:

    camPose (Pose3d, required): The camera's 3D pose.
    target (VisionTargetSim, required): Vision target containing pose and shape.

Returns:

    bool: Whether this vision target can be seen before image projection.

consumeNextEntryTime()

Determine if this camera should process a new frame based on performance metrics and the time since the last update. Returns an Optional that is either None, if no update should occur, or a float timestamp in seconds at which the frame should be received by NT. If a timestamp is returned, the last frame update time becomes that timestamp.

Returns:

    float | None: None while blocked, or the NT entry timestamp in seconds if a new frame is ready.
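As a sketch of the gating pattern this describes (assuming the camSim handle constructed in the earlier example):

```python
# Only do vision-processing work when the simulated camera is due for a frame.
optTimestampSec = camSim.consumeNextEntryTime()
if optTimestampSec is not None:
    # A new frame should be received by NT at optTimestampSec (seconds), and the
    # sim's last frame update time has now advanced to that timestamp.
    # ...build a PhotonPipelineResult and hand it to submitProcessedFrame here...
    pass
else:
    # Blocked: no update should occur on this loop iteration.
    pass
```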

enableDrawWireframe(enabled)

Sets whether a wireframe of the field is drawn to the raw video stream.

Note: This will dramatically increase loop times.

enableProcessedStream(enabled)

Sets whether the processed video stream simulation is enabled.

enableRawStream(enabled)

Sets whether the raw video stream simulation is enabled.

Note: This may increase loop times.
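A small configuration sketch reflecting the trade-offs noted above (using the camSim handle from the earlier example):

```python
camSim.enableRawStream(True)        # raw stream on; may increase loop times
camSim.enableProcessedStream(True)  # processed stream on
```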

setMaxSightRange(range)

Sets the maximum distance at which the target is illuminated to your camera. Note that the minimum target area of the image is separate from this.

setMinTargetAreaPercent(areaPercent)

Sets the minimum percentage (0 - 100) of the camera's image that a detected target must take up to be processed.

setMinTargetAreaPixels(areaPx)

Sets the minimum number of pixels a detected target must take up in the camera's image to be processed.
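For example, the filtering thresholds might be adjusted at runtime like this (the values are illustrative; match them to the contour filtering configured in the PhotonVision GUI):

```python
camSim.setMinTargetAreaPercent(0.05)  # at least 0.05% of the image
# ...or express the same cutoff in pixels instead of a percentage:
camSim.setMinTargetAreaPixels(100)    # 100 px, the constructor's default
camSim.setMaxSightRange(6.0)          # ignore targets beyond 6 meters
```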

setWireframeResolution(resolution)

Sets the resolution of the drawn wireframe if enabled. Drawn line segments will be subdivided into smaller segments based on a threshold set by the resolution.

Parameters:

    resolution (float, required): Resolution as a fraction (0 - 1) of the video frame's diagonal length in pixels.
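A short sketch of enabling the wireframe and setting its resolution (the 0.25 value is illustrative):

```python
camSim.enableDrawWireframe(True)     # note: dramatically increases loop times
camSim.setWireframeResolution(0.25)  # subdivide segments at 1/4 of the frame diagonal
```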

submitProcessedFrame(result, receiveTimestamp_us=None)

Simulate one processed frame of vision data, putting one result to NT. The image capture timestamp overrides PhotonPipelineResult.getTimestampSeconds for more precise latency simulation.

Parameters:

    result (PhotonPipelineResult, required): The pipeline result to submit.
    receiveTimestamp_us (float | None, default None): The (sim) timestamp, in microseconds, at which this result was read by NT. If not passed, the image capture time is assumed to be (current time - latency).
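A hedged sketch of publishing a result with an explicit receive timestamp, assuming result is a PhotonPipelineResult produced elsewhere in the simulation and using wpilib's FPGA timestamp (seconds) converted to microseconds:

```python
import wpilib

def publishResult(camSim, result):
    """Sketch: push one simulated PhotonPipelineResult to NT with an explicit receive time."""
    # Current (sim) time in microseconds for the receiveTimestamp_us parameter.
    nowUs = int(wpilib.Timer.getFPGATimestamp() * 1e6)
    camSim.submitProcessedFrame(result, receiveTimestamp_us=nowUs)

# Omitting the timestamp instead lets the capture time default to (current time - latency):
# camSim.submitProcessedFrame(result)
```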