photonlibpy.estimation.openCVHelp

OpenCVHelp

rVecToRotation(rvecInput) staticmethod

Returns a 3d rotation from the given OpenCV rvec. The rvec is a three-element vector representing the rotation axis scaled by the angle, in the EDN coordinate system. (angle = norm, and axis = rvec / norm)

Parameters:

- rvecInput (ndarray, required): The rvec to create a Rotation3d from
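The axis-angle convention described above can be sketched in plain NumPy. This is an illustrative decomposition only (the EDN-to-NWU conversion that the library performs afterwards is omitted, and `rvec_to_axis_angle` is a hypothetical helper, not part of photonlibpy):

```python
import numpy as np

def rvec_to_axis_angle(rvec: np.ndarray) -> tuple[np.ndarray, float]:
    """Decompose an OpenCV rvec into a unit axis and an angle (radians).

    The rvec encodes axis * angle, so angle = ||rvec|| and axis = rvec / ||rvec||.
    """
    angle = float(np.linalg.norm(rvec))
    if angle == 0.0:
        # Zero rotation: the axis is arbitrary, so pick +X.
        return np.array([1.0, 0.0, 0.0]), 0.0
    return rvec / angle, angle

# A 90-degree rotation about the EDN Z axis.
axis, angle = rvec_to_axis_angle(np.array([0.0, 0.0, np.pi / 2]))
```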

reorderCircular(elements, backwards, shiftStart) staticmethod

Reorders the list, optionally indexing backwards and wrapping around to the last element after the first, and shifting all indices in the direction of indexing.

e.g.

({1,2,3}, false, 1) == {2,3,1}

({1,2,3}, true, 0) == {1,3,2}

({1,2,3}, true, 1) == {3,2,1}

Parameters:

- elements (list[Any] | ndarray, required): The elements to reorder
- backwards (bool, required): If indexing should happen in reverse (0, size-1, size-2, ...)
- shiftStart (int, required): How much the initial index should be shifted (instead of starting at index 0, start at shiftStart, negated if backwards)

Returns:

- list[Any]: The reordered list
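The behavior can be sketched in a few lines of pure Python (hypothetical helper name, not photonlibpy's implementation):

```python
def reorder_circular(elements, backwards, shift_start):
    # Direction of traversal: -1 walks the list in reverse.
    direction = -1 if backwards else 1
    # The starting index is negated when indexing backwards.
    start = -shift_start if backwards else shift_start
    size = len(elements)
    # Wrap around with modular indexing.
    return [elements[(start + direction * i) % size] for i in range(size)]

# The examples from the docstring above:
# reorder_circular([1, 2, 3], False, 1) -> [2, 3, 1]
# reorder_circular([1, 2, 3], True, 0)  -> [1, 3, 2]
# reorder_circular([1, 2, 3], True, 1)  -> [3, 2, 1]
```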

rotationEDNToNWU(rot) staticmethod

Convert a rotation delta from EDN to NWU. For example, if you have a rotation X,Y,Z {1, 0, 0} in EDN, this would be {0, -1, 0} in NWU.

rotationToRVec(rotation) staticmethod

Creates a new np.array from this 3d rotation. The opencv rvec is a vector with three elements representing the axis scaled by the angle in the EDN coordinate system. (angle = norm, and axis = rvec / norm)

Parameters:

- rotation (Rotation3d, required): The rotation to convert into a np.array

solvePNP_SQPNP(cameraMatrix, distCoeffs, modelTrls, imagePoints) staticmethod

Finds the transformation that maps the camera's pose to the origin of the supplied object. An "object" is simply a set of known 3d translations that correspond to the given 2d points. If, for example, the object translations are given relative to the close-right corner of the blue alliance (the default origin), a camera-to-origin transformation is returned. If the translations are relative to a target's pose, a camera-to-target transformation is returned.

There must be at least 3 points to use this method. This does not return an alternate solution; if you intend to use solvePNP on a single AprilTag, see solvePNP_Square instead.

Parameters:

- cameraMatrix (ndarray, required): The camera intrinsics matrix in standard opencv form
- distCoeffs (ndarray, required): The camera distortion matrix in standard opencv form
- modelTrls (list[Translation3d], required): The translations of the object corners, relative to the field.
- imagePoints (ndarray, required): The projection of these 3d object points into the 2d camera image. The order should match the given object point translations.

Returns:

- PnpResult | None: The resulting transformation that maps the camera pose to the target pose. If the 3d model points are supplied relative to the origin, this transformation brings the camera to the origin.
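The 3d-to-2d correspondence that this method relies on can be illustrated with a minimal pinhole projection in NumPy. This is a sketch of the camera model only, not of the SQPNP solver itself, and the intrinsics values are hypothetical:

```python
import numpy as np

# Hypothetical intrinsics: 600 px focal length, principal point at (320, 240).
camera_matrix = np.array([[600.0,   0.0, 320.0],
                          [  0.0, 600.0, 240.0],
                          [  0.0,   0.0,   1.0]])

def project(points_cam: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Project Nx3 points in the camera frame (+Z forward, no distortion)
    to Nx2 pixel coordinates."""
    uvw = (K @ points_cam.T).T          # homogeneous image coordinates
    return uvw[:, :2] / uvw[:, 2:3]     # perspective divide

# A point 2 m straight ahead of the camera lands on the principal point.
pixels = project(np.array([[0.0, 0.0, 2.0]]), camera_matrix)
```

A PnP solver does the inverse of this mapping: given the 3d object points and their observed pixels, it recovers the camera pose that makes the projections line up.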

solvePNP_Square(cameraMatrix, distCoeffs, modelTrls, imagePoints) staticmethod

Finds the transformation(s) that map the camera's pose to the target's pose. The camera's pose relative to the target is determined by the supplied 3d points of the target's model and their associated 2d points imaged by the camera. The supplied model translations must be relative to the target's pose.

For planar targets, there may be an alternate solution which is plausible given the 2d image points. This has an associated "ambiguity" which describes the ratio of reprojection error between the "best" and "alternate" solution.

This method is intended for use with individual AprilTags, and will not work unless 4 points are provided.

Parameters:

- cameraMatrix (ndarray, required): The camera intrinsics matrix in standard opencv form
- distCoeffs (ndarray, required): The camera distortion matrix in standard opencv form
- modelTrls (list[Translation3d], required): The translations of the object corners. These should have the object pose as their origin. These must come in a specific, pose-relative order (in NWU):
    - Point 0: [0, -squareLength / 2, squareLength / 2]
    - Point 1: [0, squareLength / 2, squareLength / 2]
    - Point 2: [0, squareLength / 2, -squareLength / 2]
    - Point 3: [0, -squareLength / 2, -squareLength / 2]
- imagePoints (ndarray, required): The projection of these 3d object points into the 2d camera image. The order should match the given object point translations.

Returns:

- PnpResult | None: The resulting transformation that maps the camera pose to the target pose, and the ambiguity if an alternate solution is available.
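The "ambiguity" described above can be sketched as the ratio of reprojection errors between the two candidate solutions. The helpers and error values below are illustrative, not the library's implementation:

```python
import numpy as np

def reprojection_error(observed: np.ndarray, reprojected: np.ndarray) -> float:
    """RMS pixel distance between observed and reprojected image points."""
    return float(np.sqrt(np.mean(np.sum((observed - reprojected) ** 2, axis=1))))

def ambiguity(best_err: float, alt_err: float) -> float:
    """Ratio of best-solution to alternate-solution reprojection error.
    Values near 1.0 mean the two poses explain the image almost equally well."""
    return best_err / alt_err

# Two observed corners and their reprojections under the "best" pose.
observed = np.array([[100.0, 100.0], [200.0, 100.0]])
best = np.array([[100.5, 100.0], [200.0, 100.5]])
err = reprojection_error(observed, best)
```

A low ambiguity ratio indicates the best solution fits the image far better than the alternate, so the pose estimate can be trusted more.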

tVecToTranslation(tvecInput) staticmethod

Returns a new 3d translation from the given OpenCV tvec. The opencv tvec is a vector with three elements representing {x, y, z} in the EDN coordinate system.

Parameters:

- tvecInput (ndarray, required): The tvec to create a Translation3d from

translationEDNToNWU(trl) staticmethod

Convert a translation from EDN to NWU. For example, if you have a translation X,Y,Z {1, 0, 0} in EDN, this would be {0, -1, 0} in NWU.
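The mapping between the two frames (EDN: X=East, Y=Down, Z=North; NWU: X=North, Y=West, Z=Up) is a simple component permutation, sketched here with a hypothetical helper rather than the library's own function:

```python
import numpy as np

def translation_edn_to_nwu(trl_edn: np.ndarray) -> np.ndarray:
    """Map (east, down, north) components to (north, west, up)."""
    x, y, z = trl_edn
    # north = north (z), west = -east (-x), up = -down (-y)
    return np.array([z, -x, -y])

# {1, 0, 0} in EDN (pure East) becomes {0, -1, 0} in NWU.
converted = translation_edn_to_nwu(np.array([1.0, 0.0, 0.0]))
```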

translationToTVec(translations) staticmethod

Creates a new np.array from these 3d translations. The opencv tvec is a vector with three elements representing {x, y, z} in the EDN coordinate system.

Parameters:

- translations (list[Translation3d], required): The translations to convert into a np.array