Create OpenGL matrices from affine camera matrix
I have a set of 2D-3D point correspondences, and I estimate an affine camera matrix from these correspondences (basically, assuming orthographic projection)[1]. The output of the camera estimation is a 3x4 matrix whose third row is [0, 0, 0, 1] (the "affine constraint").
Now, given this camera matrix, I'd like to render my model using OpenGL. In essence, I want to create a modelview matrix and an (orthographic) projection matrix from the camera matrix I have.
I've read hours of material on camera matrix dissection/decomposition, but most of it deals with perspective cameras, and since the "z" information is missing from my matrix, I cannot apply those techniques.
I've tried numerous approaches, all of which failed at some stage. One thing I tried was taking the cross product of the first and second rows of the matrix (without the last column, of course) to create a new row that transforms z, but it comes out as [0, 0, 1], which of course makes sense. Another thing I tried was converting the 2D points to clip coordinate space (i.e. "reversing" the window transform) before estimating the camera matrix, so that the window transform and the flipping of y (in screen space, the origin is in the upper-left) are taken out of the picture.
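For reference, the "reversing the window transform" step is just the inverse viewport mapping; a minimal sketch, assuming a viewport of width x height pixels with the origin at the upper-left (the function name is mine):

```cpp
#include <utility>

// Map a pixel coordinate (origin upper-left, y pointing down) back to
// normalized device coordinates in [-1, 1] (origin at the centre, y up).
std::pair<double, double> pixelToNdc(double xPix, double yPix,
                                     double width, double height)
{
    double xNdc = 2.0 * xPix / width - 1.0;  // undo the x viewport scaling
    double yNdc = 1.0 - 2.0 * yPix / height; // undo the y scaling and flip y
    return {xNdc, yNdc};
}
```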
I'm frankly at a loss as to how to fundamentally approach this problem.
[1]: The Gold Standard Algorithm for estimating an affine camera matrix from world-to-image correspondences, Algorithm 7.2 in Multiple View Geometry in Computer Vision, Hartley & Zisserman, 2nd edition, 2003.
The affine camera (see e.g. here) performs an orthographic projection followed by an affine warp. Given that one can use the OpenGL orthographic camera and the rasterization device to apply an arbitrary image transformation, the affine camera can be emulated as follows:
- Create an orthographic camera for the scene. Render the scene using it and read back the generated image.
- Apply the affine transform induced by the camera model to that image, by pushing said affine transform onto the texture transformation matrix.
- Render a rectangle (a "card") in front of the ortho camera, covering its field of view, and texture said rectangle with the image produced in step 1. It will be warped by the affine transform, as requested.
this sequence can optimized if performance issue. read on "render texture" how.