I'm still looking for a solution. I have a fuzzy idea. Please, someone, tell me that I'm wasting my time and there's already a well-established solution for this problem.
My idea is that since we have only these 3 degrees of freedom we should exploit that and somehow add constraints. In other words, since the robot can only move on a flat surface, the solution doesn't have to be as general as full 3-dimensional #HandEyeCalibration. Even though the camera sees a 3D scene, the measurements of a target that's fixed in place all lie on a 2-dimensional plane in that space. From those measurements we can fit the equation of that plane in camera coordinates, which gives us the roll and pitch of the camera mount.
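To make the idea concrete, here's roughly how I imagine the plane-fit step in numpy. This is just a sketch: the function names are mine, and it assumes the fitted plane is the ground plane and a particular roll/pitch convention (roll about the camera x-axis, pitch about y), which may not match anyone's actual setup.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through Nx3 points; returns (unit normal, centroid)."""
    centroid = points.mean(axis=0)
    # SVD of the centered points: the right singular vector with the
    # smallest singular value is the direction of least variance,
    # i.e. the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    return vt[-1], centroid

def roll_pitch_from_normal(normal):
    """Roll and pitch of the camera mount, ASSUMING the fitted plane is the
    horizontal ground and z points 'up' in the robot frame (my convention)."""
    if normal[2] < 0:          # make the normal point up; SVD sign is arbitrary
        normal = -normal
    nx, ny, nz = normal
    roll = np.arctan2(ny, nz)                       # about the camera x-axis
    pitch = np.arctan2(-nx, np.hypot(ny, nz))       # about the camera y-axis
    return roll, pitch
```

With noisy measurements the SVD fit is still the least-squares optimum, so averaging many target detections should stabilize the roll/pitch estimate.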
However, we don't know the yaw yet. We can't be sure the camera is looking exactly forward, or at what angle, relative to the robot's kinematics. To find the yaw we would have to match the set of points where the camera saw the target against the set of points where the robot thought it was at those same moments. Both sets of points are 2-dimensional, so matching them should amount to, or at least resemble, finding a #homography between the two planes...
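One thought on the matching step: if both point sets are already in metric 2D coordinates, the transform between them should be rigid (rotation plus translation, 3 DOF), which is a special case of a homography. A least-squares sketch of that, via the Kabsch/Procrustes method in 2D (names are made up, and it assumes the correspondences between the two sets are known):

```python
import numpy as np

def rigid_transform_2d(src, dst):
    """Least-squares R (2x2 rotation) and t (2,) such that dst ~ src @ R.T + t."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    # Kabsch: SVD of the cross-covariance of the centered point sets.
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    # Guard against a reflection solution (det = -1).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = dst.mean(axis=0) - src.mean(axis=0) @ R.T
    return R, t

def yaw_from_rotation(R):
    """The camera mount yaw is just the angle of the recovered rotation."""
    return np.arctan2(R[1, 0], R[0, 0])
```

If something like this works, the rotation part would be exactly the yaw I'm after, and the translation part would (I think) encode the lever arm between the two frames, though disentangling it from where each frame's origin sits probably needs more care.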
Would that homography matrix also hint at the offset between the robot drivetrain's kinematic center and the camera mount?
🤔