#odometry

2024-11-25

What's on my mind?

Disappointment and Disillusionment!

- Invested 7 yrs in #GoPiGo3 #robot
- Created #ROS2 nodes for #proprioception, #odometry, #LIDAR, #docking, #tts, #mapping, #navigation, #simulation, #ObjectRecognition, and #LifeLogging

- My robot cannot safely and reliably navigate in my complex home environment.
- No slower turns, no costmap inflation value can fix it

I'm done, destroyed, no strength left for "one more try". GoPi5Go-Dave is on the shelf with other "reached its limit" robots.

Image: Shelf with WALL-E figures from the 2023 Create3-WaLi robot, the GoPi5Go-Dave robot (Minion Dave on top) built in 2018, and the Rug Warrior Pro robot built in 2000.
Image: Totally wrong robot localization in a map of the home built with ROS 2 cartographer. The robot thinks the world is 45 degrees different from the actual environment.
2024-04-12

At this reduced transfer rate, the #GoPiGo3 can make a two byte #SPI transfer in 0.0003 seconds, which means it can retrieve the left and right wheel encoder values roughly 1700 times per second - plenty fast enough for my #ros2humble robot to publish #wheelencoders and the resultant #odometry at 30 Hz.
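The budget works out with simple arithmetic; a quick sketch using the numbers from the post (one two-byte transfer in 0.0003 s, and one transfer each for the left and right encoders per odometry update):

```python
# Back-of-the-envelope check of the encoder polling budget.
# Numbers come from the post above; nothing here is measured.
TRANSFER_TIME_S = 0.0003      # one 2-byte SPI transfer
TRANSFERS_PER_UPDATE = 2      # left + right wheel encoder
PUBLISH_RATE_HZ = 30          # target odometry publish rate

updates_per_second = 1.0 / (TRANSFER_TIME_S * TRANSFERS_PER_UPDATE)
print(f"max encoder read rate: {updates_per_second:.0f} Hz")   # ~1667 Hz
print(f"headroom over {PUBLISH_RATE_HZ} Hz publishing: "
      f"{updates_per_second / PUBLISH_RATE_HZ:.0f}x")          # ~56x
```

So even reading both encoders per update, the bus supports roughly 1700 encoder pairs per second, about 56x the 30 Hz publish rate.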

2024-01-03

#SLAM_Toolbox Mapping w/o Localization

Got slam_toolbox mapping working with localization turned off, and also figured out how to serve up a better-looking map from my house floor plan.

The weakness of dead reckoning #odometry becomes very evident with the floorplan map.

Next up - #DeadReckoning #navigation with #ros2humble #nav2 and #map_server
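Why dead reckoning drifts is easy to see in the standard differential-drive pose update: any error in a wheel displacement feeds into the heading, and the heading then corrupts every later translation. A minimal sketch of that update (the wheel_base and displacements below are illustrative, not GoPiGo3 values):

```python
import math

def dead_reckon_step(x, y, theta, d_left, d_right, wheel_base):
    """Advance a differential-drive pose (x, y in meters, theta in
    radians) by one pair of wheel displacements, using the common
    midpoint-heading model."""
    d_center = (d_left + d_right) / 2.0          # forward travel
    d_theta = (d_right - d_left) / wheel_base    # heading change
    # translate along the average heading of the step
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta

# small per-step encoder errors accumulate into theta forever,
# which is how 60 m of travel ends up 0.4 m off in X and Y
x, y, theta = dead_reckon_step(0.0, 0.0, 0.0, 1.0, 1.0, 0.2)
print(x, y, theta)  # 1.0 0.0 0.0 for a straight 1 m move
```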

Image: rviz2 display of walls and occupancy-grid open areas detected with IR intensity sensors.
Image: rviz2 display of the house floor plan with the dead-reckoning path of the robot.
Image: ROS 2 slam_toolbox map of the main areas of the home, built from IR intensity sensors with dead-reckoning odometry (no localization possible with the limitations of the IR intensity sensors). Shows odometry off by 0.4 meters in X and Y at the end of a 25-minute, 60-meter trip around the area.
2023-12-29

#Create3 #IRsensor SLAM - First Attempt

My ir2scan node is working well, but setting the 52 #slam_toolbox params is a mystery.

My create3_navigation slam.launch.py (using #Turtlebot4 LIDAR params modified for the Create3 IR "distance" sensor scan), with Wali facing a wall, maps a wall!

(bouncing around in angle in front of bot...)

As I proceed to drive Wali around the room, more walls enclosing a "known to be open" space are mapped, but not where the sensors and #odometry put them.

Image: rviz2 application showing the path of the Create3 robot (Wali) and sensed walls, with a tiny, far-from-reality map produced by the slam_toolbox package.
Image: Create3 robot, named Wali, scanning a wall with infrared return-intensity sensors.
2022-12-01

Anyone aware of any studies of #odometry accuracy of #encoder only versus encoder with #IMU using #EKF?

The #iRobot #Create3 even fuses encoders with an IMU and an #optical flow sensor.

Does that suggest I should not expect great improvement with only encoders and IMU?
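I don't have a study to point to, but the usual intuition is that encoders constrain translation well while a gyro constrains heading, which is exactly where dead reckoning hurts most. A crude stand-in for a proper EKF (e.g. robot_localization's ekf_node) is a complementary blend of the two yaw rates; the alpha and dt values below are illustrative:

```python
def fuse_heading(enc_yaw_rate, gyro_yaw_rate, theta, dt, alpha=0.9):
    """Blend encoder- and gyro-derived yaw rates (rad/s), weighting
    the gyro by alpha, then integrate one step into heading theta.
    A complementary-filter sketch, not the Create3's actual fusion."""
    rate = alpha * gyro_yaw_rate + (1.0 - alpha) * enc_yaw_rate
    return theta + rate * dt

# e.g. wheels report no turn (slipping?) while the gyro sees 0.5 rad/s
theta = fuse_heading(0.0, 0.5, theta=0.0, dt=0.02)
print(theta)  # 0.009 with the default alpha=0.9
```

Even this toy blend suggests the answer: encoders+IMU mainly buys heading accuracy, so improvement depends on how much of your error is rotational.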

2021-06-27

Video De-shaker Software Measures Linear Rail Quality

Here's an interesting experiment that attempts to measure the quality of a linear rail by using a form of visual odometry, accomplished by mounting a camera on the rail and analyzing the video with open-source software usually used to stabilize shaky video footage. No linear rail is perfect, and it should be possible to measure the degree of imperfection by recording video footage while the camera moves down the length of the rail, and analyzing the result. Imperfections in the rail should cause the video to sway a proportional amount, which would allow one to characterize the rail's quality.

To test this idea, [Saulius] attached a high-definition camera to a linear rail, pointed the camera towards a high-contrast textured pattern (making the resulting video easier to analyze), and recorded video while moving the camera across the rail at a fixed speed. The resulting video gets fed into the Deshaker plugin for VirtualDub, of which the important part is the deshaker.log file, which contains X, Y, rotate, and zoom correction values required to stabilize the video. [Saulius] used these values to create a graph characterizing the linear rail's quality.
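Turning that log into a quality graph is mostly parsing. A sketch of the idea, assuming (this layout is an assumption; check your own deshaker.log) each line holds frame number, pan X, pan Y, rotation, and zoom, whitespace-separated:

```python
def rail_sway(log_lines):
    """Report peak-to-peak vertical sway, in pixels, from
    Deshaker-style log lines.  Assumed column layout (verify
    against your deshaker.log): frame, pan X, pan Y, rot, zoom."""
    ys = []
    for line in log_lines:
        fields = line.split()
        if len(fields) < 5:
            continue                 # skip blank or partial lines
        try:
            ys.append(float(fields[2]))  # pan Y correction, pixels
        except ValueError:
            continue                 # skip headers / non-numeric rows
    return max(ys) - min(ys) if ys else 0.0

sample = [
    "1  0.12  -0.30  0.001  1.000",
    "2  0.10  -0.25  0.000  1.000",
    "3  0.08   0.15  0.002  1.000",
]
print(rail_sway(sample))  # 0.45
```

Plotting the per-frame pan values instead of just the peak-to-peak would give the quality graph described above.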

It's a clever proof of concept, especially in how it uses no special tools and leverages a video stabilizing algorithm in an unusual way. However, the results aren't exactly easy to turn into concrete, real-world measurements. Turning image results into micrometers is a matter of counting pixels, and for this task video stabilizing is an imperfect tool, since the algorithm prioritizes visual results instead of absolute measurements. Still, it's an interesting experiment, and perfectly capable of measuring rail quality in a relative sense. Can't help but be a bit curious about how it would profile something like these cardboard CNC modules.

#cnchacks #videohacks #cnc #linearrail #odometry #stabilization #video #virtualdub
