Mobile Robot Localization Using Particle Filters
Below is a particle filter implementation in MATLAB that allows a robot to localize itself on a map. The robot is the blue dot traversing the obstacle-free path. The hollow green circle centers over the robot when the algorithm has high confidence that it has localized itself on the map. The pink stars represent features that the robot uses to localize itself as it moves around. The path was generated with MATLAB's robotics package.
Below are three videos showing the performance with 100, 500, and 1000 particles. This project was part of my Advanced Mobile Robotics class.
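The MATLAB code itself isn't shown here, but the core loop of a landmark-based particle filter is short. The following is a minimal Python sketch of one predict/weight/resample cycle; the function name, motion model, and noise values are illustrative assumptions, not the project's actual code.

```python
import math
import random

def particle_filter_step(particles, weights, control, landmarks, ranges, sigma=0.5):
    """One predict/weight/resample cycle for 2D landmark-based localization.

    particles: list of [x, y, heading] pose hypotheses
    control:   (v, w) forward velocity and turn rate over one time step
    landmarks: list of (x, y) known map features (the "pink stars")
    ranges:    measured distance to each landmark
    """
    v, w = control
    n = len(particles)
    # Predict: propagate each particle through a noisy unicycle motion model.
    for p in particles:
        p[2] += w + random.gauss(0, 0.05)
        p[0] += v * math.cos(p[2]) + random.gauss(0, 0.1)
        p[1] += v * math.sin(p[2]) + random.gauss(0, 0.1)
    # Weight: likelihood of each measured range given the particle's pose.
    for i, p in enumerate(particles):
        for (lx, ly), r in zip(landmarks, ranges):
            d = math.hypot(p[0] - lx, p[1] - ly)
            weights[i] *= math.exp(-0.5 * ((d - r) / sigma) ** 2)
    total = sum(weights) or 1e-300
    weights = [wt / total for wt in weights]
    # Resample: draw a new particle set in proportion to the weights.
    new = random.choices(particles, weights=weights, k=n)
    return [list(p) for p in new], [1.0 / n] * n
```

Repeating this cycle as the robot moves collapses the particle cloud onto the true pose, which is when the green "high confidence" circle would snap onto the robot.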
Purpose of project:
The first thing people usually ask me is "What does your robot do?" Throughout my career, I've always built robots to implement and explore algorithms that are new to me. Fred-E is a research robot no different from anything I've built before, except that it is at a large enough scale to do actual household work. This promotes learning while implementing useful real-world use cases.
Through Fred-E I am currently implementing and deepening my knowledge of topics such as robot modeling, controls, obstacle avoidance, path planning, point clouds, stereo vision, image processing, and robot perception. Fred-E has also increased my exposure to ROS, Gazebo, MoveIt, V-REP, and, most importantly, crossing the reality gap.
Future plans for Fred-E: implementing SLAM and exploring machine learning.
Custom Servo Design
In order to deliver the required torque and keep cost at bay, Fred-E's arm was custom designed from scratch. The servos have both absolute position feedback and relative encoder feedback, allowing for different arm control implementations and avoiding the use of limit switches for positioning. The arm can lift 300 kg-cm at the shoulder and spin at 23 RPM; at the elbow joint servo, the torque is halved to double the speed. The shell design uses similar parts and construction to shorten the design and prototyping life cycle. The aim was to design and build this arm in eight weeks. Roboteq SDC 2130 motor controllers power the four custom servos actuating the first four degrees of freedom of the arm; the last two degrees of freedom use giant servos encapsulated in shells with potentiometers to provide live feedback for control.
After adding a second arm and rewiring the entire robot, I took Fred-E out for a quick spin to evaluate how the robot's reach around the appliances feels. The robot was controlled with a PS3 joystick in open-loop mode. In open loop, the joints don't maintain their position, and you can see it is very hard to control all the joints simultaneously with the PS3 controller.
Learning an Optimal Path for Navigating a Map
In this project, the robot uses Q-learning to attain an optimal policy by which it can traverse from any point on the map to the terminal state, indicated by the pink ball.
To achieve this, the world is first discretized, and the robot goes through its learning process using a grid model that represents the V-REP simulation world. After the Q-learning algorithm has converged, the Q-table is used in the simulation to allow the robot to create the optimal path plan regardless of where it is placed on the map.
By using a separate representative model that has no physics engine and does not simulate the actual robot motion, the time needed to attain the optimal policy can be shrunk to less than an hour.
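The grid-model training loop described above can be sketched in a few lines. This is a generic tabular Q-learning sketch in Python, not the project's code; the reward values, learning rate, and four-connected action set are illustrative assumptions.

```python
import random

def q_learning(grid, goal, episodes=2000, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning on a discretized map.

    grid: set of free (row, col) cells; goal: the terminal cell.
    Returns Q[(cell, action)] from which a greedy policy is read off.
    """
    actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
    Q = {(s, a): 0.0 for s in grid for a in range(4)}
    for _ in range(episodes):
        s = random.choice(sorted(grid - {goal}))   # start anywhere on the map
        while s != goal:
            # Epsilon-greedy action selection balances exploration/exploitation.
            a = random.randrange(4) if random.random() < eps else \
                max(range(4), key=lambda i: Q[(s, i)])
            dr, dc = actions[a]
            s2 = (s[0] + dr, s[1] + dc)
            if s2 not in grid:                      # hitting a wall: stay in place
                s2 = s
            r = 10.0 if s2 == goal else -1.0        # step cost drives short paths
            best_next = 0.0 if s2 == goal else max(Q[(s2, i)] for i in range(4))
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2
    return Q
```

Once converged, following the greedy action `argmax_a Q[(s, a)]` from any cell traces an optimal path to the goal, which is exactly what the Q-table is used for in the simulation.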
The actual simulation takes place in V-REP using a differential-drive robot, sonar for obstacle avoidance, and an overhead camera for localization.
For this project to reach its next level, it needed some crucial upgrades: redistribution of the batteries to better balance the weight and improve traction, better wire management and mounting of all the onboard electronics, and, finally, the addition of a second arm, essential to enabling the robot to perform useful tasks around the house. I redistributed all the electronics, created new wire harnesses, upgraded the arm design with minor improvements, and assembled a second arm, as can be seen in the slideshow to the left.
5 D.O.F Spray Paint Manipulator Modeling and Control
As part of a robot modeling class, I designed this 5 D.O.F manipulator in SolidWorks and imported the model into V-REP for a spray paint simulation.
Since this model was designed from scratch, the DH parameters and forward/inverse kinematics had to be obtained first. Once the model was properly set up in V-REP, a simple controller was written in C++ to remotely move the arm over the spray paint area.
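Once a DH table exists, forward kinematics is just a chain of standard link transforms. Here is a small Python sketch of that composition (the project's controller was in C++; this is a generic illustration assuming an all-revolute arm with standard DH conventions).

```python
import math

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg link transform as a 4x4 row-major matrix."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]

def mat_mul(A, B):
    """4x4 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward_kinematics(joint_angles, dh_table):
    """Compose the link transforms; returns the end-effector pose matrix.

    dh_table rows are (d, a, alpha) per joint; theta comes from the joint
    angles (all-revolute arm assumed). The translation column of the result
    is the tool position in the base frame.
    """
    T = [[float(i == j) for j in range(4)] for i in range(4)]  # identity
    for theta, (d, a, alpha) in zip(joint_angles, dh_table):
        T = mat_mul(T, dh_transform(theta, d, a, alpha))
    return T
```

For a two-link planar arm with unit links, `forward_kinematics([0, 0], ...)` places the tip at x = 2, a quick sanity check before trusting the model in simulation.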
Buoy detection was approached in two different ways. We first solved the problem by creating a univariate Gaussian model extracted from a training set, and then by using Gaussian mixture models to extract the mean and variance of the buoys. In both approaches, the Gaussians are used to generate a probability map upon which thresholding takes place. Blobs of color that have high probability are examined for their size and shape, and then a contour is drawn around the centroid of the selected blob.
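The univariate-Gaussian approach reduces to three steps: fit a mean and variance to training pixels, score every image pixel under that Gaussian, and threshold. A minimal single-channel Python sketch (illustrative only; the project worked on color images and also used mixture models):

```python
import math

def fit_gaussian(samples):
    """Fit a univariate Gaussian (mean, variance) to training pixel values."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    return mean, var

def probability_map(image, mean, var):
    """Per-pixel likelihood of each value under the fitted Gaussian."""
    norm = 1.0 / math.sqrt(2 * math.pi * var)
    return [[norm * math.exp(-0.5 * (px - mean) ** 2 / var) for px in row]
            for row in image]

def segment(prob_map, threshold):
    """Binary mask of pixels whose likelihood exceeds the threshold."""
    return [[1 if p > threshold else 0 for p in row] for row in prob_map]
```

The resulting mask is where the size/shape checks and centroid contour would then be applied.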
This video is a timelapse; the arm is being controlled in closed-loop mode with a PS3 joystick. I had set the resolution to a tenth of the default mode I usually run in. The result, as can be seen, is very accurate and very steady motion control of the arm.
Arm Modeling and Design
The arm has six degrees of freedom, and the link lengths are designed to resemble those of a human arm, roughly 50 cm without a gripper. The intention is to make Fred-E fit into a human environment as much as possible.
The parts were modeled in SolidWorks and 3D printed on a Prusa MK3 in white PLA. The shells are very solid and are fortified with aluminum or steel in certain places to reduce vibrations at the tip.
Improvements can be made to this design to make it lighter in order to increase the payload it can carry. The design can also be improved to make it more maintenance-friendly; currently, mechanical maintenance can be very intrusive and time-consuming.
Open-Loop Controls Layer Testing
Recently I started revamping Fred-E's software architecture: cleaning up, refactoring, and putting in place a distributed ROS architecture. In this video I am testing the new motor controller drivers, which fully configure all onboard RoboteQ motor controllers, as well as the distributed architecture, where some ROS nodes run on board the robot and others on a base desktop. I'm controlling all joints through a PS3 controller to get a feel for how well the mechanics and control system respond to the given commands.
This project is a straightforward implementation of lane detection. To simplify the implementation, a trapezoidal region of interest is projected in front of the vehicle, and edges are searched for in that region. Hough lines are then extracted under certain conditions to ensure that the lines selected represent actual lanes. The curvature of the road is then estimated by simply measuring how far the center of the image moves from either detected lane. Road curvature is indicated by changing the lane color to green on the corresponding side; when the road is straight, both lanes are kept red.
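After Hough extraction, the remaining logic is mostly geometry: split segments into left/right lane candidates by slope sign (image y grows downward, so the left lane slopes negative) and measure the image-center offset from the lane midpoint. A Python sketch of that post-processing, with illustrative thresholds rather than the project's actual values:

```python
def classify_lanes(segments):
    """Split Hough segments into left/right lane candidates by slope sign.

    segments: list of (x1, y1, x2, y2) in image coordinates (y down).
    Near-horizontal segments are rejected as non-lane edges.
    """
    left, right = [], []
    for x1, y1, x2, y2 in segments:
        if x2 == x1:
            continue                      # skip vertical degenerate slope
        slope = (y2 - y1) / (x2 - x1)
        if abs(slope) < 0.3:              # too flat to be a lane boundary
            continue
        (left if slope < 0 else right).append((x1, y1, x2, y2))
    return left, right

def center_offset(left, right, img_width):
    """Distance from the image center to the midpoint of the two lanes.

    A large offset on one side is read as the road curving that way;
    near zero means the road is straight (both lanes stay red).
    """
    def bottom_x(seg):                    # x where the segment is lowest in the image
        x1, y1, x2, y2 = seg
        return x1 if y1 > y2 else x2
    lx = sum(bottom_x(s) for s in left) / len(left)
    rx = sum(bottom_x(s) for s in right) / len(right)
    return img_width / 2 - (lx + rx) / 2
```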
At this point, both manipulators are integrated with MoveIt. There is more work to be done on calibration; however, here is a quick video of how the system currently operates.
This was a proof of concept in its early stages, aimed at improving quality control without creating a bottleneck at the beginning of the assembly line. The robot would be loaded with a stack of glass in the bin on the left; the arm would then pick the top piece and, after a scan, place it in the fail or pass bin. In the video the robot is completely autonomous; the glass scanning device is not visible, but it towers about a foot above the glass.
The pick-and-place arm zeros its position, then makes sure there are no leftover pieces in the pass and fail bins; scanning then commences if there is glass loaded in the left bin. A small vacuum pump engages and releases for pickup and drop-off.
Touch Panel Tester
This is another proof of concept, also in its early stages. The idea was to speed up regression testing for different firmware releases and free testers to do more important work. After an update, the robot would automatically run on different panels carrying concurrent firmware releases, allowing test results to come out faster and tests to run overnight if needed. The arm would go over pre-programmed button locations, and the output of the panel would then be compared to the expected output for a pass or fail.
Traffic Sign Detection and Classification
By using a combination of HSV thresholding, MSER, and morphological operators, one can create clean blobs for image segmentation. A region of interest and aspect ratio are used alongside the expected size of the blob to filter out false positives. The extracted blob coordinates are used to extract a sub-RGB image representing the sign. This small image is passed to a trained support vector machine, which identifies the detected sign. The output of the SVM is then pasted next to the actual detected sign for comparison. Note that the SVM was not trained to detect all street signs for this project.
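The false-positive filtering stage is the easiest part to make concrete: each candidate blob's bounding box is checked for size, aspect ratio, and position inside the region of interest before anything is sent to the classifier. A Python sketch of that gate, with thresholds that are illustrative assumptions rather than the project's tuned values:

```python
def filter_sign_candidates(blobs, img_w, img_h,
                           min_area=400, max_aspect=1.4,
                           roi_top=0.0, roi_bottom=0.6):
    """Reject segmentation blobs that cannot be traffic signs.

    blobs: list of bounding boxes (x, y, w, h). Signs are assumed to be
    roughly square (aspect ratio near 1), above a minimum pixel area, and
    inside an ROI covering the upper part of the frame where signs appear.
    Surviving boxes would be cropped out and passed to the trained SVM.
    """
    kept = []
    for x, y, w, h in blobs:
        aspect = max(w, h) / min(w, h)
        in_roi = roi_top * img_h <= y <= roi_bottom * img_h
        if w * h >= min_area and aspect <= max_aspect and in_roi:
            kept.append((x, y, w, h))
    return kept
```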
INTELLIGENT GROUND VEHICLE COMPETITION (IGVC)
The Intelligent Ground Vehicle Competition (IGVC) is where it all started for me with autonomous vehicles. Below is the Exploratory Land Vehicle Intelligent System (E.L.V.I.S) that I worked on in 2008 alongside a few classmates at the City College of New York.
Overall, I developed the obstacle avoidance and handled all of the mechanical and electrical work.
E.L.V.I.S finished in 4th place out of 50 schools, moving CCNY up from its 20th-place finish the previous year.
The robot was the collaborative effort of four students: Igor Labutov, Pyong Cho, Ricardo Chincha, and myself.