Mobile Robot Localization Using Particle Filters
Below is a particle filter implementation in MATLAB that allows a robot to localize itself on a map. The robot is the blue dot traversing the obstacle-free path. The hollow green circle centers over the robot when the algorithm has high confidence that it has localized itself on the map. The pink stars represent features that the robot uses to localize itself as it moves around. The path was generated by MATLAB's robotics package.
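The MATLAB source isn't reproduced here, but the core predict-weight-resample loop can be sketched in Python. This is a minimal illustration using range measurements to known landmark features (the "pink stars"), not the actual implementation; the noise levels and landmark layout are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, control, landmarks, measured_dists,
                         motion_noise=0.05, sensor_noise=0.2):
    """One predict-update-resample cycle for 2-D localization."""
    # Predict: apply the odometry control to every particle, plus motion noise.
    particles = particles + control + rng.normal(0, motion_noise, particles.shape)
    # Update: weight each particle by how well its predicted distances
    # to the landmarks match the measured distances.
    for lm, z in zip(landmarks, measured_dists):
        d = np.linalg.norm(particles - lm, axis=1)
        weights = weights * np.exp(-0.5 * ((d - z) / sensor_noise) ** 2)
    weights = weights + 1e-300            # guard against all-zero weights
    weights = weights / weights.sum()
    # Resample: draw particles in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Toy run: true robot at (1, 1), three landmarks at known positions.
landmarks = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])
truth = np.array([1.0, 1.0])
particles = rng.uniform(0, 2, size=(500, 2))
weights = np.full(500, 1 / 500)
for _ in range(10):
    z = np.linalg.norm(landmarks - truth, axis=1)   # range readings
    particles, weights = particle_filter_step(particles, weights,
                                              np.zeros(2), landmarks, z)
estimate = particles.mean(axis=0)                   # converges near (1, 1)
```

The cloud of particles collapses onto the true pose as the range measurements repeatedly down-weight inconsistent hypotheses, which is what the shrinking green circle in the videos reflects.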


Below are three videos showing the performance with 100, 500, and 1,000 particles. This project was part of my Advanced Mobile Robotics class.


Learning an Optimal Path for Navigating a Map
In this project, the robot uses Q-learning to attain an optimal policy by which it can traverse from any point on the map to the terminal state indicated by the pink ball.


To achieve this, the world is first discretized and the robot goes through its learning process using a grid model that represents the V-REP simulation world. After the Q-learning algorithm has converged, the Q-table is used in the simulation to let the robot create the optimal path plan regardless of where it is placed on the map.
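A minimal Python sketch of tabular Q-learning on a small grid model is below. The grid size, rewards, and hyperparameters are illustrative, not those of the actual project; the point is the update rule and the greedy rollout from the converged Q-table.

```python
import numpy as np

rng = np.random.default_rng(1)

# 4x4 grid; state 15 (bottom-right) is the terminal "pink ball" cell.
N, GOAL = 4, 15
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]          # up, down, left, right

def step(state, action):
    """Deterministic grid transition with wall clipping."""
    r, c = divmod(state, N)
    dr, dc = ACTIONS[action]
    r2, c2 = min(max(r + dr, 0), N - 1), min(max(c + dc, 0), N - 1)
    s2 = r2 * N + c2
    return s2, (10.0 if s2 == GOAL else -1.0), s2 == GOAL

Q = np.zeros((N * N, len(ACTIONS)))
alpha, gamma, eps = 0.5, 0.9, 0.2
for _ in range(2000):                       # training episodes, random starts
    s = int(rng.integers(N * N - 1))
    done = False
    while not done:
        a = int(rng.integers(4)) if rng.random() < eps else int(Q[s].argmax())
        s2, r, done = step(s, a)
        # Standard Q-learning update toward the bootstrapped target.
        Q[s, a] += alpha * (r + gamma * Q[s2].max() * (not done) - Q[s, a])
        s = s2

# Greedy rollout from the top-left corner using the learned Q-table.
s, path = 0, [0]
while s != GOAL and len(path) < 50:
    s, _, _ = step(s, int(Q[s].argmax()))
    path.append(s)
```

Because the rollout only reads the Q-table, the starting cell can be anywhere on the grid, which mirrors how the converged table is reused in the V-REP simulation regardless of where the robot is placed.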


By using a separate representative model that has no physics engine and does not simulate the actual robot motion, the time needed to obtain the optimal policy can be reduced to less than an hour.


The actual simulation takes place in V-REP using a differential drive robot, sonar for obstacle avoidance, and an overhead camera for localization.



Probabilistic Robotics 

Robot Modeling

Touch Panel Tester
This is another proof of concept, also in its early stages. The idea was to speed up regression testing of different firmware releases and free testers for more important work. After an update, the robot would automatically run on different panels carrying concurrent firmware releases, allowing test results to come out faster and tests to run overnight if needed. The arm would move over pre-programmed button locations; the output of the panel would then be compared to the expected output for a pass or fail.
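The pass/fail comparison loop could be sketched as follows. `FakePanel` and its `press` interface are hypothetical stand-ins for the real panel driver, and the firmware behavior is invented for the example.

```python
class FakePanel:
    """Stand-in for the real panel driver (hypothetical interface)."""
    def __init__(self, firmware):
        self.firmware = firmware

    def press(self, xy):
        # The real robot would tap the physical panel at xy and read back
        # the panel's output; here we simulate a deterministic response.
        return f"screen@{xy}" if self.firmware >= 2 else "error"

def run_regression(panel, test_points):
    """Press each pre-programmed button location and compare the panel's
    output with the expected output for a pass/fail verdict."""
    return [(xy, panel.press(xy) == expected) for xy, expected in test_points]

tests = [((10, 20), "screen@(10, 20)"), ((30, 40), "screen@(30, 40)")]
report = run_regression(FakePanel(firmware=2), tests)
```

Running the same `tests` list against every firmware release is what makes the overnight, unattended regression runs possible.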



5 D.O.F Spray Paint Manipulator Modeling and Control
As part of a robot modeling class, I designed this 5 D.O.F manipulator in SolidWorks and imported the model into V-REP for a spray paint simulation.


Since this model was designed from scratch, the DH parameters and forward/inverse kinematics had to be derived first. Once the model was properly set up in V-REP, a simple controller was written in C++ to remotely move the arm over the spray paint area.
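The forward kinematics step amounts to chaining one homogeneous transform per link built from its DH parameters. The Python sketch below uses a hypothetical two-link planar DH table, not the actual manipulator's parameters:

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform for one link from standard DH parameters."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def forward_kinematics(joint_angles, dh_table):
    """Chain the per-link transforms; returns the end-effector pose."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_table):
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# Hypothetical 2-link planar arm with unit link lengths:
dh_table = [(0.0, 1.0, 0.0),   # (d, a, alpha) for link 1
            (0.0, 1.0, 0.0)]   # (d, a, alpha) for link 2
T = forward_kinematics([np.pi / 2, 0.0], dh_table)
tip = T[:3, 3]                 # end-effector position, here (0, 2, 0)
```

The same chaining extends directly to five joints once the real DH table is known; the inverse kinematics is then solved against the resulting end-effector pose.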


INTELLIGENT GROUND VEHICLE COMPETITION (IGVC)

The Intelligent Ground Vehicle Competition (IGVC) is where it all started for me with autonomous vehicles. Below is the Exploratory Land Vehicle Intelligent System (E.L.V.I.S) that I worked on in 2008 alongside a few classmates at the City College of New York. 


I developed the obstacle avoidance and handled all of the mechanical and electrical work.

E.L.V.I.S held first place all day at the competition and was finally pushed down to 4th place out of 50 schools thirty minutes before the end, which helped our school move up from 20th place the previous year to 4th.

The robot was the collaborative effort of four students: Igor Labutov, Pyong Cho, Ricardo Chincha, and myself.


 


Mobile Robotics

Glass Scanner

This was a proof of concept in its early stages, meant to improve quality control without creating a bottleneck at the beginning of the assembly line. The robot would be loaded with a stack of glass in the bin on the left; the arm would then pick the top piece and, once a scan was done, place it in the pass or fail bin. In the video the robot is completely autonomous; the glass scanning device is not visible, but it towers about a foot above the glass.

The pick-and-place arm zeros its position and makes sure there are no leftover pieces in the pass/fail bins; scanning then commences if glass is loaded in the left bin. A small vacuum pump engages for pickup and releases for drop-off.
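That sequence can be sketched as a simple cycle. The function names, bin layout, and scan callback below are illustrative, not the actual control code:

```python
def run_cycle(stack, bins_clear, scan):
    """stack: pieces loaded in the left bin (top of stack last);
    scan: callable returning True (pass) or False (fail) for a piece."""
    bins = {"pass": [], "fail": []}
    trace = ["zero"]                     # arm zeros its position first
    if not bins_clear:                   # leftover pieces block the cycle
        trace.append("abort")
        return trace, bins
    while stack:                         # scan only while glass is loaded
        piece = stack.pop()              # vacuum pump engages: pick top piece
        result = "pass" if scan(piece) else "fail"
        bins[result].append(piece)       # pump releases: drop in pass/fail bin
        trace.append(result)
    return trace, bins

# Example: three pieces, piece 2 fails its scan.
trace, bins = run_cycle([1, 2, 3], bins_clear=True, scan=lambda p: p != 2)
```

The up-front bin check is what lets the cell run unattended: a blocked drop-off aborts the cycle instead of stacking glass on glass.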

Robot Perception

Lane Detection

This project is a straightforward implementation of lane detection. To simplify the implementation, a trapezoidal region of interest is projected in front of the vehicle and edges are searched for within that region. Hough lines are then extracted under certain conditions to ensure that the selected lines represent actual lanes. The curvature of the road is then estimated by simply measuring how far the center of the image moves from either detected lane. Road curvature is indicated by changing the lane color to green on the corresponding side; when the road is straight, both lanes are kept red.
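The curvature/offset step can be sketched as below, assuming the Hough stage has already produced a bottom-of-image x position for each lane line. The tolerance value and which side turns green for a given offset sign are illustrative guesses, not taken from the original code:

```python
def lane_colors(left_x, right_x, img_width, tol=20):
    """Estimate road curvature from how far the image center drifts from
    the lane midpoint; color the lane on the curving side green."""
    center = img_width / 2.0
    offset = center - (left_x + right_x) / 2.0   # >0: lanes shifted left in frame
    if offset > tol:                             # illustrative convention
        return {"left": "red", "right": "green"}
    if offset < -tol:
        return {"left": "green", "right": "red"}
    return {"left": "red", "right": "red"}       # straight road: both red
```

For example, with a 1280-pixel-wide frame, lanes detected at x = 300 and x = 900 put the lane midpoint left of the image center, so one side is flagged green, while symmetric lanes at 340 and 940 keep both red.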

Reinforcement Learning