We created an autonomous robotic arm designed to detect and pick up simple objects in the area in front of it. The robot first performs a fast scan by rotating an IR distance sensor to detect whether there are any objects in front of it. If an object is found, the robot performs another scan to estimate the object's Cartesian coordinates more accurately. With this information, it attempts to pick the object up with its claw and move it to the trash bin.
This is how it works:
The arm itself has five Dynamixel AX-12 servos and one AX-18, providing four degrees of freedom. At its base it has a Sharp GP2D12 IR distance sensor. The servos are commanded directly from a laptop running a Python script. The script gets its input from an Arduino board, which digitizes the analog output of the IR sensor. Outlier rejection and averaging make the signal more reliable, and the filtered signal is converted into distance using interpolating functions determined from a calibration test. The distance data is then processed to estimate the size of the object, and the object's location and size are used by the program to command the arm to grasp and lift it. To achieve this, a forward kinematic model was first validated, followed by the implementation of the inverse kinematic model in a Python function. The claw is 3D printed and adapted from an open source project.
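For a planar reach like this one, the inverse kinematic step boils down to the classic two-link, law-of-cosines solution. Here is a minimal sketch in Python (the link lengths and names are illustrative, not the actual values from my script):

```python
import math

def inverse_kinematics(x, y, l1=0.15, l2=0.12):
    """Return (shoulder, elbow) angles in radians for a 2-link planar arm.

    (x, y) is the target point in the arm's plane; l1 and l2 are the
    link lengths in metres (placeholder values, not the real arm's).
    """
    d2 = x * x + y * y
    # Law of cosines gives the elbow angle directly.
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(cos_elbow) > 1:
        raise ValueError("target out of reach")
    elbow = math.acos(cos_elbow)  # elbow-down solution
    # Shoulder angle: direction to target minus the offset caused
    # by the bent elbow.
    shoulder = math.atan2(y, x) - math.atan2(
        l2 * math.sin(elbow), l1 + l2 * math.cos(elbow))
    return shoulder, elbow
```

Running the forward model on the returned angles is a quick sanity check: the reconstructed end-effector position should land back on the target.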
The project was a success, winning first prize at the national contest Infoeducatie 2013 and a special prize from Google. During the one-week camp I connected with people working on very interesting projects, and I got valuable feedback which I tried to implement quickly to improve my project. Here are a few pictures with the Xtion Arm and with the rest of the competitors from the Robotics and Software sections.
Since I qualified for the “Infoeducatie” National Contest, robotics section, I have started building a new arm: 1:1 scale, stronger, and capable of moving along three axes. To achieve this, I used three independent motors for the shoulder joint, one for the elbow and one for rotation (in the biceps). In all, there are five 12 V DC motors (this model), which I chose for their built-in gearbox with a powerful reduction ratio (1:131) as well as for their compactness.
Here are my sketches for the lathe work:
And here is the result:
For the skeleton, I used plexiglass as in the previous version, but this time a thicker sheet (6 mm). Here is the draft:
The following PDFs are exported from CAD. This project is open source, so feel free to cut your own parts using my sketches.
Last Christmas, I received a special gift from my parents: a sensor called Xtion from Asus, designed specifically for developers. It is both a web cam and an IR depth camera, which together provide 3D vision. It is very similar to the Kinect sensor, but Asus's version is more flexible in terms of operating system support. I am interested in working in Linux (Ubuntu), so this is the perfect sensor for me.
As a beginner in this field, I started with various tutorials about Computer Vision and Body Tracking (OpenNI). After several weeks of practice, I decided how to use the sensor: I built a robotic arm which mirrors the movements of my physical arm in real time. I called it “The Xtion Arm”.
Here is my first version:
It is made of plexiglass (4 mm) and has two servos for the two joints (elbow and shoulder).
My movements are recorded by the Asus Xtion and processed in Processing, which transfers the coordinates of the two joints through a specific port to the Arduino application; there, the 3-axis coordinates are converted into angles. These values are sent to the Arduino board, which commands the servos.
In order for the software to perform this, I have written three applications. The first one, “the Server”, is developed in Processing and gets the coordinates in “real world” dimensions. The second application is “the Client”. The 9600 bit/s transmission between Server and Client can happen either over serial (local: both arm and sensor connected to the same computer) or via the Internet (sensor + Server on computer 1, Xtion Arm + Client on computer 2). The third application runs on the Arduino; it communicates with the Client and sends values to the servos. All this code is published on my GitHub account here.
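The conversion from tracked joint coordinates to servo angles comes down to a pair of `atan2` calls. Here is a minimal 2D sketch of the idea in Python (the real pipeline does this on the Arduino side, and all names here are illustrative):

```python
import math

def joint_angles(shoulder, elbow, hand):
    """Convert tracked joint positions to two servo angles (degrees).

    Each argument is an (x, y) pair in the sensor's coordinate frame;
    this is a 2D simplification of the 3-axis data for illustration.
    """
    # Shoulder servo: angle of the upper arm relative to horizontal.
    a1 = math.degrees(math.atan2(elbow[1] - shoulder[1],
                                 elbow[0] - shoulder[0]))
    # Elbow servo: angle of the forearm relative to the upper arm.
    a2 = math.degrees(math.atan2(hand[1] - elbow[1],
                                 hand[0] - elbow[0])) - a1
    return a1, a2
```

For example, with the upper arm horizontal and the forearm pointing straight up, this yields a shoulder angle of 0° and an elbow angle of 90°.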
Two months ago, I was accepted for an internship at Proenerg (a company which focuses on power generators). Initially, I was there just to see how a company is run, but quite soon I became part of their workshop team.
After only one week, I got involved in one of their new projects: a module for a generator that would charge batteries more effectively. The project was ordered by one of their most important clients, who wanted 100 modules with generators. I decided to jump in and design a prototype based on an Arduino board.
Batteries generally have a certain voltage when fully charged (e.g. 24 V) and a lower one when discharged (e.g. 20 V). Starting from this fact, my prototype was meant to meet the following specification:
When the voltage reaches the lower threshold, the module should close a relay contact to start the generator, which is connected to a charger that fully recharges the batteries.
The module would monitor the batteries and would run the following algorithm:
If (V_min < V_monitored < V_max): do nothing;
When (V_monitored <= V_min): close the relay;
When (V_monitored >= V_max): release the relay;
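The three rules above form a simple hysteresis loop: once the relay closes, it stays closed until the batteries are full. A sketch of the same logic in Python (the real module runs the equivalent code on the Arduino; the thresholds are the example values from above):

```python
V_MIN, V_MAX = 20.0, 24.0  # discharge / full-charge thresholds (volts)

def update_relay(v_monitored, relay_closed):
    """One step of the monitoring loop: return the new relay state."""
    if v_monitored <= V_MIN:
        return True          # close the relay -> start the generator
    if v_monitored >= V_MAX:
        return False         # release the relay -> batteries are full
    return relay_closed      # in between: keep the current state
```

Carrying the relay state between iterations is what keeps the generator running while the voltage climbs back through the middle band.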
After two weeks I came up with an Arduino shield which worked properly and passed the tests. Here is the algorithm.
The board (the module) has:
a voltage divider to scale the battery voltage from 24 V down to 5 V (the maximum voltage an Arduino analog input can read);
a contact relay which is closed when the generator starts;
a relay for the electric starter motor, which is closed for at most 3 seconds and only after the contact is on;
an alarm relay which closes after 5 failures in starting the generator;
a 240 V relay which checks the generator's status (whether it is running and charging the batteries or not).
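The divider ratio from the first item can be checked in a couple of lines (the resistor values here are illustrative, not the ones on the board):

```python
R1, R2 = 19e3, 5e3                # illustrative resistor values, ohms
v_batt = 24.0                     # fully charged battery voltage
v_out = v_batt * R2 / (R1 + R2)   # standard voltage divider formula
print(v_out)                      # 5.0 -> exactly the ADC's full scale
```

On the Arduino side, multiplying the reading back by `(R1 + R2) / R2` recovers the actual battery voltage.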
The client was pleased with my prototype, and the technicians and electricians helped me develop the project further. A production line was put in place and the full order of 100 modules and generators was delivered. I was very proud of this, as every single circuit board has a part of my innovation built into it.
After my success with Spy Robot, I decided to combine all my projects into one to create an Android. The goal of this project is to control a human-like robot. The idea comes from my pan & tilt system, which imitates a human neck: if I place an IMU sensor on my head, it sends my head's orientation to the servos that control the camera. In this way, the Android's head (camera) moves simultaneously with my own, recording and streaming images in real time, which lets me collect data from wherever the Android is sent.
I divided the Android into three smaller projects:
Next year, I will design and build one of the Android's arms, which will be one of the robot's most important capabilities.
Spy Robot v2.0 is a better version in two respects: its features and its platform.
It keeps the main functions of the previous version (collecting temperature, humidity and pressure data, plus video and audio streaming). Furthermore, I have attached a robotic arm so it can grasp objects when I want it to. The arm has 4 joints (6 servos), connected to a 2.4 GHz receiver and controlled by a remote control.
Everything started from the desire to explore new territories and collect data. I chose an RC car as a platform, onto which I attached an Arduino Uno board; temperature, humidity and pressure sensors; and a pan & tilt system (2 servos). This system holds a web cam in order to collect video and images through a smartphone using Skype. The pan & tilt system is controlled by a Wii nunchuck in real time. More information about this topic is in this tutorial.
The communication between Spy Robot and me goes through two Arduino boards (one on the platform, the other connected to my PC) which talk to each other via Bluetooth. The video is transferred separately using Skype. Thus, I can see what Spy Robot sees by controlling its eyes (the smartphone camera) with the Wii nunchuck, while analysing the sensor data (also in real time) on the terminal.
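Mapping the nunchuck joystick to the pan & tilt servos is essentially a clamp-and-rescale step. A sketch of the idea in Python (the actual code runs on the Arduino; the joystick range endpoints here are typical values, not my calibration):

```python
def joystick_to_servo(raw, lo=30, hi=225):
    """Map a Wii nunchuck joystick reading (~30..225) to a servo
    angle (0..180 degrees). lo/hi are typical endpoint values seen
    on these joysticks, not calibrated ones from my robot."""
    raw = max(lo, min(hi, raw))            # clamp noisy extremes
    return (raw - lo) * 180 // (hi - lo)   # rescale to servo range
```

One such mapping per axis (pan and tilt) is enough to drive the two servos directly from the joystick readings.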
Yesterday I won first prize in the regional stage of the robotics contest with my robot, qualifying for the “Infoeducatie” National Contest, robotics section, which will take place in two months. To this end, I intend to add an arm to my robot so it can grasp the objects Spy Robot sees during its explorations.