We created an autonomous robotic arm designed to detect and pick up simple objects in the area in front of it. The robot first performs a fast scan by rotating an IR distance sensor to detect whether there are any objects in front of it. If an object is found, the robot performs a second, slower scan to estimate the object's Cartesian coordinates more accurately. With this information, it attempts to pick the object up with its claw and drop it into the trash bin.
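Each sample of the rotating scan pairs a sensor angle with a measured distance, which simple trigonometry turns into Cartesian coordinates in the arm's base frame. A minimal sketch of that conversion (the function name and units are illustrative, not the actual code):

```python
import math

def scan_to_cartesian(angle_deg, distance_cm):
    """Convert one scan sample (sensor angle in degrees, IR distance
    in cm) into (x, y) coordinates in the arm's base frame."""
    theta = math.radians(angle_deg)
    return distance_cm * math.cos(theta), distance_cm * math.sin(theta)
```

Collecting these points over the sweep gives the cluster from which the object's position and size are estimated.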
This is how it works:
The arm itself has five Dynamixel AX-12 servos and one AX-18, providing four degrees of freedom. At its base there is a Sharp GP2D12 IR distance sensor. The servos are commanded directly from a laptop running a Python script. The script gets its input from an Arduino board, which digitizes the analog output of the IR sensor. Outlier detection and averaging make the signal more reliable, and the signal is converted into distance using interpolating functions determined from a calibration test. The distance data is then processed to estimate the size of the object, and the program uses the object's location and size to command the arm to grasp and lift it. To achieve this, a forward kinematic model was validated first, followed by the implementation of the inverse kinematic model as a Python function. The claw is 3D printed and adapted from an open-source project.
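The heart of the grasping step is the inverse kinematics. For a planar two-link arm this is the classic law-of-cosines computation; here is a minimal Python sketch, where the link lengths are hypothetical placeholders rather than the arm's actual dimensions:

```python
import math

def inverse_kinematics_2link(x, y, l1=0.15, l2=0.12):
    """Return (shoulder, elbow) angles in radians placing the end
    effector of a planar 2-link arm at (x, y).
    Link lengths l1, l2 are illustrative, not measured."""
    r2 = x * x + y * y
    # Law of cosines gives the elbow angle directly
    cos_elbow = (r2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= cos_elbow <= 1.0:
        raise ValueError("target out of reach")
    elbow = math.acos(cos_elbow)  # elbow-down solution
    shoulder = math.atan2(y, x) - math.atan2(
        l2 * math.sin(elbow), l1 + l2 * math.cos(elbow))
    return shoulder, elbow
```

Validating the forward kinematic model first, as described above, makes it easy to check a solution: plugging the returned angles back into the forward model should reproduce the target point.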
The last part of my build deals with electronics, which combines the mechanical movements with the software that analyses the 3D model. The electronic diagram is straightforward:
This is the RAMPS 1.4 board, which can be bought or built. It connects to an Arduino Mega board, which provides the USB interface to the computer. Besides the two boards, I also needed some capable stepper drivers. In my opinion, the parts of a printer that most need to be high quality are the hot end (I used E3D), the linear bearings (SKF) and the drivers, in this case DRV8825. They support 1/32 microstepping, which increases the resolution of movement, so the printer produces smoother parts.
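To see what 1/32 microstepping means in practice, here is a quick back-of-the-envelope calculation for a belt-driven axis. The pulley and belt values are typical GT2 assumptions, not measurements from my printer:

```python
def steps_per_mm(full_steps_per_rev=200, microsteps=32,
                 pulley_teeth=20, belt_pitch_mm=2.0):
    """Linear resolution of a belt-driven axis in steps per mm.
    Defaults assume a common 1.8-degree stepper with a GT2 belt and a
    20-tooth pulley (illustrative values, not my printer's)."""
    mm_per_rev = pulley_teeth * belt_pitch_mm
    return full_steps_per_rev * microsteps / mm_per_rev
```

With these assumed values the axis resolves 160 steps per millimetre, i.e. about 6 microns per microstep, compared to 50 microns per full step without microstepping.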
Last Christmas, I received a special gift from my parents: a sensor called Xtion from Asus, designed especially for developers. It is both a webcam and an IR (depth) camera, which together provide 3D vision. It is very similar to the Kinect sensor, but Asus's version is more flexible from the OS point of view. I am interested in working in Linux (Ubuntu), so this is the perfect sensor for me.
As I am a beginner in this field, I started with various tutorials on computer vision and body tracking (OpenNI). After several weeks of practice, I settled on how to use the sensor: I built a robotic arm that mirrors the movements of my physical arm in real time. I called it “The Xtion Arm”.
Here is my first version:
It is made of 4 mm plexiglass and has two servos for the two joints (elbow and shoulder).
My movements are recorded by the Asus Xtion and processed in a Processing sketch, which transfers the coordinates of the two joints through a serial port to the Arduino software, which converts the 3-axis coordinates into joint angles. These values are then sent to the Arduino board, which commands the servos.
To perform all this, I have written three applications. The first one, “the Server”, is developed in Processing and captures the joint coordinates in real-world dimensions. The second one is “the Client”. The 9600 bit/s transmission between Server and Client can happen either over serial (locally, with both arm and sensor connected to the same computer) or over the Internet (sensor + Server on computer 1, Xtion Arm + Client on computer 2). The third application is the Arduino sketch, which communicates with the Client and sends position values to the servos. All the code is published on my GitHub account here.
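The core of the coordinate-to-angle conversion is computing the angle at each joint from three tracked points. A minimal sketch of that computation (the function name and point format are illustrative, not the actual code from the repository):

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b formed by the points a-b-c (each an (x, y, z)
    tuple from the skeleton tracker), in degrees - the quantity that
    gets mapped onto a servo position."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(p * q for p, q in zip(v1, v2))
    n1 = math.sqrt(sum(p * p for p in v1))
    n2 = math.sqrt(sum(q * q for q in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))
```

For the elbow, for example, a, b and c would be the tracked shoulder, elbow and hand positions.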
This weekend I took part in the Raspberry Pi Hackathon. Apart from needing 15 hours of sleep to recover from the time spent building the project, it was a wonderful experience for beginners and advanced makers alike. My dad, two students from the Electrical and Electronic Engineering department of the Polytechnic University of Bucharest and I formed an effective team and entered the hardware section.
Purpose of the project: remote monitoring of a room or area where access is difficult or dangerous
Examples of use:
monitoring comfort or safety parameters through a web interface (which can also be accessed from a smartphone);
periodic readings from sensors (temperature, humidity, pressure, wind, proximity, etc.);
controlling comfort parameters at the press of a button (floor heating, water pumps, sockets/lights, etc.);
using a webcam with motion detection which sends pictures to a predefined email address.
2) professional use
centralizing and aggregating parameters from distributed sensors (e.g. scattered mini weather stations);
collecting data from mobile stations (video/audio, various sensors) using a wheeled platform (line follower) controlled by an Arduino, which communicates with the Raspberry Pi via pyFirmata.
Raspberry Pi with I2C or SPI sensors
USB Wi-Fi adapter (for communication with the mobile platform)
Arduino Mega controlling four motor drivers and additional analog and digital sensors (IR, ultrasound)
Arduino Uno controlling the relays for the sockets and lights
Two months ago, I was accepted for an internship at Proenerg (a company that focuses on power generators). Initially, I was there just to see how a company is run, but quite soon I became part of their workshop team.
After only one week, I got involved in one of their new projects: a module for a generator that would charge its batteries more effectively. The project was ordered by one of their most important clients, who wanted 100 modules with generators. I decided to jump in and design a prototype based on an Arduino board.
Batteries, in general, have a certain voltage when fully charged (e.g. 24 V) and a lower one when discharged (e.g. 20 V). Starting from this fact, my prototype was meant to have the following specifications:
When the voltage reaches the lower threshold, the module closes a relay contact to start the generator, which is connected to a charger that fully recharges the batteries.
The module would monitor the batteries and would run the following algorithm:
If (V_min < V_monitored < V_max): do nothing;
When (V_monitored <= V_min): close the relay;
When (V_monitored >= V_max): release the relay.
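The algorithm above is simple hysteresis logic: the relay state only changes when the voltage crosses one of the two thresholds. A minimal Python sketch of the same state machine, using the example values from the text (the real module runs equivalent logic on the Arduino):

```python
def update_relay(v_monitored, relay_closed, v_min=20.0, v_max=24.0):
    """One step of the battery-monitoring hysteresis loop.
    Thresholds are the example values (20 V / 24 V) from the text.
    Returns the new relay state (True = closed, generator running)."""
    if v_monitored <= v_min:
        return True       # batteries low: close the relay, start the generator
    if v_monitored >= v_max:
        return False      # batteries full: release the relay
    return relay_closed   # between thresholds: keep the current state
```

The hysteresis band between V_min and V_max is what prevents the generator from rapidly cycling on and off around a single threshold.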
After two weeks I came up with an Arduino shield that worked properly and passed the tests. Here is the algorithm.
The board (the module) has:
a voltage divider to scale the battery voltage from 24 V down to 5 V (the maximum value an Arduino analog input can read);
a contact relay which is closed when the generator starts;
a relay for the electric starter motor, which is closed for at most 3 seconds and only after the contact is on;
an alarm relay which closes after 5 failed attempts to start the generator;
a 240 V relay which checks the generator’s status (whether it is running and charging the batteries or not).
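Reading the battery voltage back through the divider is a one-line conversion on the software side. A sketch assuming illustrative resistor values that scale 24 V down to 5 V (not necessarily the values on the actual board):

```python
def battery_voltage(adc_reading, r_top=38e3, r_bottom=10e3,
                    adc_max=1023, v_ref=5.0):
    """Recover the battery voltage from a 10-bit Arduino ADC reading.
    The 38k/10k divider is an illustrative choice that maps 24 V on
    the battery to 5 V at the analog pin."""
    v_pin = adc_reading / adc_max * v_ref      # voltage at the analog pin
    return v_pin * (r_top + r_bottom) / r_bottom
```

With this divider, a full-scale reading of 1023 corresponds to a fully charged 24 V battery, and the 20 V low threshold lands comfortably inside the ADC's range.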
The client was pleased with my prototype, and the technicians and electricians helped me develop the project further. A production line was put in place and the full order of 100 modules and generators was delivered. I was very proud of this, as every single circuit board has a piece of my work built into it.
Everything started from the desire to explore new territories and collect data. I chose an RC car as a platform, on which I attached an Arduino Uno board; temperature, humidity and pressure sensors; and a pan-and-tilt system (two servos). The system has a webcam holder so it can collect video and images through a smartphone using Skype. The pan-and-tilt system is controlled in real time with a Wii Nunchuk. More information about this topic is in this tutorial.
The communication between Spy Robot and me happens via two Arduino boards (one on the platform, the other connected to my PC) which talk to each other over Bluetooth. The video is transferred separately over Skype. Thus, I can see what Spy Robot sees by controlling its eyes (the smartphone camera) with the Wii Nunchuk, while analysing the data collected from the sensors, also in real time, on the terminal.
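On the PC side, the sensor data arriving over the Bluetooth serial link has to be parsed before it can be shown on the terminal. A sketch of such a parser, assuming a simple comma-separated line format; this format is a hypothetical example, not Spy Robot's actual protocol:

```python
def parse_telemetry(line):
    """Parse one telemetry line from the on-board Arduino into a dict.
    The 'T:..,H:..,P:..' format (temperature, humidity, pressure) is a
    hypothetical example, not the robot's real protocol."""
    fields = {}
    for part in line.strip().split(','):
        key, _, value = part.partition(':')
        fields[key] = float(value)
    return fields
```

Keeping the format line-oriented and human-readable makes it easy to debug the link with any serial monitor before wiring up the parser.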
Yesterday I won first prize in the regional stage of a robotics contest with my robot, qualifying for the robotics section of the “Infoeducatie” National Contest, which takes place in two months. To that end, I intend to add an arm to my robot so it can grasp the objects Spy Robot sees during its explorations.