Simon

As automation is introduced into more and more industries, the cost of programming robots with precise movements will rise across all of them. However, modern sensors are becoming ever more pervasive (and cheaper!).

Sensors will continue to improve, and real-time systems will keep getting better at integrating sensor data. If that trend holds, robots that perform precise movements will no longer be blind machines following exact trajectories on precise (expensive!) motor actuations, but machines built around low-precision motors that self-adjust based on sensing.

With all of that in mind, we came up with the idea for this hackathon project, The Simon System. Looking back on it now, that’s kind of a cringey name, but whatever.

As usual, if you are interested, here is a link to the Devpost page for the project.

What Does Simon Do?

A brief video of a demonstration of Simon can be found here.

Basically, we wanted a human to perform some action and then have a robot mimic that action with fairly high precision. To do that with minimal complications, we built two “arenas,” one for the human action and one for the robot action.

Figure: The two arenas side by side.

On the human side there are two blocks. If the robot is not currently finishing up an action, a human user can go ahead and move the blocks around in the human arena. All of this is picked up by a phone camera. The robot then decides the most efficient path to perform that same action and performs it.
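To make that concrete, here is a minimal sketch (not our actual code) of how a recorded action can be reduced to a list of block moves, assuming the camera pipeline yields per-block (x, y) positions before and after the human interacts. The function name and tolerance value are illustrative assumptions:

```python
# Hypothetical sketch: derive the robot's goal from the observed block moves.
# Block positions are assumed to come from the overhead phone camera as
# (x, y) coordinates in normalized arena space.

def diff_block_poses(before, after, tolerance=0.02):
    """Return the blocks that moved and their start/end positions.

    before/after: dict mapping block id -> (x, y) in arena coordinates.
    tolerance: minimum displacement (in arena units) to count as a move.
    """
    moves = []
    for block_id, start in before.items():
        end = after.get(block_id)
        if end is None:
            continue  # block left the arena or was occluded
        dx, dy = end[0] - start[0], end[1] - start[1]
        if (dx * dx + dy * dy) ** 0.5 > tolerance:
            moves.append((block_id, start, end))
    return moves

# Example: block "A" slid to the right, block "B" stayed put.
before = {"A": (0.20, 0.50), "B": (0.70, 0.30)}
after = {"A": (0.45, 0.50), "B": (0.70, 0.30)}
print(diff_block_poses(before, after))  # [('A', (0.2, 0.5), (0.45, 0.5))]
```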

The LEDs around the arenas are there to provide feedback to the human users as to what is happening. For example, when the LEDs are green, you can go ahead and interact. When they are yellow (human side only), the system is recording your actions. And when they are red, don’t touch.
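The three colors suggest a simple phase cycle. Below is a minimal sketch of that cycle; the colors come straight from the description above, but the state names and the interact -> record -> execute ordering are our reading of the workflow, not a spec:

```python
from enum import Enum

class Phase(Enum):
    # LED colors are from the post; the transition cycle is an assumption.
    INTERACT = "green"    # humans may move the blocks
    RECORDING = "yellow"  # human arena only: the camera is capturing moves
    EXECUTING = "red"     # the robot is replaying the action: don't touch

NEXT_PHASE = {
    Phase.INTERACT: Phase.RECORDING,
    Phase.RECORDING: Phase.EXECUTING,
    Phase.EXECUTING: Phase.INTERACT,
}

def advance(phase: Phase) -> Phase:
    """Step the arena through one interact -> record -> execute cycle."""
    return NEXT_PHASE[phase]

phase = Phase.INTERACT
for _ in range(3):
    print(phase.name, "->", phase.value)
    phase = advance(phase)
```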

Sensing

You may have noticed that there is a phone above the robot arena as well. That’s because, as mentioned in the introduction of this post, the motors we were using for the robot were very imprecise. Turning left and turning right required different amounts of total current and time. The robot would often overshoot, and we couldn’t tell by how much.

To combat this, the movement instructions are actually streamed to the robot in real time from one of our laptops. While the instructions stream, the phone above the robot captures the robot’s trajectory and feeds it back to the computer so it can course correct.
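Roughly, the loop compares where the camera says the robot is against where the planned path says it should be, and streams a correction. Here is a hedged sketch of one such iteration; the proportional controller, the gain, and the pose format are all illustrative assumptions, not our exact implementation:

```python
import math

def heading_correction(pose, waypoint, k_p=1.5):
    """Proportional steering correction from an observed pose.

    pose: (x, y, theta) from the overhead camera, theta in radians.
    waypoint: (x, y) target from the planned path.
    Returns a turn-rate command; positive = turn left.
    """
    x, y, theta = pose
    desired = math.atan2(waypoint[1] - y, waypoint[0] - x)
    # Wrap the angle error into [-pi, pi] before scaling.
    error = math.atan2(math.sin(desired - theta),
                       math.cos(desired - theta))
    return k_p * error

# One iteration of the stream: the camera says we drifted 15 degrees off
# the line to the next waypoint, so we steer back toward it.
pose = (0.50, 0.20, math.radians(75))
waypoint = (0.50, 0.80)  # next point on the planned path
print(round(heading_correction(pose, waypoint), 3))  # ~0.393
```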

Figure: A visualization of the path planning, which was done using a modified A* search algorithm.
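The post doesn’t spell out the modifications, so the sketch below is just the textbook grid A* we started from, not the exact hackathon version:

```python
import heapq

def a_star(grid, start, goal):
    """Plain A* on a 4-connected occupancy grid (1 = obstacle).

    The textbook baseline; our hackathon version added modifications
    (not shown here) on top of this.
    """
    def h(cell):  # Manhattan-distance heuristic
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                heapq.heappush(frontier,
                               (cost + 1 + h((nr, nc)), cost + 1,
                                (nr, nc), path + [(nr, nc)]))
    return None  # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))  # routes around the wall of 1s
```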

And it isn’t just the robot: the entire arena’s dimensions are recomputed on the fly. This adjusts for changes in lighting that could cause perception warping. To that end, all of the blocks and the corners of the arena were tagged with QR codes so the correct dimensions of the arena could be maintained.
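In practice this amounts to re-fitting a perspective transform from the four corner tags each frame. The sketch below assumes the corner tags have already been located in pixel space (the tag-detection step, e.g. with cv2.QRCodeDetector, is omitted), and the coordinates are made up for illustration:

```python
import cv2
import numpy as np

# Hypothetical sketch: once the four corner QR tags are located in the
# camera frame, a homography maps pixel coordinates into stable arena
# coordinates, regardless of lighting-induced drift in the detections.
corner_px = np.float32([[102, 88], [548, 95], [560, 421], [95, 414]])
arena = np.float32([[0, 0], [1, 0], [1, 1], [0, 1]])  # unit arena square

H = cv2.getPerspectiveTransform(corner_px, arena)

def to_arena(point_px):
    """Map a detected pixel location (e.g. a block's tag) into arena space."""
    src = np.float32([[point_px]])  # shape (1, 1, 2) as OpenCV expects
    return cv2.perspectiveTransform(src, H)[0][0]

print(to_arena((330, 250)))  # roughly the middle of the arena
```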

Prizes

We didn’t end up winning any major prizes for this project. I think the conceptual underpinnings of the project were genuinely interesting, and the space we explored is quite relevant. However, our execution wasn’t great: the robot system was slow and the instruction streaming was laggy. We did end up being named a Top 30 Hack.
