This project was created at PennApps XVIII in 36 hours on September 8, 2018. We placed in the Top 30.
Companies that want to add autonomous robotic systems to their manufacturing pipelines face significant overhead: they must build specialized robots for each pipeline and hire engineers who can program them. Rather than building a specialized robot for each task, we set out to create one general-purpose robot that could be "taught" to perform specialized tasks by watching a human.
We built The Simon System by first creating a >140-piece CAD assembly for two "playing fields." In one field, a human performs an action on some cubes, which is captured by a phone camera. That data is then sent to the cloud, where it goes through a dimension and aspect-ratio transformation (because the two cameras above the fields have different intrinsics) and is matched against the visual data from the robot's playing field.
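The mapping between the two overhead views can be sketched as a coordinate rescaling. This is a minimal illustration, not our exact pipeline: the function name and the resolution numbers are hypothetical, and a full solution for cameras with different intrinsics would use a homography or calibration matrices rather than a plain resize.

```python
def map_between_fields(point_px, src_size, dst_size):
    """Map a pixel coordinate from the human-field camera image to the
    corresponding pixel in the robot-field camera image.

    This normalizes out each camera's resolution so the same relative
    field position is preserved -- a simplified stand-in for the
    dimension and aspect-ratio transform described above.

    point_px: (x, y) pixel in the source image
    src_size: (width, height) of the source image
    dst_size: (width, height) of the destination image
    """
    x, y = point_px
    sw, sh = src_size
    dw, dh = dst_size
    # Scale each axis independently so differing aspect ratios are handled.
    return (x / sw * dw, y / sh * dh)

# Example (made-up resolutions): a cube seen at the center of a 1280x720
# phone image maps to the center of a 1920x1080 robot-field image.
print(map_between_fields((640, 360), (1280, 720), (1920, 1080)))  # (960.0, 540.0)
```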
From there we use a D* search algorithm to find an efficient path for a wheeled robot to navigate in order to complete the same task the human just performed. As the robot moves, the camera over the robot's playing field tracks its movements and sends correction instructions down in real time to keep the robot on course toward its objective.
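The planning step can be illustrated with a simple grid search. Note this sketch uses plain A* rather than D* (D* adds incremental replanning when the overhead camera reports the robot drifting or new obstacles appearing); the grid, function name, and coordinates are all illustrative assumptions, not our actual code.

```python
import heapq

def grid_astar(grid, start, goal):
    """Shortest 4-connected path on a grid (0 = free, 1 = obstacle).
    A plain A* sketch with a Manhattan-distance heuristic; the real
    system used D*, which also replans as correction data arrives."""
    rows, cols = len(grid), len(grid[0])

    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    # Each heap entry: (f = g + h, g, cell, path taken so far)
    open_set = [(h(start), 0, start, [start])]
    seen = set()
    while open_set:
        _, g, cur, path = heapq.heappop(open_set)
        if cur == goal:
            return path
        if cur in seen:
            continue
        seen.add(cur)
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(open_set,
                               (g + 1 + h((nr, nc)), g + 1,
                                (nr, nc), path + [(nr, nc)]))
    return None  # no route to the goal

# A toy 3x3 field with one blocked cell: the planner routes around it.
field = [[0, 0, 0],
         [0, 1, 0],
         [0, 0, 0]]
print(grid_astar(field, (0, 0), (2, 2)))
```

In the real system, the path would be recomputed (or incrementally repaired, as D* does) whenever the overhead camera observes the robot deviating from the planned route.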
More details, photos and a video of The Simon System in action can be found on our Devpost page.