We’ve been fantasizing about this project for the last three years. Techgarage seems to create hundreds and hundreds of random parts at random locations, and we need a way to sort all of them. Typically we do it by hand with a bit of labor, but it’s just too slow, and Techgarage still looks like a mess. In comes the part sorter. The goal of this project is to build a robust part sorter that can separate the hundreds of different parts we have at Techgarage. We finally started designing, building, and testing the part sorter during the summer camp.
Mechanical
We started with a big wooden plate attached to a motor. The plate has ridges on its edges to hold the small blue bins that the parts drop into. This design is cost-effective because a single motor can sort into ten different bins, and it stays compact because it doesn’t need a longer conveyor every time you add parts. The motor also has a built-in encoder, so we can track the position of the circle and know where to move it so the proper bin lines up with the dropper. To drop the parts we used a simple servo attached to a wooden panel. For now the part sorter only works one part at a time, but it was also built by the campers. The new design will be completely stackable and allow for rapid sorting of multiple parts.
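To make the encoder-to-bin idea concrete, here is a minimal sketch of how a bin index could be turned into an encoder target. The constants and function names are illustrative assumptions, not the sorter's actual code:

```python
# Hypothetical constants: assumed encoder counts per full plate
# revolution, and the ten bins spaced evenly around the plate.
COUNTS_PER_REV = 2400
NUM_BINS = 10

def bin_target(bin_index: int) -> int:
    """Encoder count that places bin_index under the dropper."""
    return (bin_index * COUNTS_PER_REV // NUM_BINS) % COUNTS_PER_REV

def shortest_move(current: int, target: int) -> int:
    """Signed move (in counts) taking the shorter way around the circle."""
    delta = (target - current) % COUNTS_PER_REV
    if delta > COUNTS_PER_REV // 2:
        delta -= COUNTS_PER_REV  # going backwards is shorter
    return delta
```

With evenly spaced bins, the motion control only ever needs to move the plate at most half a revolution in either direction.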
Electrical
To control this machine we used a Google Coral, a Raspberry Pi-like board developed by Google. It has a built-in TPU for AI applications and can run models at very high framerates. My friend Danny and I have been working on creating an easy-to-use API for the Coral that you can find here. We ran into problems reading the encoder values on the Coral because running a full OS adds a lot of overhead. So to read encoder values we are using a simple Arduino encoder reader that communicates with the Coral over serial. This works pretty well but isn’t easily scalable, so we are going to have to figure out another option.
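A minimal sketch of the Coral side of that serial link, assuming the Arduino prints one encoder count per line over USB serial. The port name, baud rate, and line format are assumptions about our setup, and the actual read would use pyserial (imported inside the function so the parsing can be tested without hardware):

```python
def parse_count(line: bytes):
    """Parse one newline-terminated encoder reading; None if garbled."""
    try:
        return int(line.decode("ascii", errors="ignore").strip())
    except ValueError:
        return None

def encoder_counts(port="/dev/ttyACM0", baud=115200):
    """Yield encoder counts forever from the Arduino's serial stream."""
    import serial  # pyserial, assumed installed on the Coral
    with serial.Serial(port, baud, timeout=1) as ser:
        while True:
            count = parse_count(ser.readline())
            if count is not None:  # skip partial or corrupted lines
                yield count
```

Keeping the protocol to plain newline-delimited integers makes the Arduino side trivial, at the cost of a little parsing robustness on the Coral side.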
Code
Google created a very cool demo which uses image embeddings and modifies the last layer of an existing neural network to dynamically change what the model is designed to classify. For example, we can take one part and snap a few pictures of it using the 0 key, then take a different part and snap a few pictures of it using the 1 key. The model will then answer 0 or 1 depending on which part we hold up. This is exactly what we are using for the part sorter. The biggest problem, though, is that this is a very manual process, and doing it for hundreds of parts just wouldn’t work. So we are going to need to switch to an object detection neural network that is custom-trained on all of our parts.
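To show the principle behind the demo, here is a toy nearest-neighbor version: store a few embedding vectors per key press, then classify a new part by cosine similarity to each class's stored examples. The real demo retrains the network's last layer on-device rather than doing explicit nearest-neighbor search, and all names here are illustrative:

```python
import numpy as np

class EmbeddingClassifier:
    def __init__(self):
        self.examples = {}  # label -> list of unit-normalized embeddings

    def add_example(self, label, embedding):
        """Called when we press a key (0, 1, ...) while holding a part."""
        v = np.asarray(embedding, dtype=float)
        self.examples.setdefault(label, []).append(v / np.linalg.norm(v))

    def classify(self, embedding):
        """Return the label whose examples are most similar on average."""
        v = np.asarray(embedding, dtype=float)
        v = v / np.linalg.norm(v)
        scores = {label: float(np.mean([ex @ v for ex in exs]))
                  for label, exs in self.examples.items()}
        return max(scores, key=scores.get)
```

This captures why the approach works with only a handful of pictures per part: the heavy lifting is done by the pretrained embedding, and the per-part training is just storing a few vectors.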
Stacking
The last week of camp we worked on adding a second layer to the part sorter. All we had to do was copy what we already had and mount it above the existing layer. However, the goal is to use only one Google Coral because they are expensive. To accomplish this, we raised the Google Coral up higher so the part drops past the second layer first. Before the drop, the second layer moves an empty space on its circle under the drop point so the part can fall through to the proper bin below. Theoretically this is stackable until we run out of motor control outputs.
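The routing idea can be sketched in a few lines: every plate above the target layer rotates a pass-through gap under the drop point, and only the target plate presents a real bin. Layer numbering (0 at the top), slot counts, and the dedicated pass-through slot are all assumptions for illustration:

```python
NUM_SLOTS = 10
PASS_THROUGH_SLOT = 0  # assumed empty gap on each plate

def plan_moves(target_layer, target_bin, num_layers):
    """Return (layer, slot) commands so a part reaches target_bin.

    Layers are numbered from the top; layers below the target
    never see the part, so they need no command.
    """
    moves = []
    for layer in range(num_layers):
        if layer < target_layer:
            moves.append((layer, PASS_THROUGH_SLOT))  # let the part fall through
        elif layer == target_layer:
            moves.append((layer, target_bin))         # catch it in the right bin
    return moves
```

Since every plate only needs a slot command, one controller can drive the whole stack, which is what lets a single Coral scale to multiple layers.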
However, the hardest part about this is the code, and today we found a few problems in the code that Winston has been working to get running. The biggest issue stems from not sending negative encoder values: when he rewrote the Arduino code, it sent only the numeric value without the sign, which caused the sorter to do a few things…
- The PID motion control would move the plate back and forth endlessly
- If we moved the plate by hand in the wrong direction at startup, it would spin endlessly
We can fix this by sending signed values as well, and theoretically it would then work perfectly. Still, the encoder-and-motor solution is a little wonky. We talked about using optical encoders or barcodes, but that would require another camera for every layer, and the movement would have to be smooth enough for the barcode to be scanned. We could also use RFID tags at each bin, but that needs about as much hardware as an encoder while offering fewer capabilities.
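A small illustration of the sign bug with made-up numbers: if the Arduino reports abs(count), the controller's error term comes out wrong whenever the plate sits on the negative side of zero, so it pushes the plate further away instead of back toward the target. The function and values below are illustrative, not the real motion-control code:

```python
def pid_error(target, measured):
    """Proportional error term the motion control acts on."""
    return target - measured

target = -60          # a bin just behind the zero point
true_position = -55   # plate nudged slightly past zero by hand

buggy = pid_error(target, abs(true_position))  # sign lost: -60 - 55 = -115
fixed = pid_error(target, true_position)       # signed value: -60 + 55 = -5
```

With the buggy reading, the big negative error drives the plate further negative, which makes the unsigned reading grow and the error grow with it, so the plate never converges, which matches the endless spinning we saw.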