SpinEye is a vision-based system for cobots that removes the need for expensive fixtures and reduces handling uncertainties in screwdriving tasks, leading to higher product quality. The SpinEye project aims to create and demonstrate a world-class vision system for detecting screw holes in products, specifically in the electronics and automotive industries.
The innovative project “SpinEye” was designed and implemented in collaboration with Spin Robotics during the Trinity Open Call 2.
Small-part assembly is one of the most under-automated tasks in the electronics and automotive industries; the need for flexibility in low batch sizes with high product variation is clear, yet not catered for by the market.
SpinEye is a vision-guided system for cobots that combines AI-enabled visual detection of screwdriving positions with human input provided through an intuitive user interface, guaranteeing a lower rate of quality errors and fast changeover times between assembly tasks.
Keeping the human in the loop ensures higher detection quality: the human pinpoints the area of interest in the image during the “teach-in” phase, and the system then exploits that knowledge in its future detections (bootstrapping).
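The teach-in idea can be sketched in a few lines. This is a hypothetical illustration, not SpinEye's actual implementation: the `Roi` and `TeachInDetector` names are assumptions, and real candidate detections would come from the vision model rather than a plain list of points.

```python
from dataclasses import dataclass


@dataclass
class Roi:
    """Axis-aligned region of interest annotated by the operator."""
    x: int
    y: int
    w: int
    h: int

    def contains(self, px: int, py: int) -> bool:
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h


class TeachInDetector:
    """Stores operator-annotated ROIs during teach-in, then restricts
    candidate screw-hole detections to those regions (illustrative sketch)."""

    def __init__(self):
        self.rois = []

    def teach(self, roi: Roi) -> None:
        # "teach-in" phase: the human pinpoints where holes may appear
        self.rois.append(roi)

    def detect(self, candidates):
        # bootstrapping: keep only candidates that fall inside a taught ROI
        return [c for c in candidates if any(r.contains(*c) for r in self.rois)]
```

For example, after teaching `Roi(100, 100, 50, 50)`, a candidate at `(120, 130)` is kept while a spurious one at `(10, 10)` is discarded.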
The SpinEye system relies on a robust AI vision stack consisting of an industrial camera, a small edge computer, and an online cloud infrastructure responsible for validating the system’s efficiency.
SpinEye is a robot-agnostic, plug-and-play vision system equipped with a screwdriver “teach-in” interface for preliminary user annotations, resulting in a low-cost, highly accurate Industry 4.0 solution.
For testing and demonstration purposes, a Universal Robots UR5 e-series cobot, provided by Spin Robotics, has been used.
Customers deploying SpinEye will benefit from faster changeover between tasks, and highly skilled workers are no longer needed to set up an assembly task.
The quality rate will increase, as the vision system can detect slight mispositioning of the fixture and adjust the cobot’s instructions accordingly.
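The correction step amounts to measuring the fixture's offset and shifting the programmed screw positions by it. A minimal sketch, assuming a pure translation (a real system would estimate a full 2D transform, including rotation) and hypothetical coordinate tuples:

```python
def corrected_targets(nominal_holes, nominal_ref, detected_ref):
    """Shift all nominal screw positions by the fixture offset measured
    between a nominal reference hole and where the camera actually
    detected it (illustrative only; assumes translation, no rotation)."""
    dx = detected_ref[0] - nominal_ref[0]
    dy = detected_ref[1] - nominal_ref[1]
    return [(x + dx, y + dy) for x, y in nominal_holes]
```

So if the reference hole expected at `(0.0, 0.0)` is detected at `(2.0, 1.0)`, every programmed screwdriving target is shifted by `(+2.0, +1.0)` before being sent to the cobot.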