Visual Pick and Place

Client: Theo Markettos, Computer Lab <atm26@cam.ac.uk>

We have a LitePlacer robot that can automatically assemble circuit boards (https://youtu.be/t__ybwOufyg), and we would like it to self-configure using its machine vision system, which is based on OpenPnP. The robot's top and bottom cameras currently let the vision system recognise board features and component tapes, enabling automated alignment of the components it is placing. We would like to extend the capabilities of the system to automate the setup of the machine: for instance, printing sticky labels from CAD software and reading them via vision, recognising components, detecting their alignment, reading their values, and preventing the user from making mistakes.
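
To give a flavour of the kind of vision task involved, here is a minimal sketch of detecting the sprocket holes in a component-tape image. It uses Python and OpenCV rather than the machine's actual OpenPnP pipeline, and the image filename and all detector parameters are placeholder assumptions that would need calibrating against the real cameras.

 # Minimal sketch: find sprocket holes in a component-tape image.
 # Assumptions: a Python/OpenCV environment, a hypothetical camera
 # frame "tape.png", and guessed hole radii in pixels.
 import cv2
 import numpy as np
 
 img = cv2.imread("tape.png")          # hypothetical camera frame
 if img is None:
     raise SystemExit("could not read tape.png")
 
 gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
 gray = cv2.medianBlur(gray, 5)        # suppress sensor noise
 
 # Sprocket holes are round and evenly spaced, so Hough circle
 # detection is a reasonable first pass; minRadius/maxRadius are
 # guesses that depend on the camera's scale.
 circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                            param1=100, param2=30,
                            minRadius=5, maxRadius=15)
 
 if circles is not None:
     for x, y, r in np.round(circles[0]).astype(int):
         print(f"hole at ({x}, {y}), radius {r}px")

OpenPnP itself builds this sort of detection from configurable OpenCV pipeline stages; the sketch above only shows the underlying idea, not how the project would integrate with the machine.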