Visual Pick and Place
Client: Theo Markettos, Computer Lab <atm26@cam.ac.uk>
We have a LitePlacer robot that can automatically assemble circuit boards (https://youtu.be/t__ybwOufyg), and we would like it to self-configure using its machine vision system, which is based on OpenPnP. The robot currently uses the vision system to recognise board features and component tapes, enabling automated alignment of the components it places. We would like to extend the system to automate the setup of the machine: for instance, printing sticky labels from CAD software and reading them back via vision, recognising components, detecting their alignment, reading their values, and preventing the user from making mistakes.
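
As a concrete illustration of the kind of vision step involved, the sketch below uses OpenCV in Python to locate the sprocket holes of a component tape with a Hough circle transform. This is a minimal sketch under stated assumptions, not the project's or OpenPnP's actual pipeline: the camera index, radius bounds, and detector thresholds are all illustrative guesses that would need tuning against the real machine's camera.

    # Illustrative sketch: locating component-tape sprocket holes with OpenCV.
    # All parameter values below are assumptions, not OpenPnP settings.
    import cv2
    import numpy as np

    def find_tape_holes(frame_bgr):
        """Return (x, y, r) tuples for circles that could be sprocket holes."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        gray = cv2.medianBlur(gray, 5)           # suppress sensor noise
        circles = cv2.HoughCircles(
            gray, cv2.HOUGH_GRADIENT,
            dp=1, minDist=20,                    # assumed hole spacing in pixels
            param1=100, param2=30,               # edge / accumulator thresholds
            minRadius=5, maxRadius=15)           # assumed radius bounds in pixels
        if circles is None:
            return []
        return [tuple(c) for c in np.round(circles[0]).astype(int)]

    if __name__ == "__main__":
        cap = cv2.VideoCapture(0)                # assumed: the down-looking camera
        ok, frame = cap.read()
        if ok:
            for x, y, r in find_tape_holes(frame):
                cv2.circle(frame, (x, y), r, (0, 255, 0), 2)
            cv2.imwrite("holes.png", frame)      # annotated frame for inspection
        cap.release()

Detected hole centres give the tape's pitch and orientation, which is the information needed to compute pick positions automatically rather than having the user configure each feeder by hand.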