Automation: Collaborative Robots Get Vision Capabilities

Vision adds adaptive pick-and-place and stacking abilities for Universal Robots.

At the recent ATX Automation Technology East show in N.Y.C., Universal Robots USA, Inc., Ann Arbor, Mich., showed off several new vision capabilities for its UR “collaborative” robots, or “cobots.” Cameras, software, grippers, force/torque sensors, and other accessories are available via the new Universal Robots+ “app store,” which catalogs products certified as compatible with UR cobots.

Two partners demonstrated their enhancements for UR cobots at the show. Robotic Vision Technologies, Silver Spring, Md., showed vision capabilities such as shape/pattern recognition for pick-and-place applications where part position or orientation varies. No programming is required: the user shows the robot what to look for by means of a camera image and Microsoft Paint software. The cobot can even accomplish stacking, effectively achieving 2.5D vision with only a single 2D camera.
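The core idea behind this kind of shape/pattern recognition is often normalized cross-correlation: the system slides a reference image of the part over the camera frame and reports where the match is strongest. The sketch below illustrates that principle in plain NumPy; it is a generic, simplified illustration, not Robotic Vision Technologies' actual software, and the function name `locate_part` is invented for this example.

```python
# Generic sketch of template-based part localization via normalized
# cross-correlation. Illustrative only -- not RVT's product code.
import numpy as np

def locate_part(image, template):
    """Slide `template` over `image` (both 2-D grayscale arrays) and
    return the (row, col) of the best-matching position and its score
    in [-1, 1], where 1.0 is a perfect match."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()          # zero-mean template
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -1.0, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            window = image[r:r + th, c:c + tw]
            w = window - window.mean()      # zero-mean window
            denom = np.sqrt((w ** 2).sum()) * t_norm
            if denom == 0:                  # flat region: no correlation
                continue
            score = (w * t).sum() / denom
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score
```

Production systems use optimized equivalents (e.g. OpenCV's `matchTemplate`) and add rotation/scale handling, but the brute-force loop above shows why the operator only needs to supply a reference image: the "teaching" step is just capturing the template.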

Another UR partner, Energid Technologies, Cambridge, Mass., demonstrated vision-based collision avoidance, path planning around obstacles, and “adaptive” pick-and-place (the ability to find an object whose location varies). The software works from an imported CAD model of the target object.
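Finding a path around obstacles reduces, at its simplest, to graph search over a map of free and occupied space. The sketch below shows that idea with breadth-first search on a 2-D occupancy grid; real motion planners such as Energid's work in the robot's higher-dimensional joint space, so this is only a minimal stand-in for the concept, and `find_path` is an invented name.

```python
# Minimal sketch of obstacle-avoiding path search on a 2-D occupancy
# grid (0 = free, 1 = obstacle). Illustrative only -- real cobot motion
# planning operates in joint space, not a flat grid.
from collections import deque

def find_path(grid, start, goal):
    """Breadth-first search from `start` to `goal`, both (row, col)
    tuples. Returns the shortest list of free cells, or None."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}                    # visited set + back-pointers
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                    # reconstruct path backwards
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None                             # goal unreachable
```

The imported CAD model plays the role the occupancy grid plays here: it tells the planner what shape to look for and what volume to keep clear.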