I've been supporting the implementation of a Kassow KR1410 collaborative robot for a pack-out application, and I wanted to share some tips on integrating a collaborative robot based on my experiences.

1. Fully understand the system requirements

I cannot emphasize enough how important it is to understand the system requirements. If there is a machine that already does a similar job, it is very important to study that machine and fully understand the process. Once I'm confident in the requirements, I come up with a plan or a sequence in my mind for how I should program the robot, which brings me to the second step.

2. Make a sequence-of-operations document

Write down the sequence of operations. This is different from describing how the machine is going to work; it describes how the components of the machine are going to communicate. The sequence can still be tweaked when you actually program the machine. Often there are multiple programmers working on a machine, so it is good to have a document where, for example, the PLC programmer understands what the robot programmer is going to do in their program, and vice versa. This is also the place to assign the I/O points and registers the components will use to communicate.

3. Write pseudocode

For those who don't know what pseudocode is, it's just a simplified version of the program. This step cuts down the time spent on the machine by a lot. I usually have most of the program written out before I even start working with the cobot. It's mostly comments, but it tells me exactly what I need to program (see the sketch after these steps).

4. Work with the cobot and machine

With the pseudocode in hand, I go to the machine and start by setting up all my communications: Modbus, TCP/IP, digital I/O, and so on. Next, I start teaching positions and building the sequence step by step.

5. Test

I am not going to call this the last step, because you will be more successful if you have already been testing throughout the steps above. Whenever I feel a part of the program is done, I test to make sure that part works. This works well when you've written your code in a modular way. For example, the main program I run on the robot consists of only a few lines. I usually have two loops: an initialization loop that runs once, and a main loop that runs continuously. In the initialization loop, I reset all my variables and read all the connected sensors, so the robot is aware of everything it needs to know. This happens only once per program run. My main loop is most often a "while (1)" loop, an infinite loop that never ends unless I call a "break" from the loop. The break prevents me from getting stuck anywhere inside the program, and I never leave the robot in a bad state; for example, I always try to return to a home or safe position when a task is complete. One situation where I make this loop depend on a variable instead is when another controller, a PC or a PLC, tells the robot what to do. The skeleton below sketches this structure.
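Here is a minimal sketch of what that comment-heavy pseudocode and the initialize-once / loop-forever structure can look like. It is Python-flavored pseudocode, not the actual project code: the helper functions, I/O numbers, register addresses, and command values are hypothetical stand-ins for your robot vendor's scripting API.

```python
# Hypothetical sketch; helper functions, I/O numbers, and register
# addresses stand in for the robot vendor's scripting API.

# I/O and register assignments (these come from the sequence-of-operations doc):
#   DI0 = part-present sensor            DO0 = close gripper
#   Modbus register 100 = PLC -> robot command
#   Modbus register 101 = robot -> PLC status
CMD_NONE, CMD_PICK, CMD_STOP = 0, 1, 2
STATUS_READY, STATUS_CYCLE_DONE = 1, 2

# --- Initialization: runs exactly once ---
reset_all_variables()                     # clear flags and counters from the last run
part_present = read_digital_input(0)      # read every sensor the robot relies on
write_modbus_register(101, STATUS_READY)  # tell the PLC the robot is ready
move_to_home()                            # always start from a known, safe position

# --- Main loop: runs forever unless we break out of it ---
while True:
    command = read_modbus_register(100)   # the PLC tells the robot what to do
    if command == CMD_STOP:
        move_to_home()                    # never leave the robot in a bad state
        break                             # the only exit from the loop
    if command == CMD_PICK and read_digital_input(0):
        pick_part()                       # taught positions; gripper on DO0
        place_part()
        move_to_home()                    # back to the safe position after the task
        write_modbus_register(101, STATUS_CYCLE_DONE)
```

Keeping each task (like pick_part and place_part) in its own routine is what makes the modular testing described above possible: you can run and verify each piece on its own before wiring it into the main loop.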
Going back to testing: it is hard, but try to account for every case that can happen. This could be safety related, operation related, or even part related. Testing all the "what-if" scenarios can be the most time-consuming part, but the more you test, the lower your risk of unexpected problems later. I believe that everything seems very complicated until you break it down into pieces and tackle them one by one.

With these 5 steps, hopefully it will be easier for you to implement a collaborative robot. If it still seems confusing or you need any help following these general steps, please reach out to us at https://www.gibsonengineering.com/ or leave a comment below!
Let's say you are trying to implement a pick-and-place application with your robot. Industrial robots are amazing at going exactly where they are told to go. But what if that place changes constantly and we don't know where the part is going to be next time around? That's when we use machine vision to guide the robot to the right pick location. The general idea is that a vision system looks at the potential pick locations and tells the robot where to go to pick up the next part.

I'm sure a lot of you would agree that communication is the key to success, and that is no different here. If two people are speaking different languages, that conversation is not going to work well. In a digital camera, there is a sensor that collects light from the outside world and converts it into electricity. The sensor is a grid of points called pixels, and the images we obtain from the camera are represented in those pixels. Robots, on the other hand, have coordinate systems, usually expressed in meters or millimeters. Two different languages...

The whole magic is knowing where the camera is located relative to the robot's end-of-arm tooling. The camera can be mounted either:

- on the end-of-arm tooling, or
- at a stationary location.

If the camera is mounted on the end of the arm, we need to know the location of the camera relative to our gripper. At this stage, we only need that relative location in two dimensions; the third dimension we can usually control. For example, if the camera is mounted 3 inches away in X and 1.5 inches away in Y from the end-of-arm tooling, and our part is in the middle of the camera's field of view, the robot needs to move -3 inches in X and -1.5 inches in Y (in its end-of-arm tooling's coordinate system) to grab the part. Wait a second, what about Z? In the robot program, I always have a set location where the picture is taken before the pick, so I know how far the end-of-arm tool is from my parts in Z.

But what if the part is not in the center of the camera's field of view? The camera needs to report the part's location to the robot somehow, right? Yes, and that's where calibration comes into play. The basic idea is to calibrate the camera's pixel readings into robot coordinates. Since robots usually work in real-world coordinates (mm or m), you can use the built-in calibration function in your camera software if it has one. These routines usually require a grid of squares or circles of known size so the camera can do the math to convert its pixels to real-world coordinates. I usually don't do that, since I would have to make sure my axes are aligned perfectly and would have to take lens distortion into consideration somehow. Instead, I make a randomly marked paper or plate (a plate with a pattern of holes in it, for example) and use that as the calibration grid.

Here are some simple steps and tips to get the calibration process working:

1. Jog the robot to the location where the picture is going to be taken, and take a picture.
2. Make sure all the markers on the plate are visible in the image, and that the top of the plate is the same distance from the camera as the top of the part you are eventually going to pick up.
3. Note the locations of all the markers in pixels from the camera software. You will need to implement some tools in the vision system to get location data from these markers.
4. Jog the robot to each of these markers and note down the locations in robot coordinates.

All you need to do now is match the pixel location of each marker to the robot location of that marker and do the math! A sketch of that math follows.
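Here is a minimal sketch of one way to do that math, assuming Python with NumPy and four markers; the pixel and robot numbers are made up for illustration. It fits an affine transform from pixel coordinates to robot coordinates by least squares, which absorbs rotation, scale, offset, and axis misalignment in a single step:

```python
import numpy as np

# Hypothetical numbers for illustration: where four markers appear in the
# image (pixels) and where the robot was jogged to touch each one (mm).
pixels = np.array([[312.0, 240.0], [960.0, 251.0], [955.0, 720.0], [318.0, 712.0]])
robot = np.array([[412.0, -105.0], [512.3, -103.1], [510.9, -29.5], [413.6, -31.2]])

# Fit an affine transform, robot = [px, py, 1] @ A, by least squares.
ones = np.ones((len(pixels), 1))
A, *_ = np.linalg.lstsq(np.hstack([pixels, ones]), robot, rcond=None)

def pixel_to_robot(px, py):
    """Convert a camera pixel location to robot X/Y coordinates (mm)."""
    return np.array([px, py, 1.0]) @ A

# The vision tool reports the next part at pixel (640, 480):
x, y = pixel_to_robot(640, 480)
print(f"Pick at X = {x:.1f} mm, Y = {y:.1f} mm")
```

Because the robot coordinates were recorded by jogging the tool to each marker, the fitted transform reports positions directly in the tool's frame. Note that a plain affine fit does not model lens distortion, so keep the markers, and the picks, within the calibrated area.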
Cognex In-Sight cameras have a built-in vision tool called N-Point calibration that does this very easily: you select the markers on the plate and write down the corresponding robot coordinates in a table, the software takes care of the rest, and your location tools (like PatMax) then report back in robot coordinates.

The process is almost the same when the camera is mounted in a stationary location; you just need to know where the camera is located relative to the arm. The other thing to be careful about is that the camera needs to be far enough from the pick area that the robot arm can swing in and grab the part without hitting the camera.

Once you understand the basics, a vision-guided pick and place is not so scary anymore. Does it still scare you? If so, leave a comment below or reach out to us at https://www.gibsonengineering.com/.

Disclaimer: This blog post covers using 2D cameras to pick and place the same type of part. Depending on the parts and the hardware used, some additional steps might need to be taken.