Batu Sipka

Gibson Employees
  • Posts: 9

  1. This is a tutorial on setting up Google Drive storage and loading a program and a workcell onto the robot. Click on the 3 dots in the top right corner and select Storage from the drop-down. Click on Add/Choose Account. This will prompt you to select a WiFi network to connect to. After you enter the correct credentials for the WiFi network, click on Add/Choose Account again. This time the Google Drive plugin will show up. Select the Add Account button and fill in the email address and password of the account that holds your program and workcell. After adding the account, you should see it pop up, like shown below. Close out of this and click on the 3 dots in the top right corner again. Select Open Program. Then click on where it says "Tablet", choose MyDrive, and navigate to the directory that contains your .kr2 file. Select the program and click yes on the prompt. Do the same for the workcell, this time navigating to the .wc file and selecting that.
If you see the CBun tile highlighted in red in your program, you will need to initialize the RG6 CBun. For that, you might need to download the OnRobot CBun, similar to how we did with the program and workcell. Navigate to CBuns under the 3 dots in the top right. Click on the + sign in the top left to add a CBun. Make sure to select MyDrive instead of Tablet or Robot in the pop-up screen and navigate to where the CBun is located in your Google Drive. I recommend not using the built-in CBun under Robot, since the OnRobot version has more functionality. After that step, click on the + button next to RG6 or whichever gripper you are using. That will redirect you to the Workcell.
Now you need to change the robot's Ethernet address to something on the same subnet as the Compute Box's address. The Compute Box ships with 192.168.1.1, so setting the robot's Ethernet address to 192.168.1.2 works well. Then select the OR_RG6 under Custom Devices and configure it. Under the Configure tab, change the connection method to Compute Box and change the IP address to 192.168.1.1. Under the Mounting tab, click on Mount to set LOAD1 as your gripper. Remember, LOAD2 is the variable you will need to configure to set your tool's weight. Make sure to click Activate under the Configure tab and watch the OR_RG6 tiles turn green. A quick way to sanity-check the network settings is sketched below.
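As a quick sanity check of those network settings, here is a minimal sketch, assuming you run it from a PC on the same 192.168.1.x subnet and that the Compute Box answers on the standard Modbus/TCP port 502 (that port is an assumption; use whatever port your box is known to listen on). It only verifies basic reachability; it is not part of the CBun itself.

    import socket

    COMPUTE_BOX_IP = "192.168.1.1"   # default address the Compute Box ships with
    PORT = 502                       # Modbus/TCP port (assumption; adjust for your setup)

    def can_reach(ip, port, timeout=2.0):
        """Return True if a TCP connection to ip:port succeeds within the timeout."""
        try:
            with socket.create_connection((ip, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        if can_reach(COMPUTE_BOX_IP, PORT):
            print("Compute Box is reachable - the subnet settings look right.")
        else:
            print("Cannot reach the Compute Box - re-check the Ethernet address (e.g. 192.168.1.2) and cabling.")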
  2. In the 2D industrial imaging realm, I am going to split cameras into 2 groups: monochrome and color cameras. They have different applications, and we usually see monochrome cameras used in the automation industry and machine vision applications. Take a look at the infographic below for how light gets turned into a digital file. Industrial cameras are digital cameras, so this applies to them too. Light hits the subject (no light, no image), a lens collects that light and focuses it onto a sensor, and the sensor converts it into electrical signals. Let's dive into the different parts of this process.
Lensing
Why do you need a lens to get an image? It is a given that our cell phone cameras have lenses on them, and some of our industrial cameras come with autofocus lenses, but do we really need a lens to get an image? Actually, you don't. You can get an image without a lens, it just might not look great. If you have an interchangeable-lens DSLR sitting around, or a removable-lens industrial camera lying around, take off the lens (please be careful with your image sensor...). Take a needle and make a hole in one of your business cards. Put the business card where the lens would go, and you will find that you can actually get an image this way. Crazy! So the job of a lens is to collect light and redirect it onto the image sensor. You can accomplish the same function with a pinhole (this is called a pinhole camera), it's just not very effective.
Image Sensor/Imager
The image sensor is where most of the magic happens. Manufacturers mostly use two different sensor technologies: 1) CMOS (complementary metal-oxide-semiconductor) and 2) CCD (charge-coupled device). I am not going to dive deep into these technologies, but CMOS is what is used most today because of its cost and the speed at which it can pull information from the world. Briefly, on a CMOS sensor there are little light-sensitive elements (photodiodes, each read out by its own transistors) that produce a voltage depending on the amount of light they receive. On a monochrome camera, the light collected by the lens hits each pixel on the sensor, each pixel produces a voltage that depends on the brightness, and you end up with an array of voltage levels that represents a grayscale image.
Color Filters and Color Cameras
Mainly, there are 2 types of hardware approaches manufacturers use to get color images from their image sensors. The main component is the filter in front of the sensor. The phenomenon you need to understand here is that when you look through a red film, everything appears red. A monochrome camera looking through that same red film would simply see the red light as bright (white) pixels. The first, and expensive, way of getting a color image is to have 3 different image sensors: one to capture red, one to capture green and one for blue. Then, in software, you can create the RGB image. It gets very complicated to coordinate 3 different imagers, the hardware roughly triples the cost, and the cameras get bigger to fit all the sensors and circuitry. The second way of getting a color image is to place a Bayer filter in front of the image sensor. You can see an image of a Bayer filter in the picture below; it is important to spot that there are far more green pixels than blue and red (50% green, 25% blue and 25% red to be exact). This is because the human eye is most sensitive to green; there is not much reason for it other than making the image look good to the human eye.
And if you have been working with industrial cameras for some time, you know that what looks good to the human eye might not actually be the best image for inspection purposes. Another disadvantage is that you can think of it as losing roughly a third of your image's effective resolution, since each pixel only captures one color and the missing channels have to be interpolated (demosaiced). In machine vision, specifically in measurement applications, the camera's resolution is very important for hitting the right tolerances. So you can find yourself paying for a more expensive, higher-resolution color camera to get the same tolerances as a monochrome camera. (A small demosaicing sketch follows at the end of this post.)
To sum up, the hardware side of image acquisition boils down to a few steps: light hits the subject, a lens collects the light, and, depending on the type of sensor or the filter on the sensor, light hits the pixels and gets converted into electrical signals. The rest is software.
Image resources: https://en.wikipedia.org/wiki/Bayer_filter https://www.loveyourlens.co.uk/which_entry_level_digital_camera/digital-camera-sensor/ https://meroli.web.cern.ch/lecture_cmos_vs_ccd_pixel_sensor.html
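To make the Bayer filter and demosaicing discussion concrete, here is a minimal sketch, assuming Python with OpenCV and an 8-bit raw frame in an RG Bayer layout. A fabricated random frame stands in for real camera data, and your sensor's actual pattern constant may differ.

    import cv2
    import numpy as np

    # Assume 'raw' is an 8-bit single-channel frame straight off a Bayer sensor.
    # Here we fabricate one just so the example runs end to end.
    raw = np.random.randint(0, 256, (480, 640), dtype=np.uint8)

    # Demosaic: interpolate the two missing color channels at every pixel.
    # The pattern constant (BayerRG, BayerBG, ...) must match your sensor's filter layout.
    color = cv2.cvtColor(raw, cv2.COLOR_BayerRG2BGR)

    # For many inspection tasks a monochrome image is enough (and avoids the
    # interpolation artifacts discussed above).
    gray = cv2.cvtColor(color, cv2.COLOR_BGR2GRAY)

    print(color.shape, gray.shape)  # (480, 640, 3) (480, 640)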
  3. To start, an executive round table called "The State of the Global Robotics Industry" took place on Wednesday, September 9th at RIA Robotics Week (link here if you want to watch it). It was an interesting talk about the past couple of months, the current state and, most importantly, the future of advanced manufacturing and robotics. I thought it was great to hear the opinions of some important names in the automation industry. (If you haven't watched the round table, feel free to read the summary I wrote at the end of this post.)
The round table started with an analysis of how the last quarter went in terms of numbers. With all the stoppages in manufacturing, it seems the automotive and aerospace industries took the biggest hit. However, the rebound has already started, and if the automotive projections hold true, we will only see a 15% decrease year over year. In Gibson's territory, we don't see a ton of automotive applications compared to the Midwest or the rest of the country. It was mentioned several times in the talk that the medical industries are surging, and that advanced manufacturing has been able to react quickly to the changes. GM, a powerhouse in automotive, producing masks, is a big example of this. It's amazing what can be achieved when we work as a team. I agree with the point that Robert Little, CEO of ATI, brought up about re-shoring and producing locally. During the COVID crisis, the biggest supporters of most industries were local manufacturers.
This brings up one of the biggest (maybe?) questions of the decade. With the unemployment rate going from 3.4% to astronomical numbers, how do we justify implementing robots and automation? It's pretty simple: it has been shown that robotics and automation actually do add jobs; they are different jobs, but jobs are added. The only problem is the talent gap and finding that talent. Michael Cicco of FANUC summarized it well by giving two options: you either find the talent, or you up-skill your current workforce. Implementing advanced manufacturing techniques and getting high school, trade school or college students hands-on with robots is going to take time, so maybe up-skilling the current workforce is the better option right now.
That is a very good segue to integrators and/or value-add distributors. At Gibson Engineering, we make sure our whole team can assist with customers' technical problems and applications; that's how we show we are dedicated to the solution. This aligns with the thoughts of the group. The execs think there is going to be a huge need for systems integrators next year and over the upcoming 5 years. In my opinion, the point that has not been emphasized enough is the type of help needed from systems integrators. A lot of small companies will try to automate on their own and are not going to have the funds to afford systems integrators. The need from systems integrators is going to shift from "here is WHAT you want" to "here is HOW you do it". Education is going to be key for advanced manufacturing and for companies trying to implement robotics.
A very interesting discussion that's going to emerge soon in pretty much every industry is work from home and how effective it is. I think a lot of engineers would agree with Robert Little of ATI, who described his engineers as "lighting up a fire" and being so much more productive than ever.
I personally think working from home helped a lot of people be more productive and made them realize that they have a life other than work, since they no longer lose time commuting. I am sure the lunches with the family at home are amazing; still, maybe a hybrid system is the best way to move forward in the future. Definitely a longer discussion.
To sum up, I took a few takeaways from this very informative discussion:
- The COVID crisis hit the robotics and advanced manufacturing market hard, with lows similar to Q1 in 2014.
- We have already been seeing a rebound, and the attendees seem to agree that the future is bright and next year is going to be busy.
- With the acceleration of introducing automation in companies, finding talent is going to get harder. Upskilling the current workforce is something everyone should focus on.
- The need for systems integrators, especially next year, is going to be huge. It's important to keep in mind that the function of these integrators is going to be different than before.
- The COVID crisis did not create most of the needs in automation or advanced manufacturing; it accelerated them.
On that last point, I think that's the biggest silver lining to the current situation we are going through. The COVID crisis hurt a lot of people; it is an absolutely horrible time. I think about the sales team doing web conferences or virtual sales calls before COVID times; it just didn't make sense when we could drive out and talk to customers face to face. As mentioned in the talk, COVID did not create the change, it just accelerated it. We were going to make more virtual sales calls at some point in time, we just didn't think it was this close. As another example, we were going to start producing more web content, host more webinars, and increase our focus on educating customers, we just didn't think it was this close. It is remarkable how people reacted to the tough times and how advanced manufacturing stepped in to help. I hope we keep this versatility and embrace change toward bigger and better things together. At Gibson Engineering, we have been trying our best to help customers navigate these difficult times. We try to embrace the change and do our best to collaborate with our customers, analyze their projects and applications together, and give them application support to the best of our abilities. If you are interested, the summary of the round table is below.
Summary: The round table started with an analysis of how the past quarter went. The notes pointed toward the aerospace and automotive industries seeing a big hit. Milton Guerry from Schunk mentioned that the automotive industry has probably never seen such a stop before, but that it is picking back up. Michael Cicco from FANUC mentioned that all the industries closed down in April and May and that this is reflected in the numbers we see for Q2 of this year. He also mentioned that the non-automotive sectors are only 6% down and that it is impressive how advanced automation reacted to this crisis. Milton Guerry from Schunk chimed in that the aerospace industry has seen a very big hit and will take time to get back on track; he added that the demand for general goods is stable. All the attendees agreed on how impressively advanced manufacturing stepped in. Milton Guerry also mentioned that with the current projections for the automotive industry, we will see less than a 15% drop year over year.
Robert Little, CEO of ATI, gave a general overview of how the trade war has been affecting the US and why we should trend towards reshoring. The cost of manufacturing is going up in China, so it only makes sense for advanced manufacturing and automation to reshore. An important question was raised about how the conversation around robotics changes considering that the 3.4% unemployment rate from the beginning of the year is now at astronomical numbers. Michael Cicco from FANUC answered the question by discussing finding talent. He mentioned there are 2 ways to find talent: find people who understand advanced manufacturing, or upskill the existing workforce. He also mentioned how important it is that advanced manufacturing is taught from the high school level through technical college and college. Currently, half a million students are getting hands-on with robotics or advanced manufacturing. He segued into apprenticeship programs, and all the attendees agreed this is an interesting idea that has been implemented in Germany for some time now. This ties in with upskilling the existing workforce.
The next topic was systems integrators. Robert Little from ATI started the discussion with huge optimism for the next year. He believes there is going to be a shortage of systems integrators because of the demand for automation. The whole group appreciated the optimism and agreed with Robert; it was mentioned that smaller companies are going to try to automate now because of social distancing and working from home. Also, there is going to be a different need from systems integrators: going from "we will do everything for you" to "let's educate you". The conversation turned to the idea of the COVID crisis accelerating some of these needs. Klaus Koenig from KUKA had an interesting idea about digital factories and systems integrators being able to supply manufacturers and customers with the right technology. This crisis has made a lot of people more open to the idea of change and doing things differently.
A question was directed to Milton Guerry of Schunk asking whether robots, end of arm tooling and sensors are easier to implement now. He started by agreeing with the attendees that "the rising tide is going to lift all the boats" and open up a lot more companies to automation. He added that connectivity, ease of use and ease of implementation are crucial for automation and for satisfying the needs of companies trying to automate. Systems integrators and vendors need to make sure they focus on quick successes and move on to the next project. Klaus Koenig from KUKA mentioned a forgotten but crucial part of industrial automation: the "robots behind fences" also need to be easy to use and implement, not only collaborative robots.
Towards the closing notes, Robert Little from ATI talked about how ATI engineers have "lit up a fire" working from home and how they have been more efficient and productive than ever. (I think a lot of us engineers will agree with that statement.) He added that this is a great time to reinvest in your company by implementing a new CRM or ERP or improving other aspects of the business. Milton Guerry from Schunk talked about how we are going to see automation go into places we never thought it would go. He repeatedly mentioned that this was going to happen anyway; the COVID crisis just accelerated the process. He talked briefly about the skills gap and how we need to upskill the current workforce.
Tagging onto the optimistic approach that Robert Little from ATI started, he believes that we need to look at the potential and get excited, but not forget to get ready for it and to fix the problems we had before the crisis. The last topic was the future of automation and advanced manufacturing. Systems integrators are key, and in 5 years there are going to be a lot of startups and new integrators using cutting-edge technology.
  4. I am going to be honest here, I didn't think I needed a 7th axis on a robot. I thought it was a marketing gimmick or an additional linkage to make the kinematics equations harder... I was wrong. Disclaimer: there are two different ways to get 7 axes: 1) a 7-axis robot, or 2) an additional 7th axis added to a 6-axis robot. Two different things. In this post I am talking about 7-axis robots, not about adding an elevator or a slide to a 6-axis arm.
1. Save time in the design stage
You don't need to spend tons of time and money designing your machine around your robot. For example, let's think of a machine tending application. With a 6-axis robot, you need to make sure you can clear the door of the CNC machine and that your reach is enough to get to all of the positions you need to go. Even then, you might see some surprises where a joint is out of its limit after you reach into the machine or when you are closing the door. With a 7-axis robot, you can easily reach around the door and you don't really need to think hard about where the robot should sit to be able to do the application. Just a couple of basic measurements and you should be good to go!
2. Save money
If you have ever worked with a 6-axis arm, you know that if you are packing a box or otherwise need to operate close to the base of your robot, you often just can't. To get that rotation on a 6-axis arm, you have to rotate your 1st joint. Take a look at the picture below and see how the first joint of the robot is pointing at a different angle than the end-effector. The extra axis helps distribute the load across the joints better: you can rotate joint 1 and actuate the other joints to achieve the same overall rotation as a 6-axis robot, so you don't exhaust joint 1. This leads to a longer robot lifetime and saves money in the long term. (Check out this link to see a video of a 7-axis robot going through motions that you wouldn't be able to do with a 6-axis robot.)
3. Save space on your factory floor
With a 7-axis robot, it is no problem to have a working area right next to your robot base. The 7th axis lets the arm bend in a way that makes it easy to work right in front of the base. But what does this mean? It means that when you are building a machine, the machine can take up far less space than it would with a 6-axis robot. Everything can be compact. You can also get away with purchasing a robot with a shorter reach for the same application compared to a 6-axis robot.
4. Save time on deployment
Maybe in the design stage you didn't think about the limitations of a 6-axis robot, and you put the place station a little too close to your robot. Now you need to spend time and money changing the layout of your application. Or, even if a 6-axis robot can definitely do the application, you might spend hours if not days more programming it. Robot programming time is usually expensive, so this could be the factor that makes or breaks the project financially.
Don't get me wrong: 6-axis robots definitely have their applications, and every product has its place. But sometimes the details in the application specifications make a 7-axis robot the more sensible choice. Now that you know where you can use a 6-axis or a 7-axis robot, please reach out with your application details and we'll be more than happy to discuss it with you at Gibson Engineering.
  5. If you don't have the right end of arm tooling for your application, it might not matter how good a robot you pick. Robots and end of arm tooling go hand in hand. (Please read my other blog, "5 Major Factors to Consider When Choosing a Robot", to get a better idea of how to choose a robot.)
Case 1: Let's pick an application for a vacuum gripper: a bottle pack-out machine. The bottles have flat caps and are fairly easy to seal against with almost any vacuum cup of the right size. Everything should go smoothly, right? That's where it gets dangerous. Since the bottles are heavy and the boxes they go into are tight, cup selection matters: cups without bellows carry a higher risk of the bottles peeling off, while cups with bellows introduce flexibility under the weight of the bottles and cost you a lot of the robot's positional accuracy. If the pack-out boxes are too tight, the robot selection might not really matter when your bottles are drooping because of the wrong vacuum cup selection.
Case 2: Let me give you a different application. A flexible feeding solution: a Flexibowl with a SCARA robot, Intelligent Actuator's really fast IXA. My go-to candy during quarantine, Starbursts, as the parts (I like the pink ones the best). Let's go with a parallel gripper to pick up these parts and place them in a stationary nest. The parallel gripper was picked for this application because it was the cost-effective solution. Now, let's think about the application. A camera looks at the bowl and finds a Starburst that is available to pick. The camera locates the candy and gives the coordinates to the robot. Sounds simple; it works for a couple of minutes and then the customer starts seeing crushed Starbursts. It turns out the "available Starburst" pattern doesn't actually check whether there are any candies around it. So when candies end up next to each other, the fingers of the parallel gripper crush the candy next to the one the robot is picking. The parallel gripper was picked for cost savings; however, the time spent on vision programming made this solution far more expensive than a vacuum pick solution.
Case 3: This time I am going to give an example from the quotation stage of an application: a simple deburring application. The robot will pick up a part from a nest, deburr it, and bring it back to its place. A vacuum gripper might be able to get this application done; however, it is going to make the robot programming a little harder compared to using a 2-finger gripper. In a deburring application, depending on the amount of material that needs to be taken off, a vacuum cup can see a lot of lateral force acting on it, which might result in the part falling off the gripper. Going at a slower pace might help, but now the programmer needs to take this into consideration. On the other hand, with a 2-finger gripper and a high clamping force, the probability of the part coming out of the fingers is much lower.
Case 4: This time we chose the right gripper and selected the OnRobot RG2 for the application. The goal was to save some cost, so the versatile default fingers were used in a very precision-critical application. The robot's repeatability specs are well within what the part requires. Then the customer notices that the Cognex vision system picks up variability in the finished part at the inspection process.
The vision system gets checked, the robot positions get checked, but nothing can be found. It takes 2 days of engineering time to discover that the part was slipping slightly in the fingers of the gripper. What this application needed was a custom set of machined fingers that fit the part exactly, so we could use the full capability of the RG2 gripper.
To sum up, choosing the right end of arm tooling will make a project go smoother and, in some cases, might actually be more important than which robot you use. When a challenging application gets presented to me, I think first about how to handle the part, because if I succeed at that portion of the project, the rest will be easier.
  6. There are 5 major factors I think about when I am specifying a robot for an application: reach, payload, speed, application type/versatility and, lastly, cost.
1. Reach
Reach is one of the most important factors in getting the right size robot for your application. Some applications are as easy as a simple pick and place, where you can measure from the pick position to the place position and verify what kind of reach you need from your robot. However, sometimes you need to reach around certain objects; for example, for a machine tending application, you might need to account for the door of a CNC machine. Or the pick/place locations change dynamically. It gets tricky to measure exactly how much reach you need in these scenarios. In cases like these, I usually use a model of the robot and manipulate it to see if it can actually move around the cell or its workspace. If the robot manufacturer offers a simulator, that is worth considering, because it makes it very easy to see whether that particular robot is going to work for your application.
2. Payload
Payload is the second most important factor. If the robot can't carry the parts you are handling, it is not going to work. Payload calculations sometimes get trickier than the reach calculations. It is a common mistake to forget to include the gripper's weight in these payload calculations. And even that's not enough: most robot manufacturers specify their robot's payload exactly at the end of its last joint, so you need to find the graph in their manual that shows the payload capacity at a distance from the end of the arm. Finding that graph is often harder than calculating the load. (A small payload-check sketch is included at the end of this post.)
3. Speed
In most applications, the robot is servicing another machine or is limited by other equipment down the manufacturing line, so the robot's speed is dictated by the cycle time of those bottlenecks. Let's say we are in a bottling plant and every bottle takes about 5 seconds to get filled, labeled and inspected. Every 5 seconds, a bottle will be ready for the robot to pick up. Our robot needs to be able to complete its task in under 5 seconds and be ready for the next one. This is where we can start being clever. 5 seconds is not too fast for industrial robots, but for cobots, it definitely stretches their "safe" speeds. Now we should ask the question: can we handle more than one bottle at a time? If we can handle 6 bottles at a time and still do the task that's required, our cycle time is now about 30 seconds. It is definitely smart to explore handling more than 1 part to relax the cycle time, but we still need to stay under the payload of the robot. Also, handling multiple parts is usually harder than handling only 1. After nailing down the details and understanding how many parts to handle, the best way to verify is to do a real cycle-time test with the specific robot. If the robot manufacturer offers a simulator, that can also be a good way to estimate the cycle time. Speeds listed on spec sheets are good for getting an overall feel, but you really should test the robot if your application is speed critical.
4. Application Type and Versatility
It is important to consider all types of robots for the application at first, but some application types just require different robots.
For a very delicate and precise quality inspection you might need a robot with high precision, and if you need to re-orient your part, you will want an articulated robot rather than a SCARA. You also need to consider whether the specified robot can handle other applications in the future. This is one of the reasons collaborative robots are so popular these days: they are easy to deploy and versatile.
5. Cost
Cost is obviously a very strong factor in the decision-making process. The key to justifying the cost of the robot is understanding the cost reduction it will introduce and its return on investment. Obviously the return on investment comes sooner if the robot is cheaper; however, you should get the robot that is suitable for the job and also satisfies the other factors listed.
These are the 5 major factors I personally consider when I am picking a robot. In some scenarios, one factor might be more important than the others, but everything starts with understanding the application requirements. Please reach out to Gibson Engineering if you have an application and we can choose the robot that fits your needs together!
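Here is the payload-check sketch mentioned above: a minimal calculation with made-up illustrative numbers (not any real robot's specs) that adds the gripper and part masses and compares the total mass and the static moment at the flange against assumed datasheet limits. The derating values for a real project must come from the manufacturer's payload diagram.

    # All values below are illustrative assumptions, not specs for any real robot.
    GRIPPER_MASS_KG = 1.1          # gripper weight - easy to forget
    PART_MASS_KG = 0.4             # one bottle
    PARTS_PER_CYCLE = 6            # handling multiple parts to relax cycle time
    CG_OFFSET_M = 0.12             # distance from the flange to the combined center of gravity

    ROBOT_PAYLOAD_KG = 5.0               # rated payload at the flange (from the datasheet)
    MAX_MOMENT_AT_FLANGE_NM = 8.0        # allowable static moment (from the payload diagram)

    total_mass = GRIPPER_MASS_KG + PARTS_PER_CYCLE * PART_MASS_KG
    moment = total_mass * 9.81 * CG_OFFSET_M   # static moment about the flange, in N*m

    print(f"Total mass on the arm: {total_mass:.2f} kg (limit {ROBOT_PAYLOAD_KG} kg)")
    print(f"Static moment at flange: {moment:.2f} N*m (limit {MAX_MOMENT_AT_FLANGE_NM} N*m)")

    if total_mass > ROBOT_PAYLOAD_KG or moment > MAX_MOMENT_AT_FLANGE_NM:
        print("Over the limit - carry fewer parts per cycle or pick a bigger robot.")
    else:
        print("Within limits on paper - still verify against the payload graph in the manual.")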
  7. I've been supporting the implementation of a Kassow KR1410 collaborative robot for a pack-out application, and I wanted to share some tips on integrating a collaborative robot based on my experience.
1. Fully understand the system requirements
I cannot emphasize enough how important it is to understand the system requirements. If there is a machine that already does a similar job, it is also very important to study that machine and fully understand the process. Once I'm confident in the requirements, I try to come up with a plan or a sequence in my mind for how I should program the robot, which brings me to the second step.
2. Make a sequence of operations document
Write down the sequence of operations. This is different from how the machine is going to work; this is how the components of the machine are going to communicate. Take a look at the snippet below. This sequence can be tweaked when actually programming the machine. Often there are multiple programmers working on a machine, so it is good to have a document where, for example, the PLC person understands what the robot person is going to do in their program, and vice versa. As you can see in the snippet above, feel free to assign the I/O and registers that define how the communications are going to work.
3. Write pseudocode
For those who don't know what pseudocode is, it's just a simplified, plain-language version of the program. This step cuts down the time spent on the machine by a lot. I usually have most of the program already written out before I even start working with the cobot. Take a look at the code below; it's mostly comments, but now I know exactly what I need to program.
4. Work with the cobot and machine
With the pseudocode in hand, I go to the machine and start by setting up all my communications: Modbus, TCP/IP, checking digital I/O, etc. Next, I start teaching positions and building the sequence step by step.
5. Test
I am not going to call this the last step, because you will be more successful if you have already been testing along with the rest of the steps above. Whenever I feel like a part of the program is done, I test to make sure that part works. This works well when you've written your code in a modular way. For example, the main program that I run on the robot consists of only a few lines. I usually have two loops: one that runs once (my initialization loop), and one that holds my main program and always runs. In the initialization loop, I make sure all my variables are reset, I read all the sensors tied in, and I make sure I am aware of everything the robot should know about. This happens only once in the program. My main loop is most often a "while (1)" loop. This is an infinite loop that never ends unless I call a "break" from inside it. The break keeps me from getting stuck anywhere inside my program, so I never leave the robot in a bad state. For example, I always try to go back to a home or safe position when a task is complete. One situation where I make this loop depend on a variable instead is when another controller, a PC or a PLC, tells the robot what to do. (A rough sketch of this program structure follows below.) Going back to testing, (it is really hard, but) make sure to try to account for all the cases that can happen. This could be safety related, operation related or even part related. Testing all the "what-if" scenarios can be the most time consuming, but the more you test, the lower your risk of unexpected problems later. I believe that everything seems very complicated until you break it down into pieces and tackle them one by one.
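Since the original pseudocode snippet isn't reproduced here, below is a rough, runnable Python sketch of the initialization-loop-plus-main-loop structure described above. The robot calls are stand-ins that just print (not a real robot or Kassow API), and the stop condition is a placeholder for a signal from a PLC or PC.

    # A minimal, runnable sketch of the program structure described above.
    # move_to(), read_part_ready_sensor() etc. are stand-ins, not a real robot API.
    import random

    def move_to(position_name):
        print(f"moving to {position_name}")

    def read_part_ready_sensor():
        # Stand-in for a digital input; randomly reports a part every so often.
        return random.random() < 0.3

    def do_pack_out_task():
        move_to("pick")
        move_to("place")

    def initialize():
        # Runs exactly once: reset variables, read sensors, start from a known state.
        print("resetting variables and reading sensors")
        move_to("home")

    def main():
        initialize()
        cycles = 0
        while True:                      # the "while (1)" main loop
            if read_part_ready_sensor():
                do_pack_out_task()
                move_to("home")          # finish each task in a known, safe position
                cycles += 1
            if cycles >= 3:              # stand-in for a stop request from a PLC or PC
                break                    # the only way out of the loop

    if __name__ == "__main__":
        main()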
With these 5 steps, hopefully it will be easier for you to implement a collaborative robot. If it still seems confusing or you need any help following these general steps, please reach out to us at https://www.gibsonengineering.com/ or leave a comment below!
  8. Let's say you are trying to implement a pick and place application with your robot. Industrial robots are amazing at going exactly where they are told to go. But what if the place we tell them to go changes constantly and we don't know where the part is going to be the next time around? That's when we use machine vision to guide our robot to the right pick location. The general idea is that a vision system looks at the potential pick locations and tells the robot where to go to pick up the next part.
I'm sure a lot of you would agree that communication is the key to success. That is no different in this case. If two people are speaking different languages, that conversation is not going to work well. In digital cameras, there is a sensor that collects light from the outside world and converts it into electricity. The sensor has "points" on it (you can think of them as a grid) called pixels. The images we obtain from the camera are represented in these pixels. Robots, on the other hand, have coordinate systems, usually expressed in meters or millimeters. Two different languages...
The whole magic is knowing where the camera is located relative to the robot's end of arm tooling. The camera can be mounted either on the end of arm tooling or at a stationary location. If the camera is mounted on the end of arm, we need to know the location of the camera relative to our gripper. At this stage, we only need to know the relative location in two dimensions; the third dimension we can usually control. For example, if the camera is mounted 3 inches in X and 1.5 inches in Y away from the end of arm tooling, and our part is in the middle of the camera's field of view, the robot needs to move -3 inches in X and -1.5 inches in Y (in its end of arm tooling's coordinate system) to be able to grab the part. Wait a second, how about Z? In the robot program, I always have a set location to take my picture before the pick, so I know how far the robot's end of arm tool is from my parts in terms of Z.
But what if the part is not in the center of the camera's field of view? The camera needs to report the part's location to the robot somehow, right? Yes, and that's where calibration comes into play. The basic idea is to calibrate the camera's pixel readings into robot coordinates. Since most of the time the robots work in real-world coordinates (mm or m), you can use the built-in calibration function of your camera software if it has one. These routines usually require some type of grid with known-size squares or circles so the camera can do the math to convert its pixels to real-world coordinates. I usually don't do that: I would have to make sure my axes are aligned perfectly and would have to take lens distortion into consideration somehow. Instead, I make a randomly marked paper or plate (see the plate below with holes in it) and use that as the calibration grid. Here are some simple steps and tips to get the calibration process working:
1. Jog the robot to the location where the picture is going to be taken and take a picture. Make sure all the markers on the plate are visible in the image and that the top of the plate is the same distance from the camera as the top of the part you are eventually going to pick up.
2. Note the locations of all the markers in pixels from the camera software. You will need to implement some tools in the vision system to get location data from these markers.
3. Jog the robot to each of these markers and note down the locations in robot coordinates.
4. Match the pixel location of each marker with its robot location and do the math! (A small sketch of this pixel-to-robot fit is included at the end of this post.)
Cognex In-Sight cameras have a built-in vision tool called N-Point calibration that does this very easily. You select the markers on the plate and write down the corresponding robot coordinates in a table. The software takes care of the rest, and your location tools (like PatMax) will now report back in robot coordinates. The process is almost the same when the camera is mounted at a stationary location; you just need to know where the camera is located relative to the robot. The only other thing to be careful about is that the camera needs to be far enough away from the pick area that the robot arm can swing in and grab the part without hitting the camera.
Once you understand the basics, a vision-guided pick and place isn't so scary anymore. Does it still scare you? If so, leave a comment below or reach out to us at https://www.gibsonengineering.com/.
Disclaimer: This blog post covers using 2D cameras for a pick and place of the same type of part. Depending on the parts and the hardware used, some additional steps might need to be taken.
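Here is the pixel-to-robot sketch mentioned in step 4: a minimal least-squares affine fit in Python with NumPy, in the same spirit as an N-Point calibration. The marker coordinates below are made-up illustrative numbers; a real calibration would use the pixel and robot values you noted down, with at least three non-collinear markers.

    import numpy as np

    # Made-up example data: the same 4 markers seen by the camera (pixels)
    # and touched off with the robot (millimeters).
    pixel_pts = np.array([[102.0,  88.0],
                          [540.0,  95.0],
                          [530.0, 410.0],
                          [115.0, 402.0]])
    robot_pts = np.array([[250.0, -120.0],
                          [250.0,  130.0],
                          [430.0,  125.0],
                          [432.0, -118.0]])

    # Fit an affine map  [x_r, y_r] = [u, v, 1] @ A  by least squares.
    # One 3x2 matrix absorbs scale, rotation, translation and a bit of skew.
    ones = np.ones((len(pixel_pts), 1))
    P = np.hstack([pixel_pts, ones])                                # N x 3
    A, residuals, _, _ = np.linalg.lstsq(P, robot_pts, rcond=None)  # 3 x 2

    def pixel_to_robot(u, v):
        """Convert a camera pixel location to robot coordinates (mm)."""
        return np.array([u, v, 1.0]) @ A

    print("found part at pixel (320, 240) ->", pixel_to_robot(320, 240), "mm")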