Showing results for tags 'machine vision'.

Found 7 results

  1. Version 1.0.0

    4 downloads

Want to learn how to set up a vision-guided robot application? Attend this powerful session to learn how to use Mitsubishi's RT Toolbox3 software and Cognex's In-Sight Explorer software to easily simulate a vision-guided robot project. Perfect for validating an application's proof of concept without hardware or while working remotely! Learn the steps to make this connection, see how easy these projects are to simulate, and walk away with sample code and a project that you can start using today!
  2. Expiration dates, lot codes, and other important text appear on nearly all consumer products. Federal regulations require that food- and medical-related items carry this information and that it be easily read by the end customer, so manufacturers are responsible for making sure these texts are present and accurate.

    Traditional optical character recognition (OCR) tool sets use a combination of image filters and pattern matching to determine which character is being read. A large set of tool parameters can be adjusted to help decide whether what the camera sees is a character or just a mess of like-colored pixels. An application using a traditional OCR tool works great when the text is clearly printed and very consistent. But what happens when the printing surface is uneven (color, texture, reflectivity) or perspective distortion pushes the shape and/or proportions of a character outside the nominal range set in the parameters? The toolset struggles to properly segment the characters and is confused about what some of them are. A badly scratched or smudged 'R' can even be reported by the OCR tool as two separate characters.

    Enter deep learning OCR tools. ViDiRead from Cognex leverages deep learning algorithms to decipher badly deformed, skewed, and poorly etched codes. The In-Sight ViDiRead tool works right out of the box thanks to pre-trained font libraries, which dramatically reduce development time: simply define the region of interest (ROI) and set the character size. When new characters are introduced, you capture a handful of images, label the unknown character, and click train. Trained on the same first image from before, the ViDiRead tool reads as expected, and a close-up of the last two characters shows nicely formed characters that the tool has no trouble decoding. Even on the damaged label, where the 'R' is poorly printed due to scratching and smudging, ViDiRead reads it without any problem.

    Traditional OCR tools are great in many applications, but some texts are simply too difficult for them to read. When all others have failed, ViDiRead will succeed.
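    For readers who want to experiment with the "traditional" side of this comparison, here is a rough sketch of a classic threshold-and-read OCR pipeline. It uses open-source tools (OpenCV and Tesseract) rather than the Cognex toolset described above, which is configured graphically in In-Sight Explorer rather than in code; the file name and page-segmentation setting below are assumptions for illustration only.

        import cv2
        import pytesseract

        # Hypothetical label image; substitute your own capture.
        img = cv2.imread("date_code.png")
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

        # Segment characters from the background. This is the step that struggles
        # when the surface is uneven or the print is scratched or smudged.
        _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

        # Read the segmented region as a single line of text.
        text = pytesseract.image_to_string(binary, config="--psm 7")
        print(text.strip())

    On a clean, consistent print a pipeline like this reads reliably; on the damaged label discussed above, the segmentation step is exactly where it falls apart, which is the gap the deep learning tool is meant to close.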
  3. Version 1.0.0

    0 downloads

We have all been encountering the buzzwords "AI", "Deep Learning", and so on. Now it's reality with Cognex's new In-Sight D900 system with deep learning technology. Check out this video to see why we call it revolutionary!
  4. Version 1.0.0

    0 downloads

Proper lighting and lensing are critical to the success of any machine vision application. Please join us for a quick presentation and learn how to pick the right light and lens for your upcoming machine vision project. Steven Pereira from MORITEX will show how contrast, resolution, repeatability, and accuracy play a part in forming an optimal image.
  5. Version 1.0.0

    0 downloads

Please join us for a short technical webinar to learn about the different 3D technologies available for machine vision and which applications they're best suited for. See first-hand the benefits of the different technologies used in Cognex's 3D inspection portfolio, complete with application-specific examples and live demonstrations!
  6. In the 2D industrial imaging realm, I am going to split cameras into two groups: monochrome and color. They have different applications, and we usually see monochrome cameras used in the automation industry and in machine vision applications. The basic chain for turning light into a digital file applies to industrial cameras just as it does to any other digital camera: light hits the subject (no light, no image), a lens collects the light and focuses it onto a sensor, and the sensor converts it into electrical signals. Let's dive into the different parts of this process.

    Lensing

    Why do you need a lens to get an image? It is a given that our cell phone cameras have lenses on them, and some industrial cameras come with autofocus lenses, but do we really need a lens to get an image? Actually, you don't. You can get an image without a lens; it just might not look great. If you have an interchangeable-lens DSLR or a removable-lens industrial camera lying around, take off the lens (please be careful with your image sensor...). Take a needle, make a hole in one of your business cards, and put the business card where the lens would go. You will find that you can actually get an image this way. Crazy! So the job of a lens is to collect light and redirect it to the image sensor. You can accomplish the same function with a pinhole (this is called a pinhole camera); it's just not very effective.

    Image Sensor/Imager

    The image sensor is where most of the magic happens. Manufacturers mostly use two different sensor technologies: 1) CMOS (complementary metal-oxide-semiconductor) and 2) CCD (charge-coupled device). I am not going to dive deep into these technologies, but CMOS is what is used most today because of its cost and the speed at which it can read image data off the sensor. Briefly, a CMOS sensor is covered with tiny light-sensitive elements, each read out by its own transistors, that output a voltage depending on the amount of light they receive. On a monochrome camera, the light collected by the lens hits each pixel on the sensor, the circuitry behind it outputs a voltage proportional to the brightness, and you end up with an array of voltage levels that represents a grayscale image.

    Color Filters and Color Cameras

    There are mainly two hardware approaches manufacturers use to get color images from their image sensors, and the key component is the filter in front of the sensor. The phenomenon to understand here is that when you look through a red film, you see red; a monochrome camera looking through a red film would instead see red light as bright (white).

    The first, more expensive way of getting a color image is to use three separate image sensors: one to capture red, one to capture green, and one to capture blue. The software then combines them into an RGB image. This gets complicated, because the three imagers have to be coordinated and kept aligned, it roughly triples the sensor cost, and it requires bigger cameras to fit all the sensors and circuitry.

    The second way is to place a Bayer filter in front of a single image sensor. In a Bayer filter there are far more green pixels than blue or red (50% green, 25% blue, and 25% red, to be exact). This is because the human eye is most sensitive to green; the result simply looks better to us.
    As anyone who has worked with industrial cameras for some time knows, what looks good to the human eye might not be the best image for inspection purposes. Another disadvantage is that you can roughly think of a Bayer camera as losing about a third of its resolution: each pixel samples only one color channel, and the missing channels have to be interpolated from neighboring pixels. In machine vision, and specifically in measurement applications, camera resolution is critical for hitting the right tolerances, so you may find yourself paying for a more expensive, higher-resolution color camera to reach the same tolerances as a monochrome camera.

    To sum up, the image acquisition process boils down to a few hardware steps: light hits the subject, a lens collects the light, and, depending on the type of sensor or the filter in front of it, the light hitting the pixels is converted into electrical signals. The rest is software.

    Image resources:
    https://en.wikipedia.org/wiki/Bayer_filter
    https://www.loveyourlens.co.uk/which_entry_level_digital_camera/digital-camera-sensor/
    https://meroli.web.cern.ch/lecture_cmos_vs_ccd_pixel_sensor.html
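    To make the Bayer sampling argument concrete, here is a small NumPy sketch, assuming the common RGGB layout (actual sensors vary), showing that a color sensor records only one color channel per pixel while a monochrome sensor records brightness at every pixel. It is illustrative only, not how any particular camera's firmware works.

        import numpy as np

        def bayer_mosaic(rgb):
            # Simulate an RGGB Bayer sensor: each pixel keeps only one color channel.
            h, w, _ = rgb.shape
            raw = np.zeros((h, w), dtype=rgb.dtype)
            raw[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red   (25% of pixels)
            raw[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green (50% of pixels)
            raw[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green
            raw[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue  (25% of pixels)
            return raw

        def monochrome(rgb):
            # A monochrome sensor samples brightness at every pixel, keeping full resolution.
            return rgb.mean(axis=2).astype(rgb.dtype)

        # Tiny synthetic 4x4 "scene" just to show the sampling difference.
        scene = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
        print("Bayer raw (one channel per pixel):\n", bayer_mosaic(scene))
        print("Monochrome (brightness at every pixel):\n", monochrome(scene))

    The two missing channels at every Bayer pixel must then be interpolated (demosaiced) from neighbors, which is exactly the resolution penalty described above.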
  7. Let's say you are trying to implement a pick-and-place application with your robot. Industrial robots are amazing at going exactly where they are told to go. But what if that place changes constantly and we don't know where the part is going to be next time around? That's when we use machine vision to guide the robot to the right pick location. The general idea is that a vision system looks at the potential pick locations and tells the robot where to go to pick up the next part.

    I'm sure a lot of you would agree that communication is the key to success, and that is no different here. If two people are speaking different languages, the conversation is not going to go well. In a digital camera, a sensor collects light from the outside world and converts it into electricity. The sensor has "points" on it (you can think of them as a grid) called pixels, and the images we obtain from the camera are represented in those pixels. Robots, on the other hand, have coordinate systems, usually expressed in meters or millimeters. Two different languages...

    The whole magic is knowing where the camera is located relative to the robot's end-of-arm tooling. The camera can be mounted either:

    • on the end-of-arm tooling, or
    • at a stationary location.

    If the camera is mounted on the end of arm, we need to know the location of the camera relative to our gripper. At this stage we only need the relative location in two dimensions; the third dimension we can usually control. For example, if the camera is mounted 3 inches in X and 1.5 inches in Y away from the end-of-arm tooling, and our part is in the middle of the camera's field of view, the robot needs to move -3 inches in X and -1.5 inches in Y (in its end-of-arm tooling's coordinate system) to grab the part. Wait a second, what about Z? In the robot program I always have a set location to take my picture before the pick, so I know how far the robot end-of-arm tool is from my parts in Z.

    But what if the part is not in the center of the camera's field of view? The camera needs to report the part's location to the robot somehow, right? Yes, and that's where calibration comes into play. The basic idea is to calibrate the camera's pixel readings into robot coordinates. Since robots usually work in real-world coordinates (mm or m), you can use the built-in calibration function in your camera software if it has one. These routines usually require some type of grid with squares or circles of known size so the camera can do the math to convert its pixels to real-world coordinates. I usually don't do that: I would have to make sure my axes are aligned perfectly and would have to account for lens distortion somehow. Instead, I make a randomly marked paper or plate (for example, a plate with holes drilled in it) and use that as the calibration grid.

    Here are some simple steps and tips to get the calibration process working:

    1. Jog the robot to the location where the picture will be taken and take a picture. Make sure all the markers on the plate are visible in the image and that the top of the plate is the same distance from the camera as the top of the part you will eventually pick up.
    2. Note the pixel locations of all the markers from the camera software. You will need some tools in the vision system to get location data from these markers.
    3. Jog the robot to each of these markers and note down the locations in robot coordinates.
    4. Match each marker's pixel location to its robot location and do the math! Cognex In-Sight cameras have a built-in vision tool called N-Point Calibration that makes this very easy: you select the markers on the plate and enter the corresponding robot coordinates in a table. The software takes care of the rest, and your location tools (like PatMax) will now report back in robot coordinates.

    The process is almost the same when the camera is mounted in a stationary location; you just need to know where the camera is located relative to the arm. The only other thing to be careful about is that the camera needs to be far enough from the pick area that the robot arm can swing in and grab the part without hitting the camera.

    Once you understand the basics, vision-guided pick and place isn't so scary anymore. Does it still scare you? If so, leave a comment below or reach out to us at https://www.gibsonengineering.com/.

    Disclaimer: This blog post covers using 2D cameras for pick and place of the same type of part. Depending on the parts and the hardware used, some additional steps might be needed.
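    To show what "do the math" can look like in practice, here is a minimal sketch of fitting a pixel-to-robot mapping from matched marker points, done generically in NumPy rather than with the N-Point Calibration tool mentioned above (which handles this inside the camera). The marker coordinates are made-up example numbers; a real calibration would use the values recorded from your own camera and robot.

        import numpy as np

        # Marker locations in camera pixels (u, v), as reported by the vision tools.
        pixels = np.array([
            [120.0,  80.0],
            [640.0,  95.0],
            [610.0, 420.0],
            [150.0, 400.0],
        ])

        # The same markers in robot coordinates (mm), noted while jogging the robot.
        robot = np.array([
            [250.0, -130.0],
            [410.0, -125.0],
            [400.0,   -5.0],
            [260.0,  -10.0],
        ])

        # Fit an affine map [X, Y] = [u, v, 1] @ A by least squares. The 3x2 matrix A
        # absorbs scale (mm per pixel), rotation, and offset in one step.
        ones = np.ones((len(pixels), 1))
        A, *_ = np.linalg.lstsq(np.hstack([pixels, ones]), robot, rcond=None)

        def pixel_to_robot(u, v):
            # Convert a camera pixel location into robot X/Y coordinates.
            return np.array([u, v, 1.0]) @ A

        # Example: a part found by the camera at pixel (380, 250).
        print(pixel_to_robot(380.0, 250.0))

    A simple affine fit like this does not correct lens distortion, which is one reason to use more markers spread across the field of view, or a proper calibration tool, when tighter tolerances are required.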