
How do machine vision cameras work?


Batu Sipka


In the 2D industrial imaging realm, I am going to split cameras into two groups: monochrome and color. They serve different applications, and we usually see monochrome cameras used in the automation industry and machine vision applications.

Take a look at the infographic below for how light is turned into a digital file. Industrial cameras are digital cameras, so this applies to them too.

[Image: digital camera sensor diagram]

Light hits the subject (no light, no image), a lens collects that light and focuses it onto a sensor, and the sensor converts the light into electrical signals. Let’s dive into the different parts of this process.

Lensing

Why do you need a lens to get an image? We take it for granted that our cell phone cameras have lenses, and some of our industrial cameras come with autofocus lenses, but do you really need a lens to get an image? Actually, you don’t. You can get an image without a lens; it just might not look great.

If you have an interchangeable-lens DSLR or a removable-lens industrial camera sitting around, take off the lens (please be careful with your image sensor…). Take a needle and make a hole in one of your business cards. Put the business card where the lens would go, and you will find that you can actually get an image this way. Crazy! So the job of a lens is to collect light and redirect it to the image sensor. You can accomplish the same function with a pinhole (this is called a pinhole camera); it’s just not very effective.

Image Sensor/Imager

The image sensor is where most of the magic happens. Manufacturers mostly use two different image sensor technologies:

1) CMOS (Complementary metal–oxide–semiconductor)

2) CCD (Charge-coupled device)

I am not going to dive deep into these technologies, but CMOS is what is used most today, mainly because of its cost and the speed at which it can read information from the world. Briefly, each pixel on a CMOS sensor contains a tiny light-sensitive element (a photodiode, read out through transistors) that produces a voltage depending on the amount of light it receives. On a monochrome camera, the light collected by the lens hits each pixel on the sensor, each pixel produces a voltage proportional to the brightness, and you end up with an array of voltage levels that represents a grayscale image.
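The voltage-to-grayscale step above can be sketched in a few lines. This is a hedged illustration, not any vendor's actual readout pipeline: the `v_max` supply voltage and the 2x2 "sensor" values are made-up assumptions, and the mapping stands in for the analog-to-digital converter that quantizes each pixel's voltage into an 8-bit gray level.

```python
import numpy as np

# Hypothetical illustration: each pixel produces a voltage proportional
# to the light it receives; an ADC quantizes that voltage into a digital
# grayscale value (here 8-bit, 0-255). v_max is an assumed full-scale voltage.
def voltages_to_grayscale(voltages, v_max=3.3):
    """Map per-pixel voltages (0..v_max volts) to 8-bit gray levels."""
    v = np.clip(np.asarray(voltages, dtype=float), 0.0, v_max)
    return np.round(v / v_max * 255).astype(np.uint8)

# A tiny made-up 2x2 "sensor": dark, mid, bright, and saturated pixels.
sensor = [[0.0, 1.65], [2.475, 3.3]]
print(voltages_to_grayscale(sensor))  # [[  0 128] [191 255]]
```

The resulting array of gray levels is exactly the grayscale image a monochrome camera hands to software.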

Color Filters and Color Cameras

Mainly, there are two types of hardware technology manufacturers use to get color images from their image sensors. The main component is the filter in front of the sensor. The phenomenon to understand here is that when you look through a red film, everything you see is tinted red. Similarly, if you were a monochrome camera looking through a red filter, red light would pass through and show up as bright (white), while other colors would be blocked and appear dark.

The first, more expensive way of getting a color image is to have three different image sensors: one to capture red, one to capture green, and one for blue. Then, in software, you can combine them into an RGB image. This gets very complicated: coordinating three different imagers, triple the cost because of the hardware, and bigger camera bodies to fit all the sensors and circuitry.
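The software side of the three-sensor approach is just channel stacking. As a minimal sketch (the 2x2 arrays are made-up captures, not real sensor data), each sensor delivers one full-resolution grayscale channel and the three are stacked into one RGB image:

```python
import numpy as np

# Made-up 2x2 grayscale captures, one per sensor (assumed values).
red   = np.array([[255, 0], [0, 0]], dtype=np.uint8)
green = np.array([[0, 255], [0, 0]], dtype=np.uint8)
blue  = np.array([[0, 0], [255, 0]], dtype=np.uint8)

# Stack the three channels along a new last axis: shape (H, W, 3).
rgb = np.dstack([red, green, blue])
print(rgb.shape)   # (2, 2, 3)
print(rgb[0, 0])   # [255   0   0] -> a pure red pixel
```

Note that every pixel here has a true measurement for all three channels, which is exactly what the Bayer-filter approach below gives up.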

The second way of getting a color image is to place a Bayer filter in front of the image sensor. You can see a Bayer filter pattern in the picture below:

[Image: Bayer pattern on sensor]

It is important to notice that there are far more green pixels than blue or red (50% green, 25% blue, and 25% red, to be exact). This is because the human eye is most sensitive to green light, so weighting the filter toward green makes images look better to us. And if you have been working with industrial cameras for some time, you know that what looks good to the human eye might not actually be the best image for inspection purposes.
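Those 50/25/25 proportions fall directly out of the repeating 2x2 Bayer tile. As a sketch (assuming the common RGGB arrangement; other orderings like GRBG exist), tiling the pattern over a sensor and counting filter sites reproduces the percentages:

```python
import numpy as np

# The standard RGGB Bayer tile: each 2x2 block holds one red, two green,
# and one blue filter site (RGGB ordering assumed for illustration).
def bayer_mask(h, w):
    """Return an (h, w) array of 'R'/'G'/'B' labels, RGGB tiling."""
    tile = np.array([['R', 'G'], ['G', 'B']])
    return np.tile(tile, (h // 2, w // 2))

mask = bayer_mask(4, 4)
total = mask.size
print((mask == 'G').sum() / total)  # 0.5  -> 50% green
print((mask == 'R').sum() / total)  # 0.25 -> 25% red
print((mask == 'B').sum() / total)  # 0.25 -> 25% blue
```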

Another disadvantage is that, roughly speaking, each color channel is sampled at only a fraction of the sensor's full resolution, and the missing values have to be interpolated (a process called demosaicing). In machine vision, specifically in measuring applications, the resolution of the camera is critical to hitting the right tolerances. So you may find yourself paying for a more expensive, higher-resolution color camera to get the same tolerances as a monochrome camera.
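To make the interpolation cost concrete, here is a hedged sketch of naive bilinear demosaicing for the green channel only (real cameras use more sophisticated algorithms). Green values at red/blue sites are estimated as the mean of the measured green neighbors; those estimates are guesses, not real measurements, which is the resolution loss described above. The 4x4 mosaic and the flat value 100 are made-up test data.

```python
import numpy as np

def demosaic_green(raw, green_mask):
    """Fill green values at non-green sites by averaging green neighbors."""
    out = raw.astype(float).copy()
    h, w = raw.shape
    for y in range(h):
        for x in range(w):
            if not green_mask[y, x]:
                # Collect measured green values from the 4-neighborhood.
                neighbors = []
                for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and green_mask[ny, nx]:
                        neighbors.append(raw[ny, nx])
                out[y, x] = np.mean(neighbors)
    return out

# 4x4 RGGB mosaic: green sites measure 100, red/blue sites measure nothing.
tile = np.array([[False, True], [True, False]])
gmask = np.tile(tile, (2, 2))
raw = np.where(gmask, 100, 0)
print(demosaic_green(raw, gmask))  # every pixel becomes 100.0
```

On this flat scene the interpolation is exact, but on real edges and fine detail the estimated pixels blur what a monochrome sensor would have measured directly.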

To sum up, the hardware side of image acquisition comes down to a few steps: light hits the subject, a lens collects and focuses the light, the light passes through any filter on the sensor and hits the pixels, and the sensor converts the light into electrical signals. The rest is software.

 

Image Resources

https://en.wikipedia.org/wiki/Bayer_filter

https://www.loveyourlens.co.uk/which_entry_level_digital_camera/digital-camera-sensor/

https://meroli.web.cern.ch/lecture_cmos_vs_ccd_pixel_sensor.html
