Mark Guida

Administrators · 12 posts



  1. Version 1.0.0

    3 downloads

Cognex produces a range of Vision Sensors, Vision Systems, 2D Profilers, 3D Profilers, and Deep Learning cameras. Historically, one of the best tools for displaying multiple Vision Systems at the same time was the VisionView system: a hardware VisionView such as the older VisionView 700 or the newer VisionView 900, VisionView CE on a Windows CE panel, or VisionView software running on a PC. The VisionView package is a great solution because it lets you see multiple cameras at the same time, monitoring up to 9 In-Sight cameras, DataMan barcode readers, or even older DVT cameras.

    However, newer cameras such as the profilers and Deep Learning systems do not use the same interface for generating graphics and showing images; instead they use an HTML5 web interface. Starting with In-Sight 5.6.0, WebHMI functionality is also available for In-Sight cameras. For all of these web interfaces, you typically just use any HTML5-capable web browser and enter the camera's IP address and port in the address bar, for example http://192.168.1.100:5555, where 192.168.1.100 is the IP address of the camera and 5555 is the port number specified in the camera's setup. However, that only lets you monitor one camera at a time.

    With the attached file you can monitor up to 4 cameras at a time, and with modification you could do more or fewer. In the file you will see that we use iframes to hold the individual cameras. Near the bottom of the HTML is where you edit the IP addresses and ports; do not change the iframe id here, only the IP address and port (in bold).
    <div class="mainContainer">
      <div></div>
      <div id="cameraGrid" class="iframe-grid">
        <div class="iframe-container">
          <div><iframe id="Camera01" src="http://10.0.0.10:8087"></iframe></div>
        </div>
        <div class="iframe-container">
          <div><iframe id="Camera02" src="http://10.0.0.11:8087"></iframe></div>
        </div>
        <div class="iframe-container">
          <div><iframe id="Camera03" src="http://10.0.0.12:8087"></iframe></div>
        </div>
        <div class="iframe-container">
          <div><iframe id="Cam04" src="http://10.0.0.13:8087"></iframe></div>
        </div>
      </div>
    </div>

    There is also a menu on the side of this web page for switching between views, so you can look at one camera at a time, and you can edit it to name your cameras.

    <div id="leftMenu" class="menuLinks" onmouseover="openMenu()" onmouseleave="closeMenu()">
      <a onclick="minCameras()">&#9974; <span style="font-size: 30px">Grid View</span></a><br>
      <a onclick="maxCamera01()">&#9843; <span style="font-size: 30px">Camera 1</span></a><br>
      <a onclick="maxCamera02()">&#9844; <span style="font-size: 30px">Camera 2</span></a><br>
      <a onclick="maxCamera03()">&#9845; <span style="font-size: 30px">Camera 3</span></a><br>
      <a onclick="maxCamera04()">&#9846; <span style="font-size: 30px">Camera 4</span></a><br>
      <!--<span id="settingsMenu" class="menuSettings"><a onclick="openForm()">&#9881; <span style="font-size: 30px">Settings</span></a></span>-->
    </div>

    This is just an example, provided without further support. I hope it helps.
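The menu links above call handler functions (openMenu, closeMenu, minCameras, maxCamera01 through maxCamera04) that live in the attached file's script and are not shown here. As a rough, unsupported sketch of what the view-switching handlers might do — the selector and the 50%/100% style values are my assumptions, not the file's exact CSS:

```javascript
// Pure helper: given which camera is maximized (0 = grid view,
// 1-4 = that camera shown full size), compute the display/width that
// each of the four .iframe-container elements should receive.
function viewStyles(visibleIndex) {
  return [1, 2, 3, 4].map((cam) => {
    if (visibleIndex === 0) return { display: 'block', width: '50%' };    // 2x2 grid
    if (cam === visibleIndex) return { display: 'block', width: '100%' }; // maximized
    return { display: 'none', width: '50%' };                             // hidden
  });
}

// In the browser, each menu handler applies those styles to the containers:
function setView(visibleIndex) {
  const containers = document.querySelectorAll('#cameraGrid .iframe-container');
  viewStyles(visibleIndex).forEach((style, i) => {
    containers[i].style.display = style.display;
    containers[i].style.width = style.width;
  });
}

const minCameras  = () => setView(0); // restore the grid of all four cameras
const maxCamera01 = () => setView(1);
const maxCamera02 = () => setView(2);
const maxCamera03 = () => setView(3);
const maxCamera04 = () => setView(4);
```

openMenu() and closeMenu() would similarly expand and collapse #leftMenu on hover, for example by changing its width; the attached file's own script remains the authoritative version.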
  2. Expiration dates, lot codes, and other important text appear on all of our consumer products. Federal regulations require that food- and medical-related items carry this information and that it be easily read by the end customer. Consequently, manufacturers are responsible for making sure this text is present and accurate.

    Traditional optical character recognition (OCR) toolsets use a combination of image filters and pattern matching to determine which character is being read. A large set of tool parameters can be adjusted to help decide whether what the camera sees is a character or just a mess of like-colored pixels. Below you can see how an application using a traditional OCR tool works great when the text is clearly printed and very consistent. But what happens when the printing surface is uneven (in color, texture, or reflectivity), or perspective distortion causes the shape and/or proportions of a character to fall outside the nominal range set in the parameters? The toolset struggles to properly segment the characters and is confused about what some of them are. The badly scratched/smudged ‘R’ is even found by the OCR tool as two separate characters.

    Enter deep learning OCR tools. ViDiRead from Cognex leverages deep learning algorithms to decipher badly deformed, skewed, and poorly etched codes. The In-Sight ViDiRead tool works right out of the box thanks to pre-trained font libraries, which dramatically reduce development time: simply define the region of interest (ROI) and set the character size. In situations where new characters are introduced, you capture a handful of images, label the unknown character, and click train. Using the same first image from before, we can train the ViDiRead tool and it reads as expected. A closeup of the last two characters shows nicely formed characters that the tool has no issue decoding.
Now when we use the image of the damaged label, the ViDiRead tool has no problem reading the characters. Even though the ‘R’ is poorly printed due to scratching and smudging, ViDiRead reads it without issue. Traditional OCR tools are great in many applications, but some text is simply too difficult for them to read. When all others have failed, ViDiRead will succeed.
  3. For most of us in the industrial automation field, seeing a vision solution on a machine is commonplace. Some of us have specified them, some of us have programmed them, and some of us only know, “that’s the vision system over there.” With the latter being the most common, it can be difficult to decide between the types of vision solutions. There are typically two types of self-contained options (no separate controller required): vision sensors and vision systems. There is a third type, PC-based vision systems, which is an entirely different animal, and you will know when you need it.

    So why would you want a smart sensor in your application? Smart sensors give great ‘bang for the buck’. They carry a good feature set comparable to low-end smart cameras at a fraction of the cost. Since these sensors do not require the computational horsepower or high-end feature sets of smart cameras, they can use smaller components with lower heat dissipation requirements. This allows them to be packaged in much smaller housings that take up less space in a machine, and their lower weight reduces the payload burden when mounted to robotic end-of-arm tooling. If all you need is to detect the absence or presence of a feature (pattern, edge, simple OCR/OCV, blobs, etc.), then a smart sensor is a great choice. A key differentiator between sensors and systems is that sensors are generally much easier to program: graphical programming environments with slider bars that immediately update the results of an inspection help set up a sensor very quickly.

    If you need more horsepower because of inspection speed requirements, more internal logic capability, higher resolutions (2 MP and up), dimensional measurements, advanced OCR/OCV, 1D/2D code reading, defect detection, etc., then you will need to look to a vision system to solve your application.
There are many different models that will solve a given application, with a price and feature set to match what you need. These vision systems can be programmed to solve just about any vision task thrown at them. The newest systems leverage AI and deep learning to solve applications that previously could only be handled by humans. These full vision systems are typically configured by an integrator or distribution partner trained in setting up vision systems, which adds to the cost of the overall system.

With all the differences, obvious or subtle, between the two types of solutions, several features are common to both when talking about Cognex as a vision solution manufacturer. The programming software (except for the AI and deep learning versions) is the same for both sensors and systems, and both are available with color imagers, autofocus lenses, and integrated lighting/lens filter choices (white, colored, UV, and IR). So whether your application is simple or complex, high speed or slow, or on a tight budget, there is a solution for you. Knowing which is the right solution might just take a closer look.