https://www.automationdirect.com/vision-sensors?utm_source=n9BedCtoz78&utm_medium=VideoTeamDescription
(VID-VIS-0003)
The Datalogic Smart Vision Sensor offers a great option for vision pass-fail evaluation at a price that is hard to believe. Part 1 in this series covers the initial setup and configuration of your sensor. Part 2 dives into a real-world, high-speed application of the camera to evaluate good and bad parts! See Part 1 here: https://youtu.be/dLyj5qnpRAc
Vision System PLC Project files: https://cdn.automationdirect.com/static/video-resources/VID-VIS-0002%20Smart%20Vision%20Sensor/VisionSystemProject.adpro
https://cdn.automationdirect.com/static/video-resources/VID-VIS-0002%20Smart%20Vision%20Sensor/VisionSystemProject_Basic.csv
https://cdn.automationdirect.com/static/video-resources/VID-VIS-0002%20Smart%20Vision%20Sensor/VisionSystemProject_Extended.csv
Online Support Page: https://community.automationdirect.com/s/?utm_source=n9BedCtoz78&utm_medium=VideoTeamDescription
**Please check our website for our most up-to-date product pricing and availability.
In this video, we are going to be doing some work with the Datalogic Smart Vision Sensor from AutomationDirect. To give you a brief overview of my setup, I have a belt and pulley system driven by an LS Electric L7P servo. The servo runs on a pulse train from a Productivity PS-AMC module connected to a P1-550 PLC. The Datalogic Smart Vision Sensor is connected to my PLC via I/O and to my computer via an Ethernet connection. This video is the second in a two-part series on setting up the Datalogic Smart Vision Sensor. To view the first video, click here. As we discussed in Part 1, the browser-based user interface is the best way to configure this sensor because it is the only way to see what the camera is seeing.

This camera is capable of a 50 ms evaluation and response time. That's 20 evaluations every second. I have written a short PLC program to index the conveyor belt one spot, trigger a camera read, wait for a return signal, and then record whether the part is good or bad. It then begins the next cycle. I have set it up so it cycles 10 times per test. I have also given the PLC a slow and a fast speed option. For the fast speed option, I have been able to get it to around 10 indexes per second with this simple setup. With the correct motion hardware, more tuning time, and optimized programming, we could probably get this closer to 15 indexes per second, but 10 per second is good enough for this example. I also programmed a manual index function, so we can put good and bad parts in front of the camera for teaching, and a return-to-home function, so we can reset for the next cycle.

This camera is a simple image comparison sensor. It does not take a picture and compare it to a single good image against a similarity threshold, which is a common method for other pass-fail cameras; this sensor works differently. It compares the captured image to both the good and the bad taught images simultaneously, then decides good or bad based on which taught image is closer to what it captured. There are no key datum options or customizations beyond this simple good and bad image comparison, so it is very important to teach it every possible process outcome. If you do not, it may see an image unlike anything in its library and will still decide good or bad based on whatever is closest, and what the camera thinks is closest may not match what you think is closest.

Let's look at an example of the importance of teaching the camera every process outcome. In Part 1 of this series, I taught the camera that a yellow clip is good and a white clip is no-good, but I did not address a backwards clip or an empty conveyor. In that video, we also set the camera outputs to push-pull active low and increased the output hold time to 45 milliseconds. Let's return the belt to the home position to queue up the first part. Remember that we have taught the camera that a good part is a yellow clip and a bad part is a white clip, but we haven't taught it anything for a backward part or an empty belt. We want the backward part and the empty belt to be rejected. Let's trigger our cycle of 10 parts. The first three will be good parts, the next two will be bad parts, and the last five should also be bad parts (two are backward clips and three are empty belt positions). Let's go to the monitoring menu in the camera browser interface and watch the camera while we start our 10-part cycle. You can see the images as they come in front of the camera, but you cannot see the image history as it is captured.
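A quick aside before we look at the results: the sketch below outlines the two pieces of logic described above. This is not the actual ladder program (that is in the .adpro project linked in the description) and not the camera's firmware; it is an illustrative Python outline, and all helper names and the similarity measure are hypothetical placeholders.

```python
# Illustrative sketch only: the real cycle logic lives in the linked
# Productivity Suite project, and the classification runs inside the camera.
# Helper names and the similarity measure here are hypothetical placeholders.

def similarity(captured, taught):
    # Placeholder similarity measure: fraction of matching pixel values.
    matches = sum(a == b for a, b in zip(captured, taught))
    return matches / len(taught)

def classify(captured, good_images, bad_images):
    # Camera-side concept: compare the capture against BOTH taught sets and
    # return whichever is closer. There is no "unknown" outcome, which is why
    # every possible process result needs a taught image.
    best_good = max(similarity(captured, img) for img in good_images)
    best_bad = max(similarity(captured, img) for img in bad_images)
    return "good" if best_good >= best_bad else "bad"

def run_test(index_conveyor, camera_read, cycles=10):
    # PLC-side concept: index the belt one spot, trigger a read, wait for the
    # pass/fail return signal, and tally the result for each of the 10 spots.
    good = bad = 0
    for _ in range(cycles):
        index_conveyor()          # move the belt one position
        result = camera_read()    # pulse the trigger and wait for the output
        if result == "good":
            good += 1
        else:
            bad += 1
    return good, bad
```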
To review the camera’s image history, press the pause button in the lower left corner of the screen. We can see that the camera did pass the first three and reject the next two, but when it had no clear comparison image for the reversed parts or the empty belt, it counted those as passing images. This is obviously a problem. To fix it, let's teach the camera again, and this time we will add the backward part and the empty belt as bad images.

To teach it again, we can either delete the current job by pressing the “X” in the upper right corner or just add a new job. Here again, we have options: we can overwrite the existing program, or we can add a new program into one of the empty banks. Because our existing program is bad and we never want to use it accidentally, we will save the new program into Bank 0 over the existing program. Let's call it Camera Test 2. We can see that our camera has no part in front of it, so let's send the belt home and then manually index one position to put our first good part under the camera. Now that we have a good part, let's use the automatic setup to let the camera adjust the exposure settings. That looks pretty good. Now let's teach it this good image, manually index the belt, and teach it a second good image. Next, let's move on to the no-good image teach and index a no-good part in front of the camera. We will teach it our three no-good images: one of a white clip, one of a backward clip, and one of no clip at all. Once we have done that, we can train this into the camera. With the camera training complete, let's send the belt home again.

We are now ready for our second set of 10 cycles. We are still hoping to see 3 good parts and 7 bad parts… And that's exactly what the camera saw. Excellent. It's important to note that this entire setup was done at the slower speed. Let's speed it up to roughly 10 parts per second and run the cycle again. We send the belt home, change the speed setting to fast, and run the cycle. Uh-oh! The results don't match what they should be. To find out why, let's look at our image history. We pause the camera, and if we compare the captured images to the taught images, we see that the increased speed caused the camera to trigger before the belt completed its move. This is because, in our PLC program, the camera is triggered as soon as the pulses from the PLC motion controller are finished, and the servo's pulse-following delay is more noticeable at higher speeds.

We could fix this in one of two ways. If we didn't want to increase our cycle time at all, we could add a positional offset into our teach step so that the clips are in the same position when teaching and when cycling. However, we don't mind a slight slowdown, so let's use one of the camera's features to address this instead. If we go to the I/O settings, we can adjust the delay between when the camera receives the trigger input and when it takes the picture. By default, this delay is 0 milliseconds. Let's set it to 10 milliseconds and see how the sensor behaves. We reset our statistics, move the belt back to home, and run the cycle again. Excellent! This time the camera delayed the capture long enough for the servo to get the clips into the correct position. We can see this if we look at the image history: the images captured during the cycle are very close to the images in the good/no-good library.
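To put rough numbers on that trade-off, here is a back-of-the-envelope timing budget. The per-index period is an approximation from my setup rather than a specification, so treat the arithmetic as illustrative only.

```python
# Rough timing budget at the "fast" setting (illustrative figures only).
index_period_ms  = 100  # ~10 indexes per second
trigger_delay_ms = 10   # delay added in the camera's I/O settings
evaluation_ms    = 50   # camera evaluation/response time quoted earlier

# The capture now happens trigger_delay_ms after the PLC finishes pulsing,
# which gives the servo time to settle before the image is taken.
used_ms = trigger_delay_ms + evaluation_ms
print(f"Time used per index: {used_ms} ms of {index_period_ms} ms")  # 60 of 100 ms
```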
We might have added a few milliseconds to each index, but it is well worth it to get quality evaluation data. In summary, this camera offers a lot of capability for the price point. It isn't the highest-end vision camera on the market, but it will do an excellent job at a very good rate of speed to give you pass-fail recognition for your process. It's up to you to teach it well and give it a reference for every potential outcome. For further questions, access our award-winning technical support here. To see more information on identification products from AutomationDirect, click here. To subscribe to our YouTube channel, click here!
Voted #1 mid-sized employer in Atlanta
Check out our job openings