What lenses to use for a multistereoscopic solution (to 3D point cloud)
Where should I start when it comes to selecting lenses (and cameras) for a multi-stereo solution? I'm looking to observe a 200' x 40' x 40' volume. There will have to be multiple cameras due to occlusion, but I imagine I may also need different lenses, or at least variable focal lengths, to generate accurate point clouds.
I'm not sure what my budget is yet, so what matters more is knowing which characteristics I should be looking for and how to determine them.
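To make the question more concrete, here is the back-of-the-envelope arithmetic I think matters, as a small Python sketch; the focal length, pixel pitch, baseline, and matching accuracy below are placeholder assumptions, not a proposed design:

```python
# Rough stereo depth-resolution estimate (textbook relation dz ~ z^2 * d_disp / (f_px * B)).
# All numbers below are placeholder assumptions, not a proposed design.

SENSOR_PIXEL_PITCH_M = 3.45e-6   # assumed 3.45 um pixels
FOCAL_LENGTH_M = 12e-3           # assumed 12 mm lens
BASELINE_M = 2.0                 # assumed 2 m stereo baseline
DISPARITY_ERROR_PX = 0.25        # assumed quarter-pixel matching accuracy

focal_length_px = FOCAL_LENGTH_M / SENSOR_PIXEL_PITCH_M

for depth_m in (5.0, 15.0, 30.0, 60.0):  # working distances spanning the 200 ft volume
    pixel_footprint_m = depth_m * SENSOR_PIXEL_PITCH_M / FOCAL_LENGTH_M
    depth_error_m = (depth_m ** 2) * DISPARITY_ERROR_PX / (focal_length_px * BASELINE_M)
    print(f"z = {depth_m:5.1f} m: pixel footprint ~ {pixel_footprint_m*1000:.1f} mm, "
          f"depth error ~ {depth_error_m*1000:.1f} mm")
```

If that's roughly the right way to budget accuracy, then I assume the characteristics to pin down are sensor resolution, pixel pitch, focal length (field of view), and baseline between camera pairs.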
New to NI Vision Builder: calculating the area of a component
Hi all,
I am new to NI Vision Builder. I would like to know what steps need to be taken to calculate the area of a component.
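To show the kind of result I'm after, here is a minimal sketch of the same idea outside Vision Builder, in Python/OpenCV (the file name "part.png" and the bright-part-on-dark-background assumption are mine); I'd like to reproduce this with Vision Builder steps:

```python
import cv2

# Hypothetical input image: a bright component on a dark background.
image = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)

# Threshold with Otsu's method to get a binary mask of the component.
_, binary = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Find the blobs ("particles") and report the area of the largest one in pixels.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
largest = max(contours, key=cv2.contourArea)
print("Component area:", cv2.contourArea(largest), "pixels")
```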
Thank you
Feature extraction for classification
Hello all,
I am trying to do feature extraction and classification of an image.
My project is to take an input image, extract its hue plane, threshold it, extract features, and based on those features classify the color and shape of the object using the IMAQ classifier.
Questions:
1) What features are required to determine the size and shape of the object?
2) What features are required to determine the color of the object?
Can you please guide me on how to go forward with this task? I am attaching my code, snapshots, and the input diagram; please go through them and help me with possible ideas.
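To clarify what I mean by features, here is a rough sketch in Python/OpenCV as a stand-in for the IMAQ VIs (the file name and the saturation threshold are assumptions):

```python
import cv2
import numpy as np

# Hypothetical input image of a single colored part on a plain background.
bgr = cv2.imread("object.png")
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
hue, sat, val = cv2.split(hsv)

# Color feature: mean hue of the saturated (i.e. colored) pixels.
colored = sat > 60                      # assumed saturation threshold
mean_hue = float(hue[colored].mean())

# Shape/size features from the thresholded saturation plane.
mask = colored.astype(np.uint8) * 255
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
blob = max(contours, key=cv2.contourArea)
area = cv2.contourArea(blob)
perimeter = cv2.arcLength(blob, True)
circularity = 4 * np.pi * area / (perimeter ** 2)   # ~1.0 for a circle, lower for elongated shapes
x, y, w, h = cv2.boundingRect(blob)
aspect_ratio = w / h

print("mean hue:", mean_hue, "area:", area,
      "circularity:", circularity, "aspect ratio:", aspect_ratio)
```

Is this roughly the right set of features (area and aspect ratio for size, circularity for shape, mean hue for color), or should I be using different ones with the IMAQ classifier?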
Thanks in advance.
NI 1744 camera programming with PLC
Hi,
I have an NI 1744 camera. It inspects the length and position of pins. There are several pin lengths, so more than 10 programs are needed. The programs were made with NI Vision Builder, and the right program for the right pin is currently loaded into the camera from Vision Builder on a laptop. I was hoping to get rid of the laptop and be able to select the program from the installed S7-1200 PLC. It would save some time.
Would this be possible, or is there any other way to make selecting the program easier?
Thanks in advance.
How do I find the corners of a trapezoid particle?
I am trying to locate the pixel locations of the corners of a U-shaped particle. I can successfully filter for the particle and locate the upper-left corner using the IMAQ_MT_FIRST_PIXEL_X and Y functions. However, depending on the angle, the U becomes trapezoidal, so the containing rectangle won't perfectly border it.
Is there a way to locate the pixel locations of the upper-right, lower-right, and lower-left corners without knowing in advance whether the left or right (or top or bottom) side will be bigger?
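In case it helps to show what I'm after, here is a rough sketch of one idea in Python/OpenCV (the file name and the 2% polygon tolerance are assumptions); I'd like to do the equivalent with the IMAQ particle/contour functions:

```python
import cv2
import numpy as np

# Hypothetical binary image in which the U-shaped particle is white on black.
binary = cv2.imread("particle_mask.png", cv2.IMREAD_GRAYSCALE)

contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
particle = max(contours, key=cv2.contourArea)

# The convex hull of a U shape is (roughly) the trapezoid; simplify it to its corner points.
hull = cv2.convexHull(particle)
epsilon = 0.02 * cv2.arcLength(hull, True)      # assumed tolerance, ~2% of the perimeter
corners = cv2.approxPolyDP(hull, epsilon, True).reshape(-1, 2)

# Label corners by position relative to the centroid, so it works at any orientation.
cx, cy = corners.mean(axis=0)
for x, y in corners:
    horiz = "left" if x < cx else "right"
    vert = "upper" if y < cy else "lower"
    print(f"{vert} {horiz} corner at ({x}, {y})")
```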
How do I create a pause in my inspection so that I can change a setting on my DUT, then resume with the rest of the inspection?
I'm new to NI Vision and LabVIEW in general.
How do I create a pause in my inspection so that I can change a setting on my DUT, then resume testing?
I want my inspection to run algorithms on two different image acquisitions, hence the pause to change my DUT settings. I thought the breakpoint was working: when I'm in the configuration interface and run the state, it pauses as expected. But when I run the full inspection, it seems to skip over the breakpoint and continue to the next inspection step without waiting.
How do I auto-detect the target using VBAI trigger mode
Hi everyone,
I have a machine vision system ready to take pictures using VBAI 2013 SP1. I am using a sensor switch to trigger the camera to take pictures.
When I run the inspection in a loop, it just keeps taking pictures whether or not the sensor switch triggers the camera. I think the problem is the image logging step: I set it to log images always, and I don't want to use the other option, which only logs when the inspection status fails.
So how can I make the camera wait until it gets the trigger signal from the sensor switch, then take a picture and save it?
In the acquire image step I used trigger mode and everything worked fine, except for setting the timeout: I used 5000 ms, and if I set it larger, like 50000 ms, VBAI became really slow and showed a timeout error.
Can anyone help me with this?
Thanks
NI Vision Builder: calculating orientation of the component relative to the first image in degrees
Hi all
I have a query regarding an aircraft component. There are 9 images of the component, with and without defects. I would kindly request some guidance on how to calculate the orientation of the component relative to the first image, in degrees.
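To show what I mean by orientation in degrees, here is a rough sketch of the idea in Python/OpenCV (file names are placeholders); I'm hoping to achieve the equivalent with the matching tools in Vision Builder:

```python
import cv2
import numpy as np

# Reference image (first image) and one of the other 8 images; file names are placeholders.
ref = cv2.imread("component_01.png", cv2.IMREAD_GRAYSCALE)
img = cv2.imread("component_02.png", cv2.IMREAD_GRAYSCALE)

# Match keypoints between the two images.
orb = cv2.ORB_create(1000)
kp_ref, des_ref = orb.detectAndCompute(ref, None)
kp_img, des_img = orb.detectAndCompute(img, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_ref, des_img)

src = np.float32([kp_ref[m.queryIdx].pt for m in matches])
dst = np.float32([kp_img[m.trainIdx].pt for m in matches])

# Fit a rotation + translation (+ scale) and read the rotation angle out of it.
M, _ = cv2.estimateAffinePartial2D(src, dst)
angle_deg = np.degrees(np.arctan2(M[1, 0], M[0, 0]))
print(f"Orientation relative to the first image: {angle_deg:.1f} degrees")
```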
All advice will be gratefully appreciated.
Thanks guys
PS: if you require further detail, please let me know and I will extend my post.
Calibration grid
Hello,
I am using the NI Vision calibration grid image, which is 917 x 918 pixels, but the image I want to calibrate is 1920 x 1080. How can I get this to work, or is this even possible?
Thanks
Compute Lens Distortion Model
Hi all
When you calibrate an image in NI Vision Builder, what is the "compute lens distortion model" option for? It seems to be an option that can be left unticked when measurements are taken from the image of a component, for example. What happens when you switch it on or tick it?
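For what it's worth, my rough understanding (which may well be wrong) is that a lens distortion model describes how the lens bends straight lines, something like the standard radial model sketched below, so that measurements can be corrected for it; the coefficients here are made up, purely to illustrate the effect:

```python
# Standard radial distortion model: a point at normalized radius r from the image
# center is observed at r * (1 + k1*r**2 + k2*r**4). Coefficients are made up.
k1, k2 = -0.12, 0.01

def distort(x, y):
    r2 = x**2 + y**2
    factor = 1 + k1 * r2 + k2 * r2**2
    return x * factor, y * factor

# A point near the edge of the field of view moves noticeably; one near the center barely does.
print(distort(0.9, 0.0))   # edge point, visibly shifted
print(distort(0.05, 0.0))  # center point, almost unchanged
```

Is that what this option fits from the calibration grid, and is it only worth ticking when the lens distortion is significant?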
Thanks
BP Evaluation Score
Hello,
We are working on a machine learning system using the Machine Learning Toolkit. We use the BP Learn VI to train a solution for our image classifications, and at runtime we run images through the BP Eval VI to get a classification out. What we are struggling to understand is how to get some sort of score for the output at runtime. When we do the learning there is no "Unknown" class, but it seems that if I run an image through that is nothing like any of the learned images, it should come out as "Unknown". A rough sketch of what I mean is below. Any information on this front would help. Thanks.
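To illustrate the kind of score I mean (this is only a generic Python sketch of the idea, not how the BP Eval VI works internally; the 0.7 threshold and class names are arbitrary assumptions):

```python
import numpy as np

def classify_with_unknown(class_outputs, labels, min_score=0.7):
    """Turn raw per-class network outputs into a label, or "Unknown" if no class
    scores high enough. The 0.7 threshold is an arbitrary assumption."""
    scores = np.exp(class_outputs - np.max(class_outputs))
    scores /= scores.sum()                      # softmax -> pseudo-probabilities
    best = int(np.argmax(scores))
    if scores[best] < min_score:
        return "Unknown", float(scores[best])
    return labels[best], float(scores[best])

# Example: outputs that clearly favor one class vs. outputs that favor nothing.
print(classify_with_unknown(np.array([4.0, 0.5, 0.2]), ["bolt", "nut", "washer"]))
print(classify_with_unknown(np.array([0.4, 0.5, 0.45]), ["bolt", "nut", "washer"]))
```

Is there a way to get at the per-class outputs (or some equivalent confidence value) from the BP Eval VI so we can apply a threshold like this?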
Error -1074395628 occurred at IMAQ Match Geometric Pattern 2
Hello IMAQ fellows,
I created code for geometric matching of road signs, but I am getting this error (Error -1074395628 occurred at IMAQ Match Geometric Pattern 2)...
Can you help me solve my problem?
Thanks
Using PCIe-1433 in Halcon
Hi,
Our company recently switched from LabVIEW to Halcon for image processing. However, Halcon does not seem to support frame grabbers from NI (Halcon / image acquisition).
(It only supports digital I/O devices over DAQmx.)
It'd be a shame if we couldn't use our reliable NI frame grabber (PCIe-1433) anymore.
Has anyone found a workaround or an extension package?
Thanks in advance.
Training a classifier manually
Hello all,
I need some help with my project: I need to classify objects based on their color and size. I know that by using "color classification training" I can train my classifier on color, but I am not supposed to use it; I need to program my own training phase using LabVIEW VIs.
Is it possible to do this using the LabVIEW classification VIs, like IMAQ Write Classifier, IMAQ Read Classifier, IMAQ Add Sample, etc.?
I am quite confused about how to use them.
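Just to show the kind of training phase I have in mind (a conceptual Python sketch, not the IMAQ VIs; the feature values are made up): collect labeled color/size samples, store them, and classify new samples against them, which is what I hope the Add Sample / Read / Write Classifier VIs let me do.

```python
import numpy as np

# Each training sample is (hue, area_in_pixels) with a class label; values are made up.
samples = np.array([[30.0, 1200.0],    # small yellow part
                    [115.0, 1150.0],   # small green part
                    [118.0, 5200.0]])  # large green part
labels = ["yellow_small", "green_small", "green_large"]

def classify(hue, area):
    """Nearest-neighbor classification on normalized (hue, area) features."""
    scale = samples.max(axis=0)                       # crude normalization
    dists = np.linalg.norm(samples / scale - np.array([hue, area]) / scale, axis=1)
    return labels[int(np.argmin(dists))]

print(classify(112.0, 5000.0))   # -> "green_large"
```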
Any way to adjust the weight of an IMAQ Overlay Point and IMAQ Overlay Line?
Is there any way to adjust the weight (thickness) of an IMAQ Overlay Point or IMAQ Overlay Line? I find it odd that this is not an option.
Help: image acquisition only when light is on
Hi, I need help creating an instrument.
I have to create an instrument that does the following:
I have a black box with a camera. The camera is connected via USB to the PC. When I press a pedal, a strong light shines inside the box.
I need LabVIEW to show a real-time image while the light is on and to stop updating when the light goes off, holding the last frame that had light. It is like a feature called "last image hold".
For example:
I have a pedal that turns on a bright light. LabVIEW shows no image at first; when I press the pedal and the light comes on, LabVIEW starts showing the live image; when I release the pedal, LabVIEW keeps showing the last image or frame that had light. If I press the pedal again, LabVIEW shows the new images until I release the pedal again.
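Here is a rough sketch of the logic I have in mind, written in Python/OpenCV only because I can't paste a block diagram (the brightness threshold is a guess); I'd like to build the same thing in LabVIEW:

```python
import cv2

# "Last image hold": show live frames only while the scene is bright,
# otherwise keep showing the last bright frame. The threshold is a guess.
BRIGHTNESS_THRESHOLD = 80

camera = cv2.VideoCapture(0)        # USB camera
last_bright_frame = None

while True:
    ok, frame = camera.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if gray.mean() > BRIGHTNESS_THRESHOLD:      # pedal pressed, light is on
        last_bright_frame = frame
    if last_bright_frame is not None:
        cv2.imshow("Last image hold", last_bright_frame)
    if cv2.waitKey(1) == 27:                    # Esc to quit
        break

camera.release()
cv2.destroyAllWindows()
```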
Any ideas?
Thanks,
Jorge
Distorted Image
I am getting the attached distorted image. There is a discolored bar at the top, and the bar on the left-hand side belongs on the right-hand side.
I'm using IMAQdx Snap and am planning to run the image through a color filter to generate a binary image, but I can't figure out what's causing the distortion.
Any help?
Thanks in advance
How to recognize and segment a certain area of a pork loin picture in VBAI?
Hi to all the professional experts out there,
I have a big image processing issue that has worried me for a couple of months now; please help me with your brilliant ideas.
Operating system: Windows 8
NI platform: NI VBAI 2013 SP1
Camera: NI smart camera 17XX
Question: I want to use the NI Vision Assistant step in VBAI to separate/segment a certain area (marked by the yellow line) from pork loin images. I have tried every way I know, but I still don't get a really good result.
- I tried using the color difference to separate the area, but some images really don't have a big color difference.
- Texture features would be another good approach, but I really don't know how to do this in VBAI.
- I was thinking of using an ROI: locate the loin and cut a maximum box out of the area I want to extract, but that raises another issue: how do I automatically place the box in the area I'm interested in (the yellow line)?
- I've also tried the Find Edge step, but that just gives me a length value.
So please, can someone give me some ideas on how to process these images to get the target area I want? (A rough sketch of the texture idea is below.)
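Here is the rough texture idea I mentioned, sketched in Python/OpenCV (the file name, window size, and threshold are all guesses); I'd love to find the equivalent combination of steps in VBAI or Vision Assistant:

```python
import cv2
import numpy as np

# Rough texture-based segmentation: areas with different local contrast ("texture")
# are separated even when the average color is similar. File name and numbers are guesses.
gray = cv2.imread("loin.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Local standard deviation over a 15x15 neighborhood as a simple texture measure.
mean = cv2.blur(gray, (15, 15))
mean_sq = cv2.blur(gray * gray, (15, 15))
local_std = np.sqrt(np.maximum(mean_sq - mean * mean, 0))

texture_mask = (local_std > 8).astype(np.uint8) * 255   # assumed texture threshold

# Keep the largest textured region and smooth it a little.
texture_mask = cv2.morphologyEx(texture_mask, cv2.MORPH_CLOSE, np.ones((11, 11), np.uint8))
contours, _ = cv2.findContours(texture_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
region = max(contours, key=cv2.contourArea)
segmented = np.zeros_like(texture_mask)
cv2.drawContours(segmented, [region], -1, 255, cv2.FILLED)
cv2.imwrite("segmented_region.png", segmented)
```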
Thank you so much!!
Help in detecting the presence of Faston connectors
Good morning. I really need some advice on a task that I need to complete.
I have built an inspection application for electronic boards. A client needs to verify the presence of some Faston connectors. These connectors are mounted manually and they are shiny. Due to the changing position and reflectivity, I have found it difficult to obtain stable conditions between inspections, and I can't find an inspection algorithm that adapts to these varying conditions. See the image below for an example.
Thank you in advance,
FM
Anyone using LabVIEW and the Olympus i-Speed TR high speed camera?
I'm curious whether anyone has attempted to interface with an Olympus i-Speed TR high-speed camera using LabVIEW, or with any of the i-Speed 3 series high-speed cameras?