Is there any way to simulate a frame grabber (like a PCIe-1433) in LabVIEW, the way one can simulate DAQmx devices? I need to develop and test VIs that will use the frame grabber on another machine, but my development machine is a simple laptop.
Thanks,
XL600
When I generate an array of images with text overlays, the overlays disappear as soon as I perform any processing. What's going on?
In the attached example, I generate a series of images and pass them into a rotation VI, which seems to strip the overlays away. Is this expected?
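For context, the behavior I expected is what a destructive draw gives, sketched here in Python/Pillow (file name and text are made up): the text is drawn into the pixel data itself, so it survives the rotation. I assume the LabVIEW analogue would be merging the overlay into the image (e.g. IMAQ Merge Overlay) before processing, since IMAQ overlays are otherwise non-destructive metadata riding alongside the pixels.

    from PIL import Image, ImageDraw

    img = Image.open("frame.png").convert("RGB")

    # Draw the text into the pixel data itself (a non-destructive overlay
    # would instead be stored as metadata next to the pixels).
    ImageDraw.Draw(img).text((20, 20), "Unit 42", fill=(255, 0, 0))

    # Because the text is now part of the pixels, processing keeps it.
    img.rotate(30, expand=True).save("rotated.png")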
Thanks,
XL600
Is there a National Instruments Vision package that can access generic USB cameras?
If not, what types of USB cameras do the Vision tools support?
I would like to take measurements using edge/contrast detection. I currently have LabVIEW 2012 Full Development System, and I would also like to build EXEs for deployment.
Thanks
Brian
I am trying to quantify displacements (in the range of 0.1 to 0.5 mm) of different points of a circular object while it shrinks. Basically the points closer to the edges will have larger displacements and the points near the center will have negligible displacement. I do this by using the Particle Image Velocimetry (PIV) method where pictures of the object are taken at different times and then each frame is meshed and analyzed so the movement of the pixels can be quantified and converted back to physical lengths.
I am using the example "optical flow.vi" to do this, with the Lucas-Kanade algorithm. The subVI for the algorithm is protected, so I can't see exactly what it is doing. From what I have read about the algorithm, one of its main assumptions is that the flow is constant within a local neighborhood of each pixel. Since my application involves a deforming solid rather than a fluid flow, I am unsure whether this example program can work for my case. I have the following questions:
1. What exactly do the red vectors calculated by the optical flow.vi example represent?
2. Can the Lucas-Kanade algorithm be used for my case, where I am trying to measure the total displacement of points inside a shrinking circle?
3. Can the magnitudes of the vectors be written to a text file (given that the subVI is protected)?
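For what it's worth, this is the kind of output I am after, sketched with OpenCV's open pyramidal Lucas-Kanade implementation (the file names are placeholders, and the pixel-to-mm factor would come from my calibration):

    import cv2
    import numpy as np

    prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
    curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

    # Pick corner-like points in the first frame and track them into the second.
    p0 = cv2.goodFeaturesToTrack(prev, maxCorners=500,
                                 qualityLevel=0.01, minDistance=7)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None)

    good0, good1 = p0[status == 1], p1[status == 1]
    disp = good1 - good0                      # per-point displacement in pixels
    mag = np.linalg.norm(disp, axis=1)        # vector magnitudes

    # Question 3: dump the magnitudes (convert px -> mm with the calibration factor).
    np.savetxt("magnitudes.txt", mag)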
Hello.
I'm using a Basler acA2000-340kc Camera Link camera, which can grab images at a maximum resolution of 2040 x 1086. I'd like to capture images at a lower resolution without cropping, so that the AVI files I am creating are not as big. I've figured out that I'm supposed to use IMAQ Set Image Size, but I'm not quite sure where to put it in my VI. I've attached the VI and uploaded a picture of it so it's easier for you to see. I'm new to LabVIEW and I'd appreciate any help.
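To illustrate the goal in Python/OpenCV (grab_frame is a hypothetical stand-in for my acquisition step): every frame should be downscaled before it reaches the AVI writer. I assume the LabVIEW equivalent is resampling the image (e.g. IMAQ Resample) between the grab and IMAQ AVI Write Frame.

    import cv2

    FULL = (2040, 1086)           # acA2000-340kc full resolution
    SMALL = (1020, 543)           # example target size (half in each dimension)

    writer = cv2.VideoWriter("out.avi", cv2.VideoWriter_fourcc(*"MJPG"),
                             30.0, SMALL)
    for _ in range(1000):                    # acquisition loop
        frame = grab_frame()                 # hypothetical: returns a full-res frame
        small = cv2.resize(frame, SMALL, interpolation=cv2.INTER_AREA)
        writer.write(small)                  # the writer only ever sees small frames
    writer.release()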
Hello,
I am currently working with a Basler ace camera, the acA2040-90umNIR.
The documentation says the smallest exposure time is 42 µs when using an 8-bit pixel format. This works perfectly in my VI.
Now I would like to push the exposure time to its limit. The Basler documentation says it is possible to set the exposure time to 28 µs when using the Mono 12 pixel format. In Vision Acquisition I am able to set the exposure time to 28 µs after choosing that pixel format.
But when I try to run an acquisition I get an error: "Attribute value is out of range".
If I set the exposure time back to 42 µs, the problem goes away.
I think this is a LabVIEW-side error, but I don't know how to get around it.
Could anyone help me with this?
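In case it helps narrow things down, I believe the same setting can be exercised outside LabVIEW with Basler's pypylon package (a sketch; I am assuming the standard SFNC node names used by the ace USB3 models). The detail I am testing is ordering: the exposure limit depends on the active pixel format, so Mono12 has to be committed before the shorter exposure time.

    from pypylon import pylon

    camera = pylon.InstantCamera(
        pylon.TlFactory.GetInstance().CreateFirstDevice())
    camera.Open()

    # Order matters: the minimum exposure time depends on the pixel
    # format, so commit Mono12 first, then shorten the exposure.
    camera.PixelFormat.SetValue("Mono12")
    camera.ExposureTime.SetValue(28.0)     # microseconds

    print(camera.ExposureTime.GetMin())    # should report <= 28 with Mono12 active
    camera.Close()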
Thank you and have a good day!
Is there any way to capture image events such as Click, Draw, Size, Scroll, and Double-click in the LabVIEW Event Structure, instead of polling continuously for them in a loop?
Registering them as dynamic events would also be fine, but dynamic registration only exposes the basic LabVIEW event properties; there are no events specific to the image type.
Hello.
I recently switched from LabVIEW 2011 to 2017 and from Windows 7 to Windows 10, which is when this issue occurred. I have a Basler Camera Link camera (acA2000-340kc) and an NI PCIe-1433 frame grabber. I cannot detect the camera in NI MAX or in Basler's pylon software (the CL Configurator doesn't detect the National Instruments port). Is there any way to fix this? I have attached screenshots of both MAX and the pylon CL Configurator below. I appreciate all the help!
Hi,
While building a LabVIEW application installer, I can't see an option to include the VBAI ADE in the installer build settings of the LabVIEW project. Is there any way to do this?
I am currently studying AVI compression with several codecs, but the results are not perfect. I want to record video for hours with LabVIEW, but the ~1.96 GB AVI file-size limit is stopping me. Does anyone have a good idea for dealing with this problem?
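One idea I am considering is rolling over to a new file before hitting the limit (I believe the ~2 GB ceiling comes from the AVI 1.0 container). A sketch of the idea in Python/OpenCV, where recording and grab_frame are hypothetical stand-ins for my acquisition loop, although I would prefer a LabVIEW-native solution:

    import cv2

    FPS, SIZE = 30.0, (640, 480)               # assumed frame rate and size
    FRAMES_PER_FILE = int(FPS) * 60 * 10       # roll over every ~10 minutes

    fourcc = cv2.VideoWriter_fourcc(*"MJPG")
    writer, count = None, 0
    while recording():                          # hypothetical loop condition
        if count % FRAMES_PER_FILE == 0:
            if writer is not None:
                writer.release()                # close the previous part
            part = count // FRAMES_PER_FILE
            writer = cv2.VideoWriter(f"capture_{part:03d}.avi", fourcc, FPS, SIZE)
        writer.write(grab_frame())              # hypothetical acquisition call
        count += 1
    if writer is not None:
        writer.release()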
I want to overlay text in an image at mismatched pixels. The number of mismatches N can vary from 0 to 10k or more, depending on how many pixel mismatches are found between the template image and the acquired image. IMAQ Overlay Points accepts an array of points as input, so there is no issue when using IMAQ Overlay Points.
But IMAQ Overlay Text accepts only one point at a time, so I call it in a loop, and the VI hangs when N exceeds about 5k.
Any suggestions would be helpful.
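One alternative I am considering is drawing the labels destructively into the pixel data in a single pass (the analogue of IMAQ Draw Text, rather than accumulating thousands of separate overlay objects). A Python/Pillow sketch with made-up data, just to illustrate the scale:

    from PIL import Image, ImageDraw

    img = Image.new("RGB", (2048, 2048))
    # Made-up stand-ins for the mismatch results: ((x, y), label) pairs.
    mismatches = [((37 * i % 2048, 53 * i % 2048), str(i)) for i in range(10000)]

    draw = ImageDraw.Draw(img)             # one drawing context, reused
    for (x, y), label in mismatches:
        draw.text((x, y), label, fill=(255, 0, 0))
    img.save("annotated.png")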
Hi all,
Software used: NI LabVIEW 2012, NI Vision, normalized cross correlation, Match Pattern 2
What I'm trying to do:
- For a manufacturing inspection we want to compare beam images of an illumination device, i.e. compare each beam image with the other images in the manufacturing lot to detect stability or variation.
- The images are automatically optimized with respect to exposure time, so that the 8-bit intensity range always ends up spanning 0 to roughly 240-255.
Solution approach:
- I compute a normalized cross correlation between each pair of images in one lot. For each correlation I take the maximum of the correlation image/map as the best correlation/similarity measure, and I build a matrix in which each image is correlated with every other.
- To prove the concept I looked for a second way to do the same thing and found Match Pattern 2. I always use one image as the template and the respective other image as the search image, and I use the match score as the similarity measure. I use learn mode "All" and match mode "Shift" (since we do not expect more than 4° of tilt).
- Beforehand, all images are cropped to an equal ROI and downsampled to the same size to increase speed (since we are not interested in the high-frequency, small-scale features).
Questions:
1. The match score does not seem to be proportional to the correlation maximum. What can be the reason? Is it because cross correlation is only shift invariant?
2. The match score seems to be much more sensitive than the cross correlation. With both ranges normalized to 1, one test run on the same images gave:
xcorr range: 0.90-0.988
match score range: 0.6-0.96
3. Does Match Pattern work similarly to cross correlation? We are comparing images of the same size, whereas learned patterns are usually smaller than the search image. Am I right to assume that pattern matching also cross-correlates the entire images, so that directly comparing images of the same size makes sense?
Attachments: the vi with vision xcorr and match.
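For reference, both measures can be sketched with OpenCV on equal-size images (file names assumed), where the correlation map collapses to a single value. My understanding is that plain normalized cross correlation does not subtract the image means, which compresses its values toward 1 on bright images, whereas a pattern-matching score behaves more like the zero-mean variant; that might explain the different sensitivities.

    import cv2

    img_a = cv2.imread("beam_a.png", cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread("beam_b.png", cv2.IMREAD_GRAYSCALE)

    # Template and search image have the same size, so each result is 1x1.
    ncc = cv2.matchTemplate(img_a, img_b, cv2.TM_CCORR_NORMED)[0, 0]
    zncc = cv2.matchTemplate(img_a, img_b, cv2.TM_CCOEFF_NORMED)[0, 0]

    print("plain NCC:", ncc)         # tends to sit close to 1
    print("zero-mean NCC:", zncc)    # spreads out much more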
Hello There,
I searched the whole forum for an answer to my question, but I haven't really found anything useful, which is why I am now making my own post.
The Situation:
I have a development PC with NI Vision Development Module 2016, LabVIEW 2016, and NI Vision Acquisition Software 2016. On this PC I need to develop a VI that takes a picture, processes it, and saves it. This VI needs to be included in an already running TestStand sequence as a testing improvement. The TestStand sequence runs on another PC, where I have all the runtime licenses installed.
I have developed a LabVIEW VI that acquires a picture, processes it, returns the results, and saves the picture, using the Express VIs from NI Vision Assistant and NI Vision Acquisition Software. I had previously created the same process with the NI Vision Development Module and converted it into a VI, but that VI is so vast and clunky that I want to stick with the Express VI solution.
The problem I am having now is that the Express VI is not executable on the PC with the runtimes. Normally, LabVIEW VIs have no problem executing when the runtime is installed.
Maybe there is something I am not seeing or did not understand, but to me this should work, right?
I hope I explained myself clearly enough for you to understand my issue.
Thank you in advance for answering me!
Best regards,
Alexander Egg
Hi, I'm new to machine vision systems and I'd like to get your opinion regarding execution time. I need this inspection to run in less than 100 ms.
I have a monochrome ISC-1782 NI Linux Real-Time Smart Camera running code made with Vision Builder 2015. The camera waits for a rising edge from a real-time system, acquires a 1920 x 1200 image, and then performs the following inspection steps:
- Search horizontally for white-to-black and black-to-white transitions (2 vertical lines).
- Search vertically for black-to-white and white-to-black transitions (2 horizontal lines).
- Calculate the width of the newly found rectangle.
- Search for non-white particles inside the rectangle.
- Send the test status (pass/fail) to the real-time system over Ethernet.
The region of interest for the inspection is around 300 x 500 px. The real-time system triggering the camera logs the time between the trigger signal and the camera's response; it is currently around 140 ms. Does this sound normal? Are there any ways I could make it faster?
I didn't write the Vision Builder code myself, but I needed to change some parameters with the evaluation version. Could this have increased the execution time?
Hello everybody.
I have a short question regarding saving the extra Vision information (overlay information, pattern-matching information, and calibration information) in a PNG file with "IMAQ Write File 2". How is this extra Vision information stored within the PNG file? I have already found out that it is stored in compressed form in the "niEi" chunk (https://forums.ni.com/t5/Machine-Vision/How-to-extract-Overlay-data-from-png-file/td-p/1256544), but how is the data structured after decompression? The background of my question is that we would like to read the calibration information outside of LabVIEW, with Python. Thanks for every hint.
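For reference, this is how far I have come on the Python side: a minimal sketch that walks the PNG chunks and decompresses the niEi payload (assuming plain zlib/deflate, as the linked thread suggests). The structure of the decompressed bytes is exactly what I am asking about.

    import struct, zlib

    def png_chunks(path):
        """Yield (type, data) for every chunk in a PNG file."""
        with open(path, "rb") as f:
            assert f.read(8) == b"\x89PNG\r\n\x1a\n"     # PNG signature
            while True:
                head = f.read(8)
                if len(head) < 8:
                    break
                length, ctype = struct.unpack(">I4s", head)
                data = f.read(length)
                f.read(4)                                # skip the CRC
                yield ctype, data
                if ctype == b"IEND":
                    break

    for ctype, data in png_chunks("image_with_overlay.png"):
        if ctype == b"niEi":
            raw = zlib.decompress(data)     # assumption: zlib-compressed payload
            print(len(raw), raw[:64])       # inspect the undocumented layout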
Best regards
Falk
I've got a color GigE camera with a 1920 x 1200 image size, and I am seeing over 300 ms execution time on all Threshold Image steps using HSL in VBAI 2015 f3. Perhaps that is normal, but it feels slow. That step does not accept an ROI, so I could perhaps move it into a Vision Assistant step, but that would complicate my image calibration.
Does that seem like the expected execution time?
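For a rough baseline, the equivalent operation can be timed in Python/OpenCV on a synthetic 1920 x 1200 frame (OpenCV's HLS conversion plus inRange standing in for the HSL threshold; the threshold band is arbitrary):

    import time
    import cv2
    import numpy as np

    # Synthetic stand-in for one 1920x1200 color frame.
    frame = np.random.randint(0, 256, (1200, 1920, 3), dtype=np.uint8)

    t0 = time.perf_counter()
    hls = cv2.cvtColor(frame, cv2.COLOR_BGR2HLS)
    mask = cv2.inRange(hls, (0, 40, 40), (90, 220, 255))   # arbitrary band
    dt = (time.perf_counter() - t0) * 1000
    print(f"HLS convert + threshold: {dt:.1f} ms")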
I have a USB3 camera from IDS (a UI-3140CP) that I want to use for a project. The camera shows up in MAX, but I don't get the maximum frame rate, and although it's supposed to produce a color image, it only appears as grayscale. I've been following the instructions in the document "Getting Started with USB3 Vision Cameras and NI Vision Acquisition Software", but a step is missing for me.
The document says to make sure the camera shows up in Device Manager under "NI Vision Acquisition Devices". On my Windows 7 laptop the camera appears under "Universal Serial Bus Controllers". The NI document says to change the driver to IMAQdx.
How do I do this?
I was wondering if anyone else has seen this problem. When I run this VI with the Codec Source control set to either "Default" or "System", the VI hangs, and about 10 seconds later all of LabVIEW crashes without any warning. This happens when running the VI standalone, not inside any other code. I'm on Windows 10 running LabVIEW 2018 64-bit.
My goal is to write monochrome 16-bit video to an AVI file. Is there a way to just write it uncompressed to an AVI file? I don't really care about file size.
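As a fallback I am considering skipping the AVI container entirely, since I understand most AVI codecs only accept 8-bit data anyway: append each 16-bit frame to a raw binary file and rebuild the array afterwards. A Python/numpy sketch of the idea (acquire_frame is a hypothetical stand-in, and the frame size is assumed):

    import numpy as np

    HEIGHT, WIDTH = 1200, 1920                   # assumed frame size

    with open("video_u16.raw", "ab") as f:
        for _ in range(100):                     # acquisition loop
            frame = acquire_frame()              # hypothetical: uint16 HxW array
            frame.astype(np.uint16).tofile(f)    # raw little-endian samples

    # Reading it back later:
    # frames = np.fromfile("video_u16.raw", np.uint16).reshape(-1, HEIGHT, WIDTH)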
On the left edge, some yellow dots get ignored. The red line is the best fit for the yellow dots.
This doesn't work for me; I need a "rightmost fit".
If I could get those yellow dots, I would use the 25% rightmost dots to fit a straight line.
The edge is damaged, and therefore not well focused or illuminated.
It's hard to fine-tune the edge-search parameters, since we are doing thousands of edge detections in one scan.
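What I mean by a "rightmost fit", sketched in Python/numpy (the point list is a made-up stand-in for the detected edge points, and the edge is assumed near-vertical, so the line is modeled as x = a*y + b):

    import numpy as np

    def rightmost_fit(points, keep=0.25):
        """Fit a straight line through only the rightmost fraction of points."""
        pts = np.asarray(points, dtype=float)           # rows of (x, y)
        n = max(2, int(len(pts) * keep))
        right = pts[np.argsort(pts[:, 0])[-n:]]         # the n largest-x points
        a, b = np.polyfit(right[:, 1], right[:, 0], 1)  # x = a*y + b
        return a, b

    # Made-up edge points: a vertical edge at x=100 with damage pulling
    # some detections to the left.
    pts = [(100 - (i % 4) * 5, i) for i in range(200)]
    print(rightmost_fit(pts))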
I am working on a project sorting fruit by machine vision, and I use a k-NN classifier for the classification. Are the feature-extraction approach and the appropriate features determined by the software itself, or can a person participate in selecting them?
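My current understanding, sketched with scikit-learn and made-up feature values: the k-NN classifier itself only compares whatever feature vectors it is handed, so choosing the features (color, size, shape, ...) is the job of the person building the pipeline rather than of the classifier. Is that correct?

    from sklearn.neighbors import KNeighborsClassifier

    # Hand-picked features per fruit sample: [mean_hue, area_px, elongation].
    X_train = [[0.05, 5200, 1.1],    # apple
               [0.30, 3100, 1.0],    # lime
               [0.12, 8000, 1.6]]    # banana
    y_train = ["apple", "lime", "banana"]

    knn = KNeighborsClassifier(n_neighbors=1)
    knn.fit(X_train, y_train)
    print(knn.predict([[0.07, 5000, 1.05]]))   # -> ['apple']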
Best regards