I use ROIs in my programs heavily. So, is it possible to build one ROI with a set of parameters and then keep changing the SAME ROI with different parameters for a different purpose?
Dynamic Region of Interest in Vision Builder AI
Where can I get a PCI-1424 frame grabber?
Dear,
I recently bought an imager sensor that requires a PCI-1424 frame grabber to build up a camera. However, I found that NI doesn't sell the PCI-1424 anymore. Does anyone know where I can still get a PCI-1424? Or, if not, which type of frame grabber could serve as a replacement for the PCI-1424?
The board I have now has a 100-pin connector that is said to interface directly with the PCI-1424.
kind regards,
Morphology black picture out
Hello,
I am facing a curious problem with the output image from morphology. In Vision Assistant I can see the result clearly, but now I always receive a black picture.
Tip: after the Cast Image step, the picture is correct.
What did I miss? Could somebody help?
Thanks!
Acquire two images from one snap: GigE camera (Teledyne)
Hi,
I'm trying to show and save two images from one camera that takes two different pictures with one trigger; I'm using LV 17. The camera is set to a multi-frame mode, so when I trigger the camera it snaps a first picture with one setting and then cycles to a second state with different parameters to snap the second picture. How can I configure this in LV? Is there a VI that can handle this? Or is there a way to save two different configurations using MAX?
IC-312x Windows vs. Linux RT
We know that the CVS-1458RT is the same hardware as the IC-3120.
However, the CVS-1458 (Windows version) shows 32 GB of storage, while the Windows version of the IC-3120 shows the same storage size as the Linux RT version (2 GB). Is that correct?
The difference in cost between the IC-3120 Linux RT and Windows versions suggests that there is some additional hardware capability on the Windows version, and the extra 30 GB could almost explain that.
Problem with double frame acquisition (PIV, Dantec-DynamicStudio)
Hello, here is my problem: I'm doing experimental measurements of a flow using PIV. For the image acquisition I'm using DynamicStudio (Dantec).
I think I have a synchronisation problem when I'm using double-frame mode. When I apply PIV analysis, I get erroneous vectors for images 1 through 4, but from image 5 onward I get good results (the problem appears only in the first images, 1 to 4). You can see an example of a vector image (from 1 to 4) and the vector image 5 (where the vectors are OK). On checking the two frames (Frame 1 and Frame 2), I found that in Frame 2 there is a kind of "out-of-synchronisation" (or maybe something else; I don't know where it comes from!). Here are the images: from 1 to 4, all Frame 1 images are OK but Frame 2 is not, whereas for image 5 (and the others), Frame 1 and Frame 2 are normal.
Sorry for the long post; I hope you understand my issue. Thanks for your help.
Cheers
Souria
What would be the most robust way to identify/classify colors?
Hello all,
I'm new both to Machine Vision and to the LabVIEW Machine Vision Toolkit (although I've been tinkering with it daily for the past 3 weeks). The first challenge I've set myself involves color: I want to identify the color of the object under the camera in a robust way using LabVIEW. I know, it is an enormous problem.
Before working with "the real deal", I'm trying the color-related VIs that LabVIEW has to offer on "pure-color" '.png' files. I know it is a vast and complex field, where things that seem very simple are in fact greatly complex.
So far, I’ve kind of discarded some ‘possible solutions’ for my problem, such as:
- ColorLearn/Color spectrum: Low, Medium or High – all configurations end up incorrectly identifying some color (e.g., it correctly identifies basic colors but mistakes an orange one for red);
- Color histograms: although I'm not discarding histograms per se, I've found that small changes on either the RGB or HSL planes drastically change the colors, and I couldn't find a pattern for identifying them (and their ranges) by their histograms;
- Pixel Value: it works for pure-colored images, but not for camera-taken pictures.
My last hope was to train a color classification file with a great number of color images (pure RGB, generated according to a RAL spreadsheet), but no matter the Engine Options, the results are abysmal (both in the Color Classification Training Interface and by training on the block diagram [and that damned feature vector…]).
What would be the most effective approach to robustly identify colors using LabVIEW?
At first, I hope to robustly identify basic colors. Then, I hope to be able to correctly identify colors like RAL 2000, 2005 and 3001.
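For reference, the nearest-reference-color idea behind such a classifier can be sketched in a few lines. This is only an illustration in Python, not the NI classifier: the reference RGB values below are rough stand-ins (not real RAL coordinates), and the distance weights are arbitrary choices. Comparing in HSV rather than raw RGB separates chromatic distance from brightness, which is one reason raw-pixel matching fails on camera images:

```python
import colorsys

# Hypothetical reference colors (RGB 0-255); values are illustrative only.
REFERENCE = {
    "red":    (200, 30, 30),
    "orange": (240, 130, 20),   # roughly in the RAL 2000 direction
    "yellow": (240, 210, 40),
    "green":  (40, 160, 60),
    "blue":   (40, 70, 200),
}

def classify(rgb):
    """Return the reference name whose HSV coordinates are nearest to rgb.
    Hue is weighted most heavily and wraps around the color circle;
    value (brightness) is weighted least, to tolerate lighting changes."""
    h, s, v = colorsys.rgb_to_hsv(*(c / 255.0 for c in rgb))
    best, best_d = None, float("inf")
    for name, ref in REFERENCE.items():
        rh, rs, rv = colorsys.rgb_to_hsv(*(c / 255.0 for c in ref))
        dh = min(abs(h - rh), 1.0 - abs(h - rh))  # hue wraps around
        d = (3.0 * dh) ** 2 + (s - rs) ** 2 + 0.5 * (v - rv) ** 2
        if d < best_d:
            best, best_d = name, d
    return best
```

In practice you would average the pixels of the object region first, and tune the reference set and weights against real camera images rather than pure-color files.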
I know that my problem is more related to Machine Vision concepts than to LabVIEW, but this community is great, and I not only want to be a part of it but also want to start contributing to it: maybe some answers here could answer the questions of other shy users.
I'm planning to attend the LabVIEW Machine Vision Course that we'll have here in Brazil at the end of September, and I'm finishing the LabVIEW Core 1 self-paced course (and hoping to start Core 2 by the end of the week).
Thank you in advance.
Vinicius.
Vision Builder AI coordinate system
I would like to be able to create a coordinate system from two points within "Vision Builder for Automated Inspection", where the first point (x1, y1) is the origin and the second point (x2, y2) lies on the x-axis, with the direction from point 1 to point 2 being positive. This does not seem to work in VBAI: it will take the coordinates and use the correct angle, but it does not accept the direction.
Any help or thoughts would be appreciated.
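Outside of VBAI, the directed axis described above is just an atan2 computation, since atan2 keeps the sign of the direction while a plain line-fit angle does not. A minimal Python sketch of the math (function names are mine; VBAI itself works graphically):

```python
import math

def coord_system(x1, y1, x2, y2):
    """Build a 2D coordinate system: origin at (x1, y1),
    x-axis pointing from point 1 toward point 2.
    atan2 returns a signed angle, so swapping the two
    points flips the axis direction, as desired."""
    angle = math.atan2(y2 - y1, x2 - x1)
    return (x1, y1), math.degrees(angle)

def to_local(px, py, origin, angle_deg):
    """Express an image point in the local coordinate system."""
    a = math.radians(angle_deg)
    dx, dy = px - origin[0], py - origin[1]
    # rotate by -angle so the local x-axis lines up with point1 -> point2
    return (dx * math.cos(a) + dy * math.sin(a),
            -dx * math.sin(a) + dy * math.cos(a))
```

If VBAI only accepts an undirected angle, one workaround along these lines is to compute the signed angle yourself in a Calculator/script step and feed that into the coordinate-system step.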
Convert streamed image data into an image control without IMAQ
Hi all,
I'm new to working with LabVIEW and got stuck on a problem that may be easy/silly. I have a PixeLINK camera (PL-D7715), have successfully connected it, and can get the stream in a separate window (using the LabVIEW wrappers provided by the PixeLINK SDK). I can also get the current frame. The problem is that, for some development and deployment reasons, we don't have Vision Acquisition Software (VAS) installed, and my aim is to get that frame and display it in a control on the UI. The UI should display:
- A live view of the current scene
- An image taken at a specific time from the stream in item (1), which is then enhanced and displayed in a second control
The function getNextFrame provides the Data Out as a 1D array, which I need to convert and display as an image (probably with Draw Flattened Pixmap?). I can either use this to display in a loop or use the PixeLINK setStream (but that only opens a separate, maximized window). The descriptions of the getNextFrame inputs/outputs are also copied below for a quick look. Any help regarding the above would be greatly appreciated!
Controls and Indicators
hCamera IN is the camera handle. This value is returned by Initialize.vi and is passed between PixeLINK VIs as a reference.
uBufferSize IN The size of the image buffer required in bytes. If Mode IN is set to UsePointer, uBufferSize should indicate the size of the buffer pointed to by pPixel IN. This buffer must be large enough to hold the requested image data.
OutputMode IN Determines whether the data is passed in the form delivered by the camera (default), converted to RGB32 compatible with the NI IMAQ RGB image format, or converted to an RGB24 buffer. Use RGB32 when connected to a Color camera in Raw8 (Bayer 8) or Raw16 (Bayer 16) mode with an IMAQ image type set to RGB. Use RGB32 when connected to a Color camera and using an array. Note that the buffer size required must be set for the number of RGB pixels at one byte per pixel. For monochrome cameras, use IMAQ image types of MONO8 or MONO16 and set OutputMode IN to Default. OutputMode IN = RGB32 is relevant only if Mode IN is set to UsePointer; OutputMode IN = RGB24 is relevant only if Mode IN is set to UseArray.
Mode IN Determines whether GetNextFrame returns an array containing the image data or fills the data at a location indicated by Pixel IN. If set to UseArray, pPixel IN will be ignored. If set to UsePointer, then data OUT will be empty.
pPixel IN A pointer to an image buffer of sufficient size to hold the image data. Ignored if Mode IN is set to UseArray.
pDescriptor A structure containing descriptive information about the frame returned from the camera. It contains the values of all camera settings used to capture the image. See the API reference manual for more information.
hCamera OUT has the same value as hCamera IN.
uBufferSize OUT A pass-through of the uBufferSize IN variable.
Data OUT An array of image data returned from the camera. The data is only valid if Mode IN is set to UseArray. To interpret the data, check the Pixel Format settings in pDescriptor.
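For what it's worth, reinterpreting the flat Data OUT buffer as image rows before drawing it is plain index arithmetic. A Python sketch of the indexing (the width, height, and bytes-per-pixel values are assumptions here; they must come from the camera ROI and pixel format reported in pDescriptor, and in LabVIEW the same reshape would feed Draw Flattened Pixmap or a picture control):

```python
def frame_to_rows(data, width, height, bytes_per_pixel=1):
    """Reinterpret a flat pixel buffer as a list of image rows.
    For a Mono8 frame, bytes_per_pixel is 1; for RGB24 it would be 3.
    Raises if the buffer length does not match the stated geometry,
    which is a cheap sanity check against a wrong ROI or pixel format."""
    row_len = width * bytes_per_pixel
    if len(data) != row_len * height:
        raise ValueError("buffer size does not match width * height")
    return [data[i * row_len:(i + 1) * row_len] for i in range(height)]
```

The same check is worth doing in LabVIEW: if the array length from getNextFrame doesn't equal width × height × bytes-per-pixel, the displayed image will be skewed or torn.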
Calculate the distance between two points, one stationary, one moving (extension of a stem)
Hello,
I would like to measure how much a stem elongates in real time. I will be using a camera to capture the video, but I need to learn whether there is a way to get real-time measurements of the extension. I will probably add a marker that serves as a stationary reference point, alongside the point that is extending.
I have no LabVIEW experience and don't know how to simply get the image or video displayed in LabVIEW. Can anyone suggest a method for doing this, if it's possible? I read through some threads on image processing, but they didn't have the specific details I am looking for.
Any help will be truly appreciated.
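Once the two points are located in each frame (for example via pattern matching on the markers), the measurement itself is just a scaled Euclidean distance. A small Python sketch of that last step (the function name is mine, and mm_per_pixel is a calibration value you would obtain by imaging an object of known size, such as a ruler, at the same working distance):

```python
import math

def elongation_mm(fixed_px, moving_px, mm_per_pixel):
    """Distance between the stationary marker and the tracked stem tip,
    converted from pixels to millimetres using a calibration factor."""
    dx = moving_px[0] - fixed_px[0]
    dy = moving_px[1] - fixed_px[1]
    return math.hypot(dx, dy) * mm_per_pixel
```

Logging this value per frame, together with a timestamp, gives the real-time elongation curve; the hard part is the tracking step that produces the two pixel coordinates.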
IMAQdx USB3 camera errors when running with the run-time engine
Hello,
I created an executable/installer to run a USB3 camera (Tornado spectrometer) on the target machine. It installed correctly, since MAX can see the camera. But when I try to run the camera, it shows the error below. VAS is installed and licensed. I ran the software from the manufacturer and it runs fine. Has anyone encountered such a problem?
Windows 10-64bit
Thank you again.
Region of Interest - magic wand
Goal: Create a polygonal region of interest.
Approach: I am using the magic wand in the Region of Interest setup, where the seed point x, y, their offsets, and the tolerance are set through parameters obtained from an INI file. When I run the program multiple times, the feature under the magic wand completely changes from what I had set it to. For example, in the ideal case it would return the polygonal shape I expected (magic_wand_good.PNG), but when I ran it a second time it returned 1/6th of a circle (magic_wand_bad.PNG).
What should I do to prevent this random behavior of the magic wand? Is there another approach I can implement to achieve the above goal? Any help would be appreciated.
Thank you,
Kaivan
lighting system
I am working on drying clay. I need to measure the decrease in area with a camera installed on top, and the decrease in height with a camera installed in front. My problem is: what is the best lighting system to get good results in image processing? With the present top-lighting system I only get good results for the area measurement, but bad results for the height measurement.
Webcam frames acquisition framerate
I have a Logitech C920 webcam that I wish to control via LabVIEW. I want to be able to trigger the acquisition of frames from within a VI, acquire 1080p images at 30 fps, generate a timestamp for each frame, and then save each as a PNG. I knocked up the attached VI based on some butchering of the 'Grab and Basic Attributes' example that ships with the Vision toolkit, and I've translated it into a producer/consumer design wherein the first loop handles the acquisition and the second the saving task, which essentially works.
However, the issue is that when the enqueue operation is included, the framerate drops to about 11-13 fps and frames are missed. If I disable the enqueue operation, the framerate jumps back to the expected 30 fps, but I'm obviously then not able to save the acquired images. It's obviously not an easy task to troubleshoot in the absence of the specific hardware, but if anyone has any advice, or is able to explain this behaviour, that would be much appreciated!
Owen
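For context, the producer/consumer pattern in question looks like this when sketched in Python (the frame dictionary is a stand-in for real image data). One plausible but unverified cause of the slowdown is the acquisition loop sharing the same image buffer with the save loop; enqueueing a copy and keeping the queue bounded prevents the consumer from stalling the producer:

```python
import queue
import threading
import time

frame_q = queue.Queue(maxsize=8)   # bounded: a slow save step can't eat RAM

def producer(n_frames):
    """Acquisition loop: must never block on the queue.
    Enqueue a *copy* of the frame. In LabVIEW, enqueueing the same
    IMAQ image reference makes the save loop contend with the grab
    for the same buffer, which can throttle the acquisition rate;
    an IMAQ Copy into a per-frame buffer avoids that contention."""
    for i in range(n_frames):
        frame = {"index": i, "t": time.time()}  # stand-in for image data
        try:
            frame_q.put_nowait(dict(frame))     # copy, don't share
        except queue.Full:
            pass                                # drop rather than stall
    frame_q.put(None)                           # sentinel: acquisition done

def consumer(saved):
    """Save loop: drains the queue until the sentinel arrives."""
    while True:
        item = frame_q.get()
        if item is None:
            break
        saved.append(item["index"])             # stand-in for the PNG write
```

Whether dropping or blocking is right depends on the application; for guaranteed 30 fps capture, the drop policy belongs in the consumer (skip saves), never in the producer.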
Angle values in vision assistant
Hi
The project I am working on is sensitive to angle values. Why do the angle values vary on every iteration? How can I stabilize these values?
Thanks
How to get a 16-bit depth image from NI Vision Acquisition
Actually, I am using an Intel RealSense SR300 to acquire a 16-bit grayscale depth image from the camera. However, it is returning an RGB image which is entirely green.
Can anybody suggest how to get the required depth image?
Regards,
Kashish Dhal
Image contrast is too low
Hello,All!
I use NI Vision to measure the boundary dimension of the protective film on mobile-phone screen glass, but the original image's contrast is very low (< 20) because the protective film is too thin. When I use "Edge Detector" to find the edge, I must set the "min edge strength" very low (almost 10) or I can't find the edge. The problem is that with the parameter set so small, the test result is very unstable and often finds the wrong edge. I think some image enhancement methods could improve the contrast, but I'm not familiar with this. Can anyone give some suggestions? Ideally with some image-processing sample code based on the image in my attachment. Thanks very much!
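One common enhancement for this situation is a linear contrast stretch before edge detection, so the narrow gray-level band of the film edge is expanded across the full 0-255 range and the edge-strength threshold can stay at a safe value. A minimal percentile-based sketch in Python (NI Vision's built-in BCG/lookup and equalize operators in Vision Assistant play the same role; this only illustrates the idea, and the percentile values are arbitrary starting points):

```python
def stretch_contrast(pixels, lo_pct=1.0, hi_pct=99.0):
    """Linear contrast stretch: map the lo/hi percentiles of the
    input gray levels to 0..255, clipping outliers at both ends.
    Using percentiles instead of min/max makes the stretch robust
    to a few hot or dead pixels."""
    s = sorted(pixels)
    lo = s[int(len(s) * lo_pct / 100.0)]
    hi = s[min(int(len(s) * hi_pct / 100.0), len(s) - 1)]
    if hi <= lo:
        return list(pixels)  # flat image: nothing to stretch
    scale = 255.0 / (hi - lo)
    return [max(0, min(255, round((p - lo) * scale))) for p in pixels]
```

Restricting the stretch (or the edge search) to a small ROI around the expected edge position also helps, since the global histogram is then not dominated by the bright glass background.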
Move Coordinate axis by an angle
How can I move a coordinate axis or a line by an angle (e.g., 15 degrees)?
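As an illustration of the underlying math (not tied to any particular NI tool), rotating a point or a line endpoint about a chosen center by a given angle is a standard 2D rotation; rotating both endpoints of a line rotates the line. A small Python sketch (note that image coordinates usually have y pointing down, which flips the visual rotation direction compared to the math convention used here):

```python
import math

def rotate_about(px, py, cx, cy, angle_deg):
    """Rotate point (px, py) about center (cx, cy) by angle_deg,
    counter-clockwise in a y-up frame. Rotating an axis by 15 degrees
    means applying this to a point that defines the axis direction."""
    a = math.radians(angle_deg)
    dx, dy = px - cx, py - cy
    return (cx + dx * math.cos(a) - dy * math.sin(a),
            cy + dx * math.sin(a) + dy * math.cos(a))
```

For example, rotating the axis-defining point (1, 0) about the origin by 15 degrees yields (cos 15°, sin 15°), which can then be fed back into whatever step consumes the two axis points.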
Customized region of interest
I want to be able to draw an ROI using the cursor. Right now, Region of Interest only has the following options: point, line, rectangle, oval and magic wand. Nothing lets me draw freehand with the cursor. How can I do that?
Light Controller PAD2 1136/1
I have a PAD2 1136/1 controller (LATAB) for lighting my vision project.
I'm trying to write a LabVIEW program to control the device, but no luck so far.
It looks like a "simple program" over RS232!
Does anyone have experience with this device?