hi
How can I use the image read from a USB 2.0 webcam in my application?
When the application is running, I see some errors.
thanks.
Basler acA2000-165um USB 3.0 area scan camera.
Its specified frame rate is 165 fps, but the best I can get in MAX is only around 90-100 fps, even after dropping the exposure time to the minimum. Please see the following screenshot.
My first thought was that maybe I had plugged it into a USB 2.0 port instead. But no, this camera only accepts a USB 3.0 port; it actually displays an error message if plugged into USB 2.0.
I don't think I need an NI acquisition board for it either. It is USB, correct?
Please share your thoughts about how I can speed up the frame rate. Thanks much.
Dear Forum,
How can I create an accumulated histogram from a sequence of images (100) with different exposures?
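To clarify what I mean by "accumulated": conceptually it is something like the sketch below (plain C with 8-bit grayscale buffers rather than LabVIEW, and the image loading itself is left out), where the per-image histograms are summed into one running total:

#include <stddef.h>
#include <stdint.h>

#define NUM_IMAGES 100
#define NUM_BINS   256

/* Sum the 8-bit histograms of all images into a single accumulated histogram.
   images[i] points to the pixel data of image i; every image has num_pixels pixels. */
void accumulate_histogram(const uint8_t *images[NUM_IMAGES], size_t num_pixels,
                          uint32_t hist[NUM_BINS])
{
    for (int b = 0; b < NUM_BINS; b++)
        hist[b] = 0;

    for (int i = 0; i < NUM_IMAGES; i++)
        for (size_t p = 0; p < num_pixels; p++)
            hist[images[i][p]]++;   /* add this exposure's pixels to the running total */
}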
need help
I don't understand the vision software: how to identify objects in video, build coordinate systems, and draw object coordinates. How do I make a graph of displacement versus time? Who can help me? Thank you very much!
Is there a way to acquire saved images from an external HDD on the CVS via Ethernet sharing? The use case would be, e.g., saving imagery to an external HDD (on the CVS) and then accessing those images from another PC connected to the CVS's network port.
Who can help me find the problem and provide solutions?
I am using CVI 2015 SP1 and Vision Acquisition version May 2017. When I use
error = imaqWriteFile (image, "Combo_myfile_01.tif", NULL);
to save an image, a valid image is written to disk. If I call this function again with the same file name and the file already exists, the file creation date does not change in the file browser.
Note that I have already set default drive and directory using SetDrive() and SetDir().
Is this normal behavior? Can I change this behavior so I can overwrite a file? Do I need to make a system call to delete previous file before saving new file with same name?
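For reference, the workaround I have in mind is something like the sketch below (untested; it simply deletes any existing file with the standard C remove() before writing, assuming the usual NI Vision CVI header):

#include <stdio.h>      /* remove() */
#include <nivision.h>   /* imaqWriteFile(), Image */

/* Delete any existing file of the same name before writing, so the new file
   gets a fresh creation date. Untested sketch; error handling kept minimal. */
static int WriteImageOverwriting(Image *image, const char *fileName)
{
    remove(fileName);   /* ignore the result if the file does not exist yet */
    return imaqWriteFile(image, fileName, NULL);
}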
thanks, Ron
Hi, I am making a project using the myRIO. Can we store images on the myRIO? If yes, please provide an example of how to do it. I also want to do an image-to-array conversion; can the myRIO handle that, or do we have to use other hardware? Please point me to sites and examples. I ask because I am new to LabVIEW but have taken on a big project based on the myRIO.
How do I output the position X from matches? I want to extract the X element. Who can help me, please?
Hi,
I am trying to solve Error 0xBFF69011 "Unable to get attribute" for an EASYCAP USB frame grabber. Measurement & Automation Explorer is not able to locate or generate an XML file for this frame grabber in NI-IMAQdx. I have tried every solution described, without success.
I have noticed that several people have somehow managed to use this USB frame grabber.
Could somebody send me an XML file for the EASYCAP USB frame grabber?
I would like to analyse it and try to adapt it to my EASYCAP USB frame grabber.
Location of the XML file is usually: C:\Users\Public\Documents\National Instruments\NI-IMAQdx\Data\XML
Regards,
Roman
Hello, everyone!
First, let me say up front: please don't laugh at my poor English.
OK, let's get to the point. I need to measure the radius of an object's rounded corner (a quadrant). At first I used "IMAQ Find Circular Edge 3.vi" to measure the radius. I tested it 100 times to check repeatability (the object and its position remained unchanged) and found that the results varied greatly. I adjusted "Kernel Size", "Projection Width", "Minimum Edge Strength", and so on, but the results were still bad. Then I used "IMAQ Detect Circles.vi" to measure the radius, and the repeatability was still not good. Finally, instead of measuring the rounded corner (quadrant), I measured a semicircle and a full circle on the object, and found that the radius repeatability was very stable, whether I used "IMAQ Find Circular Edge 3.vi" or "IMAQ Detect Circles.vi". From these test results, it seems the NI Vision circle-finding algorithms are limited by the length of the arc.
However, I need to measure the quadrant's radius, and good repeatability is required. Can anyone give me some ideas?
Dear Labview users,
I need help showing a colormap with my images in LabVIEW. Can anyone please explain how to show a colormap, for example the rainbow palette? I have attached a small portion of my detector image.
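To be clear about what I am after: essentially a 256-entry lookup table that maps each 8-bit intensity to an RGB colour, which is then applied to the image. A rough sketch of such a rainbow-style table (plain C, only to illustrate the idea; the exact colours of the LabVIEW rainbow palette may differ):

#include <stdint.h>

typedef struct { uint8_t r, g, b; } RGB8;

/* Build a simple 256-entry "rainbow" style lookup table:
   blue -> cyan -> green -> yellow -> red across the intensity range.
   This only illustrates the idea of a colour palette. */
void build_rainbow_lut(RGB8 lut[256])
{
    for (int i = 0; i < 256; i++) {
        if (i < 64) {                     /* blue -> cyan */
            lut[i].r = 0;
            lut[i].g = (uint8_t)(i * 4);
            lut[i].b = 255;
        } else if (i < 128) {             /* cyan -> green */
            lut[i].r = 0;
            lut[i].g = 255;
            lut[i].b = (uint8_t)(255 - (i - 64) * 4);
        } else if (i < 192) {             /* green -> yellow */
            lut[i].r = (uint8_t)((i - 128) * 4);
            lut[i].g = 255;
            lut[i].b = 0;
        } else {                          /* yellow -> red */
            lut[i].r = 255;
            lut[i].g = (uint8_t)(255 - (i - 192) * 4);
            lut[i].b = 0;
        }
    }
}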
Thanks,
nomi
Hello. I am a student trying to track the motion of resin flow in a laminate using NI Vision Builder. I took a video of the process and saved it as an AVI file. However, when I try to upload it to Vision Builder, it gives me the error message "You have selected an invalid image type." This confuses me because the AVI file type is clearly stated as an acceptable format for video uploads to NI Vision Builder.
I read somewhere that I have to change it to grayscale, but doesn't that mean the video has to be uploaded first?
Hi,
I have an issue and I was wondering if anyone here might have an idea what could be causing it.
I'm working on an automated quality control system, which currently consists of an NI 1454 and two Sony cameras (an XCD-SX910 and an XCD-SX710) that are connected to it and work fine. This system was bought as a whole, and now I plan to change/upgrade it.
I have now bought an additional XCD-SX910 camera, and this camera doesn't show anything except a gray screen (even if I fully cover the lens, whereas the other cameras then show a black screen).
MAX detects it and shows it as a normally working camera, without any errors. The camera is identical to the pre-owned one; the only difference I have noticed is that it is a different version from the other two. MAX reports the SX710 and the first SX910 as v2.20E, while the additional SX910 is v2.01D.
I have tried disconnecting and reconnecting them in all possible combinations, but the issue always stays with the same camera.
NI-IMAQ for IEEE 1394 RT is version 1.5.2.
Could the problem be caused by the different version? (I don't know why that would be an issue.)
Or is it probably just a faulty camera? (It was bought used.)
Thank you
Hi,
I want to use the OpenCV library in LabVIEW 2014 for machine vision.
What should I do?
If there is a download link for OpenCV, please post it here.
Thanks.
Hi,
I have a color image that is grabbed, thresholded and then analysed. After the analysis I put an overlay on the binary image.
If the result of the test is KO (fail), I would like to save the original color image next to the binary image in a single image file, but I can't find out how to do it.
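Conceptually, what I am after is something like the sketch below (plain 8-bit buffers rather than IMAQ images, just to illustrate the side-by-side layout I want in the saved file):

#include <stdint.h>
#include <string.h>

/* Place two images of the same height side by side in one destination buffer.
   In the real application the sources would be the colour image and the
   binary (overlaid) image; this sketch only shows the row-by-row layout. */
void place_side_by_side(const uint8_t *left,  int left_width,
                        const uint8_t *right, int right_width,
                        int height,
                        uint8_t *dest /* size (left_width + right_width) * height */)
{
    int dest_width = left_width + right_width;

    for (int row = 0; row < height; row++) {
        memcpy(dest + row * dest_width,              left  + row * left_width,  (size_t)left_width);
        memcpy(dest + row * dest_width + left_width, right + row * right_width, (size_t)right_width);
    }
}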
Suggestion?
My LabVIEW 2014 SP1 image capture tool has been in use for multiple years since I wrote it. It has successfully streamed and captured, on request, multiple gigabytes of image files from IMAQ and IMAQdx cameras, including GEV cameras. Back in late 2015 I had a problem where connected GEV cameras no longer streamed video; that was fixed after updating the drivers in my LabVIEW 2014 install.
The problem here is on two different workstations, one named grab1 and the other picasso. Both were working previously; the last I recall them working with RS-170 cameras was the middle of last year. No further NI LabVIEW updates have been installed, and these workstations are on an isolated lab network, meaning there is no connection to the outside internet. Somehow I now have a similar problem on the IMAQ side on both machines, and I know where it fails; I have read other posts and tried all the things others suggested. Both workstations have LabVIEW 2014 SP1 and Vision Acquisition; I know MAX is v15 on grab1 and I believe v17 on picasso. Both give the identical error -1074397153, which in short means an unrecognizable video source. In MAX, on both machines, the video streams in perfectly. I tried adjusting the white and black levels and tried creating a new RS-170 camera file, and as of this morning this is consistent across RS-170 cameras from 5 different manufacturers, so I cannot say it is limited to one model or manufacturer.
The issue is with the VI used to set up a grab session; that is the one raising the error, which occurs when I tell my tool to connect on a chosen interface. I was trying to get a JPG to attach, but so far it is on my phone and my office network seems to be dropping the file.
While waiting for all you brilliant people to think and reply, I am going to try a connection with a Camera Link camera to make sure it isn't all the IMAQ camera types I work with.
Any ideas?
-- Bill
I have an image processing function that looks at the difference between the current image and a reference image stored earlier. I am trying to implement this calculation on an FPGA, but I'm having trouble recalling the stored image from DRAM at 100 MHz. It seems to be able to recall only every other pixel, and even then the values that come back are randomly offset. I'm using a PXIe-7966R with an NI-1483 capture card. Is it possible to access DRAM at 100 MHz?
I've been trying to find the correct way to compute the orientation of the gradient for the Sobel operator. The formula
atan[x, y] = Math.Atan2((respondY), (respondX)) * (180.00 / Math.PI);
if (atan[x, y] < 0) atan[x, y] = atan[x, y] + 180;
seems clear enough to understand, and it looks like it should work. It does work for horizontal and vertical lines, but if the input image has a diagonal edge, the computed angle is incorrect.
This is my code:
double[,] respond = new double[4, 4];
double[,] atan = new double[4, 4];
double respondX;
double respondY;

int[,] Gx = new int[3, 3] { { -1, 0, 1 }, { -2, 0, 2 }, { -1, 0, 1 } };
int[,] Gy = new int[3, 3] { { -1, -2, -1 }, { 0, 0, 0 }, { 1, 2, 1 } };

int[,] image = new int[6, 6] { { 255, 255, 255, 255, 255,   0 },
                               { 255, 255, 255, 255,   0, 255 },
                               { 255, 255, 255,   0, 255, 255 },
                               { 255, 255,   0, 255, 255, 255 },
                               { 255,   0, 255, 255, 255, 255 },
                               {   0, 255, 255, 255, 255, 255 } };

for (int x = 0; x < atan.GetLength(0); x++)
    for (int y = 0; y < atan.GetLength(1); y++)
    {
        respondX = 0.0;
        respondY = 0.0;

        // apply the Sobel kernels to the 3x3 neighbourhood
        for (int u = 0; u < 3; u++)
            for (int v = 0; v < 3; v++)
            {
                respondX += Gx[u, v] * image[u + x, v + y];
                respondY += Gy[u, v] * image[u + x, v + y];
            }

        // gradient orientation in degrees, wrapped into [0, 180)
        atan[x, y] = Math.Atan2(respondY, respondX) * (180.00 / Math.PI);
        if (atan[x, y] < 0) atan[x, y] = atan[x, y] + 180;

        // quantize the orientation to 0, 45, 90 or 135 degrees
        if ((atan[x, y] > 0 && atan[x, y] < 22.5) || (atan[x, y] > 157.5 && atan[x, y] <= 180))
        {
            atan[x, y] = 0;
        }
        else if (atan[x, y] > 22.5 && atan[x, y] < 67.5)
        {
            atan[x, y] = 45;
        }
        else if (atan[x, y] > 67.5 && atan[x, y] < 112.5)
        {
            atan[x, y] = 90;
        }
        else if (atan[x, y] > 112.5 && atan[x, y] < 157.5)
        {
            atan[x, y] = 135;
        }

        // gradient magnitude
        respond[x, y] = Math.Sqrt(respondX * respondX + respondY * respondY);
    }
As you can see from the image array, the edge "travels" from the bottom-left to the upper-right corner (which is equivalent to 45°). So, because the angle I get (the gradient orientation) should be perpendicular to the edge, the result should contain 135°. But instead I get the following values in the atan array:
0 45 45 0
45 45 0 45
45 0 45 45
0 45 45 0
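To sanity-check the quadrant handling, I also evaluated the atan2 step on its own for one of the windows on the diagonal (responses -255/-255), as a standalone check in plain C rather than the C# above:

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Sobel responses copied by hand from one 3x3 window on the diagonal. */
    double respondX = -255.0;
    double respondY = -255.0;

    /* Same formula as in the C# code above. */
    double angle = atan2(respondY, respondX) * (180.0 / 3.14159265358979323846);  /* -135 degrees */
    if (angle < 0)
        angle += 180.0;                                                            /* wraps to 45 degrees */

    printf("orientation = %.1f degrees\n", angle);
    return 0;
}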
What am I doing wrong?
Thanks in advance!