Channel: Machine Vision topics
Viewing all 3182 articles

improving performance


Hello, I am using Vision Acquisition, Vision Assistant, and a webcam to identify objects and track them. I did this with pattern matching inside Vision Assistant, but the problem is that the streaming is very slow: for example, if I move my hand quickly in front of the webcam, I don't see it in the image display. Is there a way to make it faster, or is it an issue with my PC specifications (processor, RAM)?

 

And I have one more question about identification. If the object is directly in front of the webcam it is easy to identify, but if I move it a little farther away it is no longer identified, and sometimes the same happens if I move it to the right or to the left. I tried decreasing the minimum score, but at some point it starts detecting objects other than the one in the template. Is there a way to deal with this issue, or is it because I am using a webcam, so I should use something with higher resolution? I need my program to keep tracking the object quickly even if I move it in any direction.
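One common way to speed up pattern matching is to restrict the search to a small region around the object's last known position instead of scanning the whole frame (in Vision Assistant this corresponds to setting a search region for the match step). A minimal sketch of the idea in Python/NumPy, using a hypothetical SSD-based matcher rather than NI's algorithm:

```python
import numpy as np

def match_in_roi(frame, template, last_xy, search=40):
    """Find the best template match by sum-of-squared-differences,
    searching only +/- `search` pixels around the last known position.
    Returns the (x, y) top-left corner of the best match."""
    th, tw = template.shape
    lx, ly = last_xy
    # Clamp the search window to the frame bounds.
    x0 = max(0, lx - search)
    y0 = max(0, ly - search)
    x1 = min(frame.shape[1] - tw, lx + search)
    y1 = min(frame.shape[0] - th, ly + search)
    best, best_xy = np.inf, (lx, ly)
    for y in range(y0, y1 + 1):
        for x in range(x0, x1 + 1):
            ssd = np.sum((frame[y:y+th, x:x+tw].astype(float) - template) ** 2)
            if ssd < best:
                best, best_xy = ssd, (x, y)
    return best_xy
```

The cost now scales with the search window, not the full image, which is why a small search region makes per-frame tracking much faster.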


Devices not showing up in Measurement & Automation


Old system ... XP operating system ... PCI-1407 and PCI-1408 IMAQ cards and a 6602 timing card. None of these devices show up under Devices and Interfaces. Any ideas?

image is not large enough for the operation


Hi guys, I have checked through the forum for a solution to this.
Some say to wire the error cluster from one VI to the next, but I found that my problem still occurs.

And some say the problem is with the memory for the src image and the dst image.

But in my case, the problem is that IMAQ Find Straight Edge does not have an image dst input, so I can't use IMAQ Create to allocate a memory location for it.

Can anyone tell me how to solve this problem?

And here is a snapshot of the part of my VI that has this problem.

I tried using a flat sequence structure, and it does work... but when I connect it inside the while loop, the problem occurs again.

 

imagentlarge.png

How to trigger a light processor using an IEEE camera

Cannot acquire image with NI1744 using the configuration interface of Vision Builder AI 2011: error -1073774588


I am using an NI 1744 with NI Vision Builder AI 2011. When I am in the inspection interface, the camera will acquire and display an image. When I switch over to the configuration interface and try to set up the acquire image step, I get error -1073774588: undefined error. I sometimes also get error -1074396159: not enough memory.

Camera compatibility to use machine vision


Is there any restriction that only specific cameras can be used for machine vision in LabVIEW, or can I use any digital camera or webcam?

integration of matlab code with labview


Hello, I want to know whether we can bring image-processing MATLAB code into the LabVIEW environment. I have seen that code such as FFT, DFT, and some signal-processing code can be executed in LabVIEW. Is it possible to run image-processing code as well?

 

1. How compatible is OpenCV with LabVIEW?

2. How is OpenCV used with LabVIEW, and what steps need to be followed?

3. Is it useful for tracking and recognition of moving objects?

Please reply; it will be helpful for our project.
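As a point of reference, the frequency-domain techniques mentioned above (FFT/DFT) extend naturally from 1-D signals to images. A minimal NumPy sketch of a 2-D FFT low-pass filter, purely as an illustration of the idea (this is neither LabVIEW nor OpenCV code):

```python
import numpy as np

def fft_lowpass(img, keep=0.1):
    """Low-pass filter an image by zeroing high spatial frequencies.
    `keep` is the fraction of frequencies retained per axis."""
    f = np.fft.fftshift(np.fft.fft2(img))       # move DC to the centre
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    ch, cw = h // 2, w // 2
    rh, rw = int(h * keep / 2), int(w * keep / 2)
    mask[ch - rh:ch + rh + 1, cw - rw:cw + rw + 1] = True
    # Zero everything outside the central (low-frequency) window.
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))
```

The same FFT-based filtering is available in LabVIEW via the Vision Development Module's frequency-domain VIs or a MathScript node.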

Color quantization


Hi,

Can you please provide me with source code for color quantization of an image in LabVIEW? It would be very helpful to me.

 

Thank you
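In the meantime, here is a minimal sketch of one common approach, uniform quantization, written in Python/NumPy rather than LabVIEW; the same per-pixel arithmetic could be reproduced with IMAQ arithmetic operators:

```python
import numpy as np

def quantize_uniform(img, levels=4):
    """Uniformly quantize each 8-bit channel to `levels` values.
    img: uint8 array of any shape (e.g. H x W x 3 for RGB)."""
    step = 256 // levels
    # Map each value to the centre of its bin.
    q = (img.astype(int) // step) * step + step // 2
    return np.clip(q, 0, 255).astype(np.uint8)
```

More sophisticated schemes (k-means / median cut) pick the palette from the image itself, but uniform quantization is the simplest starting point.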


How to calibrate a stereo vision system with two cameras?


Hello, everyone!

Since NI released LabVIEW 2012 and VDM 2012, I have been using the Stereo Vision Example.vi to calibrate my stereo vision system, but in one year I have tried hundreds of times and succeeded only a few times. The depth image always looks like this:

替换basler相机.jpg

So I thought the stereo vision module had a problem, and I waited for an update...

After VDM 2012 SP1 was released, I updated my VDM, and there was a little improvement: I can now calibrate my system using five images with almost 100% grid coverage.

替换basler相机1.jpg

I searched for help. Thanks to Klemen, who helped a lot. There is a lot of useful material in his post:

http://forums.ni.com/t5/Machine-Vision/Stereo-library-2012-pointers/m-p/2171812/highlight/true#M3672...

and his code

http://forums.ni.com/t5/Machine-Vision/Stereo-Vision-and-Projected-Light/m-p/2499898#M39335

A few days ago, NI released LabVIEW 2012 and VDM 2012, and to my great surprise the Stereo Vision module was not updated! It seems that NI is confident in VDM 2012 SP1.

So my calibration method must have been wrong.

OK! After a lot of preamble, here is how to calibrate a stereo vision system with two cameras:

1. The first image must be placed as far away as you can; this determines your system's precision.

2. Calibrate the center grid last! That way you can calibrate the other grids as well as possible before the program considers your system calibrated. Just like this:

示范.jpg

3. Take care with the angles between images; keep each about 3 degrees apart from the others.

Here is the result:

效果.png

I use LabVIEW 2012 f5 + VDM 2012 SP1 and two Basler GigE Vision cameras. I updated some VIs as follows:

update vi.png

I hope this helps someone!

HDR capable


Hello,

 

I need to create an HDR image using different exposures from a camera. At the end, I'd like to cast the image to 8 bits without any bright or dark clipping. Based on the image, I'll know a nominal background threshold value, say 500 in a float image, but the dark end can go as low as 50 and the bright end as high as 3000. How do I scale the middle to 128, the lower end to 1–127, and the upper end to 129–255?
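The mapping described above is a piecewise-linear rescale with three anchor points: 50 → 1, 500 → 128, 3000 → 255, with a separate slope for each side of the midpoint. A sketch in Python/NumPy, assuming those example thresholds:

```python
import numpy as np

def scale_to_8bit(img, lo=50.0, mid=500.0, hi=3000.0):
    """Piecewise-linear map of a float image to 8 bit:
    lo -> 1, mid -> 128, hi -> 255, with separate slopes below
    and above the midpoint so neither end clips."""
    img = np.asarray(img, dtype=float)
    below = 1.0 + (img - lo) * (127.0 / (mid - lo))    # [lo, mid] -> [1, 128]
    above = 128.0 + (img - mid) * (127.0 / (hi - mid)) # [mid, hi] -> [128, 255]
    out = np.where(img <= mid, below, above)
    return np.clip(np.round(out), 1, 255).astype(np.uint8)
```

The same two multiply-add passes could be built in LabVIEW with array arithmetic before the cast to U8.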

 

Thanks

color histogram bin reduction


Hi,

What method can be used for color histogram bin reduction, i.e., which method can reduce the number of histogram bins for a color image?

If you can provide an algorithm, it would be helpful.
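One simple algorithm is to merge groups of adjacent bins by summing them (e.g. 256 bins down to 32); for an RGB image, apply it to each channel's histogram separately, or equivalently quantize pixel values with `value // factor` before histogramming. A minimal NumPy sketch:

```python
import numpy as np

def reduce_bins(hist, factor):
    """Merge every `factor` adjacent histogram bins by summing them.
    len(hist) must be divisible by factor (e.g. 256 -> 32 with factor 8)."""
    hist = np.asarray(hist)
    # Group consecutive bins into rows, then sum each group.
    return hist.reshape(-1, factor).sum(axis=1)
```

This preserves total counts exactly, since each original bin contributes to exactly one merged bin.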

 

Thank you

Vision Executable does not work


I have an Imaging Source DMK 23G445 camera. It is a great camera with a very sensitive sensor, but first of all there are issues with it in MAX.

It works fine with the manufacturer's software and drivers.

In MAX it appears as two cameras: a DirectShow camera and a GigE camera. As a DirectShow camera it works very well and is quite stable, but as a GigE camera it is less stable. To make it work I have to reduce the packet size to 1400 and the frame rate to 3.75 FPS; with any higher values the images are distorted. That is the first issue, and there are two questions related to it: is it possible to use it as a DirectShow camera on a client's PC after creating an exe file? And why do the packet size and FPS have to be so low?

 

There are some LabVIEW examples provided by the manufacturer, but I could not work out how to use them in my program, as they look completely different from what I use to build my application. They do not use MAX at all.

 

 

After creating an executable of my application for another PC, I faced a different problem: it just does not work.

The exe starts but does not show images from the camera. Everything else works, but there is nothing to analyze. I checked ENABLE DEBUGGING when creating the executable, but it still does not show any error messages.

It is less stable in MAX as well. Most of the time it does not work at all and shows the message: Error 0xBFF69031, The system did not receive a test packet...

Even when it works in MAX with the same low FPS and packet size, it does not work in the exe file.

 

On my PC I have LabVIEW 8.6.1

Vision 8.6

NI-Imaqdx 4.0

 

I have installed LabVIEW Run-Time 8.6.1

and VAS August 2012 on the other PC where I would like to run the exe.

 

And to add to the question: last week I tried to install it as well; on Friday it did not work, but when I came back and tried on Wednesday, the exe file worked OK.

I removed it and reinstalled it again, and now I cannot make it work again.

Any ideas where to start sorting this out?


imageTOimage background


Hello,

 

I was wondering if anyone has had a similar problem. I am using "IMAQ Image to Image 2" to join a smaller (static) image and an image from a live stream (webcam), as shown in the image below.

 

image.png

 

The different image resolutions are clearly responsible for the "background" area surrounded by the green rectangle (overlaid with an image editor to present my problem more clearly). I am wondering why there are periodic stripes there rather than just a black background.

 

Thank you and best regards,

K

Target feature


Hi,

In the attached flow chart, I would like to know what "target feature" means.

Here I give an input image (attached); from that image I have to detect (mark with a rectangular box) the printer at the output, after LBP histogram matching.

 

 

Screen Shot 2013-08-17 at 12.51.48 AM.png

Thank you
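For reference, the LBP histogram used in the matching step of the flow chart can be sketched as follows (this is a basic 8-neighbour LBP in Python/NumPy; the exact variant the chart assumes may differ, e.g. uniform patterns):

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbour Local Binary Pattern histogram (256 bins).
    Each interior pixel gets an 8-bit code: bit i is set when the
    i-th neighbour is >= the centre pixel."""
    img = np.asarray(img, dtype=int)
    c = img[1:-1, 1:-1]                     # centre pixels
    # Eight neighbour offsets, clockwise from top-left.
    offs = [(0, 0), (0, 1), (0, 2), (1, 2),
            (2, 2), (2, 1), (2, 0), (1, 0)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offs):
        n = img[dy:dy + c.shape[0], dx:dx + c.shape[1]]
        code |= (n >= c).astype(int) << bit
    hist = np.bincount(code.ravel(), minlength=256)
    return hist / hist.sum()                # normalised for matching
```

Two such histograms (one from the template region, one from a candidate window) are then compared with a distance measure such as chi-squared or histogram intersection.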

Shape detection


Hello,

 

I am trying to develop a VI to detect an image, in this case a Rubik's cube seen from the front. I have configured Vision Assistant, but I would like to refine it, because it only recognizes the shape about half the time.

Do you have any advice or examples that could help me?

 

Thanks in advance!


Get Array of live video of the subtraction and add a scalar value


Hi all,

 

I have a case in which I have to subtract a live video grab (from a USB camera) from a still image to find the displacement of an object. I am converting everything to grayscale and then performing the subtraction, but I also need to convert the final subtracted video into an array and add a scalar value (basically adding 255 or 128) to give a better view of the subtracted video. Attached is my VI; please let me know where I am going wrong.
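The subtraction-plus-scalar step described above can be sketched in NumPy as follows; adding an offset of 128 centres the differences so that both positive and negative displacement stay visible instead of clipping to 0 (this is an illustration of the arithmetic, not the attached VI):

```python
import numpy as np

def diff_with_offset(still, frame, offset=128):
    """Subtract a live frame from a still reference (both grayscale
    uint8), then add a scalar offset before clamping back to uint8,
    so negative differences are preserved around mid-gray."""
    diff = still.astype(int) - frame.astype(int) + offset
    return np.clip(diff, 0, 255).astype(np.uint8)
```

The key point is to do the subtraction in a signed type before adding the offset; subtracting uint8 arrays directly would wrap around and hide the sign of the difference.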

 

-1074360278 Unknown GigE Vision error


I feel stupid asking if anyone knows what an "unknown error" is... but has anyone ever seen such an error with NI-IMAQdx?

 

erreur.png

 

It happens at "IMAQdx Start Acquisition.vi". I'm using a Dalsa GigE camera, if that helps, and I have LV2010 + VDM 2010 SP1 and VAS August 2010.

 

Thanks in advance for any help

human recognition


hello,

I am using LabVIEW 2011, Vision Assistant, and a webcam, and I have also installed Vision Builder AI.

What I am trying to do is human recognition. I was able, for example, to detect my face and colors, but I need my VI to detect humans in general; I don't want to create a new template for each new person the camera encounters. I searched this issue, and all the solutions proposed are algorithm-based, and some use MATLAB.

 

Any idea how to do that?

 

3D position from stereo x, y and disparity

$
0
0

Hi all.

 

I wonder if someone familiar with the Stereo Vision toolkit can help. I have a calibrated stereo system (calibrated using the 'Stereo Vision Example.vi' that ships with the toolkit, then saved the calibration to reload in my application). The next 'conventional' step in stereo vision seems to be generating the disparity image for the scene as a whole, by feature matching between the left and right images. This is highly processor intensive and not necessary for my application, as I know what I'm looking for in each image: a circular feature.

 

 

So, my code currently does this:

1. Initialise - load the stereo calibration (IMAQ Read Binocular Stereo file)

2. grab the image pair from the camera

3. Rectify the images using IMAQ Get Rectified Image From Stereo

4. Locate the circular feature in each rectified image

 

I then have the (x, y) position of the feature in each image, and what I think is the disparity of that feature (the difference in x positions between the left and right images). The feature lies on the same y value, as expected for a correctly calibrated system.

 

The question is how to convert this information to real-world coordinates. The documentation for 'IMAQ Get Binocular Stereo Calibration Info' mentions the 'Q matrix', which I can get using that VI, and says that "Q Matrix can be used to convert pixel coordinates along with the disparity value into real-world 3-D points", but gives no further information. Is the Q matrix relevant? Does anyone know how to use it or what it is? My googling is drawing a blank.
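For what it's worth, reprojection matrices of this kind are conventionally applied by multiplying Q with the homogeneous vector [x, y, d, 1] and then dividing by the resulting W component. A sketch in Python/NumPy; check the NI documentation for the exact pixel-coordinate and disparity-sign convention the VI expects:

```python
import numpy as np

def reproject_point(Q, x, y, d):
    """Convert a rectified pixel (x, y) with disparity d into
    real-world coordinates via the 4x4 reprojection matrix:
    [X Y Z W]^T = Q . [x y d 1]^T, then divide through by W."""
    X, Y, Z, W = np.asarray(Q) @ np.array([x, y, d, 1.0])
    return X / W, Y / W, Z / W
```

Since your feature gives one (x, y, d) triple per frame, this single matrix-vector product replaces computing a full disparity image.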

 

Thanks for reading!

Eye controlled wheelchair


Hello, I want to build an eye-movement-controlled wheelchair using LabVIEW. Could you tell me how to determine the eye position in LabVIEW? We need to check whether the eye is moving left, right, or up. Please help me out.
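One simple starting point is to threshold a grayscale eye image for dark pixels and take their centroid as the pupil position; comparing the centroid's x coordinate against the image centre then gives a left/right decision. A minimal NumPy sketch of that idea (a rough illustration, not a complete gaze tracker):

```python
import numpy as np

def pupil_center(gray, thresh=50):
    """Estimate pupil position as the centroid of dark pixels
    (intensity below `thresh`) in a grayscale eye image.
    Returns (x, y), or None if nothing is dark enough."""
    ys, xs = np.nonzero(np.asarray(gray) < thresh)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())
```

In LabVIEW the equivalent chain would be a threshold step followed by particle analysis (centre of mass) in Vision Assistant.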
