
Detecting Signals in Noise

Sometimes I like to write code just for fun. Today is one of those days. It's a rainy Sunday afternoon here in Seattle and so I've been dabbling around with code just for the fun of it.

The code I wrote is just some basic graphics and simple modelling of signals in a noisy environment. The best analogy I can think of is a radar system experiencing a lot of random noise and interference.

Below are some grids of data showing 'radar' coverage. It's a digital radar system: a pixel is set if we detect something present in that region, and blank if nothing is detected.

Sadly, however, our system is very noisy. Pixels are being set even if there is nothing in that region. Below are three different examples of different levels of noise.

1% 5% 15%

On the left is an example with 1% noise; each pixel has a 1% chance of falsely reporting a target in that region. In the middle is a sample showing 5% noise, and on the right is an example of 15% noise. It's getting increasingly busy. The images above show pure random noise. If there were a real target in this clutter, how would we find it?
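The article doesn't show the code, but frames like these take only a few lines to generate. Here is a minimal Python sketch, assuming a square boolean grid with a fixed per-pixel false-positive chance (the noisy_frame name and its parameters are mine, for illustration; they are not from the original code):

import random

def noisy_frame(size=50, noise=0.05):
    # Illustrative helper (not from the original article): return a
    # size x size boolean grid where each pixel has a `noise` chance
    # of falsely reporting a detection.
    return [[random.random() < noise for _ in range(size)]
            for _ in range(size)]

frame = noisy_frame(size=50, noise=0.15)   # e.g. the 15% example on the right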

Monster Early Warning Radar

Let's say we're trying to use this radar as an early warning detection system to alert us when our Monster Friend is back to try and eat our brains.

We know he's flying in his saucer, and that his saucer always gives a strong positive return (meaning that if he is in a region that pixel will always be set). How do we distinguish between a true monster signal and a false positive from the noise?

Since the noise is random, if the monster stays in the same place we can find him, because his return will be constant over multiple sweeps of the radar. Computers are good at this kind of thing; how good are you? In the three grids below there is one pixel that is in the same location on all three images. Can you find it?

Scan #1 Scan #2 Scan #3

Did you find it?

To see the fixed location (in red), click the button.

That wasn't too hard
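A computer finds a constant pixel like this trivially: it just intersects the scans. Here is a rough Python sketch of that idea (my illustration, not the article's code), reusing the square boolean-grid representation from the noisy_frame sketch above:

def constant_pixels(frames):
    # Sketch only: return the (row, col) locations set on every frame,
    # assuming square grids of the same size.
    size = len(frames[0])
    return [(r, c) for r in range(size) for c in range(size)
            if all(f[r][c] for f in frames)]

# With three scans, the static target is among whatever survives:
# hits = constant_pixels([scan1, scan2, scan3])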

What happens if there is more noise? Again, if we have the luxury of a static target, all we need to do is average the images. The noise, being random, will cancel out, but the target will be constant. Below is a collection of images with 20% noise.

To render these images I've averaged ever-increasing collections of radar frames. The brightness of each pixel represents the fraction of frames in the collection in which that pixel was set.

The top-left image shows just one frame. The target is hiding in here, lost amongst all the other false positives. The middle image on the top row shows the average of two frames. In this image, the white pixels represent locations set on both frames, the grey pixels locations set on one frame or the other (but not both), and the black pixels locations set on neither. Obviously, our target is hiding behind one of the white pixels, but we do not yet have enough data to disambiguate.

1 Image 2 Images 3 Images
8 Images 100 Images 700 Images

By the time we've merged eight images, only a couple of possible target locations are showing through: the true target, plus just those pixels that happened to be randomly set on all the sampled frames. With a base chance of 20% per frame, the probability that any given noise pixel is set on all eight frames is (0.2)^8 ≈ 0.00026%, a tiny fraction of a percent.

With hundreds of images merged, the background noise averages out to a pretty uniform sea of gray (at 20% intensity, since, on average, each pixel is falsely set in roughly one in five frames), and the true target shines through clearly. You can see it here close to the lower edge.
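The averaging step itself is simple. Below is a rough sketch under the same grid assumptions as before (the helper names are mine): sum the boolean frames, treat the per-pixel fraction as brightness, and take the brightest pixel as the best guess for a static target.

def average_frames(frames):
    # Per-pixel fraction of frames in which that pixel was set (0.0 to 1.0).
    size, n = len(frames[0]), len(frames)
    return [[sum(f[r][c] for f in frames) / n for c in range(size)]
            for r in range(size)]

def brightest_pixel(avg):
    # The location with the highest average -- the likely static target.
    rows, cols = len(avg), len(avg[0])
    return max(((r, c) for r in range(rows) for c in range(cols)),
               key=lambda rc: avg[rc[0]][rc[1]])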

NOTE: You can see here that it doesn't matter if our radar system is not 100% reliable at positively identifying a target (sometimes returning false even when there is a target in that location). As long as it detects the target with a higher probability than the noise rate, we should be able to find it eventually. The greater the gap between the true detection rate and the false-positive rate, the fewer frames we need to reliably detect the monster.
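To make that note concrete, here is a small, purely illustrative sketch (the p_hit parameter and the target location are mine, not from the article) that plants a target which is only detected with probability p_hit; averaging still finds it as long as p_hit comfortably exceeds the noise rate. It reuses the helpers sketched above.

def frame_with_target(size=50, noise=0.2, target=(40, 12), p_hit=0.8):
    # Illustrative only: a noisy frame whose target pixel is set with
    # probability p_hit, i.e. the radar sometimes misses the target.
    f = noisy_frame(size, noise)
    r, c = target
    f[r][c] = random.random() < p_hit
    return f

# frames = [frame_with_target() for _ in range(200)]
# print(brightest_pixel(average_frames(frames)))   # should report (40, 12)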

What about moving targets?

The averaging technique works well, but what happens if the target is moving? A moving target will be at a different pixel location on each frame (depending on speed). Since it will not be in the same location, averaging will not help and our target will get lost in the random noise.

Think you can spot a moving target in a sea of noise? Below are two examples at different levels of noise. In each animation there is one pixel that moves in a straight line at constant speed. If you look very carefully you might just be able to make out the path in the top image. The bottom image is practically impossible without assistance. Click 'Reveal' to see the solution in red.

 

How do we detect a moving target?

Solution

(Before I start, I want to remind my readers that this is a very simple solution to a very interesting problem. It's a solution coded on a rainy afternoon in a few dozen lines of code. There are lots of caveats and plenty of scope for improvement. It is not meant as a definitive guide to solving these problems.)

On the assumption that our target is moving at a constant rate, the position of the true target will have moved by the same amount between each pair of consecutive frames. I quantized this displacement into dx and dy. But we don't know which of the pixels is the true target, and which is noise …

The algorithm I used is to examine each pair of pixels spanning two consecutive frames (take each pixel on frame n and compare it to every pixel on frame n+1). If the distance between them is too large (above a maximum speed threshold), there is no way that the pixel on the first frame could have moved to the pixel on the second frame.

If the distance is below this speed threshold, there is a chance that the first pixel could have moved to the second pixel, and I record the dx and dy between the two locations. Of course, it's very likely that this is just a random displacement between two pieces of noise, but hidden amongst this random data is the true target.

I store all the possible (dx, dy) pairs in an accumulator, incrementing the count each time that vector is encountered. Over multiple frames the accumulator builds up higher and higher scores for each vector. Just as averaging works for static targets, over multiple frames the vector for the true target's movement will be present and consistent on every frame, while the bogus vectors (those generated between pairs of noise particles) will accumulate smaller values.
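My reading of that description, as a compact Python sketch rather than the author's actual code: here frames are sets of (row, col) coordinates of the set pixels (a different representation from the grid sketches above), and max_speed is an assumed threshold, not a value from the article.

from collections import Counter

def best_velocity(frames, max_speed=3):
    # Tally the (dr, dc) displacement between every pixel on frame n and
    # every pixel on frame n+1 that lies within the speed threshold,
    # then return the most frequently seen displacement.
    # max_speed is an assumed parameter for this sketch.
    votes = Counter()
    for a, b in zip(frames, frames[1:]):
        for (r1, c1) in a:
            for (r2, c2) in b:
                dr, dc = r2 - r1, c2 - c1
                if abs(dr) <= max_speed and abs(dc) <= max_speed:
                    votes[(dr, dc)] += 1
    return votes.most_common(1)[0][0] if votes else None

def track_target(frames, velocity):
    # Pixels on each frame that have a matching pixel one velocity-step
    # later on the next frame -- candidates for the true target's track.
    dr, dc = velocity
    return [[(r, c) for (r, c) in a if (r + dr, c + dc) in b]
            for a, b in zip(frames, frames[1:])]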

By selecting the vector from the accumulator with the highest frequency, and identifying the pixel that is part of this chain, we can hopefully determine the true target. Here is an example of this algorithm in action:

The left of the image shows the target buried amongst the random noise. The right of the image shows the detected target.

Not bad for a few lines of code!

 
