So, a while back I stumbled across this article: http://www.technologyreview.com/view/515651/bell-labs-invents-lensless-camera/

To summarize it, Bell Labs built a single pixel camera that needs no lenses to operate… pretty cool concept, right? I immediately wanted to build one.

After doing some research, I realized the concept was pretty simple: place an array of selectively-blackable windows in front of a light sensor like a photodiode, send random patterns to the windows, record the light level at the photodiode, repeat a few thousand times, and all of a sudden you’re left with a giant system of linear equations, the solution to which is the image the camera “sees”.
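That measurement loop can be sketched in a few lines (Python/NumPy here rather than Matlab, and the dimensions and variable names are just illustrative):

```python
import numpy as np

# Hypothetical toy dimensions (the real simulation used a 256x256 image).
n_side = 8                    # image is n_side x n_side pixels
n = n_side * n_side           # one unknown per pixel
m = 3 * n                     # number of "exposures"

rng = np.random.default_rng(0)
scene = rng.random(n)         # the flattened image the camera "sees"

# Each exposure: open/close the windows at random (0 = blacked out, 1 = open),
# then the photodiode records the total light that gets through.
A = rng.integers(0, 2, size=(m, n)).astype(float)   # one pattern per row
y = A @ scene                                       # photodiode readings

# Each reading is one linear equation, A[i] . x = y[i]; stack m of them
# and you have the giant system whose solution x is the image.
```

Solving for `scene` given only `A` and `y` is the reconstruction problem described next.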

The problem arises when you try to solve this system. The system is HUGE, and with fewer exposures than pixels it's underdetermined, so there's no way you could solve it through Gaussian elimination. Instead, we exploit the fact that natural images are sparse (in some basis) and solve an l1-regularized least-squares problem: the l1 norm steers the solver toward the sparsest solution consistent with the measurements.

As I’m in Princeton for my summer job at PPPL, I’m lacking my workshop and tools, so I couldn’t actually build a single pixel camera. Instead I did the next best thing. I programmed a simulation of one in Matlab.

I used this awesome l1-regularized least-squares solver library called l1_ls from Stanford, and after writing some Matlab code, I was able to “take a picture” of a 256×256 pixel image.

Below is the reconstructed image, taken from 10,000 “exposures” of the virtual single pixel camera, followed by the original image.

It’s not much, but it’s something. It’s a start.

The next step on the programming side is to use Hadamard matrices rather than pseudorandom matrices to generate the sampling patterns. Other groups have done this, and I’m hoping it will help “distribute the sparsity” of the sampling matrices, resulting in a better final image. The step after this is to build one… and maybe one that works in the IR, UV, microwave, or x-ray spectrum.
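For illustration, here's one way such patterns could be generated, using Sylvester's construction (my choice here; other Hadamard constructions exist). The rows of a Hadamard matrix are mutually orthogonal, so each exposure probes a distinct spatial component instead of overlapping heavily with the others the way random patterns do:

```python
import numpy as np

def sylvester_hadamard(k):
    """Build a 2^k x 2^k Hadamard matrix by Sylvester's doubling construction."""
    H = np.array([[1]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    return H

H = sylvester_hadamard(6)        # 64 x 64 matrix with +1/-1 entries
# The windows are binary (open or blacked out), so map +1/-1 down to 1/0:
patterns = (H + 1) // 2          # each row is one sampling pattern
```

The +1/-1 to 1/0 mapping is the usual trick for hardware that can only block or pass light; the lost DC offset can be accounted for in the reconstruction.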

Next step: building one.