Quantum

In the latest of a string of potential disruptions to the traditional digital-camera imaging model — the last was from Light — startup InVisage Technologies is gearing up to ship its first quantum-dot imaging sensors, which the company hopes will displace the more common silicon-based CMOS sensors.

Unlike traditional silicon-based sensors, whose photosensitive layer consists of “buckets” that collect the electrons created when photons hit a silicon layer, InVisage’s QuantumFilm replaces the buckets with nanoparticles (so-called quantum dots) suspended in a liquid substrate, much the way silver halide grains in film are suspended in gelatin. When a photon hits a dot, it releases an electron and a positively charged hole. The positive and negative charges flow through the QuantumFilm toward the electrodes that sandwich it, which stream the signal out to an analog-to-digital converter just as in a silicon sensor. The capture process and the characteristics of the quantum dots are what differentiate QuantumFilm from a typical CMOS sensor.
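To make that capture chain concrete, here's a toy model in Python; the quantum efficiency, full-well capacity and ADC resolution are placeholder numbers I made up for illustration, not InVisage's specs:

```python
# Toy model of the photon -> electron-hole pair -> ADC chain described above.
# All parameters are illustrative assumptions, not published InVisage figures.
import random

QUANTUM_EFFICIENCY = 0.9   # assumed fraction of photons that free an electron
FULL_WELL = 4000           # assumed max electrons a pixel holds (saturation)
ADC_BITS = 10              # assumed analog-to-digital converter resolution

def capture_pixel(incident_photons: int) -> int:
    """Convert incoming photons to a digital pixel value."""
    # Each photon frees an electron-hole pair with some probability.
    electrons = sum(random.random() < QUANTUM_EFFICIENCY
                    for _ in range(incident_photons))
    electrons = min(electrons, FULL_WELL)  # the pixel saturates (clips)
    # The electrodes stream the collected charge out to an ADC.
    return round(electrons / FULL_WELL * (2 ** ADC_BITS - 1))

print([capture_pixel(n) for n in (100, 1000, 5000)])  # e.g. [~23, ~230, 1023]
```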


InVisage’s first QuantumFilm product is the 13-megapixel Quantum13, a sensor with 1.1-micron pixels that fits in an 8.5mm-square by 4mm-deep module. The company expects to be able to ship the Quantum13 to phone manufacturers by the end of this year.

An inside look at digital photography upstart InVisage (pictures)

InVisage, based in Silicon Valley, adds a thin layer of tiny particles called quantum dots to a circular wafer that will be sliced up into hundreds of rectangular image sensors. Here, staff work in an ultra-clean room to avoid contaminating the process. The quantum dots are sensitive to light -- more so than the silicon layer used in today's conventional digital camera technology.

A plastic pod holds a stack of circular wafers, each of which will be sliced into hundreds of QuantumFilm image sensors for smartphone cameras.

A circular wafer of InVisage image-sensor chips reflects light -- but not as much as a plain silicon wafer, since the dots absorb light better than conventional silicon-based microchip sensors.

My take

This technology does enable a couple of important improvements. Because it dumps an entire frame of image data at once (unlike typical CMOS sensors, which read it out a line at a time), it can potentially eradicate rolling-shutter wobble, one of the ugliest problems with phone video.
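Here's a quick sketch of why line-at-a-time readout skews moving subjects while a full-frame dump doesn't; it's my own illustration, not InVisage's readout code:

```python
# Simulate photographing a vertical bar that moves sideways during readout.
import numpy as np

HEIGHT, WIDTH = 8, 24   # toy frame size
BAR_SPEED = 1.0         # pixels the bar moves per row-readout interval

def bar_position(t: float) -> int:
    """Horizontal position of the moving bar at time t."""
    return int(4 + BAR_SPEED * t)

def rolling_shutter_frame() -> np.ndarray:
    """Each row is sampled a bit later than the last, so the bar leans."""
    frame = np.zeros((HEIGHT, WIDTH), dtype=int)
    for row in range(HEIGHT):
        frame[row, bar_position(t=row)] = 1   # row r captured at time r
    return frame

def global_shutter_frame() -> np.ndarray:
    """All rows sampled at the same instant, so the bar stays vertical."""
    frame = np.zeros((HEIGHT, WIDTH), dtype=int)
    for row in range(HEIGHT):
        frame[row, bar_position(t=0)] = 1     # every row captured at t=0
    return frame

for name, frame in (("rolling", rolling_shutter_frame()),
                    ("global", global_shutter_frame())):
    print(name)
    print("\n".join("".join("#" if v else "." for v in row) for row in frame))
```

Run it and the "rolling" frame shows the bar as a diagonal stair-step, while the "global" frame keeps it straight.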

It can also potentially eliminate the color filter array (CFA) at the front of the sensor, which is how the sensor captures color information. Dropping the CFA would allow more light through, which I think would unambiguously improve low-light photo quality. However, despite raising this possibility when the company started up five years ago, InVisage still puts a CFA on the Quantum13 sensor.
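The back-of-envelope math behind that claim, assuming an idealized Bayer filter that passes roughly a third of the visible band at each pixel (real filter curves vary):

```python
# Rough sketch of the low-light benefit of dropping the color filter array.
import math

photons_arriving = 900        # photons hitting one pixel during the exposure

bayer_transmission = 1 / 3    # idealized: each R/G/B filter passes ~1/3
with_cfa = photons_arriving * bayer_transmission      # ~300 photons
without_cfa = photons_arriving                        # all 900 photons

# Shot noise scales with the square root of the signal, so ~3x the light
# buys roughly a sqrt(3) ~ 1.7x improvement in signal-to-noise ratio.
snr_gain = math.sqrt(without_cfa / with_cfa)
print(f"with CFA: {with_cfa:.0f}, without: {without_cfa:.0f}, "
      f"SNR gain ~{snr_gain:.1f}x")
```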


Based on these samples, the sensor seems to do a reasonable job with highlights that usually get blown out. (The center is Kodak film.)


(Image: InVisage)


I’m not overly impressed with the quality of the medium-light photos; it looks like the images need more postprocessing. They don’t look nearly as sharp as the comparison cell-phone photo, and the white balance looks a little off.


(Image: InVisage)

InVisage previously released some photo samples, along with a video shot in October with a prototype, and based on those I have mixed feelings. QuantumFilm’s light-response characteristics are a cross between film and silicon: in bright areas it acts like film, gradually losing detail as brightness increases (a nonlinear response, closer to the way your eye sees), but in dark areas and midtones it responds like a silicon sensor, losing detail in direct proportion to the decrease in brightness (a linear response). Based on the samples, I’m impressed with the performance in bright areas, but not so much with the overall photo quality.
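To picture that hybrid response, here's a toy tone curve; the knee position and shoulder shape are my own guesses for illustration, not InVisage's published data:

```python
# Toy response curve: linear in shadows/midtones like silicon, then a
# gradual film-like shoulder above a knee instead of a hard clip.
import math

KNEE = 0.7  # assumed exposure level where the film-like shoulder begins

def hybrid_response(exposure: float) -> float:
    """Map scene exposure to a recorded signal in [0, 1)."""
    if exposure <= KNEE:
        return exposure  # linear region, like a silicon sensor
    # Above the knee, compress asymptotically toward full scale so
    # highlights roll off gradually instead of clipping to flat white.
    headroom = 1.0 - KNEE
    return KNEE + headroom * (1.0 - math.exp(-(exposure - KNEE) / headroom))

def silicon_response(exposure: float) -> float:
    """Plain silicon: linear until it clips hard at full well."""
    return min(exposure, 1.0)

for e in (0.2, 0.5, 0.9, 1.2, 2.0):
    print(f"exposure {e:.1f}: hybrid {hybrid_response(e):.2f}, "
          f"silicon {silicon_response(e):.2f}")
```

Above an exposure of 1.0 the silicon curve is pinned at flat white, while the hybrid curve keeps separating highlight tones, which matches what the bright-area samples suggest.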

That said, the lens used for these samples looks terrible, and each manufacturer will be able to tweak and optimize the imaging pipeline and other hardware. So assuming InVisage doesn’t run into any implementation problems, I’m optimistic.
