With the launch of the Google Pixel 3, smartphone cameras have taken yet another leap in capability. I had the opportunity to sit down with Isaac Reynolds, Product Manager for Camera on Pixel, and Marc Levoy, Distinguished Engineer and Computational Photography Lead at Google, to learn more about the technology behind the new camera in the Pixel 3.

One of the first things you might notice about the Pixel 3 is the single rear camera. At a time when we're seeing companies add dual, triple, even quad-camera setups, one main camera seems at first an odd choice.

But after speaking to Marc and Isaac I think that the Pixel camera team is taking the correct approach – at least for now. Any technology that makes a single camera better will make multiple cameras in future models that much better, and we've seen in the past that a single-camera approach can outperform a dual-camera approach in Portrait Mode, particularly when the telephoto camera module has a smaller sensor and a slower lens, or lacks reliable autofocus.

Let's take a closer look at some of the Pixel 3's core technologies.

Last year the Pixel 2 showed us what was possible with burst photography. HDR+ was its secret sauce, and it worked by constantly buffering nine frames in memory. When you press the shutter, the camera essentially goes back in time to those last nine frames 1, breaks each of them up into thousands of 'tiles', aligns them all, and then averages them.

Breaking each image into small tiles allows for advanced alignment even when the photographer or subject introduces movement: blurred elements in some shots can be discarded, and subjects that have moved from frame to frame can be realigned. Averaging simulates the effect of shooting with a larger sensor by 'evening out' noise. And going back in time to the last nine frames captured right before you hit the shutter button means there's zero shutter lag.

Like the Pixel 2, HDR+ allows the Pixel 3 to render sharp, low-noise images even in high-contrast situations.

This year, the Pixel 3 pushes all of this further. It uses HDR+ burst photography to buffer up to 15 images 2, and then employs super-resolution techniques to increase the resolution of the image beyond what the sensor and lens combination would traditionally achieve 3. Subtle shifts from handheld shake and optical image stabilization (OIS) allow scene detail to be localized with sub-pixel precision, since the shifts are unlikely to be exact multiples of a pixel.

In fact, I was told the shifts are carefully controlled by the optical image stabilization system. "We can demonstrate the way the optical image stabilization moves very slightly," remarked Marc Levoy. Precise sub-pixel shifts are not necessary at the sensor level, though; instead, OIS is used to uniformly distribute scene samples across a pixel, and the images are then aligned to sub-pixel precision in software.
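The zero-shutter-lag behavior falls naturally out of a ring buffer: the camera continuously overwrites the oldest of the last few preview frames, and a shutter press simply snapshots whatever is currently buffered. A minimal sketch in Python (the class, method names, and nine-frame capacity are illustrative, not Google's actual implementation):

```python
from collections import deque

class FrameBuffer:
    """Continuously buffers the most recent preview frames so a shutter
    press can 'go back in time'. Illustrative sketch only."""

    def __init__(self, capacity=9):
        # deque with maxlen drops the oldest frame automatically
        self.frames = deque(maxlen=capacity)

    def on_new_frame(self, frame):
        # Called for every frame the sensor streams while previewing
        self.frames.append(frame)

    def on_shutter(self):
        # Snapshot the buffered burst for the align-and-merge stage
        return list(self.frames)

buf = FrameBuffer(capacity=9)
for i in range(20):          # simulate a 20-frame preview stream
    buf.on_new_frame(i)
burst = buf.on_shutter()     # the nine frames captured just before the press
# burst == [11, 12, 13, 14, 15, 16, 17, 18, 19]
```

Because capture happens before the button press, the saved photo corresponds to the moment you saw, not some later moment delayed by processing.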
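The tile-based align-and-merge idea can be sketched as a brute-force search: for each tile of a base frame, find the small displacement that best matches it in each alternate frame, then average the aligned tiles. This toy NumPy version (function name, SSD metric, and single-level search window are my assumptions; the real HDR+ pipeline uses a coarse-to-fine pyramid and robust merging that rejects mismatched tiles) assumes grayscale frames whose dimensions are divisible by the tile size:

```python
import numpy as np

def align_and_merge(frames, tile=16, search=4):
    """Toy tile-based burst merge: per-tile translation search (sum of
    squared differences) followed by averaging. Illustrative only."""
    base = frames[0].astype(np.float32)
    h, w = base.shape                      # assumes h, w divisible by `tile`
    acc = base.copy()
    for alt in frames[1:]:
        pad = np.pad(alt.astype(np.float32), search, mode='edge')
        aligned = np.empty_like(base)
        for y in range(0, h, tile):
            for x in range(0, w, tile):
                ref = base[y:y + tile, x:x + tile]
                best, best_err = None, np.inf
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        cand = pad[y + search + dy:y + search + dy + tile,
                                   x + search + dx:x + search + dx + tile]
                        err = np.sum((cand - ref) ** 2)
                        if err < best_err:
                            best_err, best = err, cand
                aligned[y:y + tile, x:x + tile] = best
        acc += aligned
    return acc / len(frames)
```

Averaging N aligned frames reduces photon shot noise roughly by a factor of sqrt(N), which is the sense in which the burst "simulates" a larger sensor.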
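The super-resolution step can be pictured as scattering each frame's samples onto a finer grid according to its sub-pixel shift, then averaging whatever lands in each fine cell. In this toy sketch the shifts are given and exact (function name and interface are my invention; the real pipeline estimates shifts by alignment and merges with robustness weighting rather than nearest-cell accumulation):

```python
import numpy as np

def superres_accumulate(frames, shifts, scale=2):
    """Toy super-resolution merge: each low-res frame carries a known
    sub-pixel shift (dy, dx) in low-res pixel units; its samples are
    placed on a `scale`-times-finer grid and averaged. Illustrative only."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale), np.float32)
    cnt = np.zeros_like(acc)
    ys, xs = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(frames, shifts):
        # A sample at low-res pixel (y, x) observed scene position
        # (y + dy, x + dx); map it to the nearest high-res grid cell.
        hy = np.clip(np.round((ys + dy) * scale).astype(int), 0, h * scale - 1)
        hx = np.clip(np.round((xs + dx) * scale).astype(int), 0, w * scale - 1)
        np.add.at(acc, (hy, hx), frame)
        np.add.at(cnt, (hy, hx), 1)
    return acc / np.maximum(cnt, 1)   # avoid dividing empty cells by zero
```

This is why shifts that are *not* exact pixel multiples matter: four frames shifted by half a pixel in each direction, for example, populate every cell of a 2x grid, whereas whole-pixel shifts would keep hitting the same cells.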