We’ve already been impressed by the power of the camera on the Google Pixel 2, but according to Google, it’s about to get even better.
Pixel Visual Core, the imaging co-processor inside the Pixel 2, has been switched on for all users. Previously accessible only to developers through the Android 8.1 Oreo preview, it’s now injecting even more machine learning smarts into your photos.
The Pixel Visual Core was announced in October 2017, but is only now being enabled because Google has been working with third-party app developers to ensure that the Pixel 2’s spruced-up imaging capability comes through undiminished.
If you use Instagram, WhatsApp or Snapchat, Pixel Visual Core will work to make the photos you take within those apps just as good as if you’d taken them with the stock camera app.
Building on a stellar foundation
Packed with a 12.2MP sensor, an f/1.8 aperture and optical image stabilization (OIS), the Google Pixel 2 and Google Pixel 2 XL sit near the top of the camera spec chain compared to competing flagship smartphones, like the iPhone X and Samsung Galaxy S8.
But where Google impresses, and looks to double down on its efforts, is with its computational know-how. As we’ve seen in our time with the Pixel 2, it may not have jumped on the dual-lens bandwagon, but it still takes incredible photos.
Google has shared the kind of results we can expect with Pixel Visual Core turned on. With HDR+ running on the co-processor, detail and contrast balance reach a whole new level.
Another perk that all Pixel 2 users will now benefit from is what Google calls RAISR, a machine learning algorithm that cleans up zoomed-in shots, which are inherently more prone to noise and other visual artifacts.
Google says that Pixel Visual Core will receive its wake-up call in the coming days, arriving as part of the February monthly update. We’ll update this piece when the software hits our devices.