Google, MIT Researchers Create New AI-Based Real-Time Photo Editing
- The AI-based system can automatically retouch images
- Photographers can see the final version of the image while framing the shot
- The same system can also speed up existing image-processing algorithms
Scientists from MIT and Google have developed a new artificial intelligence system that can automatically retouch images like a professional photographer in real time, eliminating the need to edit images after they are taken with smartphones.
The data captured by today’s digital cameras is often treated as the raw material of a final image. Before uploading pictures to social networking sites, even casual cellphone photographers might spend a minute or two balancing colour and tuning contrast with one of the many popular image-processing programs now available.
The system developed by researchers from Massachusetts Institute of Technology and Google in the US is so energy-efficient and fast that it can display retouched images in real-time on phones, so that the photographer can see the final version of the image while still framing the shot.
The same system can also speed up existing image-processing algorithms.
The system employs machine learning. The researchers trained their system on a dataset created by Adobe Systems, the creator of Photoshop.
The dataset included 5,000 images, each retouched by five different photographers. They also trained their system on thousands of pairs of images produced by the application of particular image-processing algorithms, such as the one for creating high-dynamic-range (HDR) images.
The software for performing each modification takes up about as much space in memory as a single digital photo, so in principle, a cellphone could be equipped to process images in a range of styles.
Researchers compared their system’s performance to that of a machine-learning system that processed images at full resolution rather than low resolution.
During processing, the full-resolution version needed about 12 gigabytes of memory to execute its operations.
The researchers’ version needed about 100 megabytes, or one-hundredth as much.
The full-resolution version of the HDR system took about 10 times as long as the original algorithm to produce an image, and roughly 100 times as long as the researchers’ system.
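The savings come from doing the expensive computation on a shrunken copy of the photo and then applying the cheap-to-store result at full resolution. The sketch below illustrates that general idea only; the function name and the toy brightness-gain map are placeholders (the actual MIT/Google system predicts local colour transforms with a neural network, which is not reproduced here).

```python
import numpy as np

def enhance_via_low_res(image, scale=8):
    """Illustrative sketch: do the heavy analysis on a downsampled copy,
    then upsample the resulting per-pixel adjustment and apply it at
    full resolution. A toy gain map stands in for the learned model."""
    h, w = image.shape[:2]
    # Downsample by striding (a real system would use proper resampling).
    low = image[::scale, ::scale]
    # The "expensive" step runs only on the small image: here, a toy
    # gain map that brightens dark regions (placeholder for a network).
    gain = 1.0 + 0.5 * (1.0 - low.mean(axis=-1, keepdims=True))
    # Upsample the small gain map back to full resolution and apply it.
    gain_full = np.repeat(np.repeat(gain, scale, axis=0),
                          scale, axis=1)[:h, :w]
    return np.clip(image * gain_full, 0.0, 1.0)
```

Because only the small image and the small gain map are held during the expensive step, intermediate memory shrinks by roughly the square of the downsampling factor, which is the kind of reduction the 12 GB versus 100 MB comparison reflects.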
“This technology has the potential to be very useful for real-time image enhancement on mobile platforms,” said Jon Barron from Google.
“Using machine learning for computational photography is an exciting prospect but is limited by the severe computational and power constraints of mobile phones,” said Barron.
“This paper may provide us with a way to sidestep these issues and produce new, compelling, real-time photographic experiences without draining your battery or giving you a laggy viewfinder experience,” he said.