Scientists from Princeton University and the University of Washington in the United States have created what may be the world's smallest camera capable of producing full-color, high-definition images that outperform those of any comparable micro-camera.
Micro-sized cameras have great potential to detect problems in the human body and to enable sensing for super-small robots, but previous approaches captured blurry, distorted images with limited fields of view.
Now, the team behind this technological advance has overcome these obstacles with an ultra-compact camera the size of a coarse grain of salt. The new system can produce sharp, full-color images on par with those of a conventional compound camera lens 500,000 times larger in volume, say its creators in an article published on November 29 in Nature Communications.
Enabled by a joint design of camera hardware and computational processing, the system could enable minimally invasive endoscopy with medical robots to diagnose and treat disease, and improve imaging for other robots with size and weight limitations. Arrays of thousands of such cameras could be used for full scene detection, turning surfaces into cameras.
Whereas a traditional camera uses a series of curved glass or plastic lenses to focus light rays, the new optical system is based on a technology called a metasurface, which can be produced much like a computer chip. Just half a millimeter wide, the metasurface is studded with 1.6 million cylindrical posts, each roughly the size of the human immunodeficiency virus (HIV).
Each post has a unique geometry and functions like an optical antenna. Varying the design of each post is necessary to shape the entire optical wavefront correctly. With the help of machine-learning-based algorithms, the posts' interactions with light are combined to produce the highest-quality images and the widest field of view of any full-color metasurface camera developed to date.
A key innovation in creating the camera was the integrated design of the optical surface and the signal-processing algorithms that produce the image. This boosted the camera's performance in natural light conditions, in contrast to previous metasurface cameras that required the pure laser light of a laboratory or other ideal conditions to produce high-quality images, said Felix Heide, the study's senior author and an assistant professor of computer science at Princeton.
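The core idea of that integrated design can be illustrated with a deliberately tiny toy: optimize a parameter of the "optics" (here, a blur-kernel strength) jointly with a parameter of the "decoder" (a sharpening gain) against a shared image-reconstruction loss. Everything in this sketch is illustrative and invented for exposition; it is not the authors' actual pipeline, which jointly optimizes millions of metasurface features together with a learned reconstruction network.

```python
import random

def blur_kernel(theta):
    # 3-tap kernel standing in for the optics; theta controls blur strength
    t = max(0.0, min(0.5, theta))
    return [t, 1.0 - 2.0 * t, t]

def convolve(x, k):
    # 1-D convolution with clamped edges
    n = len(x)
    return [sum(k[j] * x[min(max(i + j - 1, 0), n - 1)] for j in range(3))
            for i in range(n)]

def reconstruct(y, s):
    # unsharp-mask sharpening with gain s, standing in for the learned decoder
    smooth = convolve(y, [0.25, 0.5, 0.25])
    return [yi + s * (yi - si) for yi, si in zip(y, smooth)]

def loss(params, scenes):
    # mean squared reconstruction error over a set of training scenes
    theta, s = params
    total = 0.0
    for x in scenes:
        xhat = reconstruct(convolve(x, blur_kernel(theta)), s)
        total += sum((a - b) ** 2 for a, b in zip(xhat, x))
    return total / len(scenes)

random.seed(0)
scenes = [[random.random() for _ in range(16)] for _ in range(8)]
params = [0.4, 0.0]  # start: blurry optics, no sharpening
initial = loss(params, scenes)
eps, lr = 1e-4, 0.02
for _ in range(300):
    # finite-difference gradient over BOTH the optics and the decoder parameter
    grad = []
    for i in range(2):
        hi, lo = list(params), list(params)
        hi[i] += eps
        lo[i] -= eps
        grad.append((loss(hi, scenes) - loss(lo, scenes)) / (2 * eps))
    params = [p - lr * g for p, g in zip(params, grad)]

print(f"loss: {initial:.4f} -> {loss(params, scenes):.4f}")
```

Because the two parameters share one loss, the optimizer is free to trade hardware imperfection against software correction, which is the essence of designing camera and algorithm as a single system.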
The researchers compared the images produced with their system with results from previous metasurface cameras, as well as with images captured by conventional compound optics using a series of six refractive lenses. Aside from a slight blur at the edges of the frame, the images from the nano-sized camera were comparable to those from the traditional lens setup, which is more than 500,000 times larger in volume.
Other ultra-compact metasurface lenses have suffered from significant image distortions, small fields of view, and a limited ability to capture the full spectrum of visible light, known as RGB images because they combine red, green, and blue to produce different tones.
“It has been a challenge to design and configure these small nanostructures to do what you want,” said Ethan Tseng, a computer science Ph.D. student at Princeton who co-led the study. “For this specific task of capturing wide-field-of-view RGB images, it was previously unclear how to co-engineer the millions of nanostructures together with post-processing algorithms.”
Co-lead author Shane Colburn addressed this challenge by creating a computational simulator to automate the testing of different nano-antenna configurations. Due to the number of antennas and the complexity of their interactions with light, this type of simulation can consume “massive amounts of memory and time”. Colburn developed a model that efficiently approximates the imaging capabilities of metasurfaces with sufficient precision.
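The general strategy of substituting a cheap approximate model for an expensive simulation can be sketched in a few lines. In this toy, a stand-in "full simulation" is sampled at only a handful of design points, a piecewise-linear proxy is built from those samples, and the dense design search runs on the proxy instead. The functions and numbers are entirely hypothetical; Colburn's actual simulator approximates the physics of metasurface imaging, not a one-dimensional curve.

```python
calls = 0

def expensive_sim(theta):
    # stand-in for a full electromagnetic simulation of one metasurface design;
    # returns an image-quality penalty (lower is better), illustrative only
    global calls
    calls += 1
    return (theta - 0.37) ** 2 + 0.10

# 1) coarse sampling of the "real" simulator: only 11 expensive calls
xs = [i / 10 for i in range(11)]
ys = [expensive_sim(x) for x in xs]

def surrogate(theta):
    # cheap piecewise-linear proxy built from the coarse samples
    i = min(int(theta * 10), 9)
    t = theta * 10 - i
    return ys[i] * (1 - t) + ys[i + 1] * t

# 2) dense design search on the proxy: 1001 cheap evaluations, zero simulator calls
best = min((surrogate(i / 1000), i / 1000) for i in range(1001))[1]
print(f"simulator calls: {calls}, proxy optimum: {best:.3f}")
# -> simulator calls: 11, proxy optimum: 0.400 (true optimum is 0.370)
```

The proxy lands near the true optimum while invoking the costly simulation only a handful of times; the precision of the answer is bounded by how finely the proxy samples the real model, which is exactly the accuracy/cost trade-off a surrogate simulator must manage.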
James Whitehead, another study co-author, manufactured the metasurfaces, which are based on silicon nitride, a glass-like material that is compatible with standard semiconductor manufacturing methods used for computer chips, meaning that a given metasurface design could easily be mass produced at a lower cost than conventional camera lenses.
“Although the approach to optical design is not new, this is the first system to use surface optical technology at the front end and neural-based processing at the back,” said Joseph Mait, a consultant at Mait-Optik and former senior researcher and chief scientist at the US Army Research Laboratory.
“The importance of the published work is to complete the Herculean task of jointly designing the size, shape and location of the million features of the metasurface and the parameters of post-detection processing to achieve the desired image performance,” added Mait, who was not involved in the study.
Heide and his colleagues are now working to add more computational capabilities to the camera itself. Beyond optimizing image quality, they would like to add capabilities for object detection and other detection modalities relevant to medicine and robotics.
Heide also envisions the use of ultra-compact imagers to create “surfaces as sensors”. “We could turn individual surfaces into cameras that have ultra-high resolution, so you would no longer need three cameras on the back of your phone, but the entire back of your phone would become one giant camera. We can think of completely different ways to build devices in the future,” he said.