Recent studies in nanophotonic structures and metamaterials have showcased remarkable applications such as invisibility cloaks, perfect lenses, radiative cooling, and light trapping for solar cells. However, current design methodologies require computationally intensive, time-consuming, and iterative processes to realize structurally complex devices. For example, finite-element analysis (FEA) simulation and other “forward design” methods involve a series of trial-and-error procedures in which users repeatedly define material geometry and property inputs until the desired spectral behavior is obtained. Conversely, “inverse design” methods allow users to create structures directly from input spectral information, but such methods are orders of magnitude more computationally expensive and require specialized knowledge and complex derivations.
In comparison to conventional design and optimization techniques, machine learning (ML)-based methods can achieve design goals in a fraction of the time, eliminate the need for iterative optimization, capture results with little computational power or cost, and generate complex, novel designs not previously conceived by human intuition. To this end, we implemented deep convolutional generative adversarial networks (DCGANs) to optimally inverse-design thermo-radiative metasurfaces that exhibit spectral selectivity in their emitted thermal radiation. Our approach harnesses recent breakthroughs in convolutional GANs to train generators that, given a target spectrum, generate an image of a photonic structure that meets it.
Implemented using the PyTorch deep-learning framework, our DCGAN comprises two competing convolutional neural networks. A generator network receives broadband absorption datapoints as input and generates images representing nanophotonic structures, while a discriminator network facilitates spectra-to-image learning by comparing the generated images with simulated “ground truth” results. Using training data created with finite-difference frequency-domain (FDFD) simulation in MATLAB, we trained the GAN on 10,000 images, each associated with 200 spectral points, and generated nanophotonic designs with approximately 90% accuracy with respect to the ground truths. The trained DCGAN was then packaged within a custom-developed desktop application that enables immediate use by researchers seeking novel photonic device designs for a given broadband absorption requirement.
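The spectrum-conditioned generator/discriminator pair described above can be sketched in PyTorch roughly as follows. Only the 200-point spectral input is taken from the abstract; the channel counts, layer depths, and 64×64 single-channel output resolution are illustrative assumptions, not the published architecture.

```python
# Minimal DCGAN sketch: a generator mapping a 200-point absorption spectrum
# to a structure image, and a discriminator scoring image realism.
# All layer sizes below are assumptions for illustration.
import torch
import torch.nn as nn


class Generator(nn.Module):
    """Maps a 200-point spectrum to an assumed 64x64 structure image."""

    def __init__(self, spectrum_len=200, ngf=64):
        super().__init__()
        self.net = nn.Sequential(
            # Treat the spectrum as a 1x1 feature map and upsample to 64x64.
            nn.ConvTranspose2d(spectrum_len, ngf * 8, 4, 1, 0, bias=False),  # -> 4x4
            nn.BatchNorm2d(ngf * 8), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),       # -> 8x8
            nn.BatchNorm2d(ngf * 4), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),       # -> 16x16
            nn.BatchNorm2d(ngf * 2), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),           # -> 32x32
            nn.BatchNorm2d(ngf), nn.ReLU(True),
            nn.ConvTranspose2d(ngf, 1, 4, 2, 1, bias=False),                 # -> 64x64
            nn.Tanh(),
        )

    def forward(self, spectrum):
        # (batch, 200) -> (batch, 200, 1, 1) so it acts as the latent input.
        return self.net(spectrum.view(spectrum.size(0), -1, 1, 1))


class Discriminator(nn.Module):
    """Scores whether a structure image looks like a simulated ground truth."""

    def __init__(self, ndf=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ndf, 4, 2, 1, bias=False),                          # -> 32x32
            nn.LeakyReLU(0.2, True),
            nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False),                    # -> 16x16
            nn.BatchNorm2d(ndf * 2), nn.LeakyReLU(0.2, True),
            nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False),                # -> 8x8
            nn.BatchNorm2d(ndf * 4), nn.LeakyReLU(0.2, True),
            nn.Conv2d(ndf * 4, 1, 8, 1, 0, bias=False),                      # -> 1x1
            nn.Sigmoid(),
        )

    def forward(self, image):
        return self.net(image).view(-1)


G, D = Generator(), Discriminator()
fake = G(torch.randn(2, 200))   # batch of 2 spectra -> 2 candidate structures
score = D(fake)                 # per-image realism scores in (0, 1)
```

In training, the discriminator would alternately see generated images and the FDFD-simulated ground truths, pushing the generator toward structures whose simulated spectra match the input.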
Our work leverages a novel ML architecture and implementation to demonstrate robust, versatile, and rapid image-based photonics design. The presented methodology offers the potential to: expedite the overall photonics design lifecycle, save costs by reducing the need for numerous simulation/software tools, proliferate further innovations in photonics research by enabling new design insights, and lay the groundwork for the development of physics-based ML applications which accurately reproduce a multitude of optical, thermal, and/or energy-related phenomena.
Authors: Christopher Yeung1, Ju-Ming Tsai1, Yusaku Kawagoe1, Brian King1, and Aaswath Raman1
1Department of Materials Science and Engineering, University of California, Los Angeles, CA, USA