Original: Heart of the Machine 2018-07-27 11:29:03
From TechCrunch, by Devin Coldewey, compiled by Heart of the Machine.
Signals between neurons in the brain travel at about 100 meters per second, while light travels at 300,000 kilometers per second. What if neuronal signals also traveled at the speed of light? Researchers from the University of California, Los Angeles (UCLA) used 3D printing to build a solid-state neural network that computes through layers of light diffraction, achieving recognition of handwritten digit images. The results have been published in the journal Science.
The idea may seem novel, but it is actually quite natural. What a neural network layer performs is a linear operation, which corresponds to the linear interaction of diffracting light, and a neuron's weight and activation can likewise be mapped to the (tunable) amplitude and phase of the light. In addition, solid-state diffractive computation has the advantages of low energy consumption, no heat generation, and execution at the speed of light (electrical signals in conventional computer circuits also propagate at near light speed, but that propagation does not directly correspond to the computational steps of a neural network). The direction is still in its infancy; if its advantages can be fully exploited, it may have very broad application prospects.
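As a loose sketch of this correspondence (all names and numbers here are illustrative, not taken from the paper), a diffractive 'neuron' can be modeled as a complex transmission coefficient whose amplitude and phase act as trainable weights, while diffraction between layers is a fixed linear coupling:

```python
import numpy as np

# Minimal sketch of the analogy: each "neuron" on a diffractive layer
# applies a complex transmission coefficient t = a * exp(i*phi) to the
# incident light field. Amplitude a and phase phi play the role of
# trainable weights; diffraction between layers is a fixed linear
# operator, so the whole stack is linear in the complex field.
rng = np.random.default_rng(0)

n = 8                                   # neurons per (tiny) layer
field_in = rng.standard_normal(n) + 1j * rng.standard_normal(n)

amplitude = rng.uniform(0.5, 1.0, n)    # trainable amplitude per neuron
phase = rng.uniform(0.0, 2 * np.pi, n)  # trainable phase per neuron
transmission = amplitude * np.exp(1j * phase)

# Fixed complex coupling standing in for free-space diffraction.
diffraction = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

field_out = diffraction @ (transmission * field_in)  # one "layer" of optics
print(np.abs(field_out) ** 2)           # detectors measure intensity
```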
Machine learning is ubiquitous these days, but most machine learning systems are invisible: they optimize audio or recognize faces inside a 'black box'. Recently, however, UCLA researchers developed a 3D-printed AI analysis system that is not only visible but tangible. Unlike previous systems, which analyze by adjusting numbers, this one analyzes through the diffraction of light. This novel research shows that such 'AI' systems can be physically very simple.
We usually think of a machine learning system as a form of artificial intelligence built around a series of operations on a set of data, each operation based on the previous one or fed back into a loop. The operations themselves are not overly complex, though neither are they simple enough to work out with pen and paper. Ultimately, these simple mathematical operations yield a probability that the input data matches each of the patterns the system has 'learned' to recognize.
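A bare-bones sketch of the kind of operation meant here (toy sizes, random numbers, nothing from the paper): one matrix multiplication followed by a softmax turns an input vector into a probability for each learned pattern:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(16)          # input features
W = rng.standard_normal((10, 16))    # learned weights, one row per class

scores = W @ x                       # the linear operation
probs = np.exp(scores - scores.max())
probs /= probs.sum()                 # softmax: scores -> probabilities
print(probs)                         # probability per pattern/class
```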
Typically, the operations a machine learning system performs, for every parameter update or inference, run on a CPU or GPU. Because modern deep learning relies heavily on parallel computation, GPUs have become the more common choice. But even the most advanced GPUs are made of silicon and copper, and information must travel as electrical pulses along intricate circuitry. As a result, a traditional GPU consumes energy whether the calculation it performs is new or an exact repetition of a previous one.
So once the 'layers' of a deep network have been trained and all parameter values fixed, running the model just repeats the same calculations, and the same energy cost, over and over. This means a 3D-printed AI analysis system can be optimized so that, after training, its 'layers' occupy no CPU power at all. The UCLA researchers say the trained layers can indeed be solidified: each layer is 3D-printed from transparent material patterned with a complex diffraction structure that processes light directly.
If that description gives you a bit of a headache, think of a mechanical calculator. Today, number crunching happens digitally in computer logic, but in the past calculators had to move actual mechanical parts to compute: adding past ten would make the carry mechanism shift position. In a way, this 'diffractive deep neural network' is similar: it uses and manipulates a physical representation of numbers rather than an electronic one. And if the model's prediction process is frozen into a physical form, the actual predictions can use far less energy.
As the researchers put it:
“Each point on a given layer transmits or reflects an incident wave, acting as an artificial neuron that is connected to the neurons of the following layer through optical diffraction. Each 'neuron' is tunable by varying its phase and amplitude.”

“Our all-optical deep learning framework can perform, at the speed of light, various complex functions that computer-based neural networks can implement,” they added in the paper describing the system.

To demonstrate this, they trained a deep learning model to recognize handwritten digits, then translated its matrix-math layers into a series of optical transformations. For example, one layer might combine two values by refocusing light from two regions onto a single region of the next layer (the actual computation is much more complex; this is only an outline).
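To make that 'series of optical transformations' more concrete, here is a toy simulation (not the paper's code: the grid size, wavelength, spacing, and random masks are all illustrative placeholders) of light passing through two printed phase masks, propagated with the standard angular-spectrum method:

```python
import numpy as np

N = 64                 # grid points per side (toy resolution)
dx = 0.5e-3            # sample spacing in meters, illustrative
wavelength = 0.75e-3   # ~0.4 THz, same order as the paper's source
z = 30e-3              # spacing between layers in meters, illustrative

def propagate(field, z):
    """Free-space propagation by the angular-spectrum method."""
    fx = np.fft.fftfreq(N, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg > 0, np.exp(1j * kz * z), 0.0)  # drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * H)

rng = np.random.default_rng(1)
# Two "printed layers" as pure phase masks (random here; trained in reality).
masks = [np.exp(1j * rng.uniform(0, 2 * np.pi, (N, N))) for _ in range(2)]

field = np.zeros((N, N), complex)
field[24:40, 24:40] = 1.0               # plane wave through a square aperture

for mask in masks:                      # each layer: modulate the field,
    field = propagate(field * mask, z)  # then diffract to the next layer

intensity = np.abs(field) ** 2          # the detector plane reads intensity
print(intensity.max())
```

In the real device the masks are not random but optimized during training, so that the intensity pattern at the output plane concentrates on the detector region assigned to the correct class.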
With millions of these tiny diffractive features arranged across the printed plates, light enters at one end and exits at the other, and from where it lands the system can tell whether the input is a 1, 2, 3, and so on, with over 90% accuracy.
The reader may wonder what the point of this is, since even the simplest three-layer perceptron can recognize handwritten digits with more than 95% accuracy, and a convolutional network can exceed 99%. In its present form the device is not especially useful, but neural networks are very flexible tools, and the same approach could recognize letters rather than just digits. It may therefore be possible to build an optical character recognition system entirely in hardware, requiring essentially no energy or conventional computation.
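For comparison, a minimal digital baseline (assuming scikit-learn is available, and using its small 8x8 digits set rather than full MNIST) reaches the 95%-plus accuracy mentioned above with a single hidden layer:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)            # 8x8 handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    X / 16.0, y, test_size=0.25, random_state=0)

# One small hidden layer is already enough for this easy benchmark.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")  # typically > 0.95
```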
The real limitation lies in the manufacturing process: it is very difficult to fabricate an ultra-high-precision diffractive plate that performs the processing task as designed. After all, if the computation needs values accurate to seven decimal places but the printed plate is only accurate to three, that is quite a problem.
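A toy illustration of this precision point (the numbers are made up): quantizing a trained phase value to the few levels a printer can actually realize perturbs the complex weight it encodes:

```python
import numpy as np

trained_phase = 1.2345678                 # ideal value from training
levels = 16                               # phase levels the printer can hit
step = 2 * np.pi / levels
printed_phase = np.round(trained_phase / step) * step

w_trained = np.exp(1j * trained_phase)    # weight the network wants
w_printed = np.exp(1j * printed_phase)    # weight the plate delivers
print(abs(w_trained - w_printed))         # per-neuron fabrication error
```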
This is just a proof of concept (there is no pressing need for giant digit-recognition machines), but the idea is very interesting. It could have implications for cameras and machine learning technology alike: shaping light and data in the material world rather than the virtual one. It may look like a step backwards, but perhaps the pendulum is simply swinging back.
Paper: All-optical machine learning using diffractive deep neural networks
Paper address: http://science.sciencemag.org/content/early/2018/07/25/science.aat8084
Abstract: Deep learning has improved our ability to perform advanced inference tasks using computers. In this paper we introduce a physical mechanism for machine learning: an all-optical diffractive deep neural network (D^2NN) architecture in which passive diffractive layers, designed through deep learning to work collectively, implement a wide range of functions. We fabricated 3D-printed D^2NNs that perform image classification of handwritten digits and fashion products, as well as the function of an imaging lens at terahertz wavelengths. Our all-optical deep learning framework can perform, at the speed of light, various complex functions that traditional computer-based neural networks can implement. It will enable new applications in all-optical image analysis, feature detection, and object classification, and it also allows the design of new cameras and optical components that perform unique tasks using D^2NNs.
Figure 1: Diffractive deep neural network (D^2NN) architecture.
Figure 2: 3D printed diffractive deep neural network test experiment.
Figure 3: Diffractive deep neural network for handwritten digit recognition.
Original link: https://techcrunch.com/2018/07/26/this-3d-printed-ai-construct-analyzes-by-bending-light/