Everyone knows that machine learning is energy intensive and quite expensive. But what if it weren't? Boy, the advancements we could make!
A multidisciplinary team of MIT researchers has now set out to push the speed limits of a type of human-made analog synapse, the key building block in analog deep learning, according to a press release by the institution. The goal? To make machine learning faster, more energy efficient, and cheaper.
The newly developed material is compatible with silicon fabrication techniques and could pave the way for integration into commercial computing hardware for deep-learning applications.
“With that key insight, and the very powerful nanofabrication techniques we have at MIT.nano, we have been able to put these pieces together and demonstrate that these devices are intrinsically very fast and operate with reasonable voltages,” said senior author Jesús A. del Alamo, the Donner Professor in MIT’s Department of Electrical Engineering and Computer Science (EECS). “This work has really put these devices at a point where they now look really promising for future applications.”
The researchers further explained how they managed to increase the speed of the ions in the device.
“The working mechanism of the device is electrochemical insertion of the smallest ion, the proton, into an insulating oxide to modulate its electronic conductivity. Because we are working with very thin devices, we could accelerate the motion of this ion by using a strong electric field, and push these ionic devices to the nanosecond operation regime,” explained senior author Bilge Yildiz, the Breene M. Kerr Professor in the departments of Nuclear Science and Engineering and Materials Science and Engineering.
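The idea Yildiz describes can be pictured as a resistor whose conductance is nudged up or down by voltage pulses that insert or remove protons. The following toy model is purely illustrative (the class name, pulse counts, and conductance values are invented for this sketch, not measured properties of the MIT device):

```python
class ProtonicResistor:
    """Toy model of a protonic programmable resistor.

    Each programming pulse inserts or removes a small amount of
    protons, nudging the conductance between a floor and a ceiling.
    All numbers are illustrative, not real device parameters.
    """

    def __init__(self, g_min=1e-6, g_max=1e-4, levels=100):
        self.g_min = g_min      # conductance with no protons inserted
        self.g_max = g_max      # conductance when fully protonated
        self.levels = levels    # number of distinct programmable states
        self.state = 0          # net number of insertion pulses applied

    def pulse(self, polarity=+1):
        # A positive pulse inserts protons (raising conductance);
        # a negative pulse removes them. The state saturates at the ends.
        self.state = max(0, min(self.levels, self.state + polarity))

    @property
    def conductance(self):
        # Conductance interpolates linearly with the fraction of
        # inserted protons in this simplified picture.
        frac = self.state / self.levels
        return self.g_min + frac * (self.g_max - self.g_min)
```

The point of the researchers' result is that, with thin films and strong fields, each such `pulse` can happen on a nanosecond timescale rather than the millisecond timescale of biological synapses.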
The team further indicated that they were able to apply a high voltage to the device without damaging it.
“The action potential in biological cells rises and falls with a timescale of milliseconds since the voltage difference of about 0.1 volt is constrained by the stability of water,” said senior author Ju Li, the Battelle Energy Alliance Professor of Nuclear Science and Engineering and professor of materials science and engineering. “Here we apply up to 10 volts across a special solid glass film of nanoscale thickness that conducts protons, without permanently damaging it. And the stronger the field, the faster the ionic devices,” he added.
The result is programmable resistors that significantly increase the speed at which a neural network is trained while drastically reducing the cost and energy to undertake that process.
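The reason programmable resistors speed up training is that a grid (crossbar) of them can perform a whole matrix-vector multiplication, the core operation of a neural network, in one physical step: by Ohm's and Kirchhoff's laws, the current collected on each output wire is the dot product of the input voltages and the stored conductances. A minimal sketch of that arithmetic, with made-up conductance and voltage values:

```python
import numpy as np

# Conductance matrix: each crosspoint resistor stores one weight.
# Values are illustrative, in siemens.
G = np.array([[1.0, 0.5],
              [0.2, 0.8]])

# Input voltages applied to the input wires, in volts.
V = np.array([0.3, 0.7])

# Current summed on each output wire = sum of (conductance * voltage)
# along that wire, i.e. a matrix-vector product computed in parallel
# by the physics of the circuit rather than by sequential digital math.
I = G @ V  # amperes; here [0.65, 0.62]
```

A digital processor must compute each multiply-accumulate in turn; the analog crossbar does them all at once, which is where the speed and energy savings come from.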
“Once you have an analog processor, you will no longer be training networks everyone else is working on. You will be training networks with unprecedented complexities that no one else can afford to, and therefore vastly outperform them all. In other words, this is not a faster car, this is a spacecraft,” added lead author and MIT postdoc Murat Onen.
The applications of the new devices are many, including self-driving cars, fraud detection, and medical image analysis.