A team of engineers at MIT has built a new artificial intelligence chip with a stackable, reconfigurable design that lets users swap out and build on existing sensors and neural network processors.
The new AI chip could help achieve a more sustainable future in which cellphones, smartwatches, and other wearable devices don’t have to be discarded for a newer model. Instead, they could be upgraded with new sensors and processors that snap onto a device’s internal chip. Reconfigurable AI chips like these would keep devices up to date while reducing electronic waste.
The research results were published in Nature Electronics.
Designing the Chip
The LEGO-like design of the chip comprises alternating layers of sensing and processing elements, along with light-emitting diodes (LEDs) that enable the chip’s layers to communicate optically.
The new design uses light, rather than physical wires, to transmit information through the chip. This allows the chip to be reconfigured, with layers swapped out or stacked on, for example to add new sensors or updated processors.
Jihoon Kang is an MIT postdoc.
“You can add as many computing layers and sensors as you want, such as for light, pressure, and even smell,” says Kang. “We call this a LEGO-like reconfigurable AI chip because it has unlimited expandability depending on the combination of layers.”
The researchers will look to apply the design to edge computing devices, self-sufficient devices, and other electronics that work independently from a central or distributed resource.
Jeehwan Kim is an associate professor of mechanical engineering at MIT.
“As we enter the era of the internet of things based on sensor networks, demand for multifunctioning edge-computing devices will expand dramatically,” says Kim. “Our proposed hardware architecture will provide high versatility of edge computing in the future.”
The new design is configured to carry out basic image-recognition tasks via a layering of image sensors, LEDs, and processors made from artificial synapses. The researchers paired image sensors with artificial synapse arrays, each trained to recognize certain letters, and the team was able to achieve communication between the layers without needing a physical connection.
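To give a rough sense of what an artificial synapse array does during classification, the sketch below is a toy software analogue in Python: a crossbar whose “conductances” store tiny made-up letter templates, and whose summed output currents pick the best match. The patterns, values, and template-matching scheme are illustrative assumptions, not the team’s hardware or training procedure.

```python
# Toy software analogue of an artificial synapse array (not the team's
# hardware): each crossbar column stores one letter template as its
# conductances; the column with the largest summed "current" wins.
import numpy as np

rng = np.random.default_rng(42)

# Made-up 3x3 binary letter patterns, purely for illustration.
LETTERS = {
    "I": np.array([[0, 1, 0],
                   [0, 1, 0],
                   [0, 1, 0]]),
    "L": np.array([[1, 0, 0],
                   [1, 0, 0],
                   [1, 1, 1]]),
    "T": np.array([[1, 1, 1],
                   [0, 1, 0],
                   [0, 1, 0]]),
}

labels = list(LETTERS)
# One column of conductances per letter: shape (9 inputs, 3 outputs).
conductances = np.stack([LETTERS[k].ravel() for k in labels], axis=1)

def classify(image: np.ndarray) -> str:
    """Drive the crossbar with the flattened image and read out column currents."""
    currents = image.ravel() @ conductances   # analogue of summed output currents
    return labels[int(np.argmax(currents))]

noisy_L = LETTERS["L"] + 0.1 * rng.random((3, 3))   # slightly corrupted input
print(classify(noisy_L))   # expected to print "L"
```

In the real chip the weights live in analog synapse devices rather than in software; the sketch only mirrors the pick-the-largest-summed-output readout idea.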
Hyunseok Kim is an MIT postdoc.
“Other chips are physically wired through metal, which makes them hard to rewire and redesign, so you’d need to make a new chip if you wanted to add any new function,” says Kim. “We replaced that physical wire connection with an optical communication system, which gives us the freedom to stack and add chips the way we want.”
This optical communication system consists of paired photodetectors and LEDs, each patterned with tiny pixels. The photodetectors form an image sensor that receives data, and the LEDs transmit that data to the next layer. When a signal reaches the image sensor, the image’s light pattern encodes a configuration of LED pixels, which in turn stimulates another layer of photodetectors, along with an artificial synapse array that classifies the signal based on the pattern and intensity of the LED light.
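As a rough software picture of one such hop, the sketch below assumes LED intensities proportional to the incoming signal and a photodetector that sees an attenuated, slightly noisy copy; the gain, attenuation, and noise numbers are invented for illustration, not measured values from the paper.

```python
# Minimal sketch of one optical hop between layers: LED pixels re-emit the
# signal as light, the next layer's photodetectors read a dimmer, noisier copy.
import numpy as np

rng = np.random.default_rng(0)

def led_emit(signal: np.ndarray, drive_gain: float = 1.0) -> np.ndarray:
    """LED pixel intensities proportional to the incoming (non-negative) signal."""
    return drive_gain * np.clip(signal, 0.0, None)

def photodetect(light: np.ndarray, attenuation: float = 0.8, noise: float = 0.02) -> np.ndarray:
    """Photodetectors see attenuated light plus a little read noise."""
    return attenuation * light + noise * rng.standard_normal(light.shape)

def optical_hop(signal: np.ndarray) -> np.ndarray:
    """One sensor-to-processor hop: LED layer -> free space -> photodetector layer."""
    return photodetect(led_emit(signal))

image = rng.random((3, 3))        # whatever the image sensor captured
received = optical_hop(image)     # what the next layer's photodetectors read
print(np.round(image, 2))
print(np.round(received, 2))      # same pattern, dimmer and slightly noisy
```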
Creating a Stackable Chip
The fabricated chip has a computing core measuring about 4 square millimeters and is stacked with three image recognition “blocks,” each comprising an image sensor, optical communication layer, and artificial synapse array for classification.
Min-Kyu Song is another MIT postdoc.
“We showed stackability, replaceability, and the ability to insert a new function into the chip,” says Song.
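In software terms, that modularity looks roughly like composing self-contained blocks, each bundling a sensing stage, an optical-link stage, and a synapse-array processing stage, where blocks can be added or replaced independently. The names and interfaces below are invented purely for illustration; the chip realizes this in stacked hardware layers.

```python
# Illustrative-only sketch of the block/stack idea: blocks can be appended
# (stackability) or swapped (replaceability) without touching the rest.
from dataclasses import dataclass
from typing import Callable, List
import numpy as np

Stage = Callable[[np.ndarray], np.ndarray]

def passthrough(x: np.ndarray) -> np.ndarray:
    """Placeholder stage that simply forwards its input."""
    return x

@dataclass
class Block:
    name: str
    sense: Stage = passthrough      # image sensor layer
    link: Stage = passthrough       # optical communication layer
    process: Stage = passthrough    # artificial synapse array

    def run(self, x: np.ndarray) -> np.ndarray:
        return self.process(self.link(self.sense(x)))

stack: List[Block] = [Block("letters-v1")]

# Stackability: snap a new sensing/processing block onto the existing stack.
stack.append(Block("pressure-v1"))

# Replaceability: swap in an upgraded block without rebuilding the others.
stack[0] = Block("letters-v2")

signal = np.ones((3, 3))
for block in stack:
    signal = block.run(signal)      # data flows upward through the blocks
print([b.name for b in stack])      # ['letters-v2', 'pressure-v1']
```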
The researchers will now look to add more sensing and processing capabilities to the chip.
“We can add layers to a cellphone’s camera so it could recognize more complex images, or make these into healthcare monitors that can be embedded in wearable electronic skin,” says Choi, another member of the MIT team.
The team says that modular chips could be built into electronics, enabling consumers to build their devices up with the latest sensor and processor “bricks.”
“We can make a general chip platform, and each layer could be sold separately like a video game,” Jeehwan Kim says. “We could make different types of neural networks, like for image or voice recognition, and let the customer choose what they want, and add to an existing chip like a LEGO.”