Artificial neural networks, one of the key concepts in artificial intelligence research, have drawn inspiration from biological neurons since their inception, as their name suggests. New research has now revealed that the influential AI transformer architecture also shares unexpected parallels with human neurobiology.
In a collaborative study, scientists propose that biological astrocyte-neuron networks could mimic transformers' core computations, and vice versa. The findings, jointly reported by MIT, the MIT-IBM Watson AI Lab, and Harvard Medical School, were published this week in the journal Proceedings of the National Academy of Sciences.
Astrocyte–neuron networks are networks of brain cells made up of two cell types: neurons, which send and receive the electrical impulses that underlie thought, and astrocytes, which support and regulate those neurons. The two cell types communicate with one another through chemical, electrical, and physical contact signals.
AI transformers, by contrast, were first introduced in 2017 and are one of the foundational technologies behind generative systems like ChatGPT; in fact, that's what the "T" in GPT stands for. Unlike recurrent neural networks, which process inputs one step at a time, transformers can directly access all inputs at once via a mechanism called self-attention. This allows them to learn complex dependencies in data like text.
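To make the idea concrete, here is a minimal sketch of scaled dot-product self-attention, the mechanism described above. This is an illustrative toy (it omits the learned query/key/value projections and multiple heads used in real transformers), not code from the study:

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention with identity projections.

    Every token's output is a weighted blend of ALL input tokens,
    which is what lets a transformer access every input directly.
    """
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)  # pairwise similarity between all tokens
    # softmax over each row turns similarities into attention weights
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ X  # each output row mixes all input rows

# three toy "token" vectors; each output is a weighted mix of all three
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
out = self_attention(X)
```

Because the attention weights for each token sum to one, every output vector is a convex combination of all the inputs, rather than depending only on the tokens seen so far as in a sequential model.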
Author: Jose Antonio Lanz