In recent weeks, researchers from Google and Sakana unveiled two cutting-edge neural network designs that could upend the AI industry.

These technologies aim to challenge the dominance of transformers, the context-based neural network architecture that has defined AI for the past six years.

The new approaches are Google’s “Titans” and “Transformer²” (Transformers Squared), designed by Sakana, a Tokyo AI startup known for using nature as its model for tech solutions. Indeed, both Google and Sakana tackled the transformer problem by studying the human brain. Broadly, Titans equips models with different stages of memory, while Transformer² activates different expert modules independently instead of engaging the whole model at once for every problem.
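To make the second idea concrete, here is a minimal, hypothetical sketch in Python of selective expert activation: a small gating function scores a handful of expert modules and runs only the best match, rather than the whole model. The experts, dimensions, and gating rule are illustrative assumptions, not Google’s or Sakana’s actual code.

```python
import numpy as np

# Hypothetical sketch of selective expert activation (a mixture-of-experts-style
# gate). A gating network scores each expert for the incoming input, and only
# the top-scoring expert runs, so most parameters stay idle for any one query.

rng = np.random.default_rng(0)
DIM, N_EXPERTS = 8, 4

# Each "expert" is just an independent linear layer standing in for a
# specialized sub-module of the full model.
experts = [rng.normal(size=(DIM, DIM)) for _ in range(N_EXPERTS)]
gate_weights = rng.normal(size=(DIM, N_EXPERTS))

def forward(x: np.ndarray) -> np.ndarray:
    scores = x @ gate_weights        # one relevance score per expert
    chosen = int(np.argmax(scores))  # activate only the best-matching expert
    return x @ experts[chosen]

x = rng.normal(size=DIM)
print(forward(x))
```

The payoff of this pattern is that compute per query scales with the size of one expert, not with the size of the whole model.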

The net result is AI systems that are smarter, faster, and more versatile, without necessarily being bigger or more expensive to run.

For context, the transformer architecture, the technology that gave ChatGPT the ‘T’ in its name, is designed for sequence-to-sequence tasks such as language modeling, translation, and image processing. Transformers rely on “attention mechanisms,” which weigh how important each token is in a given context, to model dependencies between input tokens, enabling them to process data in parallel rather than sequentially, as so-called recurrent neural networks do.
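For readers curious what “attention” looks like in practice, below is a minimal sketch of textbook scaled dot-product attention in Python with NumPy. Every token’s scores against every other token are computed in a single matrix operation, which is what enables the parallelism described above; this is a simplified illustration, not any production implementation.

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    # Numerically stable softmax along the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    d_k = Q.shape[-1]
    # Each row of the score matrix weighs one token against all others at
    # once, so the whole sequence is processed in parallel.
    scores = Q @ K.T / np.sqrt(d_k)
    return softmax(scores) @ V

rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 16))  # 5 tokens, 16-dimensional embeddings
out = attention(tokens, tokens, tokens)
print(out.shape)  # (5, 16): one context-weighted vector per token
```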

Author: Jose Antonio Lanz
