Huawei debuted its new artificial intelligence (AI) storage system, the OceanStor A310, at GITEX GLOBAL 2023 last week, positioning it as an answer to industry challenges around large-model applications. Designed for the era of large AI models, the OceanStor A310 aims to provide a storage solution for basic model training, industry model training, and inference in segmented scenario models.
Imagine the OceanStor A310 as a super-efficient librarian in a vast digital library, quickly fetching pieces of information, while a rival system such as IBM's ESS 3500 plays the part of a slower librarian. The faster the OceanStor A310 can fetch information, the quicker AI applications can work and make timely decisions. This speedy access to data is what Huawei says makes the OceanStor A310 stand out.
The OceanStor A310's edge seems to lie in its ability to speed up data delivery for AI workloads. When compared with IBM's ESS 3500, Huawei's latest all-flash array reportedly feeds Nvidia GPUs almost four times faster on a per-rack-unit basis. That comparison is based on Nvidia's Magnum IO GPUDirect Storage, in which data moves directly from NVMe storage to the GPUs without passing through a storage host system.
Huawei's OceanStor A310 reportedly delivers up to 400 GB/s of sequential read bandwidth and 208 GB/s of write bandwidth. However, it remains unclear how open-source versus closed-source frameworks affect these figures.
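The "per rack unit" framing simply normalizes a system's aggregate bandwidth by its chassis height, so arrays of different sizes can be compared fairly. A minimal sketch of that arithmetic, using the article's 400 GB/s and 208 GB/s figures and a hypothetical 5U chassis height (the article does not state the A310's actual rack-unit count):

```python
# Sketch of the per-rack-unit normalization used in such comparisons.
# The 5U chassis height below is a HYPOTHETICAL placeholder, not a
# figure from the article; only the 400/208 GB/s numbers come from it.

def bandwidth_per_ru(total_gb_per_s: float, rack_units: int) -> float:
    """Normalize aggregate bandwidth (GB/s) by chassis height in rack units."""
    return total_gb_per_s / rack_units

# Article figures: 400 GB/s sequential read, 208 GB/s write.
read_per_ru = bandwidth_per_ru(400, 5)    # assumed 5U chassis
write_per_ru = bandwidth_per_ru(208, 5)

print(f"Read per RU:  {read_per_ru:.1f} GB/s")   # 80.0 GB/s per RU
print(f"Write per RU: {write_per_ru:.1f} GB/s")  # 41.6 GB/s per RU
```

Under those assumed values, a denser chassis with the same aggregate bandwidth would score proportionally higher per rack unit, which is why vendors favor this metric when their arrays are compact.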
Author: Jose Antonio Lanz