
PR Newswire

WEKA Maximizes Token Output With Lower Cost Per Token on NVIDIA BlueField-4 STX

NeuralMesh and Augmented Memory Grid Integration with NVIDIA STX Increases Token Production by 6.5x in the Same GPU Footprint, Slashing Cost of Inference for AI-Driven Organizations

SAN JOSE, Calif. and CAMPBELL, Calif., March 16, 2026 /PRNewswire/ -- From GTC 2026: WEKA, the AI storage and memory systems company, today announced the integration of its NeuralMesh software with the NVIDIA STX reference architecture. WEKA's breakthrough Augmented Memory Grid memory extension technology running on NeuralMesh will support NVIDIA STX to bring high-throughput context memory storage to agentic AI factories, making long-context reasoning seamless across sessions, tools, and tasks. Leveraging NVIDIA Vera Rubin NVL72, NVIDIA BlueField-4, and NVIDIA Spectrum-X Ethernet, the NeuralMesh solution based on NVIDIA STX will deliver an estimated 4-10x increase in tokens per second for context memory while sustaining at least 320 GB/s read and 150 GB/s write throughput for AI workloads, more than double the throughput of conventional AI storage platforms.

WEKA and NVIDIA unlock cost-efficient AI inference at scale

Solving the Inference Cost Problem with Shared KV Cache Infrastructure
Scaling agentic systems, especially for software engineering applications, exposes a hard truth: today's AI economics are decided at the memory infrastructure layer. Every large-scale inference fleet hits the memory wall: limited high-bandwidth memory (HBM) on the GPU is rapidly exhausted, key-value (KV) cache is evicted, context is lost, and the system is forced to repeat work it already completed. This architectural inefficiency sends inference costs soaring. The answer is a shared KV cache infrastructure that keeps context live across agents, users, and sessions. It eliminates redundant computation, sustains token throughput, and maintains predictable performance. Without shared KV cache infrastructure, every increase in concurrent users and agents becomes a liability - costs rise, experiences degrade, and the inference fleet becomes harder to operate the larger it grows. With STX for context memory, NVIDIA is introducing a blueprint to address these core inference bottlenecks.
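The reuse mechanism described above can be illustrated with a minimal sketch. The class and function names below (`SharedKVCache`, `compute_kv`) are hypothetical stand-ins for illustration, not WEKA or NVIDIA APIs: the point is simply that keying computed KV state by prompt prefix means the expensive prefill pass is paid once, not once per call.

```python
# Illustrative sketch only: shows why a shared KV cache avoids redundant
# prefill. Names are hypothetical, not WEKA/NVIDIA interfaces.

class SharedKVCache:
    """Maps a prompt prefix to its already-computed KV state."""

    def __init__(self):
        self.store = {}          # prefix tuple -> simulated KV blob
        self.prefill_calls = 0   # counts expensive recomputations

    def compute_kv(self, tokens):
        """Stand-in for the expensive prefill pass over `tokens`."""
        self.prefill_calls += 1
        return f"kv({len(tokens)} tokens)"

    def get_kv(self, tokens):
        key = tuple(tokens)
        if key not in self.store:      # cache miss: pay prefill cost once
            self.store[key] = self.compute_kv(tokens)
        return self.store[key]         # cache hit: context stays "live"

cache = SharedKVCache()
codebase_prompt = ["repo", "context"] * 500   # large, rarely changing prefix
for _ in range(10):                           # ten agent calls, same prefix
    cache.get_kv(codebase_prompt)
assert cache.prefill_calls == 1               # prefill paid once, not 10x
```

Without the shared store, each of the ten calls would repeat the full prefill, which is the redundant work the release attributes to the memory wall.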

Context Memory Storage: The Foundation of Agentic AI Factories
With co-designed WEKA solutions based on NVIDIA STX architecture, AI clouds, enterprises, and AI model builders can deploy the infrastructure foundation they need to run GPUs at peak productivity, sustain high-volume token production, and make large-scale inference more energy and cost-efficient.

Leading AI innovators and cloud providers, such as Firmus, are already transforming their inference economics with Augmented Memory Grid on NeuralMesh.

"Real-world AI doesn't run in a lab - it has power constraints, cooling limits, and relentless workload demand. Firmus is built for exactly that. Paired with NVIDIA AI infrastructure, WEKA Augmented Memory Grid delivers up to 6.5x higher tokens per second and 4x faster TTFT at scale, proving we can get more performance from the same GPU footprint. With NeuralMesh and Augmented Memory Grid integrated into our NVIDIA-aligned AI Factory and NVIDIA STX reference architecture, we'll be able to deliver the fastest context memory network for predictable and efficient inference at scale," said Daniel Kearney, Chief Technology Officer at Firmus.

NeuralMesh and NVIDIA STX: Purpose-Built for Agentic AI
NeuralMesh is WEKA's intelligent, adaptive storage system built on over 170 patents. It will run across the full-stack STX reference architecture, providing the next-generation storage organizations need to standardize high-performance AI data services and accelerate agentic AI outcomes. WEKA's Augmented Memory Grid is a purpose-built memory extension layer that pools and persists KV cache outside of GPU memory, keeping long-context sessions stable and concurrency high as inference workloads grow. First unveiled at GTC 2025 and generally available to NeuralMesh customers today, Augmented Memory Grid has been validated with Supermicro on NVIDIA Grace CPUs and BlueField-3 DPUs to deliver numerous benefits that improve AI economics, including:

  • Faster User Experiences: Augmented Memory Grid on NeuralMesh delivers a 4-20x improvement in time-to-first-token, keeping AI agents and applications responsive under real-world load.
  • More Revenue from the Same Hardware: Serve 6.5x more tokens per GPU - without adding infrastructure.
  • Sustained Performance at Scale: Augmented Memory Grid maintains high KV cache hit rates even as sessions, agents, and context windows grow - preventing the performance cliff that hits DRAM-only architectures.
  • GPU-Native Efficiency: BlueField-4 integration offloads the storage data path from the CPU, keeping GPUs fully productive and eliminating I/O bottlenecks.
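The "memory extension layer" idea behind these benefits can be sketched as a tiered cache: when the fast HBM-like tier fills, entries are demoted to a larger external tier instead of being discarded, so later lookups are still hits rather than full recomputations. Everything below (`TieredKVCache`, the tier names, the capacities) is an illustrative assumption, not the actual Augmented Memory Grid implementation.

```python
from collections import OrderedDict

# Illustrative sketch of a memory-extension tier: demote on pressure,
# don't evict. Names and sizes are hypothetical, not WEKA internals.

class TieredKVCache:
    def __init__(self, hbm_capacity):
        self.hbm = OrderedDict()   # small, fast tier (LRU order)
        self.external = {}         # large, slower pooled tier
        self.hbm_capacity = hbm_capacity
        self.recomputes = 0        # true misses that force recomputation

    def put(self, session_id, kv):
        if len(self.hbm) >= self.hbm_capacity:
            old_id, old_kv = self.hbm.popitem(last=False)  # demote LRU...
            self.external[old_id] = old_kv                 # ...don't drop it
        self.hbm[session_id] = kv

    def get(self, session_id):
        if session_id in self.hbm:                 # fast-tier hit
            self.hbm.move_to_end(session_id)
            return self.hbm[session_id]
        if session_id in self.external:            # slow-tier hit: promote
            kv = self.external.pop(session_id)
            self.put(session_id, kv)
            return kv
        self.recomputes += 1                       # true miss
        kv = f"kv-{session_id}"
        self.put(session_id, kv)
        return kv

cache = TieredKVCache(hbm_capacity=2)
for s in ["a", "b", "c"]:          # "a" is demoted to the external tier
    cache.get(s)
cache.get("a")                      # still a hit, served from external tier
assert cache.recomputes == 3        # only the three first-time sessions
```

A DRAM-only design would have dropped session "a" outright, turning its next lookup into the recomputation the bullet list warns about.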

"With coding LLMs advancing, we're seeing unprecedented adoption of Agentic AI use cases for software engineering, where productivity increases by 100-1000x. As coding assistants make repeated calls against largely unchanged codebases and prompts, WEKA's Augmented Memory Grid reuses cached context instead of forcing redundant prefill, even as context windows grow to incredible lengths. This provides a major boost in response times and greatly increases the number of concurrent users running on the same infrastructure," said Liran Zvibel, co-founder and CEO at WEKA. "WEKA first identified this need for context memory storage more than a year ago and launched Augmented Memory Grid at GTC 2025. Now, NVIDIA STX opens the door to organizations running their storage and memory extension infrastructure on state-of-the-art NVIDIA Vera Rubin architecture, including NVIDIA BlueField-4 and NVIDIA Spectrum-X Ethernet. Running Augmented Memory Grid on NeuralMesh for NVIDIA STX delivers extreme performance and efficiency that translates directly to game-changing AI economics."

Availability

WEKA's Augmented Memory Grid is commercially available with NeuralMesh today.

Organizations that don't address the memory wall today will find it harder and more expensive to scale tomorrow. As agentic workloads grow and context windows expand, DRAM-only architectures face a compounding cost problem: each additional concurrent user or session increases recomputation overhead, GPU idle time, and operational cost. The organizations that architect for persistent KV cache now will have a structural cost and performance advantage over those that wait.

For more information about NeuralMesh, visit: weka.io/NeuralMesh.
For more information about Augmented Memory Grid, visit: weka.io/augmented-memory-grid.

Organizations can learn more at weka.io/nvidia or visit WEKA at GTC 2026, booth #1034.

About WEKA
WEKA is transforming how organizations build, run, and scale AI workflows with NeuralMesh by WEKA, its intelligent, adaptive mesh storage system. Unlike traditional data infrastructure, which becomes slower and more fragile as workloads expand, NeuralMesh becomes faster, stronger, and more efficient as it scales, dynamically adapting to AI environments to provide a flexible foundation for enterprise AI and agentic AI innovation. Trusted by 30% of the Fortune 50, NeuralMesh helps leading enterprises, AI cloud providers, and AI builders optimize GPUs, scale AI faster, and reduce innovation costs. Learn more at www.weka.io or connect with us on LinkedIn and X.

WEKA and the W logo are registered trademarks of WekaIO, Inc. Other trade names herein may be trademarks of their respective owners.

WEKA: The Foundation for Enterprise AI

Photo - https://mma.prnewswire.com/media/2934399/WEKA_and_NVIDIA.jpg
Logo - https://mma.prnewswire.com/media/1796062/WEKA_v1_Logo_new.jpg

View original content: https://www.prnewswire.co.uk/news-releases/weka-maximizes-token-output-with-lower-cost-per-token-on-nvidia-bluefield-4-stx-302714469.html

© 2026 PR Newswire