
Software-Defined Memory (SDM) Pioneer Kove Announces Benchmarks Showing 5x Larger AI Inference Workloads on Redis & Valkey, With Lower Latency Than Local Memory

CHICAGO, Sept. 9, 2025 /PRNewswire/ -- Kove, creator of software-defined memory Kove:SDM, today announced benchmark results showing that Redis and Valkey - two of the most widely used engines in AI inference - can run workloads up to 5x larger, and in a majority of cases faster, than local DRAM when powered by remote memory via Kove:SDM. Kove:SDM is the world's first and only commercially available software-defined memory solution. Using pooled memory running on any hardware supported by Linux, Kove:SDM allows technologists to dynamically right-size memory according to need. This enables better inference-time processing, faster time to solution, increased memory resilience, and substantial energy reductions. The announcement, made during CEO John Overton's keynote at AI Infra Summit 2025, identifies memory, not compute, as the next bottleneck for scaling AI inference.

While GPUs and CPUs continue to scale, traditional DRAM remains fixed, fragmented, and underutilized - stalling inference workloads, forcing unnecessary, repetitive GPU processing, and driving up costs through inefficient GPU utilization. Tiering to NVMe storage can reduce GPU recomputation, but at greatly reduced performance compared to system memory. Kove:SDM instead virtualizes memory across servers, creating much larger elastic memory pools that behave and perform like local DRAM, breaking through the "memory wall" that constrains scale-out AI inference. With Kove:SDM, the KV cache can remain in memory without the performance degradation of tiering across HBM, memory, and storage. What happens in memory, stays in memory.
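The eviction problem described above can be illustrated with a toy LRU cache: once the working set exceeds local capacity, every access becomes a miss that would force a recompute, while a larger pool absorbs the same workload after a single warm-up pass. A minimal sketch; the capacities and access pattern are hypothetical, not Kove's benchmark workload:

```python
from collections import OrderedDict

def recomputes(capacity, accesses):
    """Count cache misses (each one forcing a recompute) for an LRU cache."""
    cache = OrderedDict()
    misses = 0
    for key in accesses:
        if key in cache:
            cache.move_to_end(key)         # hit: refresh LRU position
        else:
            misses += 1                    # miss: KV entries must be recomputed
            cache[key] = True
            if len(cache) > capacity:
                cache.popitem(last=False)  # evict least recently used entry
    return misses

# 1,000 sessions revisited in round-robin order, three passes.
# Local DRAM holds 200 sessions; a 5x memory pool holds all 1,000.
accesses = [s for _ in range(3) for s in range(1000)]
print(recomputes(200, accesses))   # LRU worst case: every access misses -> 3000
print(recomputes(1000, accesses))  # only the warm-up pass misses -> 1000
```

Round-robin reuse is the pathological case for LRU, but it mirrors the structural point: below the working-set size, extra capacity eliminates recomputation entirely rather than shaving it incrementally.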

Redis and Valkey Benchmarks: Proof for AI Inference

Kove:SDM improves capacity and latency compared to local server memory. Independent benchmarks were run on the same server on Oracle Cloud Infrastructure - first without Kove:SDM, and then with Kove:SDM. With 5x more memory from Kove:SDM, server performance compared to local memory was:

Redis Benchmark (v7.2.4)

  • 50th Percentile: SET 11% faster, GET 42% faster
  • 100th Percentile: SET 16% slower, GET 14% faster

Valkey Benchmark (v8.0.4)

  • 50th Percentile: SET 10% faster, GET 1% faster
  • 100th Percentile: SET 6% faster, GET 25% faster
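For context, the percentiles above summarize a per-request latency distribution: the 50th percentile is the median request, while the 100th percentile is the single slowest observation, so the two can move in opposite directions. A minimal nearest-rank sketch on made-up sample data:

```python
def percentile(samples_ms, p):
    """Nearest-rank p-th percentile; p=100 returns the single worst sample."""
    ordered = sorted(samples_ms)
    if p >= 100:
        return ordered[-1]
    # nearest-rank: index of the sample at or just below the p% position
    rank = max(0, round(p / 100 * len(ordered)) - 1)
    return ordered[rank]

latencies = [0.2, 0.3, 0.3, 0.4, 0.5, 0.5, 0.6, 0.9, 1.2, 4.0]  # ms, invented
print(percentile(latencies, 50))   # median-ish request -> 0.5
print(percentile(latencies, 100))  # tail: worst single request -> 4.0
```

This is why a workload can be "11% faster at p50" yet "16% slower at p100": the median reflects typical requests, while p100 is dominated by one outlier.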

"These results show that software-defined memory can directly accelerate AI inference by eliminating KV cache evictions and redundant GPU computation," said John Overton, CEO of Kove. "Every GET that doesn't trigger a recompute saves GPU cycles. For large-scale inference environments, that translates into millions of dollars saved annually."
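The savings claim in the quote reduces to simple arithmetic: each GET served from cache avoids recompute time on a GPU. A back-of-envelope sketch in which every input (request rate, hit-rate gain, recompute time, GPU price) is an illustrative assumption, not a Kove figure:

```python
def annual_savings(gets_per_sec, extra_hit_rate, recompute_sec, gpu_cost_per_hr):
    """Dollar value per year of GPU time saved by additional cache hits."""
    # GPU-seconds saved per wall-clock second equals GPU-hours saved per hour,
    # so multiplying by hours/year and the hourly GPU price gives $/year.
    saved_gpu_sec_per_sec = gets_per_sec * extra_hit_rate * recompute_sec
    return saved_gpu_sec_per_sec * 365 * 24 * gpu_cost_per_hr

# Assumed: 50k GET/s fleet-wide, 10% more hits, 50 ms recompute, $2.50/GPU-hour
print(round(annual_savings(50_000, 0.10, 0.050, 2.50)))  # -> 5475000, i.e. ~$5.5M/yr
```

Under these invented inputs the model lands in the single-digit millions; the press release's larger figures would correspond to bigger fleets, longer recompute paths, or higher hit-rate gains.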

Redis has long powered caching workloads across industries. Valkey, a fork of Redis, is now widely integrated into inference stacks such as vLLM, making it central to modern inference pipelines. By expanding KV cache capacity and improving performance, Kove:SDM directly addresses one of AI's most urgent challenges.

Structural Business Impact

Enterprises deploying AI at scale stand to benefit from significant financial savings:

  • $30-40M+ annual savings typical for large-scale deployments.
  • 20-30% lower hardware spend by deferring costly high-memory server refreshes.
  • 25-54% lower power and cooling costs from improved memory efficiency.
  • Millions in avoided downtime by eliminating memory bottlenecks that cause failures.

"Kove has created a new category - software-defined memory - that makes AI infrastructure both performant and economically sustainable," said Beth Rothwell, Director of GTM Strategy at Kove. "It's the missing layer of the AI stack. Without it, AI hits a wall. With it, AI inference scales, GPUs stay busy doing the right work, and enterprises save tens of millions."

Why It Matters Now

AI demand is doubling every 6-12 months, while DRAM budgets cannot keep pace. Existing solutions like tiered KV caching or storage offload reduce efficiency or add latency. By contrast, Kove:SDM pools DRAM across servers while delivering local memory performance. KV cache tiering to storage can be 100-1000x less performant than Kove:SDM.
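The 100-1000x figure reflects the latency gap between DRAM and NVMe: once even a modest fraction of KV-cache accesses spill to storage, average latency is dominated by the slow tier. A simple average-latency model makes the point; the latency constants are order-of-magnitude assumptions, not measurements:

```python
DRAM_NS = 100       # assumed local-DRAM access path, ~100 ns
NVME_NS = 100_000   # assumed NVMe read path, roughly 1000x slower

def effective_latency_ns(dram_hit_rate):
    """Average access latency when cache misses fall through to NVMe storage."""
    return dram_hit_rate * DRAM_NS + (1 - dram_hit_rate) * NVME_NS

print(effective_latency_ns(1.0))   # everything stays in memory -> 100 ns
print(effective_latency_ns(0.75))  # 25% spills to storage -> 25,075 ns
```

Even a 75% DRAM hit rate yields an average latency ~250x worse than all-in-memory, which is the structural argument for keeping the entire KV cache in pooled memory rather than tiering it.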

Kove:SDM is available now, deploys without application or code changes, and runs on any x86 hardware supported by Linux.

About Kove

Founded in 2003, Kove has a long history of solving technology's most vexing problems, from launching high-speed backups for large databases to setting sustained storage speed records. Kove invented pioneering technology using distributed hash tables for locality management. This breakthrough created unlimited scaling and enabled the advent of cloud storage and scale-out database sharding, among other markets. Most recently, after years of development, testing, and validation, Kove delivered the world's first patented and mature software-defined memory solution - Kove:SDM. Kove:SDM enables organizations and their leaders to achieve more by maximizing the performance of their people and infrastructure. Kove's team of passionate software engineers and technologists understands the importance of access to high-performance computing technology and has worked with clients across industry verticals from financial services and life sciences to energy and defense. Kove is committed to delivering the products and personalized service that enable every enterprise to reach its full potential. To learn more, visit kove.com.

Contact: CJ Martinez, 1-310-980-5431, cjmartinez@axial1.com

Logo - https://mma.prnewswire.com/media/2555946/Kove_Logo.jpg

View original content: https://www.prnewswire.co.uk/news-releases/software-defined-memory-sdm-pioneer-kove-announces-benchmarks-showing-5x-larger-ai-inference-workloads-on-redis--valkey-with-lower-latency-than-local-memory-302551048.html

© 2025 PR Newswire