Redis vs Valkey vs Dragonfly in 2026: Which In-Memory Store to Choose
The in-memory data store landscape has fractured in a way nobody predicted two years ago. What was once a single-product category dominated by Redis has become a three-way race between Redis (now under a triple license), Valkey (the Linux Foundation-backed open-source fork), and Dragonfly (a ground-up multi-threaded reimplementation). Each project has shipped major releases in 2025, and each occupies a distinct position in terms of architecture, licensing, performance, and ecosystem maturity.
This article is written for backend engineers, DevOps practitioners, and engineering leads who need to make an informed decision about which in-memory store to deploy in 2026. We cover the licensing situation in detail, compare architectures and benchmark data, walk through practical code examples, and close with concrete recommendations for different project profiles. All data points are sourced from official documentation, published benchmarks, and community reports from 2025-2026.
The Licensing Landscape: Why There Are Three Projects Now
Understanding the current state of affairs requires a brief history of the Redis licensing saga.
In March 2024, Redis Ltd. changed the license of Redis from the permissive BSD 3-clause license to a dual source-available model: RSALv2 (Redis Source Available License v2) and SSPLv1 (Server Side Public License v1). Neither of these is an OSI-approved open-source license. The change was motivated by Redis Ltd.’s desire to prevent cloud providers from offering managed Redis services without contributing back.
The community response was immediate. Within weeks, the Linux Foundation announced Valkey, a fork of Redis 7.2.4 under the original BSD 3-clause license. AWS, Google Cloud, Oracle, Ericsson, and Snap Inc. all pledged support. The project quickly attracted contributors and shipped its first independent release.
In May 2025, Redis Ltd. reversed course partially by adding AGPLv3 as a third licensing option alongside RSALv2 and SSPLv1. This was influenced by the return of Salvatore Sanfilippo (antirez), the original creator of Redis, who rejoined Redis Ltd. in November 2024. Redis 8.0 and all subsequent versions are available under this triple-license model (RSALv2 / SSPLv1 / AGPLv3 at your choice).
Dragonfly entered the picture from a different angle entirely. It is not a fork of Redis but a ground-up reimplementation of the Redis and Memcached APIs, built with a multi-threaded, shared-nothing architecture. Dragonfly uses the Business Source License (BSL 1.1), which permits free use for internal purposes and as part of your own product, but prohibits offering it as a hosted in-memory data store service. The code will convert to Apache 2.0 on March 1, 2029.
| Aspect | Redis | Valkey | Dragonfly |
|---|---|---|---|
| License | RSALv2 / SSPLv1 / AGPLv3 | BSD 3-clause | BSL 1.1 (Apache 2.0 after 2029) |
| OSI-approved open source | AGPLv3 option only | Yes | No (source-available) |
| Can be offered as managed service | Only under AGPL/SSPL terms | Yes, no restrictions | No, until change date |
| Governance | Redis Ltd. | Linux Foundation | DragonflyDB Inc. |
| Original codebase | Redis | Fork of Redis 7.2.4 | Independent implementation |
The licensing choice has real operational implications. If you are building a SaaS product that embeds an in-memory store and you want maximum licensing freedom, Valkey is the cleanest option. If you need the latest Redis features and are comfortable with AGPL copyleft obligations (or use Redis internally only), Redis 8.x under AGPLv3 works. If you are using the store internally and do not offer it as a service, Dragonfly’s BSL is permissive enough for most use cases.
Architecture Deep Dive
Redis: The Single-Threaded Pioneer
Redis has maintained its famous single-threaded command execution model since its inception. All data manipulation happens on a single main thread, which guarantees atomicity without locks. Since Redis 6.0, I/O operations (reading from and writing to client sockets) can be offloaded to background threads via io-threads, but the core data operations remain single-threaded.
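The atomicity guarantee falls out of this model directly: if a single thread drains a command queue, read-modify-write commands like INCR can never interleave, so no locks are needed. The following is a toy Python illustration of that execution model (the concept only, not Redis source):

```python
import queue
import threading

data = {}
commands = queue.Queue()

def worker():
    # The only thread that ever touches `data`, mirroring the single
    # command-execution thread in Redis/Valkey.
    while True:
        op = commands.get()
        if op is None:
            break
        cmd, key = op
        if cmd == "INCR":
            # Read-modify-write is safe: no other thread runs commands,
            # so increments cannot be lost to interleaving.
            data[key] = data.get(key, 0) + 1

t = threading.Thread(target=worker)
t.start()

# Four concurrent "clients" each enqueue 2,500 INCRs
clients = [
    threading.Thread(
        target=lambda: [commands.put(("INCR", "counter")) for _ in range(2500)]
    )
    for _ in range(4)
]
for c in clients:
    c.start()
for c in clients:
    c.join()

commands.put(None)  # shut down the worker
t.join()
print(data["counter"])  # 10000, never less
```

The trade-off is visible here too: the worker thread is the throughput ceiling, which is exactly the bottleneck Dragonfly's architecture removes.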
Redis 8.0 merged all previously separate modules (RedisJSON, RediSearch, RedisTimeSeries, RedisBloom, etc.) into a single unified distribution called Redis Open Source. This means you no longer need to install modules separately — vector search, JSON, time series, and probabilistic data structures are all built in.
Key architectural characteristics of Redis 8.x:
- Single-threaded command execution with multi-threaded I/O
- Copy-on-write fork for RDB snapshots (causes memory spikes)
- Redis Cluster for horizontal scaling (up to 1,000 shards)
- Built-in Redis Query Engine for secondary indexing and vector search
- Eight data types beyond the classic set: Vector Sets (new in 8.0), plus JSON, Time Series, Bloom filters, Cuckoo filters, Count-Min Sketch, Top-K, and T-Digest (previously shipped as modules)
Valkey: The Enhanced Fork
Valkey started as a direct fork of Redis 7.2.4 and thus inherits the same fundamental architecture. However, the Valkey community has invested heavily in improving the threading model within that architecture’s constraints.
Valkey 8.0 (September 2024) introduced a redesigned I/O threading system that intelligently distributes I/O tasks across multiple cores based on real-time usage. Unlike Redis’s static io-threads configuration, Valkey can dynamically adjust thread utilization. The result was dramatic: Valkey 8.0 achieved 1.19 million RPS on an AWS c7g.4xlarge instance (16 vCPUs) — a 230% increase over Valkey 7.2’s 360K RPS.
Valkey 9.0 (October 2025) brought further improvements:
- Multi-database clustering: Multiple logical databases in cluster mode, eliminating the need for separate clusters per namespace
- Atomic slot migration: Data moves atomically between nodes instead of key-by-key, fixing edge cases that caused data loss or blocked migrations
- 2,000-node cluster support: Improved cluster protocol resilience for massive deployments
- 1 billion RPS: Demonstrated in cluster benchmarks, a 40% throughput improvement over Valkey 8.1
- Hash field expiration: Per-field TTL on hash data structures
- Optimized pipelining: Reduced latency in high-concurrency environments
Valkey’s redesigned hash table implementation also reduces memory overhead by approximately 20-30 bytes per key compared to Redis.
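Those per-key bytes add up at scale. A quick back-of-envelope using the midpoint of the quoted 20-30 byte range (illustrative numbers only, not a measurement):

```python
# Estimated memory saved by Valkey's hash table redesign at scale.
# 25 bytes/key is the midpoint of the 20-30 byte range quoted above.
keys = 100_000_000          # a large but realistic cache
saving_per_key = 25         # bytes saved per key (assumed midpoint)

total = keys * saving_per_key
print(f"{total / 2**30:.1f} GiB saved")  # prints: 2.3 GiB saved
```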
Dragonfly: The Multi-Threaded Reimplementation
Dragonfly takes a fundamentally different approach. Instead of running commands on a single thread, Dragonfly divides its in-memory dataset into independent shards, each assigned to a dedicated OS thread. This is a shared-nothing architecture where each thread manages its own portion of the data, eliminating the need for locks or cross-thread synchronization during normal operations.
Key architectural characteristics of Dragonfly:
- True multi-threaded data processing: Each CPU core handles its own shard of data
- Shared-nothing architecture: No global locks, no contention
- Dashtable: A custom hash table implementation optimized for cache-line efficiency, reducing memory overhead by ~38% compared to Redis/Valkey
- Snapshot without fork: Uses a novel algorithm that avoids the copy-on-write memory spike that Redis/Valkey suffer during RDB snapshots
- Single-node vertical scaling: A single Dragonfly instance can replace an entire Redis Cluster
- Built-in search engine: RediSearch-compatible but reportedly more performant
- Native OpenTelemetry support: Built-in observability without additional plugins
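The shared-nothing idea can be sketched in a few lines: route each key to a shard, and give each shard a single dedicated worker that owns its data, so single-key operations never need locks. This is a toy illustration of the concept, not Dragonfly's actual implementation:

```python
from concurrent.futures import ThreadPoolExecutor

NUM_SHARDS = 4
# Each shard is a plain dict owned exclusively by one single-threaded worker.
shards = [dict() for _ in range(NUM_SHARDS)]
workers = [ThreadPoolExecutor(max_workers=1) for _ in range(NUM_SHARDS)]

def shard_of(key: str) -> int:
    # Real systems use a stable hash; Python's hash() is enough for a sketch.
    return hash(key) % NUM_SHARDS

def set_key(key, value):
    i = shard_of(key)
    # All writes to shards[i] run on worker i's only thread: no locks needed.
    return workers[i].submit(shards[i].__setitem__, key, value)

def get_key(key):
    i = shard_of(key)
    return workers[i].submit(shards[i].get, key).result()

set_key("user:1", "alice").result()
print(get_key("user:1"))  # alice
```

Multi-key operations that span shards (MSET across shards, transactions) are where this model gets complicated, which is why Dragonfly needs cross-shard coordination for them.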
The architectural difference manifests most clearly under CPU-intensive workloads. Operations like ZADD (sorted set insertion) require significant computation per command. On a 48 vCPU server, Dragonfly reached 29x higher throughput than Valkey for ZADD operations because Dragonfly distributes the work across all cores, while Valkey is bottlenecked by its main thread.
Performance Benchmarks
Performance comparisons between these three systems must be interpreted carefully. Many published benchmarks come from Dragonfly’s blog (which naturally favors Dragonfly) or from Redis Ltd.’s marketing materials. We synthesize data from multiple sources including independent benchmarks.
Throughput: Single Node
| Benchmark | Redis 8.0 | Valkey 8.1 / 9.0 | Dragonfly |
|---|---|---|---|
| GET (simple, 16 vCPU) | ~500K RPS | ~700K RPS | ~2.5M RPS |
| SET (simple, 16 vCPU) | ~450K RPS | ~600K RPS | ~2.2M RPS |
| GET (pipeline=30, 48 vCPU) | ~2M RPS | ~3M RPS | ~15M RPS |
| SET (pipeline=30, 48 vCPU) | ~1.5M RPS | ~2M RPS | ~10M RPS |
| ZADD (48 vCPU) | Main-thread bound | Main-thread bound | ~29x higher than Valkey |
| Memory per key (idle) | Baseline | ~20-30 bytes less | ~38% less than Redis |
Sources: Dragonfly GCP benchmark, centminmod/redis-comparison-benchmarks, Valkey 8.0 announcement, Valkey 9.0 blog.
Valkey 8.1 showed 37% higher throughput for SET operations and 16% higher throughput for GET operations compared to Redis 8.0 in benchmarks on AWS Graviton instances. This advantage comes from Valkey’s improved I/O threading.
Dragonfly’s numbers are the most dramatic: 4.5x higher throughput than Valkey on Google Cloud in general-purpose workloads, with the gap widening for CPU-intensive operations. On a c6gn.16xlarge instance (64 vCPUs), Dragonfly crossed 3.8M QPS for simple GET/SET without pipelining.
Memory Efficiency
Memory efficiency matters significantly at scale. Dragonfly claims a 38% reduction in memory usage per item compared to Valkey, which itself is more efficient than Redis by 20-30 bytes per key. Dragonfly’s Dashtable data structure is designed for cache-line efficiency and avoids the pointer-heavy structures used by Redis and Valkey.
Additionally, Dragonfly’s snapshot mechanism does not use fork(), avoiding the copy-on-write memory spike that can double Redis/Valkey memory usage during persistence operations. This means Dragonfly’s peak memory usage is much more predictable.
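A back-of-envelope model shows why fork-based snapshots inflate peak memory: every page the parent process writes to while the child is serializing gets duplicated by copy-on-write. The numbers below are illustrative assumptions, not measurements:

```python
# Rough peak-memory model for a fork-based RDB snapshot.
# Pages dirtied by the parent during the snapshot are duplicated (CoW).
dataset_gib = 16
pages_rewritten = 0.5  # assumed fraction of pages written while snapshotting

peak_gib = dataset_gib * (1 + pages_rewritten)
print(peak_gib)  # 24.0 GiB peak for a 16 GiB dataset
```

Under a write-heavy workload the rewritten fraction approaches 1.0, which is the worst case where memory usage nearly doubles. Dragonfly's fork-free snapshot sidesteps this entirely.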
Latency
At low load, all three systems deliver sub-millisecond P99 latency. The differences emerge under load:
- Redis and Valkey show extremely consistent latency because the single-threaded model eliminates scheduling jitter. P99 latency under moderate load is typically 0.1-0.3 ms.
- Dragonfly can show slightly higher tail latency under certain conditions due to cross-shard coordination for multi-key operations. However, for single-key operations within one shard, latency is comparable.
For latency-sensitive applications where predictability matters more than raw throughput, Redis and Valkey have a slight edge. For throughput-heavy workloads, Dragonfly’s multi-threaded architecture wins decisively.
Feature Comparison
Data Structures and Modules
Redis 8.0 unified all previously separate modules into the core distribution. This gives Redis the broadest built-in feature set:
# Redis 8.x — JSON is a first-class data type
127.0.0.1:6379> JSON.SET user:1 $ '{"name":"Alice","scores":[95,87,92]}'
OK
127.0.0.1:6379> JSON.GET user:1 $.scores[0]
"[95]"
# Vector Sets (new in Redis 8.0)
127.0.0.1:6379> VADD embeddings VALUES 3 0.1 0.2 0.3 item1
(integer) 1
127.0.0.1:6379> VSIM embeddings VALUES 3 0.15 0.25 0.35 COUNT 5
1) "item1"
Valkey is working on adding comparable features through its extension ecosystem. The Valkey roadmap includes JSON support, Bloom filters, and vector similarity search, but as of early 2026, these are not yet fully integrated into the main distribution. Valkey 9.0 focused on clustering and performance rather than data structure additions.
Dragonfly supports a subset of Redis modules natively, including a built-in search engine compatible with the RediSearch API and probabilistic data structures (Bloom filters). JSON support and vector search capabilities are available but may lag behind Redis’s implementation in feature completeness.
| Feature | Redis 8.x | Valkey 9.x | Dragonfly 1.x |
|---|---|---|---|
| Strings, Lists, Sets, Hashes, Sorted Sets | Yes | Yes | Yes |
| Streams | Yes | Yes | Yes |
| JSON | Built-in | Extension (planned) | Partial |
| Time Series | Built-in | Not yet | Not yet |
| Vector Search | Built-in (Query Engine + Vector Sets) | Planned | Partial |
| Bloom / Cuckoo Filters | Built-in | Extension | Built-in |
| Full-Text Search | Built-in (Query Engine) | Not yet | Built-in (search engine) |
| Pub/Sub | Yes | Yes | Yes |
| Lua Scripting | Yes | Yes | Yes |
| Cluster Mode | Yes (up to 1,000 shards) | Yes (up to 2,000 nodes) | Single-node vertical scaling |
| Multi-database Cluster | No | Yes (Valkey 9.0) | N/A (single node) |
Client Compatibility
All three systems are compatible with existing Redis clients. You can use redis-py, ioredis, jedis, lettuce, or any other Redis client library to connect to Valkey or Dragonfly without code changes. The wire protocol is identical.
Valkey also offers its own client library, Valkey GLIDE (General Language Independent Driver for the Enterprise), written in Rust with bindings for Java, Python, Node.js, and Go. GLIDE is designed for reliability and optimized performance with built-in connection management and failover support.
# Connecting to any of the three stores with redis-py
import redis

# Works identically for Redis, Valkey, or Dragonfly
r = redis.Redis(host='localhost', port=6379, decode_responses=True)

# Basic operations — identical API
r.set('session:abc123', '{"user_id": 42, "role": "admin"}', ex=3600)
session = r.get('session:abc123')
print(session)

# Pipelines — identical API
pipe = r.pipeline()
for i in range(1000):
    pipe.set(f'key:{i}', f'value:{i}')
pipe.execute()

# Using Valkey GLIDE for Valkey-specific features (GLIDE's API is async,
# so calls must run inside an event loop)
import asyncio
from glide import GlideClient, GlideClientConfiguration, NodeAddress

async def main():
    config = GlideClientConfiguration(
        addresses=[NodeAddress(host="localhost", port=6379)]
    )
    client = await GlideClient.create(config)
    await client.set("key", "value")
    value = await client.get("key")
    print(value)

asyncio.run(main())
Clustering and Horizontal Scaling
Redis and Valkey both support hash-slot-based clustering. The keyspace is divided into 16,384 hash slots distributed across master nodes, with automatic failover to replicas.
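Clients can compute a key's slot themselves: the mapping is CRC16 (XMODEM variant) of the key modulo 16384, with `{...}` hash tags pinning related keys to the same slot so multi-key operations stay valid. A minimal Python sketch of that mapping:

```python
def crc16(data: bytes) -> int:
    # CRC16-CCITT (XMODEM): polynomial 0x1021, initial value 0,
    # as specified for Redis Cluster key-to-slot mapping.
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else crc << 1
            crc &= 0xFFFF
    return crc

def hash_slot(key: str) -> int:
    # Hash tag rule: if the key contains a non-empty "{...}" section,
    # only the content between the first "{" and the next "}" is hashed.
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384

print(hash_slot("foo"))  # 12182, the slot seen in cluster MOVED redirects
# Hash tags force related keys onto one node:
print(hash_slot("{user1000}.following") == hash_slot("{user1000}.followers"))  # True
```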
Valkey 9.0 made significant improvements to clustering:
- Atomic slot migration eliminates data loss edge cases
- Multi-database support in cluster mode
- Tested at 2,000-node scale with 1 billion RPS
Dragonfly takes a different approach: instead of clustering across multiple nodes, a single Dragonfly instance scales vertically by utilizing all available CPU cores. For many workloads, a single large Dragonfly instance (e.g., 48 or 64 vCPUs) can replace an entire Redis/Valkey cluster of 6-12 nodes. Dragonfly does support replication for high availability, but its clustering story is simpler — you scale up, not out.
This has operational implications. Managing a Redis/Valkey cluster involves slot rebalancing, node additions, failover configuration, and cross-node latency considerations. A single Dragonfly node eliminates most of this complexity at the cost of a vertical scaling ceiling.
Docker and Cloud Deployment
Docker Quick Start
# docker-compose.yml — all three side by side
version: "3.8"
services:
  redis:
    image: redis:8.4
    ports:
      - "6379:6379"
    command: redis-server --maxmemory 2gb --maxmemory-policy allkeys-lru
  valkey:
    image: valkey/valkey:9.0
    ports:
      - "6380:6379"
    command: valkey-server --maxmemory 2gb --maxmemory-policy allkeys-lru
  dragonfly:
    image: docker.dragonflydb.io/dragonflydb/dragonfly:latest
    ports:
      - "6381:6379"
    ulimits:
      memlock: -1
    command: dragonfly --maxmemory=2gb --cache_mode
Redis has over 10 billion Docker Hub pulls as of late 2025, making it one of the most pulled images on the platform. Valkey has surpassed 5 million Docker pulls in its first year. Dragonfly’s Docker image is hosted on its own registry (docker.dragonflydb.io) in addition to Docker Hub.
Cloud Managed Services
| Provider | Redis | Valkey | Dragonfly |
|---|---|---|---|
| AWS | ElastiCache, MemoryDB | ElastiCache (default since 2024) | Not managed |
| Google Cloud | Memorystore for Redis | Memorystore for Valkey | Not managed |
| Azure | Azure Cache for Redis | Not yet | Not managed |
| Aiven | Yes | Yes | Yes |
| Dragonfly Cloud | No | No | Yes (managed offering) |
AWS made a notable move in 2024 by switching ElastiCache’s default engine from Redis to Valkey. Google Cloud followed with Memorystore for Valkey. This cloud provider backing gives Valkey a significant deployment advantage for teams using managed services.
Dragonfly offers its own managed cloud service (Dragonfly Cloud) but is not available as a managed option from the major cloud providers. Self-hosted Dragonfly on Kubernetes is straightforward using the official dragonfly-operator.
Migration Considerations
Migrating from Redis to Valkey
Migration from Redis to Valkey is the simplest of all paths. Since Valkey is a direct fork, it supports the same commands, configuration directives, and data formats. For most deployments, migration involves:
- Replace the `redis-server` binary with `valkey-server`
- Rename `redis.conf` to `valkey.conf` (or keep the same file; Valkey accepts both)
- Update monitoring and alerting dashboards to use Valkey metric names
# Migration is essentially a drop-in replacement
# Stop Redis
systemctl stop redis
# Install Valkey
apt-get install valkey-server
# Copy config (Valkey reads redis.conf format)
cp /etc/redis/redis.conf /etc/valkey/valkey.conf
# Start Valkey with existing RDB/AOF data
systemctl start valkey
Note that Valkey 9.0 has diverged enough that some newer features (multi-database clustering, atomic slot migration) do not have Redis equivalents. If you use these features, reverse migration back to Redis becomes harder.
Migrating from Redis to Dragonfly
Dragonfly is API-compatible with Redis and accepts RDB snapshot files for data import. However, there are differences to be aware of:
- Dragonfly does not support Redis Cluster protocol (it scales vertically, not horizontally)
- Some Redis modules may not be fully supported
- Lua scripting is supported but some edge cases may differ
- Configuration flags use Dragonfly-specific names
# Import existing Redis RDB into Dragonfly
dragonfly --dbfilename dump.rdb --dir /data
# Or use live replication from Redis
dragonfly --replicaof redis-host:6379
Dragonfly’s --replicaof flag allows it to act as a replica of an existing Redis instance, which enables zero-downtime migration by syncing data in real time before cutting over.
Community and Ecosystem Health
GitHub Statistics (as of February 2026)
| Metric | Redis | Valkey | Dragonfly |
|---|---|---|---|
| GitHub Stars | ~73K | ~23K | ~26K |
| Contributors | 700+ | 150+ | 100+ |
| First Release | 2009 | 2024 | 2022 |
| Latest Stable | 8.4.1 (Feb 2026) | 9.0.x (Oct 2025) | 1.x (Feb 2026) |
| Release Cadence | Monthly patches | ~6 month majors | Frequent patches |
| Backing | Redis Ltd. | Linux Foundation | DragonflyDB Inc. |
| Contributing Companies | Redis Ltd. primarily | ~50 companies (AWS, Google, Oracle, etc.) | DragonflyDB Inc. primarily |
Redis has the largest community by far, with 15+ years of documentation, tutorials, Stack Overflow answers, and battle-tested production deployments. Any problem you encounter with Redis has likely been solved and documented.
Valkey has rapidly built community momentum. With 50 contributing companies and Linux Foundation governance, it has the strongest organizational backing for long-term sustainability. The project shipped 13 releases and accumulated over 1,000 commits in its first year.
Dragonfly is the youngest project and the most architecturally innovative. Its community is smaller but growing rapidly. The project’s main risk is its dependency on DragonflyDB Inc. — if the company struggles, the BSL license means the community cannot easily fork and maintain the project until the change date.
When to Choose Redis
Choose Redis 8.x when:
- You need vector search, JSON, time series, and probabilistic data structures built into a single distribution — Redis 8.x has the richest feature set
- You rely on Redis Modules from the ecosystem that have not been ported to Valkey or Dragonfly
- You use Azure Cache for Redis, which does not yet offer Valkey
- Your team has deep Redis operational expertise and you want to stay on the canonical implementation
- You are comfortable with AGPLv3 copyleft obligations (or use Redis internally only)
- You need the Redis Query Engine for secondary indexing and full-text search on your cached data
Be cautious about Redis when:
- You are building a product that embeds Redis and selling it to customers (AGPL requires source disclosure; RSALv2 restricts competitive use)
- You are concerned about future licensing changes — Redis Ltd. has changed licenses twice in two years
- You need maximum single-node throughput and are willing to change stores to get it
When to Choose Valkey
Choose Valkey 9.x when:
- You need a permissively licensed (BSD 3-clause) in-memory store with no restrictions on commercial use or managed service offerings
- You are deploying on AWS ElastiCache or Google Cloud Memorystore — Valkey is now the default engine
- You need multi-database clustering or atomic slot migration (Valkey 9.0 exclusive features)
- You want to bet on the project with the broadest industry backing (Linux Foundation, AWS, Google, Oracle)
- You are migrating from Redis and want a drop-in replacement with minimal risk
- You need to scale horizontally to 2,000+ cluster nodes with billion-RPS throughput
Be cautious about Valkey when:
- You need built-in JSON, time series, or vector search today — Valkey’s module ecosystem is still maturing
- You need the absolute highest single-node throughput — Valkey’s main-thread bottleneck remains
When to Choose Dragonfly
Choose Dragonfly when:
- You need maximum single-node throughput and want to avoid the complexity of clustering
- Your workload is CPU-intensive (sorted sets, complex operations) where Dragonfly’s multi-threaded architecture provides 10-29x improvement
- You want to reduce infrastructure costs by replacing a multi-node Redis/Valkey cluster with a single large Dragonfly instance (up to 80% cost reduction claimed)
- Memory efficiency is critical — Dragonfly’s 38% memory reduction can save significant costs at scale
- You want predictable memory usage during snapshots — no fork-based memory spikes
- You are self-hosting and do not plan to offer the store as a managed service (BSL allows this)
Be cautious about Dragonfly when:
- You need Redis Cluster protocol compatibility — Dragonfly does not support it
- You need a managed service from AWS, Google Cloud, or Azure (only available on Dragonfly Cloud or Aiven)
- You need guaranteed long-term open-source availability — BSL is not open source
- You need feature parity with Redis 8.x modules (some are only partially implemented)
- You are concerned about vendor lock-in to a single company with no foundation governance
Conclusion and Recommendations
The in-memory data store market in 2026 offers genuine choice for the first time. Here is our summary recommendation:
For most new projects in 2026, Valkey is the safest default choice. It offers permissive licensing, strong community backing from the Linux Foundation and major cloud providers, near-parity with Redis performance (and better in many benchmarks), and a drop-in migration path. The fact that AWS and Google Cloud have both adopted Valkey as their default managed engine speaks volumes about industry confidence.
If you need advanced data structures and AI-ready features today, Redis 8.x under AGPLv3 is the most feature-complete option. The unified distribution with built-in JSON, vector search, time series, and probabilistic data structures is unmatched. Just be clear-eyed about the licensing implications.
If you are running high-throughput, self-hosted workloads where single-node performance matters, Dragonfly is the performance leader by a wide margin. The multi-threaded architecture eliminates the fundamental scaling bottleneck of Redis and Valkey. The trade-off is a less mature ecosystem and source-available (not open-source) licensing.
For teams currently on Redis who are uncertain about the licensing changes, migrating to Valkey is low-risk and can be done with minimal effort. For teams hitting the performance ceiling of single-threaded Redis/Valkey, Dragonfly is worth serious evaluation.
Sources
- Redis 8.0 GA Announcement — https://redis.io/blog/redis-8-ga/
- Valkey 9.0 Release Blog — https://valkey.io/blog/introducing-valkey-9/
- Dragonfly vs Valkey Benchmark on GCP — https://www.dragonflydb.io/blog/dragonfly-vs-valkey-benchmark-on-google-cloud
- Redis Licensing: AGPLv3 Announcement — https://redis.io/blog/agplv3/
- A Year of Valkey (Linux Foundation) — https://www.linuxfoundation.org/blog/a-year-of-valkey
- Dragonfly License Documentation — https://www.dragonflydb.io/docs/about/license
- Redis 8.0 vs Valkey 8.1 Technical Comparison — https://www.dragonflydb.io/blog/redis-8-0-vs-valkey-8-1-a-technical-comparison
- Valkey 9.0 Multi-Database Clustering (InfoQ) — https://www.infoq.com/news/2025/11/valkey-9-atomic-migration/
- Valkey vs Redis Comparison (Better Stack) — https://betterstack.com/community/comparisons/redis-vs-valkey/
- centminmod Redis Comparison Benchmarks — https://github.com/centminmod/redis-comparison-benchmarks