Find and clean up keys without expiration in Redis

Redis Nov 22, 2017

If, for some reason, your Redis memory keeps growing and you would like to know why, the first step should be to check how many keys you have without an expiration.

You can easily check it by running the info keyspace command:

# Keyspace
db0:keys=3277631,expires=447528,avg_ttl=238402708

Here, keys is the total number of keys and expires is the number of keys with an expiration. In this example, roughly 13% of our keys will expire one day. avg_ttl is the average remaining time to live of those keys, in milliseconds: 238402708 ms is about 66 hours, so a bit under 3 days.

Good: a lot of our keys don't have an expiration, so that should be our problem!

Now, finding out which keys expire and which don't is not a simple job, but it's really useful information when you want to fix the code that creates them.

Just a little reminder:

Note: Never run keys in production! It's O(N) and blocks the server while it walks the whole keyspace.
Yes! Really, never do that, please.
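
If you ever need to enumerate keys on a live server, the non-blocking alternative is the scan command, which redis-cli exposes directly (a quick sketch; the pattern is up to you):

# Iterates with SCAN under the hood, so it doesn't block the server
redis-cli --scan --pattern '*'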

What's the simplest way to find those keys without listing all the keys on the server? If your Redis server is configured for RDB / AOF persistence, just use the dump, and that's what we will do.

We will use an RDB dump in our example, but the commands are the same with an AOF log.
For more information about Redis persistence, the Redis documentation explains the two options with their pros and cons.

If you don't have persistence enabled, you can take a one-time dump by running BGSAVE (doc).
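
Something like this (a quick sketch; bgsave forks and saves in the background, and lastsave lets you check when it actually finished):

redis-cli bgsave
# Returns the UNIX timestamp of the last successful save;
# it changes once the background save is done
redis-cli lastsave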

The RDB/AOF dump should be in /var/lib/redis/. Copy it temporarily to another place, because Redis could rewrite it during the transfer: cp /var/lib/redis/dump.rdb /tmp/dump.rdb

Change the ownership of the file to allow your local user to access it: chown jmaitrehenry /tmp/dump.rdb

And download it to your local computer: scp myredis:/tmp/dump.rdb ./

Note: If, like me, you have a gateway server between Redis and the internet, I assume you know how to get your dump down to your local computer.

Good, we can now work on our dump!

I prefer to load the dump into a new Redis server, and for that, why not use Docker?

docker run --name redis_dump -d -v `pwd`:/data -p 6379:6379 redis:3.2
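
Before going further, you can check that the dump was actually loaded by comparing the key count with what info keyspace reported in production (this assumes your file is named dump.rdb, which the redis image loads from /data on startup, and that the container port is published on localhost:6379 as above):

# Should report roughly the same number of keys as production
redis-cli dbsize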

Note: If you don't have a local Redis client, or don't want to use one, you can open a shell inside the previous Docker container: docker exec -ti redis_dump bash

OK, let's extract all the keys without an expiration:

# Safe here: this is a throwaway local copy, not production
redis-cli keys "*" > keys
# Get the TTL of each key; ttl returns -1 when a key has no expiration
cat keys | xargs -L 1 redis-cli ttl > ttl
# Keep only the keys whose TTL is -1
paste -d " " keys ttl | grep -- "-1$" | cut -d " " -f 1 > without_ttl

# We can create a script for deleting the keys
cat without_ttl | awk '{print "redis-cli del "$1}' > redis.sh

You can now review the keys without a TTL, then upload the cleanup script to your Redis server and run it with sh redis.sh.
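
If some of those keys are still needed and should simply expire later instead of disappearing right away, the same trick works with expire instead of del (a sketch; 259200 seconds, i.e. 3 days, is an arbitrary choice):

# Generate a script that sets a 3-day TTL instead of deleting
cat without_ttl | awk '{print "redis-cli expire "$1" 259200"}' > redis_expire.sh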

In our case, we found the problem in some Ruby code that did not set the expiration:

$redis.with do |redis|
  # Bug: the redis-rb option is ex:, not ttl:, so ttl: is silently
  # ignored and the key is stored without any expiration
  redis.set redis_key, info, ttl: 3.days
end

The right option is ex, not ttl, but we missed it in our code review process, and I think a lot of teams could miss it too.
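
For reference, here is what the corrected call could look like (a sketch; redis_key and info are whatever your application stores, and ex: takes a number of seconds):

$redis.with do |redis|
  # ex: maps to SET key value EX seconds
  redis.set redis_key, info, ex: 3.days.to_i
end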

Bonus - How to find the largest keys

In some cases, it's normal not to have an expiration on some keys, and we can skip them in the first cleanup. Redis has a way to scan the keyspace and find the biggest key of each type: redis-cli --bigkeys (doc)

$ redis-cli --bigkeys

# Scanning the entire keyspace to find biggest keys as well as
# average sizes per key type.  You can use -i 0.1 to sleep 0.1 sec
# per 100 SCAN commands (not usually needed).

[00.00%] Biggest string found so far 'production_session:xxx' with 437 bytes
[00.00%] Biggest string found so far 'production_session:yyy' with 440 bytes
[00.00%] Biggest string found so far 'production_session:zzz' with 477 bytes
[...]
-------- summary -------

Sampled 362668 keys in the keyspace!
Total key length in bytes is 25195300 (avg len 69.47)

Biggest string found 'production:a/b/c' has 212156 bytes
Biggest   list found 'sidekiq-logs' has 18127306 items
Biggest    set found 'production:xyz' has 140 members
Biggest   hash found 'production:abc' has 140 fields
Biggest   zset found 'production:dead' has 8484 members

361073 strings with 253642736 bytes (99.56% of keys, avg size 702.47)
9 lists with 18127422 items (00.00% of keys, avg size 2014158.00)
507 sets with 18067 members (00.14% of keys, avg size 35.64)
1044 hashs with 20986 fields (00.29% of keys, avg size 20.10)
35 zsets with 9951 members (00.01% of keys, avg size 284.31)

That's how we found the real problem: the sidekiq-logs list just grew and grew.
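
As a stopgap, once you spot a runaway list like that, you can cap it while you fix the producer (a sketch; keeping the last 1000 entries is an arbitrary choice):

# Keep only the 1000 most recent entries of the list
redis-cli ltrim sidekiq-logs -1000 -1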

Thanks for reading!

If you find a typo, or run into a problem when trying what you read in this article, please contact me!

Julien Maitrehenry

I specialize in DevOps, Agile practices, and web development. I love sharing my knowledge to help other people get to the next level!