3 tricks of Python's `lru_cache`



The decorator attaches a tiny helper method, `cache_info()`, to the wrapped function. It reports the cache's hit/miss statistics at any point, which makes it handy for troubleshooting and debugging a cached function.

# get_pep is the PEP-fetching example from the functools docs,
# decorated with @lru_cache(maxsize=32)
for n in 8, 290, 308, 320, 8, 218, 320, 279, 289, 320, 9991:
    pep = get_pep(n)
    print(n, len(pep))

print(get_pep.cache_info())
# CacheInfo(hits=3, misses=8, maxsize=32, currsize=8)
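The same hit/miss accounting works on any cached function; here is a self-contained sketch with a hypothetical `square` that needs no network:

```python
from functools import lru_cache

@lru_cache(maxsize=32)
def square(n):
    return n * n

# 1 and 2 are computed, then served from cache; 3 is computed once
for n in (1, 2, 1, 3, 2):
    square(n)

print(square.cache_info())
# → CacheInfo(hits=2, misses=3, maxsize=32, currsize=3)
```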


I remember writing a caching bug a few years back: I used the `lru_cache` decorator on some of the larger functions deployed to AWS Lambda. The problem was that the cache needed to be invalidated over time, and Python's `lru_cache` has no built-in argument for that. A simple workaround, however, is to pass a time-based value as an extra argument to the decorated function.

[Image: lru cache with TTL]
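A minimal sketch of the trick (the function body here is a stand-in for the real PEP fetch, not the original code):

```python
from functools import lru_cache

@lru_cache(maxsize=32)
def get_pep(num, ttl_hash=None):
    # ttl_hash is never used inside the function; it only widens the
    # cache key, so a new time bucket forces a fresh computation
    del ttl_hash
    return f"PEP {num} contents"  # stand-in for the real fetch
```

Because the extra argument changes once per time bucket, entries from the previous bucket simply stop being hit and eventually fall out of the LRU queue.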

During the call we then pass the time-based parameter:

pep = get_pep(320, time.time() // 3600)  # new bucket every hour: cached for an hour
# use time.time() // (24 * 3600) to cache for a day

This is an implicit TTL implementation without any external library. The trick is quite clever, and I always find it satisfying how simple it is. It is easy to understand, and I really hope something like it becomes standard in Python.

`maxsize=None`

The default `maxsize=128` is far from universal. With `maxsize=None` you can put an unlimited number of key-value pairs in the cache. RAM is cheap nowadays, and Kubernetes can restart your container on OOM. So `maxsize=None` should clearly be the new default for `lru_cache` (sarcasm).
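Incidentally, since Python 3.9 the standard library ships `functools.cache`, which is exactly `lru_cache(maxsize=None)` under the hood. A quick sketch of an unbounded cache, using recursive Fibonacci as the toy workload:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # unbounded: nothing is ever evicted
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(60))  # → 1548008755920
```

Without the cache this call would take exponential time; with it, each `fib(n)` is computed exactly once.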

Happy hacking!

Written by

Eventually consistent and eventually practical system engineer
