Sola-Aremu 'Pelumi

Elixir Distributed KV Store - Extending Groot

 
Background
At MonitorLizard, we run a cluster of machines connected via an internal IPv6 network. We recently needed a distributed key-value store: running multiple instances of our application means we can no longer rely on state kept in a single GenServer. One approach we could have explored is routing each request to the machine we believe holds the relevant state. However, that runs into the classic distributed-systems problem: if that machine goes offline, the state is lost and there is no recovery mechanism.
MonitorLizard is built on the battle-tested Elixir/Erlang stack. While a solution like Redis would typically be used here, we believe we would not be taking advantage of the fault-tolerant distribution that Erlang and OTP already provide at this scale. Deploying Redis also comes with the cost of provisioning and maintaining it.
 
Our Requirements
1. A distributed key-value store
2. No additional overhead at the application level for clustering and updating state
3. Provides a `delete/1` API
4. Supports a TTL for inserted records
 
Our Analysis
In Elixir/Erlang, the three most popular drop-in libraries for achieving this natively are Ram, Groot and, more recently, Khepri. Our analysis found the following:
| Library | Pros | Cons |
| --- | --- | --- |
| Khepri | Achieves clustering natively | Overkill for our immediate requirements; does not support TTL |
| Ram | Simple API; supports `delete/1` | Requires overhead to set up clustering; provides no immediate strategy for node failure; does not support TTL |
| Groot | Natively manages connected nodes; simple API | Supports only `set/2` and `get/1`; does not support TTL |
 
Our Choice
Ram was our initial inclination, but with Ram we would have run into the exact reasons the RabbitMQ team built Khepri: opacity in bringing machines online and in recovering when the main node is lost.
Groot, while simple, does not support our third and fourth requirements. However, it presents the easiest short- to medium-term choice with minimal engineering overhead: we only have to extend it to support those two requirements.
 
How we did it
Groot uses a GenServer backed by ETS to manage state and distributes updates to that state across all connected nodes. To implement `delete/1`, we added an additional callback to the GenServer and propagated the operation to the corresponding GenServers on all connected nodes using `GenServer.abcast/2`:
 
```elixir
# Originating GenServer: remove the key locally, then broadcast the
# deletion to the storage GenServers on all connected nodes.
def handle_call({:delete, key}, _from, data) do
  registers = Map.delete(data.registers, key)
  :ets.delete(data.table, key)
  GenServer.abcast(__MODULE__, {:propagate_delete, key})
  {:reply, :ok, %{data | registers: registers}}
end

# Connected GenServers: apply the broadcast deletion to local state.
def handle_cast({:propagate_delete, key}, data) do
  registers = Map.delete(data.registers, key)
  :ets.delete(data.table, key)
  {:noreply, %{data | registers: registers}}
end
```
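For completeness, the storage callback is exposed through a thin public wrapper. A minimal sketch, assuming the GenServer above is `Groot.Storage` and is registered under its module name (as in upstream Groot), with the function name chosen to mirror the existing `set/2` and `get/1`:

```elixir
defmodule Groot do
  # ...existing set/2 and get/1 omitted...

  # Removes a key locally and broadcasts the deletion to connected nodes.
  # Assumes the storage GenServer is name-registered as Groot.Storage.
  def delete(key) do
    GenServer.call(Groot.Storage, {:delete, key})
  end
end
```

With this in place, `Groot.delete(:some_key)` on any node removes the record locally and broadcasts the deletion to every connected node.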
 
Implementing the TTL for inserted records required scheduling a delayed message to the GenServer process that handled the insertion, using `Process.send_after/3`. This builds on the newly implemented `delete/1`, ensuring the deletion is propagated when the timer fires:
```elixir
# Schedule removal of the key once the TTL elapses; the delayed message is
# handled by this GenServer and propagated via the delete path above.
case expires_in do
  nil -> :ok
  _ -> Process.send_after(self(), {:delete, key}, expires_in)
end
```
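Because `Process.send_after/3` delivers a plain message rather than a call, the expiry arrives in `handle_info/2`. A sketch of the clause that pairs with the code above, reusing the same removal-and-broadcast logic as `delete/1` (the exact shape in our fork may differ):

```elixir
# Expiry messages scheduled with Process.send_after/3 land here.
# Remove the key locally and broadcast the deletion, mirroring delete/1.
def handle_info({:delete, key}, data) do
  registers = Map.delete(data.registers, key)
  :ets.delete(data.table, key)
  GenServer.abcast(__MODULE__, {:propagate_delete, key})
  {:noreply, %{data | registers: registers}}
end
```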
 
Drawbacks
Groot's documentation notes that it is not a good option for managing a large number of records. Its prioritization of consistency over availability, while a possible concern for others, fits our requirements well.
 
There are two concerns with the implementation of the TTL expiry (a possible mitigation for the second is sketched after this list):
  1. The GenServer on the machine where the initial insertion occurred manages removal on expiry. The record will not be removed if that machine goes down or the GenServer crashes.
  2. This implementation of TTL could cause a wrongful deletion when a record is updated with a newer TTL before the original timer fires.
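One possible mitigation for the second concern, which we have not implemented, is to keep the timer reference for each key and cancel it whenever the key is written again, so a stale timer cannot delete a fresher record. A rough sketch, assuming an extra `timers` map (key to timer reference) is added to the storage state:

```elixir
# Illustrative helper: track the expiry timer per key and cancel the old one
# on every write, so only the latest TTL can trigger a deletion.
# Assumes the storage state carries a `timers` map, which stock Groot does not.
defp schedule_expiry(data, key, expires_in) do
  # Cancel the timer from any previous write of this key.
  case Map.get(data.timers, key) do
    nil -> :ok
    ref -> Process.cancel_timer(ref)
  end

  case expires_in do
    nil ->
      %{data | timers: Map.delete(data.timers, key)}

    ms ->
      ref = Process.send_after(self(), {:delete, key}, ms)
      %{data | timers: Map.put(data.timers, key, ref)}
  end
end
```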
 
TLDR: We required a native distributed key-value store supporting a `delete/1` API and a TTL for records. We extended Groot to achieve this.
 
Credits: Ridwan reviewed this write-up and served as a sounding board during the implementation.
Published: Oct 26, 2024
 