Event sourcing at Orus
Event sourcing is rarely used as the foundation of an entire system, but at Orus we had the chance to start from scratch and think things through before writing the first line of code.
Insurance is complicated, and we need to know exactly what happened at any given time.
When a claim is filed, a contract renewed, or a coverage added, the "current state" of a database row isn't enough. We need to answer: Who changed this? When? Why? And what did the contract look like exactly before that change?
In a standard CRUD application, you might add an audit log table. But audit logs rot. They get out of sync with the actual data. In event sourcing, the audit log is the data.
We decided early on to build our backend (TypeScript/Node.js) on a 100% event sourced architecture. It wasn’t the easiest path, but it was the right one for our domain.
In this article, we’ll share how we structure our stores, how we handle the "read" side of the equation, and the lessons we learned along the way.
The mental model: stores and views
If you are new to event sourcing, the simplest analogy is a bank account.
- The store is the transaction log (Deposit $100, Withdraw $20). It is immutable. You never "update" a transaction; you only append new ones.
- The view is your current balance ($80). It is calculated by replaying the transactions.
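To make the analogy concrete, here is the whole idea in a few lines of TypeScript (a toy example, not our actual code):

// The store: an immutable, append-only log of transactions.
type Transaction = { type: "deposit" | "withdrawal"; amount: number };

const log: Transaction[] = [
  { type: "deposit", amount: 100 },
  { type: "withdrawal", amount: 20 },
];

// The view: the balance, recomputed by replaying the log.
const balance = log.reduce(
  (total, tx) => (tx.type === "deposit" ? total + tx.amount : total - tx.amount),
  0,
); // 80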
At Orus, we strictly separate these two concepts.
1. The store (write side)
Our "source of truth" is a set of MongoDB collections acting as append-only event logs. We split them by domain aggregate (e.g., event.subscription, event.collected_file).
Writing to the store is simple. We append an event that captures the intent of the user, not just the data change.
// We capture the "what happened", not just the result
await subscriptionStore.append(
  {
    type: "protection_schedule_updated",
    payload: {
      termination: {
        terminationDate: new Date(1735686000000),
        reason: "customer_request",
      },
    },
  },
  idempotencyKey,
);
We type our stores with two generics: Store<SupportedEvent, CurrentEvent>.
- Reads return SupportedEvent (a union of V1 | V2 | ...), forcing our reducers to handle every historical version of an event.
- Writes only accept CurrentEvent (strictly V2).
This simple TypeScript trick ensures backward compatibility while making it a compilation error to accidentally append a legacy event structure to the log.
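Here is a minimal sketch of what this looks like (the names are illustrative; our real Store interface carries more machinery):

// Two versions of the same event: V1 is legacy, V2 wraps data in a payload.
type NoteUpdatedV1 = { type: "note_updated"; content: string };
type NoteUpdatedV2 = { type: "note_updated"; payload: { content: string } };

interface Store<SupportedEvent, CurrentEvent extends SupportedEvent> {
  // Reads surface every historical shape: reducers must handle V1 and V2.
  getEvents(aggregateId: string): Promise<SupportedEvent[]>;
  // Writes only accept the current shape: appending a V1 no longer compiles.
  append(event: CurrentEvent, idempotencyKey: string): Promise<void>;
}

type NoteStore = Store<NoteUpdatedV1 | NoteUpdatedV2, NoteUpdatedV2>;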
The stores API also adds a few standard fields to every event:
- idempotencyKey: uniquely identifies the reason for the event, so that we don't create duplicates
- actor: who made the change (person, background job, etc.)
- appendDate: when the event was appended to the store
- etc...
2. The view (read side)
We have two ways to read data, depending on the need for consistency vs. performance.
Option A: real-time (in-memory stores)
When we need absolute consistency (e.g., a user just clicked "Save" and expects to see the result), we build the state on demand.
We load all events for a specific ID and pass them through a reducer, a pure function that takes the current state and an event, and returns the new state.
// Simplified reducer logic
function reduce(state: State, event: Event): State {
  switch (event.type) {
    case "subscribed":
      return { ...state, status: "active", startDate: event.startDate };
    case "protection_schedule_updated":
      return { ...state, protectionSchedule: normalize(event.payload) };
    // etc...
    default:
      return state; // Unknown events leave the state untouched
  }
}
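Rebuilding the current state on demand is then just a fold over the aggregate's history (a sketch: getEvents and initialState stand in for our actual store API):

// Load the full history for one subscription and replay it.
const events = await subscriptionStore.getEvents(subscriptionId);
const state = events.reduce(reduce, initialState);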
Option B: eventual consistency (persisted views)
We can't replay events every time we need to list 50 subscriptions. That would be a performance killer.
For these cases, we use persisted views. Background workers listen to the event stores. When a new event arrives, they compute the new state and upsert it into a standard MongoDB "read" collection. This allows us to run fast, complex queries (like aggregations or text search) on subscriptions_persisted_view.
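A rough sketch of such a worker, assuming a hypothetical onNewEvent subscription hook on the store and the standard MongoDB driver:

import { MongoClient } from "mongodb";

const client = await MongoClient.connect(process.env.MONGO_URL!);
const views = client.db().collection("subscriptions_persisted_view");

subscriptionStore.onNewEvent(async (subscriptionId: string) => {
  // Recompute the aggregate's state, then upsert it into the read collection
  // so that list and search queries never have to replay the log.
  const events = await subscriptionStore.getEvents(subscriptionId);
  const state = events.reduce(reduce, initialState);
  await views.replaceOne({ _id: subscriptionId }, state, { upsert: true });
});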
Want to work with event sourcing? We're based in Paris and we're hiring Software Engineers! Check out our 🌟 job offer!
The "gotchas": 3 mistakes we made
Implementing event sourcing is a journey of discovering things you wish you had known six months ago. Here are three traps we fell into so you don't have to.
1. The "spread payload" trap (V1 vs V2)
In our early days, we wanted our event documents to look "clean," so we spread the business data at the top level alongside the metadata.
The naive approach (V1 - deprecated):
type NoteUpdated = {
  type: "updated";
  // Metadata and business logic mixed together 😱
  timestamp: Date;
  userId: string;
  content: string; // Business field
  idempotencyKey: string;
  actor: Actor;
};
This works until you want to add a metadata field whose name is already taken by a business field on some obscure event.
The Orus Way (V2 - Current):
We now strictly wrap business logic in a payload property.
type NoteUpdated = {
  type: "updated";
  // Clean separation
  timestamp: Date;
  payload: {
    userId: string;
    content: string;
  };
};
This seems minor, but it allows us to create generic logic and indexes without worrying about field collisions.
2. Idempotency is not optional
In a distributed system, retries are inevitable. If a worker crashes after charging a customer but before acknowledging the task, it will restart and try to charge them again.
To prevent this, every append operation at Orus requires a deterministic idempotency key.
// BAD: Random ID
const key = uuid();
// GOOD: Derived from the business intent
const key = `subscription_renewal_${year}_${contractId}`;
If we try to append an event with an existing key:
- We check if the payload is identical.
- If yes, we ignore it (silent success).
- If no, we throw an IdempotencyKeyViolationError.
This guarantees that no matter how many times a job retries, the side effect only happens once.
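Sketched out, the check looks something like this (assuming a unique index on idempotencyKey, which also catches the race between the read and the insert):

import { isDeepStrictEqual } from "node:util";
import type { Collection } from "mongodb";

class IdempotencyKeyViolationError extends Error {}

async function appendOnce(
  events: Collection,
  event: { type: string; payload: unknown },
  idempotencyKey: string,
): Promise<void> {
  const existing = await events.findOne({ idempotencyKey });
  if (existing) {
    // Same key, same payload: a retry. Swallow it (silent success).
    if (isDeepStrictEqual(existing.payload, event.payload)) return;
    // Same key, different payload: a bug somewhere upstream. Fail loudly.
    throw new IdempotencyKeyViolationError(idempotencyKey);
  }
  await events.insertOne({ ...event, idempotencyKey, appendDate: new Date() });
}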
3. How to deal with corruption
In the classic approach (when you just store the state), if you write bad data, you just run an SQL UPDATE script to fix it. In event sourcing, history is immutable. You cannot change the past.
But what if the past is wrong? What if a bug generated a corrupted event that now breaks your reducers?
If we stayed true to the event sourcing logic, we should keep these events as they are and update the reducers to handle them properly. But sometimes that would introduce a lot of accidental complexity.
So here's how we deal with this situation:
- We run a migration to "fix" the event data.
- We append an event of type history_altered. The workers responsible for computing the persisted views can react to this new event and recompute the state.
await subscriptionStore.append({
  type: "history_altered",
  payload: {
    reason: "https://issue-tracker.com/issues/123",
  },
});
We treat this as a last-resort option, used only in extreme cases. It feels "dirty" because it breaks the purity of the reducer logic, but pragmatism beats purity when the simplicity of the codebase is at stake.
Conclusion
Event sourcing at Orus hasn't just been an architectural choice; it's a mindset. It forces us to think about behavior (events) rather than just structure (tables).
It comes with overhead: you have to write reducers, manage eventual consistency, and handle versioning carefully.
But the payoff is a system where we can debug complex insurance workflows by simply replaying history, and where we sleep soundly knowing we have a perfect audit trail of everything that ever happened. As a result, aside from a few purely frontend exceptions, there has never been a bug we couldn't reproduce: by design, the detailed history of everything is always available.
This architecture has allowed us to build a system that is both flexible and auditable, and it also lets us react to events in real time. We will dive into this in a future blog post, where we will introduce reactors, which are, by the way, at the core of the implementation of the persisted views.
