Event-driven architecture sounds simple until you try to implement it in production. Emit an event, handle it somewhere else, decouple your modules. Easy enough on a whiteboard. But the moment real money is involved (capital calls that must be processed, investor notifications that cannot be lost, regulatory filings that must happen exactly once), the details matter.

At Infra One, we process financial transactions for venture capital funds. When a capital call is issued, a cascade of operations must follow: investor notifications, account updates, document generation, compliance checks. These operations must survive application restarts, handle failures gracefully, and maintain a complete audit trail. We needed event handling that was persistent, transactional, and retryable.

We evaluated every serious option in the Elixir ecosystem. None of them fit. So we built ObanEvents — a lightweight, persistent event handling library built on top of Oban. This article explains why, and how it compares to the alternatives.

The options for event handling in Elixir

Maciej Kaszubowski wrote an excellent survey of event handling approaches in Elixir [1], identifying six patterns ranging from synchronous function calls to full CQRS with external message queues. The core tension is the same: you are trading off between simplicity, consistency, and resilience. You can have simple in-process events that are easy to reason about but lose messages on crashes. You can have persistent events that survive failures but add complexity. You can have full event sourcing that gives you a complete history but changes how you build your application.

Here are the major options and where each falls on that spectrum.

Option 1: Process-based PubSub

What it is. Using Elixir's built-in Registry or Phoenix PubSub to broadcast messages to subscriber processes. This is the most "Elixir-native" approach — it leans into the BEAM's lightweight processes and message passing.

The problem. In-memory PubSub provides at-most-once delivery semantics. If a subscriber process is not running when the event is emitted (because of a deployment, a crash, or a restart), the event is gone. There is no retry mechanism, no persistence, and no audit trail. As Kaszubowski notes, this approach "should only be used if losing some messages is OK" [1].

For fund operations, losing messages is never OK. A missed capital call notification is not a minor inconvenience — it is a regulatory problem.

Option 2: EventBus

What it is. EventBus [2] is an ETS-based event bus for Elixir with built-in event store and observation tracking. It has been around since 2017, has over 600,000 all-time downloads on Hex.pm [3], and provides a clean publish-subscribe API with topic-based routing and regex matching for subscribers.

What it does well. EventBus is fast — it uses concurrent ETS reads and writes, and its implementation has near O(1) complexity for most operations. It includes an observation manager that tracks which subscribers have processed each event, and it supports traceability via optional metadata fields compatible with OpenTracing.

The problem. EventBus stores events in ETS, which means they live in memory. When your application restarts, the event store is gone. The library has not been updated since January 2022 [3]. More critically, because events are not persisted to the database, you cannot emit events inside an Ecto transaction and have them atomically committed with your data changes. For financial applications where the consistency between your data state and your event state must be guaranteed, this does not work.

Option 3: Commanded

What it is. Commanded [4] is the most mature CQRS/ES (Command Query Responsibility Segregation / Event Sourcing) framework in the Elixir ecosystem. With over 1.4 million all-time Hex.pm downloads [5] and active maintenance, it provides the full event sourcing stack: command dispatch, aggregate hosting, event handling, process managers, and read model projections. It supports multiple event store backends including a PostgreSQL-based EventStore and Greg Young's EventStoreDB.

What it does well. If you want to build a system where events are your source of truth (the current state of every entity is derived by replaying its event history), Commanded is the right choice. It has been battle-tested in production by multiple companies [4], has excellent documentation, and the CQRS pattern it implements naturally provides a complete audit trail.

The problem. Commanded is a framework, not a library. Adopting it means adopting an architectural pattern that changes how your application works. Your entire write model goes through command dispatch and aggregate event sourcing. For teams that already have a working Ecto-based application (which describes most Phoenix projects), retrofitting Commanded is a major undertaking.

It also introduces eventual consistency by design. In a CQRS system, the read model is a projection built asynchronously from events. This means there is always some delay between a command being processed and the read model reflecting the change. For many applications this is fine. For a fund administration platform where an operations team member needs to see the result of a capital call computation immediately, it creates UX challenges.

We considered Commanded seriously. It solves a harder problem than the one we have. We do not need our events to be the source of truth. Our Ecto models and PostgreSQL database are the source of truth. We need events as a reliable, persistent mechanism for triggering side effects after our source of truth has been updated.

Option 4: Rolling your own with Oban

What it is. Oban [6] is the de facto standard for background job processing in Elixir, with over 22 million all-time downloads on Hex.pm [7]. Built by Parker Selbert, it uses PostgreSQL (and now SQLite3 and MySQL) as its job queue, providing persistence, retries, scheduling, uniqueness constraints, and observability via telemetry and the Oban Web UI.

Because Oban jobs are inserted into a PostgreSQL table, they participate in Ecto transactions. This means you can insert a job inside a Repo.transaction/1 block and know that if the transaction rolls back, the job is never created. This is exactly the transactional outbox pattern [8] — one of the most reliable approaches to ensuring consistency between your data state and your side effects.
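In code, the outbox property falls out of ordinary Ecto usage. This is an illustrative sketch with hypothetical module and variable names (`MyApp.Repo`, `NotifyInvestorsWorker`, `capital_call_changeset`); the `Worker.new/1` and `Oban.insert!/1` calls are standard Oban API.

```elixir
# Sketch: a job enqueued in the same transaction as the data it depends on.
# Module names are illustrative, not from a real codebase.
MyApp.Repo.transaction(fn ->
  call = MyApp.Repo.insert!(capital_call_changeset)

  # The job row rolls back with the transaction and commits with it —
  # the side effect can never outrun (or outlive) the data change.
  Oban.insert!(NotifyInvestorsWorker.new(%{capital_call_id: call.id}))

  call
end)
```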

The approach. Many Elixir teams (including ours, initially) implement event handling by creating dedicated Oban workers for each event type. You define a worker module, insert a job with the event data inside your transaction, and Oban handles execution, retries, and monitoring.
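A typical hand-rolled worker for a single event type looks like this (hypothetical names; the `use Oban.Worker` options and `perform/1` callback are standard Oban API):

```elixir
# One dedicated worker per event type — the pattern described above.
defmodule MyApp.Workers.CapitalCallNotifier do
  use Oban.Worker, queue: :events, max_attempts: 5

  @impl Oban.Worker
  def perform(%Oban.Job{args: %{"capital_call_id" => id}}) do
    # ...load the capital call, send investor notifications...
    :ok
  end
end
```

Multiply this by every event type and every handler, and the boilerplate problem below follows.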

The problem. It works, but it does not scale well. As your system grows, you end up with dozens of worker modules, each with boilerplate for job argument parsing and error handling. The mapping between "business event" and "handlers" is scattered across your codebase. There is no central registry of events, no compile-time validation that your events and handlers are correctly wired, and no standard way to inspect which handlers will fire for a given event.

We found ourselves copy-pasting worker modules and maintaining a mental model of which events triggered which side effects. In a financial system where correctness matters, that implicit wiring was a liability.

What we built: ObanEvents

ObanEvents [9] takes the approach of option 4 — Oban as the persistence and execution layer — and adds the developer experience of a proper event system on top of it. The core idea is simple: define your events and their handlers in one place, and let the library handle the wiring.

You define an events module with a declarative map of event names to handler modules. Each handler implements a handle_event/2 callback. When you call emit/2, the library creates one Oban job per handler, with all the metadata needed for tracing, deduplication, and correlation. Because job creation goes through Oban.insert/2, it participates in Ecto transactions.
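Based on that description, the wiring might look roughly like this. The `use ObanEvents` macro form, option names, and callback shapes here are assumptions inferred from the prose, not verified library API; module names are illustrative.

```elixir
# Hedged sketch of the declarative registry described above.
defmodule MyApp.Events do
  use ObanEvents,
    events: %{
      capital_call_issued: [EmailHandler, AnalyticsHandler]
    }
end

defmodule EmailHandler do
  @behaviour ObanEvents.Handler

  @impl true
  def handle_event(:capital_call_issued, %ObanEvents.Event{data: data}) do
    # ...send investor emails using data...
    :ok
  end
end

# Emitting creates one Oban job per registered handler:
# MyApp.Events.emit(:capital_call_issued, %{capital_call_id: call.id})
```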

The flow looks like this:

  Business Logic
        │
        ▼  1. emit event + data
    Events.emit
        │
        ▼  2. lookup handlers
   Create Oban jobs
        │
        ▼  3. persist dispatch jobs
  Oban processes jobs
        │
        ├──▶  4. dispatch → EmailHandler
        │
        └──▶  4. dispatch → AnalyticsHandler

Your business logic emits a named event with a data payload. ObanEvents looks up the registered handlers, creates one Oban job per handler, and persists them to the database within your existing transaction. Oban then processes each job independently, with its own retry semantics and priority.

Compile-time validation. The @events map is validated at compile time. If you reference a handler module that does not exist, or an event that is not registered, you get a compilation error — not a runtime surprise in production.

Transactional safety. Because ObanEvents creates Oban jobs under the hood, you can emit events inside Repo.transaction/1. If the transaction rolls back, the event handlers are never enqueued. If the transaction commits, the handlers are guaranteed to execute (with Oban's retry semantics). This is the transactional outbox pattern [8] without requiring any additional infrastructure.
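Concretely, with hypothetical names and the `emit/2` call described above:

```elixir
# Hedged sketch: emitting inside a transaction. If insert! raises and the
# transaction rolls back, no handler jobs are ever enqueued.
MyApp.Repo.transaction(fn ->
  call = MyApp.Repo.insert!(capital_call_changeset)
  MyApp.Events.emit(:capital_call_issued, %{capital_call_id: call.id})
  call
end)
```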

Per-handler configuration. Different handlers may have different requirements. A critical notification handler might need max_attempts: 10 and priority: 0, while an analytics handler can tolerate more failures and run at lower priority. ObanEvents supports per-handler Oban configuration using a simple tuple syntax.
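One plausible reading of the "simple tuple syntax" is `{module, opts}` entries in the events map — an assumption, since the article does not show the literal form:

```elixir
# Assumed shape: per-handler Oban options alongside each handler module.
events: %{
  capital_call_issued: [
    {EmailHandler, max_attempts: 10, priority: 0},
    {AnalyticsHandler, max_attempts: 3, priority: 5}
  ]
}
```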

Event metadata. Every handler receives an Event struct containing the event data, an event_id (shared across all handlers for the same emit), an idempotency_key (unique per handler job, for outbox-style deduplication), and optional causation_id and correlation_id fields for building event chains and grouping business operations. The causation chain looks like this:

user_registered
  event_id: "01933b7e-1111-...", causation_id: nil
  │
  └─▶ send_welcome_email
        event_id: "01933b7e-2222-...", causation_id: "01933b7e-1111-..."
        │
        └─▶ email_delivered
              event_id: "01933b7e-3333-...", causation_id: "01933b7e-2222-..."

Each child event carries its parent's event_id as a causation_id, which makes it straightforward to reconstruct the full chain of events that led to any given side effect. This is essential for debugging and audit trails in financial systems.
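A handler that emits a follow-up event would thread those IDs through something like this. The article documents `emit/2`, so an options-taking `emit/3` is an assumption here, as are the struct field accesses:

```elixir
# Hedged sketch: propagating causation and correlation when a handler
# emits a child event. emit/3 with options is assumed, not verified API.
def handle_event(:user_registered, event) do
  # ...perform the side effect...
  MyApp.Events.emit(:send_welcome_email, %{user_id: event.data["user_id"]},
    causation_id: event.event_id,
    correlation_id: event.correlation_id
  )
end
```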

Dead letter handling. When a handler exhausts all retry attempts, you can override handle_exhausted/4 to log to an error tracker, insert into a dead letter queue, or alert your team. The default behaviour logs a warning so failures are always visible.
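An override might look like this — the `handle_exhausted/4` argument order is assumed from the arity, and Sentry is just one example sink (`Sentry.capture_message/2` is real Sentry SDK API):

```elixir
# Hedged sketch: routing exhausted events to an error tracker.
defmodule EmailHandler do
  @behaviour ObanEvents.Handler

  @impl true
  def handle_event(:capital_call_issued, _event), do: :ok

  @impl true
  def handle_exhausted(:capital_call_issued, event, error, attempt) do
    Sentry.capture_message("EmailHandler exhausted retries",
      extra: %{event_id: event.event_id, error: inspect(error), attempt: attempt}
    )
  end
end
```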

How it compares

Here is how the options compare on the key trade-offs:

Process-based PubSub gives you in-memory, at-most-once delivery with zero infrastructure overhead, but no persistence and no retry. Good for cache invalidation and live UI updates. Not suitable when events must not be lost.

EventBus adds structure with topic-based routing and observation tracking, but events still live in ETS memory. It has not been updated since 2022 and does not integrate with Ecto transactions [2][3].

Commanded provides the full CQRS/ES stack with events as the source of truth, read model projections, and process managers. It is the right choice for systems that are designed from the ground up around event sourcing. It is the wrong choice for retrofitting an existing Ecto application with event handling [4][5].

Raw Oban workers give you persistent, transactional, retryable job processing — all the primitives you need. But the event abstraction is missing. You end up maintaining the mapping between events and handlers manually, with no central registry, no compile-time validation, and duplicated boilerplate across workers [6][7].

ObanEvents sits in the gap between raw Oban and Commanded. It gives you persistent, transactional events with a declarative handler registry, compile-time validation, and first-class support for idempotency and tracing, without requiring you to adopt event sourcing or change your data model [9].

When to use what

If your events are ephemeral and losing some is acceptable, use PubSub. If you want full event sourcing with events as your source of truth, use Commanded. If you have an existing Ecto application and you need reliable, persistent event handling that works within your existing transactions, that is what ObanEvents is for.

The library is open source on GitHub under the MIT licence. We use it in production at Infra One to handle investor notifications, regulatory filing triggers, and portfolio company data synchronisation. It is intentionally small (under 500 lines of library code) because the hard work of persistence, retries, and observability is already solved by Oban.

Sometimes the best thing you can build is a thin layer that connects two good ideas.

Sources

  1. mkaszubowski.com
  2. github.com
  3. hex.pm
  4. github.com
  5. hex.pm
  6. github.com
  7. hex.pm
  8. microservices.io
  9. github.com