Upcasters or a versioned event store: pros and cons

In a previous article, I wrote a few things about upcasters. One of the significant downsides when implementing an upcaster is that it adds to our application’s technical debt. An alternative technique is the versioned event store (or versioned event stream), where the existing event store is copied and modified. In this post I’ll discuss the pros and cons of both approaches.


In the event sourcing context, upcasting an event means transforming it from its original structure into its new one. Some other libraries or implementations use the terms event adapter or event transformer for the same technique. An upcaster is invoked when the application reads existing events from an event stream (to reconstruct an aggregate, for example). The upcasting function transforms applicable events before they are forwarded to any event listeners, projections, etc. The immutability of the existing events is guaranteed, as they are never touched or modified on disk.

Axon is one of the frameworks that has built-in support for upcasting events.
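To make the idea concrete, here is a minimal sketch of an upcasting function in plain Java. It assumes a hypothetical v1 `CustomerRegistered` event whose payload is modeled as a map (real frameworks such as Axon work on intermediate representations instead); the field names and revision numbers are invented for illustration.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical upcaster: the v1 payload used a "fullName" field,
// which v2 renames to "name". Payloads are plain maps for illustration.
public class CustomerRegisteredUpcaster {

    // Returns a new, upcast copy; the stored event itself is never mutated.
    static Map<String, Object> upcast(Map<String, Object> v1Payload) {
        Map<String, Object> v2 = new HashMap<>(v1Payload);
        v2.put("name", v2.remove("fullName"));
        v2.put("revision", 2);
        return v2;
    }
}
```

Note that the function returns a fresh map: the on-disk event is read-only, and only the in-memory view handed to listeners and projections is transformed.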

Pros:

  • Events continue to be immutable; the complete event history remains intact;
  • No corrective events are necessary, and we can safely remove old event listeners & code.

Cons:

  • The in-memory view of the event stream does not match the on-disk state of the stream;
  • (Axon specific) Upcasters work on intermediate event representations (XML);
  • Serialization (of events) can be broken;
  • Projections are not automatically updated;
  • Maintenance burden: upcasters need to be maintained indefinitely (the number of upcasters in a repository generally doesn’t decrease);
  • Performance considerations (depending on the complexity of the upcaster);
  • Upcaster chains can be tricky to test (there are multiple combinations);
  • Complex upcasters (merging or splitting events, pulling in data from other sources, etc.);
  • Snapshots have to be rebuilt or upcast;
  • (Axon specific) Can’t upcast events from one aggregate to another.

(Versioned) refactoring of event streams

This is a technique that allows modifications to existing events, while aiming to satisfy some of the concerns around immutability. To achieve this, we simply copy an existing event stream while simultaneously modifying (some) events. This approach is also referred to as Copy and Replace. To perform the Replace, we can re-use existing upcasting logic.

  1. Create a new, empty event stream (with a version number suffix added to the name of the stream);
  2. Iterate over all events in the existing stream;
  3. Apply the upcasting function to applicable events and write the result to the new stream, copy any events that do not have to be modified;
  4. Let the application use the new stream (flip a feature toggle, for example);
  5. Perform cleanup: remove upcasting function, archive or delete old stream.
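The copy step (3) above can be sketched in a few lines. The `Event` record, the stream representation, and the type-matching check are simplified placeholders; a real implementation would read from and append to an actual event store.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

// Sketch of Copy and Replace: iterate over the old stream, upcast
// applicable events, and copy everything else verbatim into the new stream.
public class CopyAndReplace {

    record Event(String type, String payload) {}

    static List<Event> copyAndReplace(List<Event> oldStream,
                                      String typeToUpcast,
                                      UnaryOperator<Event> upcaster) {
        List<Event> newStream = new ArrayList<>();
        for (Event e : oldStream) {
            // Upcast events of the matching type, copy the rest as-is.
            newStream.add(e.type().equals(typeToUpcast) ? upcaster.apply(e) : e);
        }
        return newStream;
    }
}
```

Because the upcasting logic is applied once, during the copy, it can be deleted afterwards (step 5) instead of living in the codebase indefinitely.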
Pros:

  • The in-memory view of the event stream matches the on-disk state of the stream;
  • There’s no performance or maintenance penalty (after running the refactoring process);
  • The conversion/upcasting function can be removed after the event stream is copied.

Cons:

  • Events are no longer immutable;
  • Projections are not automatically updated;
  • Serialization can potentially be broken;
  • Snapshots will have to be rebuilt;
  • Old versions/copies of event stores (streams) should be retained indefinitely if required for auditing;
  • If you encounter a bug in an event (or event handler) after the refactoring process, you’ll have to refactor again.

In a running system, events will continue to come in while the Copy and Replace process is running. The simple solution to that problem is to stop the application (or switch it to read-only mode), create the new version, and finally start the application again. Depending on the environment, traffic and other parameters, that downtime may not be acceptable.

This can be solved in a number of ways; I’ll describe one alternative:

  • Use a message queue such as RabbitMQ, or a data streaming platform such as Kafka;
  • Publish events, as they are persisted, to the message bus;
  • Consume events from the bus, pass them through the upcasting function, and construct the new event stream;
  • Switch to the new stream once it is up to date and all replicas are aware of the new version.
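The catch-up phase of this alternative can be sketched as a consumer loop. A `BlockingQueue` stands in here for the message bus (RabbitMQ, Kafka, etc.); the events are plain strings and the upcaster is a simple function, all placeholders for illustration.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.function.UnaryOperator;

// Sketch of the zero-downtime variant: events published to the bus while
// the copy runs are consumed, upcast, and appended to the new stream.
public class StreamingCatchUp {

    static List<String> drainAndUpcast(BlockingQueue<String> bus,
                                       UnaryOperator<String> upcaster) {
        List<String> newStream = new ArrayList<>();
        String event;
        // Drain whatever arrived on the bus, upcasting each event on the way.
        while ((event = bus.poll()) != null) {
            newStream.add(upcaster.apply(event));
        }
        return newStream;
    }
}
```

In practice this loop runs continuously until the new stream has caught up, at which point the application is switched over.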

I hope this is useful for you; let me know in the comments!

Forget me please? Event sourcing and the GDPR

In May 2018, a new piece of EU legislation called the General Data Protection Regulation (GDPR) will come into effect. The GDPR attempts to regulate data protection for individuals within the EU and has very interesting and specific implications for applications that use event sourcing. In this article, I’ll discuss my thoughts on this subject and a few pointers for those implications.

Read more

Using Tracking processors to replay events in Axon Framework 3

Replaying events is a crucial part of any event sourcing / CQRS application: it lets you rebuild projections, generate new ones, or seed external systems with data.

I’m a big fan of the Axon Framework. Even with its quirks and occasional (strange) bugs, it’s my go-to toolbox for my event sourcing & CQRS consulting and development work.

With the recent 3.0 release, Axon changed the way events can be replayed by introducing the Subscribing and Tracking event processors. The Subscribing processor follows the event stream in real-time, whereas the Tracking processor keeps track of events it has processed (using a token). This means that the Tracking processor can be stopped and resumed, and it will pick up processing where it left off.

Read more

Using annotations in Prooph

One of the things I love about Java is its native, compiler-level support for annotations, a form of syntactic metadata that can be applied to source code and also retained at run-time to influence application behavior. I use them almost daily in my projects.

I do a fair amount of consulting and development on event sourced applications and these usually use Axon, a popular CQRS & event sourcing framework. Recently, Axon version 3 was released, supporting a number of annotations that can turn any POJO (Plain Old Java Object) into an event-sourced aggregate.

Read more

State Of DevOps Report

The 2017 version of Puppet’s State of DevOps Report was just released.

To me, the most interesting takeaways from the report are:

  • High performing teams have 46x more frequent deploys, 96x faster mean time to recover/repair and a 5x lower change failure rate.
  • They also automate significantly more work (automation is a key ingredient of any successful DevOps strategy).
  • A lower change failure rate and significant automation mean these teams spend 44% more time on new work (and 26% less time on unplanned work and rework).
  • Developers in high performing teams generally work in small batches and practice Trunk Based Development. Low performing teams on the other hand use long-lived feature branches and merge infrequently to trunk or master (read on for my thoughts about feature branches).