Nate Maxwell

A blog of various experiments and projects of mine.

View on GitHub
24 April 2025

[GO] Mycelia Event Broker

by Nate Maxwell

Over the last year I’ve built a distributed microservice art and shot production pipeline. Like any distributed system, it involves an event broker or message router to coordinate data transfer between all the machines on the network. There are many popular, mature tools for this, like Kafka or RabbitMQ, but I took to making my own to learn more about the process.

I recently bought Enterprise Integration Patterns by Gregor Hohpe and Bobby Woolf. This has been, by far, my favorite software engineering book. After stumbling across Hohpe’s website covering each section of the network messaging pipeline and demonstrating numerous example design patterns I bought the book and read it cover to cover.

From this I’ve built my own event broker: Mycelia. It’s early days in development, but I’m pretty proud of how the start has gone. I’ve written it in Go to leverage its concurrency model for development ease and speed.

Here is a breakdown of what I’ve learned about messaging systems and their various components:

Motivation

To start, what problem does a messaging system solve? Well, if we have dozens of machines on a network that all perform a designated function, it becomes a massive pain to inform each machine of every other machine’s address. Instead, it’s much easier to have one machine that all the others talk to, which then forwards messages on to their destinations.

The data flow can even be done in the opposite direction for confirmations or queries, in addition to each of the network nodes talking to each other for more complex tasks.

Dependency Inversion

The primary design pattern for this is basic dependency inversion. To demonstrate, here is an example “broker” written in Python.

# mock broker
from typing import Callable
from collections import defaultdict

_event_structure: dict[str, list[Callable]] = defaultdict(list)
""" { channel : [list of subscribers] } """

def emit_event(data: str, channel: str) -> None:
    subscribers = _event_structure[channel]
    for s in subscribers:
        s(data)

def register_subscriber(channel: str, handler: Callable) -> None:
    _event_structure[channel].append(handler)

Here, we keep a dictionary that maps “channel” strings to lists of callables. Events are “emitted” by calling the emit_event function, passing in the data to emit and the channel name. This fetches the list of callables from the _event_structure dict and passes the data on to each of them.

Here is a simplistic “subscriber” that would receive the data:

# example subscriber
import broker

def consume_event(e: str) -> None:
    print(f'Received data: {e}')

def register_events() -> None:
    broker.register_subscriber(channel='example', handler=consume_event)

This has a consume_event function that simply prints the data received, and a function for adding the consume_event function to the broker.

We can then emit an event like so:

# example producer
import broker

def example_event_trigger() -> None:
    broker.emit_event(channel='example', data='hello world')

We can see the system in action with the following:

# Main execution
import subscriber
import producer

subscriber.register_events()
producer.example_event_trigger()
>> Received data: hello world

In this example, the subscriber and the producer have no knowledge of each other and only have knowledge of the broker. This means they are decoupled, and we can simply swap out the subscriber to change what functionality is invoked. Additionally, we can continue to add subscribers to tack on more functionality that will execute when we emit the event. This scales very nicely allowing us to extend the functionality of certain systems without mucking them up with additional code inside of them.
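To make the scaling point concrete, here is a self-contained sketch (the mock broker from above is inlined) where two independent subscribers hang off the same channel, and a single emit reaches both:

```python
from collections import defaultdict
from typing import Callable

_event_structure: dict[str, list[Callable]] = defaultdict(list)

def emit_event(data: str, channel: str) -> None:
    for s in _event_structure[channel]:
        s(data)

def register_subscriber(channel: str, handler: Callable) -> None:
    _event_structure[channel].append(handler)

received: list[str] = []

# two independent subscribers on the same channel; neither knows the other exists
register_subscriber('example', lambda e: received.append(f'consumed {e}'))
register_subscriber('example', lambda e: received.append(f'logged {e}'))

emit_event('hello world', 'example')
# received now holds one entry per subscriber
```

Neither the producer nor either subscriber had to change to add the second handler.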

Channels

Now that we have the overall concept established, we can begin to expand upon each part. To start, we can build a construct through which events travel. These will be called “channels”. In the minified Python example, a channel was simply a list of subscribers that the data was sent through. Now imagine we create an object that contains each of the subscribers. On its own, it’s no different, but since it’s now an object, we can begin to host other objects inside it.
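A minimal sketch of what such a channel object might look like (the class and method names here are my own illustration, not any particular broker’s API):

```python
from typing import Callable

class Channel:
    """Holds the subscribers for one event stream.

    On its own this is just a wrapped list, but as an object it can
    later host other components (transformers, routers, and so on).
    """

    def __init__(self, name: str) -> None:
        self.name = name
        self._subscribers: list[Callable[[str], None]] = []

    def subscribe(self, handler: Callable[[str], None]) -> None:
        self._subscribers.append(handler)

    def send(self, data: str) -> None:
        # deliver the event to every subscriber in registration order
        for handler in self._subscribers:
            handler(data)
```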

Imagine we wish to transform the event data before sending it to the endpoint consumer. Maybe we wish to use a key from the data to query a database, and then append the query result onto the event payload before finally delivering it to the subscriber. This would mean hosting another object inside the channel.

There are many common channel design patterns. Here are a few:

Transformers

This “enricher”, which queries a database and appends the returned data onto the event payload, is commonly referred to as a “transformer” or “translator”. Transformers are objects themselves and can be swapped in and out of channels.
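Sketching the enricher in the same minified Python style (the customer_id key and the dict standing in for a database are made-up stand-ins):

```python
from typing import Callable

# a transformer is just a callable that maps one event payload to another
Transformer = Callable[[dict], dict]

def make_enricher(lookup: dict[str, dict]) -> Transformer:
    """Build an enricher that uses a key from the event to query a
    data source (a plain dict stands in for the database here) and
    appends the result onto the payload."""
    def enrich(event: dict) -> dict:
        extra = lookup.get(event['customer_id'], {})
        return {**event, **extra}
    return enrich

# hypothetical "database" keyed by customer id
db = {'c42': {'region': 'midwest'}}
enrich = make_enricher(db)

event = enrich({'customer_id': 'c42', 'type': 'address_changed'})
```

Because the transformer is just a callable, swapping one enricher for another means swapping one object inside the channel.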

Like channels, there is a plethora of common transformer patterns.

Routers

Great! Now we have ways to alter the data as it travels to endpoints, and ways to orchestrate these data transformers in channels, but how do we know which channel to send an event through? Taking another look at our minified Python example, it was simply a string key in the dictionary. Enter the Router.

Routers are like automatic key selectors for the dictionary. The event data would have some kind of key in it dictating what kind of event it is. From here we can decide which channel, or topic, to send the event through.

In some systems, channels will send their event back to the router to be sent to subsequent channels. In other systems, channels are strung together and pass events on to the next channel.
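A content-based router can be sketched in the same style. Here the event carries a 'type' key that selects the channel (the class and key names are illustrative, not from any real broker):

```python
from typing import Callable

Handler = Callable[[dict], None]

class Router:
    """Content-based router: inspects a key inside the event and
    forwards it to the handlers of the matching channel."""

    def __init__(self) -> None:
        self._channels: dict[str, list[Handler]] = {}

    def subscribe(self, topic: str, handler: Handler) -> None:
        self._channels.setdefault(topic, []).append(handler)

    def route(self, event: dict) -> None:
        # the event carries its own routing key
        for handler in self._channels.get(event['type'], []):
            handler(event)

hits: list[str] = []
router = Router()
router.subscribe('address_changed', lambda e: hits.append('policy'))
router.subscribe('quote_requested', lambda e: hits.append('quote'))
router.route({'type': 'address_changed'})
```

Only the 'address_changed' handlers fire; the 'quote_requested' channel is left untouched.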

There are common design patterns for routers as well.

Complexity

From here, we can combine these concepts to make increasingly complex communication structures.

Generally, structures this complex are more common in fintech or media-on-demand services than in art production pipelines. DreamWorks’ 2017 whitepaper about converting their “Pipeline X” to microservices is an example of a fully matured system of services in an animation production pipeline. Undoubtedly they have some kind of message bus or event broker handling communication between each of the services.

Obviously I can’t share the architectural setup for the pipeline we use at Digital Domain’s previs department, but it utilizes many of the same concepts in a similar fashion.

Mycelia’s Hyphae Model

Mycelia uses these components but orchestrates channels in “routes” rather than “topics”. Routes contain arrays of channels that feed event data to each other in order. Consumers can subscribe to the output of any channel in the route to get the data at that point before subsequent transformations.
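Mycelia itself is written in Go, but the route idea can be sketched in a few lines of Python. Everything here, from the class name to the stage payloads, is illustrative rather than Mycelia’s actual API:

```python
from typing import Callable

Transformer = Callable[[dict], dict]

class Route:
    """An ordered list of channel transforms. Consumers can tap the
    output of any stage to receive the event as it looked at that
    point, before subsequent transformations."""

    def __init__(self, stages: list[Transformer]) -> None:
        self._stages = stages
        self._taps: dict[int, list[Callable[[dict], None]]] = {}

    def subscribe(self, stage: int, handler: Callable[[dict], None]) -> None:
        self._taps.setdefault(stage, []).append(handler)

    def send(self, event: dict) -> None:
        for i, stage in enumerate(self._stages):
            event = stage(event)
            for handler in self._taps.get(i, []):
                handler(event)

seen: list[dict] = []
route = Route([
    lambda e: {**e, 'accident_volume': 7},                    # geo enrichment stage
    lambda e: {**e, 'rate': 100 + 5 * e['accident_volume']},  # policy stage
])
route.subscribe(1, seen.append)  # consume the fully enriched event
route.send({'customer_id': 'c42'})
```

Subscribing at an earlier stage index would deliver the event before the later transformations ran.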

I’ve decided to label this the “Hyphae Model”, which doesn’t really mean anything, but is on theme with the whole fungus thing.

Here is an example to illustrate the differences between a traditional message routing setup and Mycelia.

Let’s suppose a customer tells a car insurance website that they have changed addresses. The initial message data hits the broker who tells all subscribers that the customer’s address has updated. The Policy Generation Service has subscribed to this event so that it can generate an updated rate for the customer based on their new location.

Unfortunately the Policy Generation Service needs more data. It now needs to know the accident volume of the new location. To get this, it emits an event requesting this locational data that is picked up by the Geo Service.

The Geo Service then queries its database and sends the accident volume back to the broker to send to the Policy Generation Service which then formulates the new rate, and finally tells the broker to send the updated rate to the customer.

So this event chain is

Customer -> Broker -> Policy Gen -> Broker -> Geo -> Broker -> Policy Gen -> Broker -> Customer.

How nice would it be to instead have the Geo Service intercept the data initially sent to the Policy Generation Service, based on the route taken, so the Policy Generation Service has all the data it needs in one smooth transfer?

This is what channel transformers do in Mycelia. Technically the transformer is still sending a request on behalf of the first service to another, with the broker handling it between stops, just like in the original, but it is semantically much simpler. Side by side, the traditional chain is

Customer -> Broker -> Policy Gen -> Broker -> Geo -> Broker -> Policy Gen -> Broker -> Customer

while the hyphae route is

Customer -> Broker -> Geo (enrich) -> Policy Gen -> Broker -> Customer.

tags: go - event - broker - concurrency - distribution - network - micro - service