The Token Approach for Architecting Flow in Elixir
In the last two posts we looked at the Token approach and how we can leverage metaprogramming to build pipelines and custom DSLs processing this Token.
The Token is just a struct holding relevant data for the execution of your program:
# This struct is handed down during the execution of your program, just like a
# `Plug.Conn` is during the processing of a request in Plug.
defmodule MyApp.Token do
  defstruct [:status, :assigns, :errors, :results]
end
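To make the idea concrete, here is a minimal, hypothetical pipeline built around this Token. The module name, step names and params shape are illustrative assumptions, not code from the earlier posts:
defmodule MyApp.Pipeline do
  alias MyApp.Token

  # Every step takes a Token and returns a (possibly updated) Token,
  # just like every Plug takes and returns a `Plug.Conn`.
  def run(params) do
    %Token{status: :new, assigns: %{}, errors: [], results: %{}}
    |> validate(params)
    |> execute()
  end

  defp validate(%Token{} = token, %{"name" => name}) when is_binary(name) do
    %Token{token | assigns: Map.put(token.assigns, :name, name)}
  end

  defp validate(%Token{} = token, _params) do
    %Token{token | status: :error, errors: ["name is missing" | token.errors]}
  end

  defp execute(%Token{status: :error} = token), do: token

  defp execute(%Token{} = token) do
    greeting = "Hello #{token.assigns.name}!"
    %Token{token | status: :ok, results: Map.put(token.results, :greeting, greeting)}
  end
end
Calling MyApp.Pipeline.run(%{"name" => "Jane"}) returns a Token with status :ok, while MyApp.Pipeline.run(%{}) returns one with status :error.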
Writing these posts was great fun, but the reality is that the presented concepts do not fit every problem.
If all you have is a hammer, suddenly everything looks like a nail.
In this post, I want to examine when to use a Plug-like Token in your project and when to consider other options.
We’ll start with a list of the pros and cons of this approach.
Why using a Token is a GOOD idea 👍
It provides a clear API for communicating
When we build software as a team, we need to establish common ground on several fronts. Codewise this can be done by establishing APIs, interfaces and contracts between different parts of our programs.
We want to be able to communicate between different stages of our program’s flow.
In Plug, there is the concept of assigns in a connection, which enables us to communicate pieces of information from one Plug to another:
def early_plug_in_pipeline(conn) do
  if logged_in?(conn) do
    user = get_user(conn)
    assign(conn, :user_name, user.name)
  else
    conn
  end
end

def later_plug_in_pipeline(conn) do
  greeting =
    if conn.assigns[:user_name] do
      "Hello #{conn.assigns[:user_name]}!"
    else
      "Please login."
    end

  assign(conn, :greeting, greeting)
end
It enables easy control flow through a common API
When we have a common Token in our program, we can implement common use-cases at the top level. This enables easy control flow through a common API.
Example: Systems concerned with the processing of incoming requests can benefit from a standardized way to stop all further processing of the current request.
Using Plug as an example, we could implement the concept of a “stopped” connection.
Here are two ways in which we could implement this using assign/3:
def step1(conn) do
  if critical_error?(conn) do
    assign(conn, :stopped, true)
  else
    conn
  end
end

def step2(conn) do
  if conn.assigns[:stopped] do
    conn
  else
    # do step2 stuff ...
  end
end

def step3(%Plug.Conn{assigns: %{stopped: true}} = conn), do: conn

def step3(conn) do
  # do step3 stuff ...
end
But Plug provides halt/1 for easier control of the flow:
def step1(conn) do
  if critical_error?(conn) do
    halt(conn)
  else
    conn
  end
end

# When run in a Plug pipeline, the two later steps aren't called,
# thanks to Plug's notion of a `halted` connection.
def step2(conn) do
  # do step2 stuff ...
end

def step3(conn) do
  # do step3 stuff ...
end
This has two benefits: Plug establishes the notion of a “halted” connection, which helps avoid a situation where multiple developers implement divergent solutions for a common use-case. By using a standardized API, we also add maintainability to the concept of a “halted” connection. If the meaning of “halted” changes at any point in the future, we can update the code in a central place.
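To see why a central place pays off, here is a hedged sketch of a pipeline runner that honours a halted flag on its own Token. It assumes the Token gains a halted boolean field; this is a simplification for illustration, not Plug's actual implementation:
defmodule MyApp.Runner do
  # Assumed sketch: run a list of step functions, stopping as soon as a step
  # marks the Token as halted. If the meaning of "halted" ever changes,
  # only this module needs to be updated.
  def run(token, steps) do
    Enum.reduce_while(steps, token, fn step, acc ->
      case step.(acc) do
        %{halted: true} = halted_token -> {:halt, halted_token}
        next_token -> {:cont, next_token}
      end
    end)
  end

  def halt(token), do: %{token | halted: true}
end
A pipeline then becomes MyApp.Runner.run(token, [&step1/1, &step2/1, &step3/1]), and no individual step needs to know how "halted" is represented.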
It helps establish a common project language
The example above shows that it is important enough for Plug to establish the term “halted connection”, include a field for it in its Plug.Conn struct, and define an API for it.
Your Token might have the concept of a status, describing which stage the business case it represents is currently in, or an origin, describing where the service call originated. Whatever it is: it is important that you, your team, your manager and other stakeholders establish a common language and understanding of these terms.
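For example, a project-specific Token could spell those terms out as dedicated fields. The field names and values below are hypothetical:
defmodule MyApp.Token do
  # Hypothetical fields: `status` names the stage the business case is
  # currently in, `origin` names where the service call came from.
  defstruct status: :received,   # :received | :validated | :processed | :failed
            origin: nil,         # e.g. :web, :api, :batch_import
            assigns: %{},
            errors: [],
            results: %{}
end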
Cutting responsibilities and separating concerns
Obviously, not everything should be put into a map like assigns. In the case of Plug, there are lots of concepts surrounding HTTP requests which deserve their own field in the Plug.Conn struct.
One question worth asking is: When should I introduce a dedicated field for a domain-specific property?
When I first looked at Plug, I was surprised that there was no content_type field in Plug.Conn (you can find the Content-Type header in the list of request headers). This is a good example of cutting responsibilities and separating concerns:
- If we’re designing a media server, we might want to feature the Content-Type request header more prominently in our own Token.
- If we’re designing a server tracking video impressions, we might want to feature the User-Agent request header more prominently in our own Token.
It is up to your implementation to decide which pieces of information you want to promote in your program’s Token struct. In the case of Plug, the Content-Type is one of many HTTP header fields and, from a neutral system-design perspective, there is not much that separates it from e.g. Accept or User-Agent.
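As a sketch of that kind of promotion, a media server’s Token might pull the one header it cares about out of the request and give it a first-class field. The module and field names here are assumptions:
defmodule MediaServer.Token do
  defstruct [:content_type, :conn, assigns: %{}]

  # Promote the Content-Type header to a first-class field, because this
  # particular system talks about it all the time.
  def from_conn(%Plug.Conn{} = conn) do
    %__MODULE__{
      content_type: conn |> Plug.Conn.get_req_header("content-type") |> List.first(),
      conn: conn
    }
  end
end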
It allows for easy debugging between stages
Whenever we’re not sure where something goes wrong, we can easily monitor the Token as it flows through our program and get a sense of the “state” that the current request/process/execution is in. Phoenix utilizes this by prominently displaying the Plug.Conn on error pages when running in the dev env.
Similarly, you can use IEx.pry and :debugger to investigate your Token during development.
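A low-tech variant that works for any Token-based flow is to tap the value between stages with IO.inspect/2, here reusing the step functions from the halting example above:
conn
|> step1()
|> IO.inspect(label: "after step1")  # prints the current value and passes it on unchanged
|> step2()
|> IO.inspect(label: "after step2")
|> step3()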
It supports extensible workflows
Using a Token approach enables us to make our flows and data pipelines extensible and pluggable (pun intended).
You can easily add extra APIs or add functionality to your existing APIs. Similarly, you can enhance a flow like the one described in the last post on creating a custom DSL by wrapping individual steps:
For example, with a Plug-like pipeline you can easily add metrics to monitor the performance of your business flow in production, without touching a line of business logic. Or you can wrap each step in a predefined retry mechanism, if that somehow makes sense for your use-case. Or you can have your flow automatically collect the results of each step and store them in a special results field in your Token.
The possibilities are limitless.
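As one hedged sketch of such a wrapper, each step could be timed and the timing attached to the Token’s results; the module, function and field names are made up for illustration:
defmodule MyApp.Instrumented do
  # Hypothetical wrapper: run a single step, measure how long it took and
  # record the timing on the Token, without touching the step itself.
  def run_step(%MyApp.Token{} = token, step_name, step_fun) do
    {microseconds, new_token} = :timer.tc(fn -> step_fun.(token) end)

    results = new_token.results || %{}
    timings = Map.get(results, :timings, %{})

    %MyApp.Token{new_token | results: Map.put(results, :timings, Map.put(timings, step_name, microseconds))}
  end
end

# Usage: MyApp.Instrumented.run_step(token, :step1, &step1/1)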
Why using a Token is a BAD idea 👎
It’s another layer of abstraction
The fundamental theorem of software engineering states:
We can solve any problem by introducing an extra level of indirection [except for the problem of too many levels of indirection].
Adding a Token and maintaining an API surrounding it does not come for free. People in the team have to be educated about its utility, properties and APIs. You have to ensure everyone gets why the Token is useful as opposed to “just another layer of indirection”.
It’s another interface which has to satisfy its consumers
The Token is just another interface which has to satisfy lots of consumers and stakeholders with lots of different requirements. You have to avoid creating a god object and still provide value to your consumers.
A couple of questions and topics that will come up during its implementation are:
- Which fields should be included?
- How should we name each field so its meaning is obvious?
- Should we have an unstructured field like assigns, which can hold arbitrary data?
- How do we prevent this assigns field from becoming the company-wide go-to property bag?
The last point illustrates that the introduction of a Token relies heavily on people designing its interface and API in a thoughtful way.
It can get memory intensive due to immutability
If you want to move lots of data through a pipeline, a Token approach might not be the way to go. When we modify a struct with lots of large binaries in it, we might run into memory problems due to immutability. If you run into them, consider moving the “blobs of data” into a GenServer while holding a reference to the data in the Token (and using the Token for metadata - like status, type of content, etc.).
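One hedged way to do that (the Agent-based store and the field names are illustrative assumptions; an Agent is a thin wrapper around a GenServer): keep the payload in a separate process and put only a key plus metadata on the Token:
# Sketch: the blob lives in an Agent, keyed by a reference; the Token only
# carries the key and metadata such as status and content type.
{:ok, store} = Agent.start_link(fn -> %{} end)

large_binary = String.duplicate("x", 10_000_000)
key = make_ref()
Agent.update(store, &Map.put(&1, key, large_binary))

token = %MyApp.Token{
  status: :uploaded,
  assigns: %{blob_key: key, content_type: "video/mp4"}
}

# A step that actually needs the payload fetches it by key:
_blob = Agent.get(store, &Map.get(&1, token.assigns.blob_key))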
Conclusion
So, should you try the Token approach? Here are two checklists to help you decide:
When to use it
- the need for a contract outweighs the overhead of “yet another struct”
- many parts of your system have to talk about the same thing in different contexts
- extensibility is a major concern
When to avoid it
- the problem domain is very small
- there are very few stakeholders
- there are many stakeholders, but requirements are vague
- you get the feeling that the overhead is simply not worth it
There are many scenarios where the Token approach can help developers achieve better results, but it is not a “one size fits all” solution.