Streaming Architecture:

New Designs Using Apache Kafka and MapR Streams

by Ted Dunning & Ellen Friedman

Streaming Architecture: Ideal Platform for Microservices

Over the last decade or so, there has been a strong movement toward a flexible style of building large systems that has lately been called microservices. This trend started earliest at innovative companies such as Google, and many aspects of microservices have since been reinvented at a variety of other companies. Now, among highly successful, fast-evolving companies that include Amazon, LinkedIn, and Netflix, the microservices approach has become more the rule than the exception, at least partly because companies that adopt this style of architecture move faster and compete better.

“Instead of a big monolithic application, where every change is centrally coordinated, the new Netflix app is a series of microservices, each of which can be changed independently.”1

Yevgeniy Sverdlik

The microservices idea is simple: larger systems should be built by decomposing their functions into relatively simple, single-purpose services that communicate via lightweight and simple techniques. Such services can be built and maintained by small, very efficient teams.

Why Microservices Matter

Microservices are an important trend in building huge systems, primarily because the weak coupling between microservices allows agility even in large organizations. Here’s how.

The evolution of the idea behind microservices started as a bit of a backlash against the complexity of service-oriented architecture (SOA) and the associated enterprise service bus (ESB) idea. The ideas behind SOA and ESBs have their roots in early mainframe development practices where there was a very strict class structure that determined who worked on different aspects of a system. With the advent of the Web, that class structure was reflected in the way that each tier in an n-tier architecture was designed and built by teams specialized in building a particular kind of tier. As these tiered systems grew larger, they were often broken into siloed systems, first by which business unit they served and then based on what rough function they served within a business unit.

The result was a tiered and siloed system, as shown in Figure 3-1.

Figure 3-1. Traditional organization of large systems involved a horizontal segregation of components according to required skillsets, as shown here by horizontal lines. Further complications emerged as silos formed, as indicated by dotted vertical lines. Tier-oriented teams are not invested in individual systems because they are matrixed across multiple siloes. Isolation between systems is heavily compromised by this organizational structure and the architecture that echoes it.

The idea of SOA was to align each siloed system along functional boundaries and isolate its internal details. However, the complexity and rigidity of ESBs not only caused coupling between services, but also added an entire level of difficulty to building these systems due to the requirement for coordinated changes. The resulting systems were difficult to build and maintain because changes in one part of the system often required changes in other parts. Performance was poor relative to the total investment because of the complexity of ESBs. These issues made SOA much less effective in practice than it seemed in theory that it would be.

The emergence of highly scalable and more flexible systems such as Hadoop-based platforms has helped a lot by providing a cost-effective way to centralize large amounts of data for multi-tenant use. New data sources can also be used, and preparation of data for traditional systems happens in a more efficient and affordable way. This has tended to break down some of the siloes simply through collocation of data. It does not, however, address whether individual services can be made more independent and less coupled to other services, or whether large systems can be built in an agile fashion. That's where the microservices approach comes in.

Microservices design makes use of focused teams that have a range of skillsets shared within the team.

The resulting systems do not have strong separations between tiers, but instead have cross-functional teams that are strongly aligned with the service they work on. These systems are strongly isolated from each other, although the layers inside each system are not. This idea is shown in the upper part of Figure 3-2.

Figure 3-2. The evolution toward microservices breaks down the horizontal layers that segregate teams according to skills in older monolithic systems. Compare the upper part A of this figure to Figure 3-1, which depicts the older-style vision. Here the horizontal tiers have given way to cross-functional teams, each building a service. In the lower part B of the current figure, we represent each microservice being handled by a small team as an opaque hexagon to illustrate the idea that the internal details of a service do not matter.

The microservices approach involves building small services that interact only in limited ways via defined interfaces. In contrast to the older tiered approach that segregated skillsets, the microservices style breaks down the horizontal barriers and empowers a team to get the job done without burdensome negotiations and restrictions. An obvious benefit of this decomposition is that each service can be simpler, but the longer-term benefit stems from having small, cross-functional teams that behave more like startup development teams than enterprise software development teams.

With microservices, you get the agility of a small team even in a large organization.

What Is Needed to Support Microservices

In order to carry out the goals of a particular cross-functional team, it’s necessary to have effective communication between microservices, but this interaction needs to be kept lightweight and flexible. The goal is to give each team a job and a way to do it and get out of their way, as shown abstractly in Figure 3-3. The key idea is that services are opaque, and they communicate with only a few other services using lightweight, flexible protocols. This might entail using a remote procedure call (RPC) protocol such as REST or a messaging system such as Apache Kafka or MapR Streams. Data formats should be future-proofed by using JSON, Avro, Protobuf, Thrift, or a similarly flexible system to communicate.
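As a minimal sketch of what such a future-proofed message might look like, here is a hypothetical video-upload event encoded as JSON (the field names are our own invention, not from any particular system). Because consumers can simply ignore fields they don't recognize, the format can evolve without coordinated changes:

```python
import json

def encode_upload_event(video_id, title, file_path):
    """Serialize a hypothetical video-upload event as JSON bytes.

    A consumer built before the "title" field existed would still
    work, which is what makes self-describing formats like JSON
    (or schema-evolving ones like Avro, Protobuf, and Thrift)
    suitable for lightweight service-to-service communication.
    """
    event = {
        "type": "video-uploaded",
        "video_id": video_id,
        "title": title,
        "file_path": file_path,  # a reference to the file, not the bytes
    }
    return json.dumps(event).encode("utf-8")

def decode_upload_event(payload):
    """Parse the event back into a plain dictionary."""
    return json.loads(payload.decode("utf-8"))
```

The same bytes could be written to a REST request body or produced to a Kafka or MapR Streams topic; the encoding choice is independent of the transport.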

Figure 3-3. Microservices require a way to communicate between them, but it should be kept lightweight by using a REST API or by using Apache Kafka or MapR Streams.

The rise of microservices has also been paralleled by the rise in popularity of closely related ideas such as DevOps practices (where the team who built a service operates it), container systems like Docker (which make it easier to deploy self-sufficient versions of services), continuous integration (in which new versions of services are deployed quickly and often), and REST interfaces (which make it easy to build and document call-response services).

With a good streaming architecture, a microservices style of computing becomes a powerful approach that can be implemented much more easily.

Microservices in More Detail

Much of the discussion around microservices has stemmed from their early use in building complex websites, such as Netflix. As a result, much of the discussion has almost assumed that services interact using RPC mechanisms like REST involving a call and an immediate response. In fact, service interactions in a microservice architecture may need to use both synchronous call-and-response such as REST as well as asynchronous methods like message passing. Synchronous interactions tend to dominate the closer you get to a user (as with a website), while asynchronous service interactions become the rule as you move more toward analytical back-end systems where throughput becomes more important than response time and the desired analytical results are more and more the aggregation of many records. Applications designed to handle data from the IoT are often dominated by asynchronous service interactions.

Authors such as Martin Fowler and James Lewis emphasize that in microservice architectures, the data transport between services should be very lightweight. Instead of using elaborate enterprise service busses that can do lots of transformation and scheduling, microservice architectures should focus on so-called dumb pipes that do little more than transport data. In practice, the term lightweight should be interpreted to mean that the mechanisms used are ubiquitous and self-service for the teams using them.

Even though people tend to focus largely on the synchronous aspects of systems when describing microservice architectures, the asynchronous side deserves comparable attention. In fact, even in the case of a service that appears to involve a request that requires an immediate return result, what will often happen is that both kinds of processing will be involved. In such a combined action, something will be done immediately in order to be able to respond to the current request, but any work that can be deferred will be put into a message queue to be processed as soon as it is convenient to do so. Deferring work like this allows a much snappier user experience, but is also a way of decoupling the work schedules of different components of the overall system.
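This combined synchronous/asynchronous pattern can be sketched in a few lines of Python. Here an in-memory `queue.Queue` stands in for a durable message stream such as Kafka, and the function and field names are hypothetical:

```python
from queue import Queue

work_queue = Queue()  # stand-in for a durable message stream

def handle_upload_request(video_id):
    """Synchronous part: do only what the caller must see immediately,
    and defer everything else (thumbnails, transcoding) to a queue."""
    work_queue.put({"task": "process-video", "video_id": video_id})
    return {"video_id": video_id, "status": "accepted"}

def drain_deferred_work():
    """Asynchronous part: a worker processes queued tasks on its own
    schedule, decoupled from the request that produced them."""
    done = []
    while not work_queue.empty():
        done.append(work_queue.get())
    return done
```

The caller gets a snappy response while the expensive work happens later; with a real persistent stream in place of the in-memory queue, the worker need not even be running when the request arrives.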

In this book, we focus on streaming architectures, that is, on asynchronous service interactions. We particularly address how recent developments in message-passing technologies such as Apache Kafka together with processing systems like Apache Spark, Apache Flink, and others make it possible to build more advanced streaming systems than ever before.

But before talking about specific messaging technologies that would enable microservices, let’s look a bit more into what this all means to somebody designing or building such a system.

Designing a Streaming Architecture: Online Video Service Example

As an example of how to look at building a streaming architecture, especially from the point of view of microservices, let’s examine part of a user-generated video website one of the authors built a few years ago and look at how we would build the site now with the advantage of a few more years of experience plus the appropriate modern tools for data streaming and flexible NoSQL databases to work with.

The basic idea of this system is that video files are uploaded by users and then processed to create everything needed to present these videos to other website visitors. This processing includes extracting video metadata such as size, length, original encoding, video resolution, date and time that the video was uploaded, and similar characteristics. The processing also includes the extraction of thumbnail images from different parts of the video and creation of different versions of the video suitable for streaming at different bit rates and resolutions to a variety of devices. A rough outline of the system as it was designed years ago is shown in Figure 3-4.

Figure 3-4. Original design of an online video service for a user-generated website. Here the arrows depict hand-coded custom data transfers that connect the different components or applications. Often these connections were built using a variety of different technologies (even within one system), and these interchanges also made strong assumptions about timing and handshakes between the data source and recipient. There were strong dependencies between the components joined by arrows. Gray shading indicates part of the system shown in more detail in Figure 3-5. This pre-microservices system was very difficult to build and even harder to change.

This old-style system was difficult to build. One issue is that each arrow shown here connecting different components or processes was uniquely coded ad hoc, using different technologies from arrow to arrow. This traditional approach resulted in dependencies such that one piece affected many others: to make a change in the process at one end of the arrow or to add a new process required adjustments at the other end, which could cascade into more changes. The system was built and was successful, but at a considerable cost, and it lacked the agility needed to respond well to new ideas or changes in the marketplace.

A New Design: Infrastructure to Support Messaging

If we were to revisit this video system with a modern view toward taking advantage of microservices and the benefit of hindsight, we would use a different design. Let’s see how implementation would be improved by using distributed files, a NoSQL database, and in particular an appropriate infrastructure to support communication between services.

Keep in mind that one of the main goals of microservices is to provide agility and ease of development. This requires isolation of service implementation details, and to support that, we would use a durable, high-performance messaging technology such as Apache Kafka or MapR Streams to provide the connections between components. This leads us to the system in Figure 3-5.

Figure 3-5. Use of messaging technology such as Apache Kafka or MapR Streams is shown here as horizontal tubes that are labeled with a description of the streaming data that they handle. These message streams form the connections between shaded microservices in this design. (For reference, see the shaded portion of the old design shown in Figure 3-4.)

There are a few important things that are not usually obvious to developers or architects getting started with this style of system. If we focus on just the thumbnail extraction service in Figure 3-5, we see that records are read from a stream called “uploads” and written to a stream called “thumbs.” These labels indicate what data is available from that stream, much as a variable name should tell us what it contains. A new microservice can tap into the stream fairly easily to provide data or to consume it. This is an architectural design that provides agility.
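A minimal sketch of such a service might look like the following Python, where a plain dictionary of lists stands in for named Kafka or MapR Streams topics and the record fields are hypothetical:

```python
from collections import defaultdict

# Stand-in for named message streams: topic name -> list of records.
streams = defaultdict(list)

def thumbnail_extractor(extract_thumbnails):
    """Read upload records from the 'uploads' stream and write
    augmented records to the 'thumbs' stream.

    The service knows only two stream names and one record shape;
    any new consumer can tap into 'thumbs' without this code changing.
    """
    for record in streams["uploads"]:
        thumb_files = extract_thumbnails(record["file_path"])
        streams["thumbs"].append({**record, "thumbnails": thumb_files})
```

The `extract_thumbnails` function is passed in so the streaming plumbing stays separate from the image-processing details, which a real service would implement with a video library.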

Microservices can be implemented very quickly in an agile style by a small team precisely because of their very limited scope and restricted interfaces.

Also notice that videos are read from files and thumbnail images are written out as files on a distributed store so that other processes can access them. Subsequently, these microservices can be improved or possibly even completely reimplemented. New versions can be tested by running alongside the existing production version. Notably, microservices that work well enough need not be changed even as all the services around them change.

A key advantage of the right choice of message-passing technology is that it lets you avoid unwanted dependencies or complexities between services.

For microservices to work as intended, they need to be connected via durable, high-performance messaging that decouples the services from one another.

This move to use infrastructure to connect microservices together using asynchronous messaging is very different from connecting services using synchronous calls. In particular, with asynchronous messaging, there is no guarantee that the sender and receiver are even both running when the message is sent or when it is received, as mentioned in Chapter 2: Stream-based Architecture. In fact, there may well be times between the sending and receiving when neither sender nor receiver is running. This means that we have to have support for asynchronous messaging at an infrastructural level; we cannot depend on either the sender or receiver to handle the messages. Without infrastructure support for messaging, we will wind up with coupling between the implementation of the sender and receiver.

Importance of a Universal Microarchitecture

Another aspect of building a streaming system is that every processing element really needs to have more than just the obvious inputs and outputs. For instance, taking the thumbnail extraction as an example again, the thumbnail extractor should also be sending information about normal operation such as number of videos processed, histograms of processing time, and so on to a metrics stream and should be sending records that describe processing exceptions to an exceptions stream. This is shown in Figure 3-6.

Metrics
For best practice, the components in a streaming system should use message streams as a way to collect metrics and exceptions throughout the system ubiquitously. Any service not worth monitoring isn’t worth running.

This universal convention for collecting information like metrics and exceptions is known as a universal microarchitecture.

Figure 3-6. Processing elements should make use of a universal microarchitecture in which they emit metrics records and signal exceptions via well-known streams. Stateful processes may find it very useful to emit checkpoints to a stream to assist in reloading state on restarts and associating that state with an input message position so processing can be restarted correctly.
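As a rough illustration of this convention, the following Python sketch wraps a processing loop so that a service reports to shared “metrics” and “exceptions” streams. The stream and field names are hypothetical, and in-memory lists stand in for real message streams:

```python
import time
from collections import defaultdict

# Stand-in for well-known message streams shared by all services.
streams = defaultdict(list)

def run_service(name, records, process):
    """Process a batch of records, emitting exceptions and metrics
    to the conventional streams that every service uses."""
    processed = 0
    start = time.time()
    for record in records:
        try:
            process(record)
            processed += 1
        except Exception as e:
            # Failed records go to a well-known exceptions stream.
            streams["exceptions"].append(
                {"service": name, "record": record, "error": str(e)})
    # Normal-operation statistics go to a well-known metrics stream.
    streams["metrics"].append(
        {"service": name, "processed": processed,
         "elapsed_s": time.time() - start})
```

Because every service uses the same convention, a single monitoring consumer can watch the health of the entire system.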

What’s in a Name?

Another aspect of this architecture that might be surprising is that the naming of all data transfers seems a bit odd. Traditionally, when system diagrams like this were drawn in the past, an arrow was all that was necessary. No name was typically given. The implementation of each arrow depended on a convention agreed to between the source and the consumer. Typically, nearly every arrow used slightly different conventions, and this inconsistency led to a considerable amount of coupling between producers and consumers. Such a coupling violates the core premise of microservices and can ultimately result in a system that is almost as difficult to modify as a traditional ball of mud anti-architecture.

The right way to build a system like this is to support asynchronous message passing as an infrastructural capability so that all messages between services are passed using a consistent mechanism. Naming each connection facilitates connecting new consumers to a message stream. This idea is shown in Figure 3-7, where the depiction of infrastructure for message passing is simplified to just an arrow plus a label of the content of messages. In fact, the arrow is also an important component of the architecture, but when it functions well it essentially fades into the background of the design.

Figure 3-7. The thumbnail extraction service in isolation with its inputs and outputs. To make the diagram more concise and to allow us to focus on noninfrastructural aspects, the message streams are shown as labeled arrows rather than tubes (as in Figure 3-5 and Figure 3-6). Note in this detail that the uploads message stream has references to the original video files and the outgoing thumbs message stream includes references to the image files extracted by the thumbnail extraction process.

Why Use Distributed Files and NoSQL Databases?

At a high level, if we are committing ourselves to using streams for connections between essentially all components and trying to keep state local as much as possible, then it seems like a contradiction to be storing thumbnail images as files in a distributed file system. In fact, while it is theoretically feasible to push all kinds of data through a message queue, files are still a very good solution for many purposes. If a good distributed file system is available, it can make systems that involve fairly large persistent data objects (more than a few megabytes) much simpler. This is true partly because many of the tools that we might use to process these objects, such as image extractors, or to serve them to users, such as web servers, assume that data is stored in files. It is much easier to just let these tools do what comes naturally. Also, Kafka, for instance, has a default maximum message size of only 1 MB. MapR Streams caps message size at 2 GB. Neither limit is large enough to be sure that we won’t need a file larger than that.
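The pass-by-reference pattern can be sketched as follows; here a local temporary file stands in for the distributed file system, and the message fields are hypothetical:

```python
import json
import os
import tempfile

def store_and_announce(video_bytes, stream):
    """Write a large object to the (stand-in for a distributed) file
    system, then put only a small reference message on the stream.

    The message stays well under any broker's size limit no matter
    how large the video is; consumers fetch the bytes via the file path.
    """
    fd, path = tempfile.mkstemp(suffix=".mp4")
    with os.fdopen(fd, "wb") as f:
        f.write(video_bytes)
    stream.append(json.dumps({"type": "video-uploaded",
                              "file_path": path}))
    return path
```

This is exactly the shape of the upload service in Figure 3-8: the bulky payload lives in files, and the streams carry lightweight records pointing at it.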

The file system can also be used to store checkpoints. A service can momentarily stop processing input while it writes data that is required to restart the process to files. In addition, the message offsets for all inputs can be stored so that processing can resume at the current point.
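A minimal sketch of such a checkpoint, assuming a JSON file on the (distributed) file system and hypothetical field names, might look like this:

```python
import json

def write_checkpoint(path, state, offsets):
    """Persist service state together with the input-stream offsets
    it corresponds to, so a restarted instance can reload the state
    and resume reading each input at exactly the right position."""
    with open(path, "w") as f:
        json.dump({"state": state, "offsets": offsets}, f)

def read_checkpoint(path):
    """Load the most recent state and offsets on restart."""
    with open(path) as f:
        return json.load(f)
```

A restarted service reads the checkpoint, seeks each input stream to the saved offset, and continues as if it had never stopped.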

The use of distributed files and databases is shown in the new design for the video example, which is explained in the next section.

New Design for the Video Service

Putting all these improvements together, we see how the modern design would play out for the video example. These changes include:

  • Designing for a universal microservices architecture
  • Maintaining the independence of microservices by connecting them with lightweight communications via a uniform message-passing technology (such as Apache Kafka or MapR Streams) that is durable and high performance
  • Using distributed files, NoSQL databases, and snapshots

Now with this microservices and streaming approach in the new design, the purpose of each processing element in this architecture is apparent in Figure 3-8 and can be described in just a few sentences. More importantly, a prototype for each can be implemented in a few hours of work.

Figure 3-8. Modern design for the online video service example. Here we put all these ideas into practice in our example. Each processing element is represented by a rectangle. We indicate file or database persistence by a vertical cylinder. For simplicity, the connections between microservices that are supported by uniform messaging technology are shown by arrows labeled with the content of the stream, rather than the tubular symbol used in Figure 3-5 and Figure 3-6. Note that this diagram is further simplified in that the metrics streams are not shown. Elements common to the old design are shaded.

For instance, the file upload service stores the raw video as a file in a distributed file store and sends out a record with information such as title along with the file name where the video data can be found. The thumbnail extractor reads these records and processes the video file to produce a number of image files and an augmented video record that now includes a list of the thumbnail files. The transcoder does a similar operation to produce versions of the video in different sizes and encoding qualities.

Records from the thumbnail extractor and transcoder are joined together to form a full description of the video (video metadata) and sent to processes that store this information into databases, thus providing convenient access for a variety of analysts. Note, too, that in addition to these description records being written to live databases, they can also be used to create snapshots of the database so that up-to-date copies of the metadata database can be created at will.
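The join step can be sketched as a simple keyed merge of the two record streams; the field names here (`video_id`, `thumbnails`, `renditions`) are hypothetical:

```python
def join_video_metadata(thumb_records, transcode_records):
    """Join per-video records from the thumbnail and transcoder
    streams by video_id to form full video-metadata records."""
    transcodes = {r["video_id"]: r for r in transcode_records}
    joined = []
    for t in thumb_records:
        tr = transcodes.get(t["video_id"])
        if tr is not None:
            # Merge the two partial records into one metadata record.
            joined.append({**t, **tr})
    return joined
```

In a real streaming system this join would run continuously over the two streams rather than over finished lists, but the keyed-merge logic is the same.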

Summary: The Converged Platform View

For a microservices streaming architecture to work well, there are some constraints on how the platform that supports it has to work.

First, all of the services have to use a consistent and ubiquitous transport mechanism for streaming data. This is different from how synchronous microservices need to work because making a synchronous request to a service implies that the service itself can receive the query and provide the response. That means that you don’t need much in the way of infrastructure to support synchronous microservices other than a solid network connection, but that approach limits flexibility and requires more administrative overhead.

Instead, we recommend asynchronous streaming services. With this streaming infrastructure approach, the receiving service may not even be running when the message is sent to a stream. This implies that it is important for the stream itself to be able to persist messages until they can be read and processed.

Another critical need is that new instances of the service will need to load any state that is being maintained by the service. This typically implies that old messages will need to be reread by this new instance as it gets ready to run, and that implies that those messages will need to stick around after the original instance of the service has already read them. Debugging or forensic examinations of services can also require that old messages be examined.
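Rebuilding state by replaying retained messages can be sketched as follows, with an in-memory list standing in for the persistent stream and a simple per-video count as the hypothetical state:

```python
def rebuild_state(stream, from_offset=0):
    """Replay retained messages from a known offset to reconstruct
    the state a previous instance was maintaining.

    Returns the rebuilt state plus the next offset to read from,
    so the new instance can pick up live processing where the
    replay ended."""
    counts = {}
    for msg in stream[from_offset:]:
        vid = msg["video_id"]
        counts[vid] = counts.get(vid, 0) + 1
    return counts, len(stream)
```

This only works if the messaging layer retains messages after they have been read once, which is exactly why replayable, persistent streams are called for here.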

The current best practice for meeting these requirements is to use a replayable persistent messaging system such as Kafka or MapR Streams throughout a streaming system.

Many systems like the video processing chain described in this chapter manipulate large objects that can range from tens of megabytes in size to several gigabytes. While it is possible to write very large messages to messaging systems, it is usually considered bad practice. Large operations are better read or written using a file-like API with distinct open, read, write, and close operations. Message-passing APIs, on the other hand, typically require that an entire message be passed in a single call. This means that objects larger than a few megabytes should be passed by alternative methods, such as a distributed file system. The video processing chain does just this by writing the video files, thumbnail images, and converted videos to files and passing references to these files in messages.

The upshot of all of this is that successful implementation of a modern streaming architecture typically requires that Kafka-like message passing and a distributed file system be provided as a utility for all services. The adoption of a converged system that supports messaging, files, and tables makes asynchronous microservices much easier to build and maintain.