By Adam Boduch

Learn to build powerful and scalable applications with Flux, the architecture that serves billions of Facebook users every day


  • This is the first resource dedicated to the new architectural pattern that powers Facebook
  • You’ll learn all the tips and tricks you need to get the most out of Flux
  • Filled with practical, hands-on samples that ensure you’ll not only understand how Flux works, but be able to start building Flux-powered applications straight away
  • Written by Adam Boduch, software architect at Virtustream (EMC), and author of JavaScript at Scale, JavaScript Concurrency, and jQuery UI Cookbook for Packt Publishing

  • Understand the Flux pattern and how it will impact your React applications
  • Build real-world applications that rely on Flux
  • Handle asynchronous actions in your application
  • Implement immutable stores with Immutable.js
  • Replace React.js with alternate View components such as jQuery and Handlebars
  • Test and benchmark your Flux architecture using Jest—Facebook’s enhancement of the Jasmine library

While React has become Facebook’s poster child for clean, modern web development, it has quietly been underpinned by its simplicity. It’s just a view. The real beauty in React is actually the architectural pattern that handles data in and out of React applications: Flux. With Flux, you’re able to build data-rich applications that engage your users and scale to meet every demand. It is a key part of the Facebook technology stack that serves billions of users every day.

This book will start by introducing the Flux pattern and help you get an understanding of what it is and how it works. After this, we’ll build real-world React applications that highlight the power and simplicity of Flux in action. Finally, we’ll look at the landscape of Flux and explore the Alt and Redux libraries that make React and Flux development easier.

Filled with fully worked examples and code-first explanations, this book will leave you not only with a rock-solid understanding of the architecture, but ready to implement Flux architecture in anger.

Chapter 1. What is Flux?

Flux is supposed to be this great new way of building complex user interfaces that scale well. At least that’s the general messaging around Flux, if you’re only skimming the Internet literature. But, how do we define this great new way of building user interfaces? What makes it superior to other more established frontend architectures?

The aim of this chapter is to cut through the sales bullet points and explicitly spell out what Flux is, and what it isn’t, by looking at the patterns that Flux provides. And since Flux isn’t a software package in the traditional sense, we’ll go over the conceptual problems that we’re trying to solve with Flux.

Finally, we’ll close the chapter by walking through the core components found in any Flux architecture, and we’ll install the Flux npm package and write a hello world Flux application right away. Let’s get started.

Flux is a set of patterns

We should probably get the harsh reality out of the way first—Flux is not a software package. It’s a set of architectural patterns for us to follow. While this might sound disappointing to some, don’t despair—there are good reasons for not implementing yet another framework. Throughout the course of this book, we’ll see the value of Flux existing as a set of patterns instead of a de facto implementation. For now, we’ll go over some of the high-level architectural patterns put in place by Flux.

Data entry points

With traditional approaches to building frontend architectures, we don’t put much thought into how data enters the system. We might entertain the idea of data entry points, but not in any detail. For example, with MVC (Model View Controller) architectures, the controller is supposed to control the flow of data. And for the most part, it does exactly that. On the other hand, the controller is really just about controlling what happens after it already has the data. How does the controller get data in the first place? Consider the following illustration:

At first glance, there’s nothing wrong with this picture. The data-flow, represented by the arrows, is easy to follow. But where does the data originate? For example, the view can create new data and pass it to the controller, in response to a user event. A controller can create new data and pass it to another controller, depending on the composition of our controller hierarchy. What about the controller in question—can it create data itself and then use it?

In a diagram such as this one, these questions don’t have much virtue. But, if we’re trying to scale an architecture to have hundreds of these components, the points at which data enters the system become very important. Since Flux is used to build architectures that scale, it considers data entry points an important architectural pattern.

Managing state

State is one of those realities we need to cope with in frontend development. Unfortunately, we can’t compose our entire application of pure functions with no side-effects for two reasons. First, our code needs to interact with the DOM interface, in one way or another. This is how the user sees changes in the UI. Second, we don’t store all our application data in the DOM (at least we shouldn’t do this). As time passes and the user interacts with the application, this data will change.

There’s no cut-and-dried approach to managing state in a web application, but there are several ways to limit the amount of state changes that can happen, and enforce how they happen. For example, pure functions don’t change the state of anything; they can only create new data. Here’s an example of what this looks like:
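A small JavaScript sketch of a pure function (the names here are illustrative, not taken from any particular library):

```javascript
// A pure function: it never mutates its arguments, it only
// creates and returns new data.
function addItem(list, item) {
  // concat() returns a new array; "list" is left untouched.
  return list.concat([item]);
}

const original = ['a', 'b'];
const updated = addItem(original, 'c');
// original is still ['a', 'b']; updated is a new array ['a', 'b', 'c']
```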

As you can see, there are no side-effects with pure functions because no data changes state as a result of calling them. So why is this a desirable trait, if state changes are inevitable? The idea is to enforce where state changes happen. For example, perhaps we only allow certain types of components to change the state of our application data. This way, we can rule out several sources as the cause of a state change.

Flux is big on controlling where state changes happen. Later on in the chapter, we’ll see how Flux stores manage state changes. What’s important about how Flux manages state is that it’s handled at an architectural layer. Contrast this with an approach that merely lays out a set of rules about which component types are allowed to mutate application data, and things quickly get confusing. With Flux, there’s less room for guessing where state changes take place.

Keeping updates synchronous

Complementary to data entry points is the notion of update synchronicity. That is, in addition to managing where the state changes originate from, we have to manage the ordering of these changes relative to other things. If the data entry points are the what of our data, then synchronously applying state changes across all the data in our system is the when.

Let’s think about why this matters for a moment. In a system where data is updated asynchronously, we have to account for race conditions. Race conditions can be problematic because one piece of data can depend on another, and if they’re updated in the wrong order, we see cascading problems, from one component to another. Take a look at this diagram, which illustrates this problem:

When something is asynchronous, we have no control over when that something changes state. So, all we can do is wait for the asynchronous updates to happen, and then go through our data and make sure all of our data dependencies are satisfied. Without tools that automatically handle these dependencies for us, we end up writing a lot of state-checking code.

Flux addresses this problem by ensuring that the updates that take place across our data stores are synchronous. This means that the scenario illustrated in the preceding diagram isn’t possible. Here’s a better visualization of how Flux handles the data synchronization issues that are typical of JavaScript applications today:

Information architecture

It’s easy to forget that we work in information technology and that we should be building technology around information. In recent times, however, we seem to have moved in the other direction, where we’re forced to think about implementation before we think about information. More often than not, the data exposed by the sources used by our application doesn’t have what the user needs. It’s up to our JavaScript to turn this raw data into something consumable by the user. This is our information architecture.

Does this mean that Flux is used to design information architectures as opposed to a software architecture? This isn’t the case at all. In fact, Flux components are realized as true software components that perform actual computations. The trick is that Flux patterns enable us to think about information architecture as a first-class design consideration. Rather than having to sift through all sorts of components and their implementation concerns, we can make sure that we’re getting the right information to the user.

Once our information architecture takes shape, the larger architecture of our application follows, as a natural extension to the information we’re trying to communicate to our users. Producing information from data is the difficult part. We have to distill many sources of data into not only information, but information that’s also of value to the user. Getting this wrong is a huge risk for any project. When we get it right, we can then move on to the specific application components, like the state of a button widget, and so on.

Flux architectures keep data transformations confined to their stores. A store is an information factory—raw data goes in and new information comes out. Stores control how data enters the system, the synchronicity of state changes, and they define how the state changes. When we go into more depth on stores as we progress through the book, we’ll see how they’re the pillars of our information architecture.

Flux isn’t another framework

Now that we’ve explored some of the high-level patterns of Flux, it’s time to revisit the question: what is Flux again? Well, it is just a set of architectural patterns we can apply to our frontend JavaScript applications. Flux scales well because it puts information first. Information is the most difficult aspect of software to scale; Flux tackles information architecture head on.

So, why aren’t Flux patterns implemented as a framework? This way, Flux would have a canonical implementation for everyone to use, and like any other large-scale open source project, the code would improve over time as the project matured.

The main problem is that Flux operates at an architectural level. It’s used to address information problems that prevent a given application from scaling to meet user demand. If Facebook decided to release Flux as yet another JavaScript framework, it would likely have the same types of implementation issues that plague other frameworks out there. For example, if some component in a framework isn’t implemented in a way that best suits the project we’re working on, then it’s not so easy to implement a better alternative, without hacking the framework to bits.

What’s nice about Flux is that Facebook decided to leave the implementation options on the table. They do provide a few Flux component implementations, but these are reference implementations. They’re functional, but the idea is that they’re a starting point for us to understand the mechanics of how things such as dispatchers are expected to work. We’re free to implement the same Flux architectural pattern as we see it.

Flux isn’t a framework. Does this mean we have to implement everything ourselves? No, we do not. In fact, developers are implementing Flux libraries and releasing them as open source projects. Some Flux libraries stick more closely to the Flux patterns than others. These implementations are opinionated, and there’s nothing wrong with using them if they’re a good fit for what we’re building. The Flux patterns aim to solve generic conceptual problems with JavaScript development, so you’ll learn what they are before diving into Flux implementation discussions.

Flux solves conceptual problems

If Flux is simply a collection of architectural patterns instead of a software framework, what sort of problems does it solve? In this section, we’ll look at some of the conceptual problems that Flux addresses from an architectural perspective. These include unidirectional data-flow, traceability, consistency, component layering, and loosely coupled components. Each of these conceptual problems poses a degree of risk to our software, in particular the ability to scale it. Flux helps us get out in front of these issues as we’re building the software.

Data flow direction

We’re creating an information architecture to support the feature-rich application that will ultimately sit on top of this architecture. Data flows into the system and will eventually reach an endpoint, terminating the flow. It’s what happens in between the entry point and the termination point that determines the data-flow within a Flux architecture. This is illustrated here:

Data flow is a useful abstraction, because it’s easy to visualize data as it enters the system and moves from one point to another. Eventually, the flow stops. But before it does, several side-effects happen along the way. It’s that middle block in the preceding diagram that’s concerning, because we don’t know exactly how the data-flow reached the end.

Let’s say that our architecture doesn’t pose any restrictions on data flow. Any component is allowed to pass data to any other component, regardless of where that component lives. Let’s try to visualize this setup:

As you can see, our system has clearly defined entry and exit points for our data. This is good because it means that we can confidently say that data flows through our system. The problem with this picture is how the data flows between the components of the system. There’s no direction, or rather, it’s multidirectional. This isn’t a good thing.

Flux is a unidirectional data flow architecture. This means that the preceding component layout isn’t possible. The question is—why does this matter? At times, it might seem convenient to be able to pass data around in any direction, that is, from any component to any other component. This in and of itself isn’t the issue—passing data alone doesn’t break our architecture. However, when data moves around our system in more than one direction, there’s more opportunity for components to fall out of sync with one another. This simply means that if data doesn’t always move in the same direction, there’s always the possibility of ordering bugs.

Flux enforces the direction of data flow, and thus eliminates the possibility of components updating themselves in an order that breaks the system. No matter what data has just entered the system, it’ll always flow through the system in the same order as any other data, as illustrated here:

Predictable root cause

With data entering our system and flowing through our components in one direction, we can more easily trace any effect to its cause. In contrast, when a component sends data to any other component residing in any architectural layer, it’s a lot more difficult to figure out how the data reached its destination. Why does this matter? Debuggers are sophisticated enough that we can easily traverse any level of complexity during runtime. The problem with this notion is that it presumes we only need to trace what’s happening in our code for the purposes of debugging.

Flux architectures have inherently predictable data-flows. This is important for a number of design activities and not just debugging. Programmers working on Flux applications will begin to intuitively sense what’s going to happen. Anticipation is key, because it lets us avoid design dead-ends before we hit them. When the cause and effect are easy to tease out, we can spend more time focusing on building application features—the things the customers care about.

Consistent notifications

The direction in which we pass data from component to component in Flux architectures should be consistent. In terms of consistency, we also need to think about the mechanism used to move data around our system.

For example, publish/subscribe (pub/sub) is a popular mechanism used for inter-component communication. What’s neat about this approach is that our components can communicate with one another, and yet we’re able to maintain a level of decoupling. In fact, this is fairly common in frontend development because component communication is largely driven by user events. These events can be thought of as fire-and-forget. Any other components that want to respond to these events in some way need to take it upon themselves to subscribe to the particular event.

While pub/sub does have some nice properties, it also poses architectural challenges, in particular scaling complexities. For example, let’s say that we’ve just added several new components for a new feature. Well, in which order do these components receive update messages relative to pre-existing components? Do they get notified after all the pre-existing components? Should they come first? This presents a data dependency scaling issue.

The other challenge with pub/sub is that the published events are often so fine-grained that we’ll want to subscribe to, and later unsubscribe from, the notifications. This leads to consistency challenges, because coding these lifecycle changes correctly when there’s a large number of components in the system is difficult, and presents opportunities for missed events.

The idea with Flux is to sidestep the issue by maintaining a static inter-component messaging infrastructure that issues notifications to every component. In other words, programmers don’t get to pick and choose the events their components will subscribe to. Instead, they have to figure out which of the events that are dispatched to them are relevant, ignoring the rest. Here’s a visualization of how Flux dispatches events to components:

The Flux dispatcher sends the event to every component; there’s no getting around this. Instead of trying to fiddle with the messaging infrastructure, which is difficult to scale, we implement logic within the component to determine whether or not the message is of interest. It’s also within the component that we can declare dependencies on other components, which helps influence the ordering of messages. We’ll cover this in much more detail in later chapters.
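As a rough sketch of this idea, using a hand-rolled stand-in for the dispatcher rather than the real flux package, each registered callback receives every action and filters for itself (the store and action names here are invented):

```javascript
// Stand-in dispatcher: there are no per-event subscriptions;
// every registered callback receives every dispatched action.
const callbacks = [];
const register = (callback) => callbacks.push(callback);
const dispatch = (action) => callbacks.forEach((cb) => cb(action));

const handled = [];

// A "user" store: decides locally which actions are relevant.
register((action) => {
  if (action.type === 'USER_LOADED') {
    handled.push('user store: ' + action.type);
  }
});

// A "session" store: ignores everything but session actions.
register((action) => {
  if (action.type === 'SESSION_EXPIRED') {
    handled.push('session store: ' + action.type);
  }
});

// Both callbacks are invoked for both actions; each one simply
// ignores the action that isn't of interest to it.
dispatch({ type: 'USER_LOADED' });
dispatch({ type: 'SESSION_EXPIRED' });
// handled: ['user store: USER_LOADED', 'session store: SESSION_EXPIRED']
```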

Simple architectural layers

Layers can be a great way to organize an architecture of components. For one thing, it’s an obvious way to categorize the various components that make up our application. For another thing, layers serve as a means to put constraints around communication paths. This latter point is especially relevant to Flux architectures since it’s important that data flow in one direction. It’s much easier to apply constraints to layers than it is to individual components. Here is an illustration of Flux layers:

Note

This diagram isn’t intended to capture the entire data flow of a Flux architecture, just how data flows between the three main layers. It also doesn’t give any detail about what’s in the layers. Don’t worry, the next section gives introductory explanations of the types of Flux components, and the communication that happens between the layers is the focus of this book.

As you can see, data flows from one layer to the next, in one direction. Flux only has a few layers, and as our applications scale in terms of component count, the layer count remains fixed. This puts a cap on the complexity involved with adding new features to an already large application. In addition to constraining the layer count and the data-flow direction, Flux architectures are strict about which layers are actually allowed to communicate with one another.

For example, the action layer could communicate with the view layer, and we would still be moving in one direction. We would still have the layers that Flux expects. However, skipping a layer like this is prohibited. By ensuring that each layer only communicates with the layer directly beneath it, we can rule out bugs introduced by doing something out of order.

Loosely coupled rendering

One decision made by the Flux designers that stands out is that Flux architectures don’t care how UI elements are rendered. That is to say, the view layer is loosely coupled to the rest of the architecture. There are good reasons for this.

Flux is an information architecture first, and a software architecture second. We start with the former and graduate toward the latter. The challenge with view technology is that it can exert a negative influence on the rest of the architecture. For example, a given view technology has a particular way of interacting with the DOM. If we’ve already decided on that technology, we’ll end up letting it influence the way our information architecture is structured. This isn’t necessarily a bad thing, but it can lead to us making concessions about the information we ultimately display to our users.

What we should really be thinking about is the information itself and how this information changes over time. What actions are involved that bring about these changes? How is one piece of data dependent on another piece of data? Flux naturally removes itself from the browser technology constraints of the day so that we can focus on the information first. It’s easy to plug views into our information architecture as it evolves into a software product.

Flux components

In this section, we’ll begin our journey into the concepts of Flux. These concepts are the essential ingredients used in formulating a Flux architecture. While there are no detailed specifications for how these components should be implemented, they nevertheless lay the foundation of our implementation. This is a high-level introduction to the components we’ll be implementing throughout this book.

Action

Actions are the verbs of the system. In fact, it’s helpful if we derive the name of an action directly from a sentence. These sentences are typically statements of functionality – something we want the application to do. Here are some examples:

  • Fetch the session
  • Navigate to the settings page
  • Filter the user list
  • Toggle the visibility of the details section

These are simple capabilities of the application, and when we implement them as part of a Flux architecture, actions are the starting point. These human-readable action statements often require other new components elsewhere in the system, but the first step is always an action.

So, what exactly is a Flux action? At its simplest, an action is nothing more than a string—a name that helps identify the purpose of the action. More typically, actions consist of a name and a payload. Don’t worry about the payload specifics just yet—as far as actions are concerned, they’re just opaque pieces of data being delivered into the system. Put differently, actions are like mail parcels. The entry point into our Flux system doesn’t care about the internals of the parcel, only that they get to where they need to go. Here’s an illustration of actions entering a Flux system:

This diagram might give the impression that actions are external to Flux, when in fact they’re an integral part of the system. The reason this perspective is valuable is that it forces us to think about actions as the only means to deliver new data into the system.
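For example, actions might be plain objects along these lines (the names and payload are invented for illustration):

```javascript
// The simplest possible action: just a name.
const toggleDetails = { type: 'TOGGLE_DETAILS' };

// More typically, an action carries a payload as well. The rest
// of the system treats the payload as an opaque parcel of data.
const fetchSession = {
  type: 'FETCH_SESSION',
  payload: { token: 'abc123' } // illustrative payload
};
```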

Note

Golden Flux Rule: If it’s not an action, it can’t happen.

Dispatcher

The dispatcher in a Flux architecture is responsible for distributing actions to the store components (we’ll talk about stores next). A dispatcher is actually kind of like a broker—if actions want to deliver new data to a store, they have to talk to the broker, so it can figure out the best way to deliver them. Think about a message broker in a system like RabbitMQ. It’s the central hub where everything is sent before it’s actually delivered. Here is a diagram depicting a Flux dispatcher receiving actions and dispatching them to stores:

The earlier section of this chapter—”simple architectural layers”—didn’t have an explicit layer for dispatchers. That was intentional. In a Flux application, there’s only one dispatcher. It can be thought of more as a pseudo layer than an explicit layer. We know the dispatcher is there, but it’s not essential to this level of abstraction. What we’re concerned about at an architectural level is making sure that when a given action is dispatched, we know that it’s going to make its way to every store in the system.

Having said that, the dispatcher’s role is critical to how Flux works. It’s the place where store callback functions are registered, and it’s how data dependencies are handled. Stores tell the dispatcher about the other stores they depend on, and it’s up to the dispatcher to make sure these dependencies are properly handled.
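To illustrate the idea, here’s a simplified sketch of dependency handling, loosely modeled on the dispatcher’s waitFor() mechanism; this is not the actual flux package implementation, and the store names are made up:

```javascript
// Simplified dependency handling, loosely modeled on the
// dispatcher's waitFor() idea (not the real implementation).
const registered = [];

function register(callback) {
  registered.push({ callback, handled: false });
  return registered.length - 1; // token, used to declare dependencies
}

// Runs the callbacks we depend on first, if they haven't run yet.
function waitFor(tokens, action) {
  for (const token of tokens) {
    const entry = registered[token];
    if (!entry.handled) {
      entry.handled = true;
      entry.callback(action);
    }
  }
}

function dispatch(action) {
  registered.forEach((entry) => { entry.handled = false; });
  for (const entry of registered) {
    if (!entry.handled) {
      entry.handled = true;
      entry.callback(action);
    }
  }
}

const order = [];
let userToken; // assigned below; the feed store depends on it

// The feed store registers first, but declares a dependency on
// the user store, so the dispatcher runs the user store first.
register((action) => {
  waitFor([userToken], action);
  order.push('feed');
});
userToken = register(() => order.push('user'));

dispatch({ type: 'LOAD' });
// order: ['user', 'feed'], despite the registration order
```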

Note

Golden Flux Rule: The dispatcher is the ultimate arbiter of data dependencies.

Store

Stores are where state is kept in a Flux application. Typically, this means the application data that’s sent to the frontend from the API. However, Flux stores take this a step further and explicitly model the state of the entire application. If this sounds confusing or like a generally bad idea, don’t worry—we’ll clear this up as we make our way through subsequent chapters. For now, just know that stores are where state that matters can be found. Other Flux components don’t have state—they have implicit state at the code level, but we’re not interested in this, from an architectural point of view.

Actions are the delivery mechanism for new data entering the system. The term new data doesn’t imply that we’re simply appending it to some collection in a store. All data entering the system is new in the sense that it hasn’t been dispatched as an action yet—it could in fact result in a store changing state. Let’s look at a visualization of an action that results in a store changing state:

The key aspect of how stores change state is that there’s no external logic that determines a state change should happen. It’s the store, and only the store, that makes this decision and then carries out the state transformation. This is all tightly encapsulated within the store. This means that when we need to reason about particular information, we need not look any further than the stores. They’re their own boss—they’re self-employed.
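A bare-bones sketch of this encapsulation might look like the following (the store and action names are invented for illustration):

```javascript
// The store owns its state; nothing outside the store decides
// whether, or how, that state changes.
class UserStore {
  constructor() {
    this.state = { first: '', last: '' };
  }

  // Invoked by the dispatcher for every action. The store alone
  // decides that 'SET_NAME' warrants a state transformation.
  handleAction(action) {
    if (action.type === 'SET_NAME') {
      this.state = {
        first: action.payload.first,
        last: action.payload.last
      };
    }
  }
}

const store = new UserStore();
store.handleAction({
  type: 'SET_NAME',
  payload: { first: 'Ada', last: 'Lovelace' }
});
// store.state is now { first: 'Ada', last: 'Lovelace' }
```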

Note

Golden Flux Rule: Stores are where state lives, and only stores themselves can change this state.

View

The last Flux component we’re going to look at in this section is the view, and it technically isn’t even a part of Flux. At the same time, views are obviously a critical part of our application. Views are almost universally understood as the part of our architecture that’s responsible for displaying data to the user—it’s the last stop as data flows through our information architecture. For example, in MVC architectures, views take model data and display it. In this sense, views in a Flux-based application aren’t all that different from MVC views. Where they differ markedly is with regard to handling events. Let’s take a look at the following diagram:

Here we can see the contrasting responsibilities of a Flux view, compared with a view component found in your typical MVC architecture. The two view types have similar types of data flowing into them—application data used to render the component and events (often user input). What’s different between the two types of view is what flows out of them.

The typical view doesn’t really have any constraints in how its event handler functions communicate with other components. For example, in response to a user clicking a button, the view could directly invoke behavior on a controller, change the state of a model, or it might query the state of another view. On the other hand, the Flux view can only dispatch new actions. This keeps our single entry point into the system intact and consistent with other mechanisms that want to change the state of our store data. In other words, an API response updates state in the exact same way as a user clicking a button does.
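A sketch of this constraint, using a stand-in dispatch function (the handler and action names here are invented):

```javascript
// Stand-in dispatcher; in a real application this would be the
// application's single Flux dispatcher.
const dispatched = [];
const dispatch = (action) => dispatched.push(action);

// A user event and an API response both enter the system the
// same way: by dispatching an action. The view never mutates a
// store directly.
function onSaveClicked() {
  dispatch({ type: 'SAVE_SETTINGS' });
}

function onSettingsResponse(data) {
  dispatch({ type: 'SETTINGS_LOADED', payload: data });
}

onSaveClicked();
onSettingsResponse({ theme: 'dark' });
// dispatched action types: ['SAVE_SETTINGS', 'SETTINGS_LOADED']
```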

Given that views should be restricted in terms of how data flows out of them (besides DOM updates) in a Flux architecture, you might think that views should be actual Flux components. This would make sense insofar as it would formally make actions the only output option for views. However, there’s no reason we can’t enforce this constraint anyway, with the benefit being that Flux remains entirely focused on creating information architectures.

Keep in mind, however, that Flux is still in its infancy. There are no doubt going to be external influences as more people start adopting Flux. Maybe Flux will have something to say about views in the future. Until then, views exist outside of Flux but are constrained by the unidirectional nature of Flux.

Note

Golden Flux Rule: The only way data flows out of a view is by dispatching an action.

Installing the Flux package

We’ll close the first chapter by getting our feet wet with some code, because everyone needs a hello world application under their belt. We’ll also get some of our boilerplate setup tasks out of the way, since we’ll be using a similar setup throughout the book.

Note

We’ll skip going over Node + NPM installation since it’s sufficiently covered in great detail all over the Internet. We’ll assume Node is installed and ready to go from this point forward.

The first NPM package we’ll need to install is Webpack. This is an advanced module bundler that’s well suited for modern JavaScript applications, including Flux-based applications. We’ll want to install this package globally so that the webpack command gets installed on our system:

npm install webpack -g

With Webpack in place, we can build each of the code examples that ship with this book. However, our project does require a couple of local NPM packages, and these can be installed as follows:

npm install flux babel-core babel-loader babel-preset-es2015 --save-dev

The --save-dev option adds these development dependencies to our package.json file, if one exists. This is just to get started—it isn’t necessary to manually install these packages to run the code examples in this book. The examples you’ve downloaded already come with a package.json, so to install the local dependencies, simply run the following from within the same directory as the package.json file:

npm install

Now the webpack command can be used to build the example. This is the only example in the first chapter, so it’s easy to navigate to within a terminal window and run the webpack command, which builds the main-bundle.js file. Alternatively, if you plan on playing with the code, which is obviously encouraged, try running webpack --watch. This latter form of the command will monitor for file changes to the files used in the build, and run the build whenever they change.
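If you’re assembling the build by hand rather than using the downloaded examples, a webpack (1.x-era) configuration along these lines would wire up babel-loader with the es2015 preset. The entry and output file names here are assumptions for illustration, not the book’s exact files:

```javascript
// Hypothetical webpack.config.js; file names are illustrative.
module.exports = {
  entry: './main.js',
  output: {
    filename: 'main-bundle.js'
  },
  module: {
    loaders: [
      {
        test: /\.js$/,           // transpile our .js modules...
        exclude: /node_modules/, // ...but not third-party code
        loader: 'babel-loader',
        query: { presets: ['es2015'] }
      }
    ]
  }
};
```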

This is indeed a simple hello world to get us off to a running start, in preparation for the remainder of the book. We’ve taken care of all the boilerplate setup tasks by installing Webpack and its supporting modules. Let’s take a look at the code now. We’ll start by looking at the markup that’s used.

<!doctype html>
<html>
  <head>
    <title>Hello Flux</title>
    <script src="main-bundle.js" defer></script>
  </head>
  <body></body>
</html>

Not a lot to it is there? There isn’t even content within the body tag. The important part is the main-bundle.js script—this is the code that’s built for us by Webpack. Let’s take a look at this code now:

// Imports the "flux" module.
import * as flux from 'flux';

// Creates a new dispatcher instance. "Dispatcher" is
// the only useful construct found in the "flux" module.
const dispatcher = new flux.Dispatcher();

// Registers a callback function, invoked every time
// an action is dispatched.
dispatcher.register((e) => {
  let p;

  // Determines how to respond to the action. In this case,
  // we're simply creating new content using the "payload"
  // property. The "type" property determines how we create
  // the content.
  switch (e.type) {
    case 'hello':
      p = document.createElement('p');
      p.textContent = e.payload;
      document.body.appendChild(p);
      break;
    case 'world':
      p = document.createElement('p');
      p.textContent = `${e.payload}!`;
      p.style.fontWeight = 'bold';
      document.body.appendChild(p);
      break;
    default:
      break;
  }
});

// Dispatches a "hello" action.
dispatcher.dispatch({
  type: 'hello',
  payload: 'Hello'
});

// Dispatches a "world" action.
dispatcher.dispatch({
  type: 'world',
  payload: 'World'
});

As you can see, there’s not much to this hello world Flux application. In fact, the only Flux-specific component this code creates is a dispatcher. It then dispatches a couple of actions, and the handler function that’s registered with the dispatcher processes them.

Don’t worry that there are no stores or views in this example. The idea is that we’ve got the basic Flux NPM package installed and ready to go.

Summary

This chapter introduced you to Flux. Specifically, we looked at both what Flux is and what it isn’t. Flux is a set of architectural patterns that, when applied to our JavaScript application, help with getting the data-flow aspect of our architecture right. Flux isn’t yet another framework used for solving specific implementation challenges, be it browser quirks or performance gains—there’s a multitude of tools already available for these purposes. Perhaps the most important defining aspect of Flux is the set of conceptual problems it solves, through ideas like unidirectional data flow. This is a major reason that there’s no de facto Flux implementation.

We wrapped the chapter up by walking through the setup of our build components used throughout the book. To test that the packages are all in place, we created a very basic hello world Flux application.

Now that we have a handle on what Flux is, it’s time for us to look at why Flux is the way it is. In the following chapter, we’ll take a more detailed look at the principles that drive the design of Flux applications.

Chapter 2. Principles of Flux

In the previous chapter, you were introduced at a 10,000 foot level to some of the core Flux principles. For example, unidirectional data-flow is central to Flux’s existence. The aim of this chapter is to go beyond the simplistic view of Flux principles.

We’ll kick things off with a bit of an MVC retrospective—to identify where it falls apart when we’re trying to scale a frontend architecture. Following this, we’ll take a deeper look at unidirectional data-flow and how it solves some of the scaling issues we’ve identified in MVC architectures.

Next, we’ll address some high-level compositional issues faced by Flux architectures, such as making everything explicit and favoring layers over deep hierarchies. Finally, we’ll compare the various kinds of state found in a Flux architecture and introduce the concept of an update round.

Challenges with MV*

MV* is the prevailing architectural pattern of frontend JavaScript applications. We’re referring to this as MV* because there’s a number of accepted variations on the pattern, each of which has models and views as core concepts. For our discussions in this book, they can all be considered the same style of JavaScript architecture.

MV* didn’t gain traction in the development community because it’s a terrible set of patterns. No, MV* is popular because it works. Although Flux can be thought of as a sort of MV* replacement, there’s no need to go out and tear apart a working application.

There’s no such thing as a perfect architecture, and Flux is by no means immune to this fact. The goal of this section isn’t to downplay MV* and all the things it does well, but rather to look at some of the MV* weaknesses and see how Flux steps in and improves the situation.

Separation of concerns

One thing MV* is really good at is establishing a clear separation of concerns. That is, a component has one responsibility, while another component is responsible for something else, and so on, all throughout the architecture. Complementary to the separation of concerns principle is the single responsibility principle, which keeps each component focused on doing one thing well.

Why do we care though? The simple answer is that when we separate responsibilities into different components, different parts of the system are naturally decoupled from one another. This means that we can change one thing without necessarily impacting the other. This is a desired trait of any software system, regardless of the architecture. But, is this really what we get with MV*, and is this actually something we should shoot for?

For example, maybe there’s no clear advantage in dividing a feature into five distinct responsibilities. Maybe the decoupling of the feature’s behavior doesn’t actually achieve anything because we would have to touch all five components every time we want to change something anyway. So rather than help us craft a robust architecture, the separation of concerns principle has amounted to nothing more than needless indirection that hampers productivity. Here’s an example of a feature that’s broken down into several pieces of focused responsibility:

Anytime a developer needs to pull apart a feature so that they can understand how it works, they end up spending more time jumping between source code files. The feature feels fragmented, and there’s no obvious advantage to structuring the code like this. Here’s a look at the moving parts that make up a feature in a Flux architecture:

The Flux feature decomposition leaves us with a feeling of predictability. We’ve left out the potential ways in which the view itself could be decomposed, but that’s because the views are outside Flux. All we care about in terms of our Flux architecture is that the correct information is always passed to our views when state changes occur.

You’ll note that the logic and state of a given Flux feature are tightly coupled with one another. This is in contrast to MV*, where we want application logic to be a standalone entity that can operate on any data. The opposite is true with Flux, where we’ll find the logic responsible for changing state in close proximity to that state. This is an intentional design trait, with the implication being that we don’t need to get carried away with separating concerns from one another, and that this activity can sometimes hurt rather than help.

As we’ll see in the coming chapters, this tight coupling of data and logic is characteristic of Flux stores. The preceding diagram shows that with complex features, it’s much easier to add more logic and more state, because they’re always near the surface of the feature, rather than buried in a nested tree of components.

Cascading updates

It’s nice when we have a software component that just works. This could mean any number of things, but its meaning is usually centered around automatically handling things for us. For instance, instead of manually having to invoke this method, followed by that method, and so on, everything is handled by the component for us. Let’s take a look at the following illustration:

When we pass input into a larger component, we can expect that it will do the right thing automatically for us. What’s compelling about these types of components is that it means less code for us to maintain. After all, the component knows how to update itself by orchestrating the communication between any subcomponents.

This is where the cascading effect begins. We give one component some input, which causes another component to react, which in turn causes yet another component to react, and so on. Soon, it’s very difficult to comprehend what’s going on in our code. This is because the things that are taken care of for us are hidden from view. Intentional by design, with unintended consequences.

The previous diagram isn’t too bad. Sure, it might get a little difficult to follow depending on how many subcomponents get added to the larger component, but in general, it’s a tractable problem. Let’s look at a variation of this diagram:

What just happened? Three more boxes and four more lines just happened, resulting in an explosion of cascading update complexity. The problem is no longer tractable because we simply cannot handle this type of complexity, and most MV* applications that rely on this type of automatic updating have way more than six components. The best we can hope for is that once it works the way we want it to, it keeps working.

This is the naive assumption that we make about automatically updating components—this is something we want to encapsulate. The problem is that this generally isn’t true, at least not if we ever plan to maintain the software. Flux sidesteps the problem of cascading updates because only a store can change its own state, and this is always in response to an action.

Model update responsibilities

In an MV* architecture, state is stored within models. To initialize model state, we could fetch data from the backend API. This is clear enough: we create a new model, then tell that model to go fetch some data. However, MV* doesn’t say anything about who is responsible for updating these models. One might think it’s the controller component that should have total control over the model, but does this ever happen in practice?

For example, what happens in view event handlers, called in response to user interactivity? If we only allow controllers to update the state of our models, then the view event handler functions should talk directly to the controller in question. The following diagram is a visualization of a controller changing the state of models in different ways:

At first glance, this controller setup makes perfect sense. It acts as a wrapper around the models that store state. It’s a safe assumption that anything that wants to mutate any of these models needs to go through the controller. That’s its responsibility after all—to control things. Data that comes from the API, events triggered by the user and handled by the view, and other models—these all need to talk to the controller if they want to change the state of the models.

As our controller grows, making sure that model state changes are handled by the controller will produce more and more methods that change the model state. If we step back and look at all of these methods as they accumulate, we’ll start to notice a lot of needless indirection. What do we stand to gain by proxying these state changes?

Another reason the controller is a dead-end for trying to establish consistent state changes in MV* is the changes that models can make to themselves. For example, setting one property in a model could end up changing other model properties as a side-effect. Worse, our models could have listeners that respond to state changes, somewhere else in the system (the cascading updates problem).

Flux stores deal with the cascading updates problem by only allowing state changes via actions. This same mechanism solves the MV* challenges discussed here; we don’t have to worry about views or other stores directly changing the state of our store.

Unidirectional data

A cornerstone of any Flux architecture is unidirectional data-flow. The idea being data flows from point A to point B, or from point A to B to C, or from point A to C. It’s the direction that’s important with unidirectional data-flow, and to a lesser extent, the ordering. So when we say that our architecture uses a unidirectional data-flow, we can say that data never flows from point B to point A. This is an important property of Flux architectures.

As we saw in the previous section, MV* architectures have no discernible direction with their data-flows. In this section, we’ll talk through some of the properties that make a unidirectional data-flow worth implementing. We’ll begin with a look at the starting points and completion points of our data-flows, and then we’ll think about how side-effects can be avoided when data flows in one direction.

From start to finish

If data flows in only one direction, there have to be both a starting point and a finishing point. In other words, we can’t just have an endless stream of data that arbitrarily affects the various components the data flows through. When data-flows are unidirectional with clearly defined start and finish points, there’s no way we can have circular flows. Instead, we have one big data-flow cycle in Flux, as visualized here:

This is obviously an over-simplification of any Flux architecture, but it does serve to illustrate the start and finish points of any given data-flow. What we’re looking at is called an update round. A round is atomic in the sense that it’s run-to-completion—there’s no way to stop an update round from completing (unless an exception is thrown).

JavaScript is a run-to-completion language, meaning that once a block of code starts running, it’s going to finish. This is good because it means that once we start updating the UI, there’s no way a callback function can interrupt our update. The exception to this is when our own code interrupts the updating process. For example, suppose the store logic that’s meant to mutate the state of the store dispatches an action of its own. This would be bad news for our Flux architecture because it would violate the unidirectional data-flow. To prevent this, the dispatcher can actually detect when a dispatch takes place inside of an update round. We’ll have more on this in later chapters.
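To illustrate the guard, here’s a simplified sketch. The MiniDispatcher class below is purely illustrative; the real flux Dispatcher performs an equivalent check internally and throws an error if dispatch() is called while an update round is in progress:

```javascript
// A simplified dispatcher that guards against dispatching in the
// middle of an update round, mimicking the check performed by
// Facebook's dispatcher. This class is a sketch, not the flux API.
class MiniDispatcher {
  constructor() {
    this.callbacks = [];
    this.dispatching = false;
  }

  register(callback) {
    this.callbacks.push(callback);
  }

  dispatch(action) {
    // Refuse to start a new update round while one is running.
    if (this.dispatching) {
      throw new Error('Cannot dispatch in the middle of a dispatch.');
    }
    this.dispatching = true;
    try {
      this.callbacks.forEach((callback) => callback(action));
    } finally {
      this.dispatching = false;
    }
  }
}

const dispatcher = new MiniDispatcher();

// A misbehaving callback that dispatches a new action while the
// current update round is still in progress.
dispatcher.register((e) => {
  if (e.type === 'first') {
    dispatcher.dispatch({ type: 'second' });
  }
});

try {
  dispatcher.dispatch({ type: 'first' });
} catch (error) {
  console.log(error.message);
  // → Cannot dispatch in the middle of a dispatch.
}
```

The nested dispatch is caught immediately, rather than silently violating the unidirectional data-flow.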

Update rounds are responsible for updating the state of the entire application, not just the parts that have subscribed to a particular type of action. This means that as our application grows, so do our update rounds. Since an update round touches every store, it may start to feel as though the data is flowing sideways through all of our stores. Here’s an illustration of the idea:

From the perspective of unidirectional data-flow, it doesn’t actually matter how many stores there are. The important thing to remember is that the updates will not be interrupted by other actions being dispatched.

No side-effects

As we saw with MV* architectures, the nice thing about automatic state changes is also their demise. When we program by hidden rules, we’re essentially programming by stitching together a bunch of side-effects. This doesn’t scale well, mainly due to the fact that it’s impossible to hold all these hidden connections in our head at a given point in time. Flux likes to avoid side-effects wherever possible.

Let’s think about stores for a moment. These are the arbiters of state in our application. When something changes state, it has the potential to cause another piece of code to run in response. This does indeed happen in Flux. When a store changes state, views may be notified about the change, if they’ve subscribed to the store. This is the only place where side-effects happen in Flux, which is inevitable since we do need to update the DOM at some point when state changes. But what’s different about Flux is how it avoids side-effects when there’s data dependencies involved. The typical approach to dealing with data dependencies in user interfaces is to notify the dependent model that something has happened. Think cascading updates, as illustrated here:

When there’s a dependency between two stores in Flux, we just need to declare this dependency in the dependent store. This tells the dispatcher to make sure that the store we depend on is always updated first. Then, the dependent store can directly use the data of the store it depends on. This way, all of the updates can still take place within the same update round.
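To get a feel for how this declaration works, here’s a simplified sketch. The waitFor() method mirrors the one found on the flux Dispatcher, though the MiniDispatcher class and the store objects here are illustrative stand-ins:

```javascript
// A sketch of store dependencies. The dependent store calls
// "waitFor()" with the registration token of the store it depends
// on, ensuring that store updates first in the same round.
class MiniDispatcher {
  constructor() {
    this.callbacks = new Map();
    this.handled = new Set();
    this.lastID = 0;
    this.pendingAction = null;
  }

  register(callback) {
    const id = `ID_${++this.lastID}`;
    this.callbacks.set(id, callback);
    return id;
  }

  // Runs the given callbacks immediately, if they haven't already
  // run in the current update round.
  waitFor(ids) {
    ids.forEach((id) => this.invoke(id));
  }

  invoke(id) {
    if (!this.handled.has(id)) {
      this.handled.add(id);
      this.callbacks.get(id)(this.pendingAction);
    }
  }

  dispatch(action) {
    this.pendingAction = action;
    this.handled = new Set();
    for (const id of this.callbacks.keys()) {
      this.invoke(id);
    }
  }
}

const dispatcher = new MiniDispatcher();
const userStore = { state: {} };
const greetingStore = { state: {} };

let userToken;

// The greeting store is registered first, but it depends on the
// user store, so it waits for it before computing its own state.
dispatcher.register((e) => {
  if (e.type === 'login') {
    dispatcher.waitFor([userToken]);
    greetingStore.state.message = `Hello ${userStore.state.name}`;
  }
});

userToken = dispatcher.register((e) => {
  if (e.type === 'login') {
    userStore.state.name = e.payload;
  }
});

dispatcher.dispatch({ type: 'login', payload: 'Adam' });

console.log(greetingStore.state.message);
// → Hello Adam
```

Even though the greeting store’s callback runs first, waitFor() ensures the user store has processed the action before the greeting store reads its state.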

Explicit over implicit

With architectural patterns, the tendency is to make things easier by veiling them behind abstractions that grow more elaborate with time. Eventually, more and more of the system’s data changes automatically and developer convenience is superseded by hidden complexity.

This is a real scalability issue, and Flux handles it by favoring explicit actions and data transformations over implicit abstractions. In this section, we’ll explore the benefits of explicitness along with the trade-offs to be made.

Updates via hidden side-effects

We’ve seen already, in this chapter, how difficult it can be to deal with hidden state changes that hide behind abstractions. They help us avoid writing code, but they also hurt by making it difficult to comprehend an entire work-flow when we come back and look at the code later. With Flux, state is kept in a store, and the store is responsible for changing its own state. What’s nice about this is that when we want to inquire about how a given store changes state, all the state transformation code is there, in one place. Let’s look at an example store:

// A Flux store with state.
class Store {
  constructor() {

    // The initial state of the store.
    this.state = { clickable: false };

    // All of the state transformations happen
    // here. The "action.type" property is how it
    // determines what changes will take place.
    dispatcher.register((e) => {

      // Depending on the type of action, we
      // use "Object.assign()" to assign different
      // values to "this.state".
      switch (e.type) {
        case 'show':
          Object.assign(this.state, e.payload,
            { clickable: true });
          break;
        case 'hide':
          Object.assign(this.state, e.payload,
            { clickable: false });
          break;
        default:
          break;
      }
    });
  }
}

// Creates a new store instance.
const store = new Store();

// Dispatches a "show" action.
dispatcher.dispatch({
  type: 'show',
  payload: { display: 'block' }
});

console.log('Showing', store.state);
// → Showing {clickable: true, display: "block"}

// Dispatches a "hide" action.
dispatcher.dispatch({
  type: 'hide',
  payload: { display: 'none' }
});

console.log('Hiding', store.state);
// → Hiding {clickable: false, display: "none"}

Here, we have a store with a simple state object. In the constructor, the store registers a callback function with the dispatcher. All state transformations take place, explicitly, in one function. This is where data turns into information for our user interface. We don’t have to hunt down the little bits and pieces of data as they change state across multiple components; this doesn’t happen in Flux.

So the question now becomes, how do views make use of this monolithic state data? In other types of frontend architecture, the views get notified whenever any piece of state changes. Given the preceding example, such a view would get notified when the clickable property changes, and again when the display property changes. The view has logic to render these two changes independently of one another. However, views in Flux don’t get fine-grained updates like these. Instead, they’re notified when the store state changes, and the state data is what’s given to them.

The implication here is that we should lean toward view technology that’s good at re-rendering whole components. This is what makes React a good fit for Flux architectures. Nonetheless, we’re free to use any view technology we please, as we’ll see later on in the book.

Data changes state in one place

As we saw in the preceding section, the store transformation code is encapsulated within the store. This is intentional. The transformation code that mutates a store’s state is supposed to live nearby. Close proximity drastically reduces the complexity of figuring out where state changes happen as systems grow more complex. This makes state changes explicit, instead of abstract and implicit.

One potential trade-off with having a store manage all of the state transformation code is that there could be a lot of it. The code we looked at used a single switch statement to handle all of the state transform logic. This would obviously cause a bit of a headache later on when there’s a lot of cases to handle. We’ll think about this more later in the book, when the time comes to consider large, complex stores. Just know that we can re-factor our stores to elegantly handle a large number of cases, while keeping the coupling of business logic and state tight.
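As a taste of that refactoring, one common approach is to map action types to small handler functions instead of growing a single switch statement. This is a sketch of the idea, not necessarily the exact refactoring we’ll use later:

```javascript
// Each action type gets its own small handler function, keeping
// the state transformation logic in the store but out of one
// giant switch statement.
const handlers = {
  show(state, payload) {
    return Object.assign(state, payload, { clickable: true });
  },
  hide(state, payload) {
    return Object.assign(state, payload, { clickable: false });
  }
};

// Looks up the handler for the action, falling back to a no-op
// for unknown action types.
function transform(state, action) {
  const handler = handlers[action.type];
  return handler ? handler(state, action.payload) : state;
}

const state = { clickable: false };

transform(state, { type: 'show', payload: { display: 'block' } });
console.log(state.clickable, state.display);
// → true block
```

The logic and state remain tightly coupled within the store; only the dispatch-handling code is reorganized.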

This leads us right back to the separation of concerns principle. With Flux stores, the data and the logic that operates on it isn’t separated at all. Is this actually a bad thing though? An action is dispatched, a store is notified about it, and it changes its state (or does nothing, ignoring the action). The logic that changes the state is located in the same component because there’s nothing to gain by moving it somewhere else.

Too many actions?

Actions make everything that happens in a Flux architecture explicit. By everything, I mean everything—if it happens, it was the result of an action being dispatched. This is good because it’s easy to figure out where actions are dispatched from. Even as the system grows, action dispatches are easy to find in our code, because they can only come from a handful of places. For example, we won’t find actions being dispatched within stores.

Any feature we create has the potential to create dozens of actions, if not more. We tend to think that more means bad, from an architectural perspective. If there’s more of something, it’s going to be more difficult to scale and to program with. There’s some truth to this, but if we’re going to have a lot of something, which is unavoidable in any large system, it’s good that it’s actions. Actions are relatively lightweight in that they describe something that happens in our application. In other words, actions aren’t heavyweight items that we need to fret over having a lot of.

Does having a lot of actions mean that we need to cram them all into one huge monolithic actions module? Thankfully, we don’t have to do this. Just because actions are the entry point into any Flux system, doesn’t mean that we can’t modularize them to our liking. This is true of all the Flux components we develop, and we’ll keep an eye open for ways that we can keep our code modular as we progress through the book.
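For instance, each feature can export its own small set of action constants and creator functions. The names and the dispatcher stand-in below are hypothetical:

```javascript
// A hypothetical dispatcher stand-in; in a real application this
// would be the shared flux dispatcher instance.
const dispatcher = {
  dispatched: [],
  dispatch(action) {
    this.dispatched.push(action);
  }
};

// Action type constants for one feature...
const SHOW = 'show';
const HIDE = 'hide';

// ...and the action creators for that feature, grouped as one
// small module rather than part of a monolithic actions file.
const visibilityActions = {
  show(payload) {
    dispatcher.dispatch({ type: SHOW, payload });
  },
  hide(payload) {
    dispatcher.dispatch({ type: HIDE, payload });
  }
};

visibilityActions.show({ display: 'block' });
console.log(dispatcher.dispatched[0].type);
// → show
```

Each feature’s actions stay small and easy to find, while still funneling everything through the one dispatcher.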

Layers over hierarchies

User interfaces are hierarchical in nature, partly because HTML is inherently hierarchical and partly because of the way that we structure the information presented to users. For example, this is why we have nested levels of navigation in some applications—we can’t possibly fit everything on the screen at once. Naturally, our code starts to reflect this hierarchical structure by becoming a hierarchy itself. This is good in the sense that it reflects what the user sees. It’s bad in the sense that deep hierarchies are difficult to comprehend.

In this section, we’ll look at hierarchical structures in frontend architectures and how Flux is able to avoid complex hierarchies. We’ll first cover the idea of having several top-level components, each with their own hierarchies. Then, we’ll look at the side-effects that happen within hierarchies and how data-flows through Flux layers.

Multiple component hierarchies

A given application probably has a handful of major features. These are often implemented as the top-level components or modules in our code. These aren’t monolithic components; they’re decomposed into smaller and smaller components. Perhaps some of these components share the smaller multipurpose components. For example, a top-level component hierarchy might be composed of models, views, and controllers as is illustrated here:

This makes sense in terms of the structure of our application. When we look at pictures of component hierarchies, it’s easy to see what our application is made of. Each of these hierarchies, with the top-level component as their root, is like a little universe that exists independently of the others. Again, we’re back to the notion of separation of concerns. We can develop one feature without impacting another.

The problem with this approach is that user interface features often depend on other features. In other words, the state of one component hierarchy will likely depend on the state of another. How do we keep these two component trees synchronized with one another when there’s no mechanism in place to control when state can change? What ends up happening is that a component in one hierarchy will introduce an arbitrary dependency to a component in another hierarchy. This serves a single purpose, so we have to keep introducing new inter-hierarchy dependencies to make sure everything is synchronized.

Hierarchy depth and side-effects

One challenge with hierarchies is depth. That is, how far down will a given hierarchy extend? The features of our application are constantly changing and expanding in scope. This can lead to our component trees growing taller. But they also grow wider. For example, let’s say that our feature uses a component hierarchy that’s three levels deep.

Then, we add a new level. Well, we’ll probably have to add several new components to this new level and in higher levels. So to build upon our hierarchies, we have to scale in multiple directions—horizontally and vertically. This idea is illustrated here:

Scaling components in multiple directions is difficult, especially in component hierarchies where there’s no data-flow direction. That is, input that ends up changing the state of something can enter the hierarchy at any level. Undoubtedly, this has some sort of side-effect, and if we’re dependent on components in other hierarchies, all hope is lost.

Data-flow and layers

Flux has distinct architectural layers, which are more favorable to scaling architectures than hierarchies are. The reason is simple—we only need to scale components horizontally, within each layer of the architecture. We may need to add new components to a layer, but we never need to add new layers. Let’s take a look at what scaling a Flux architecture looks like in the following diagram:

No matter how large an application gets, there’s no need to add new architectural layers. We simply add new components to these layers. The reason we’re able to do this without creating a tangled mess of component connections within a given layer is because all three layers play a part in the update round. An update round starts with an action and completes with the last view that is rendered. The data-flows through our application from layer to layer, in one direction.

Application data and UI state

When we have a separation of concerns that sticks presentation in one place and application data in another, we have two distinct places where we need to manage state. Except in Flux, the only place where there’s state is within a store. In this section, we’ll compare application data and UI data. We’ll then address the transformations that ultimately lead to changes in the user interface. Lastly, we’ll discuss the feature-centric nature of Flux stores.

Two of the same thing

Quite often, application data that’s fetched from an API is fed into some kind of view layer. This is also known as the presentation layer, responsible for transforming application data into something of value for the user—from data to information in other words. In these layers, we end up with state to represent the UI elements. For example, is the checkbox checked? Here is an illustration of how we tend to group the two types of state within our components:

This doesn’t really fit well with Flux architectures, because stores are where state belongs, including UI state. So, can a store have both application and UI state within it? Well, there isn’t a strong argument against it. If everything that has a state is self-contained within a store, it should be fairly simple to discern between application data and state that belongs to UI elements. Here’s an illustration of the types of state found in Flux stores:

The fundamental misconception with trying to separate UI state from other state is that components often depend on UI state. Even UI components in different features can depend on each other’s state in unpredictable ways. Flux acknowledges this and doesn’t try to treat UI state as something special that should be split off from application data.

The UI state that ultimately ends up in a store can be derived from a number of things. Generally, two or more items from our application data could determine a UI state item. A UI state could be derived from another UI state, or from something more complex, like a UI state and other application data. In other cases, the application data is simple enough that it can be consumed directly by the view. The key is that the view has enough information that it can render itself without having to track its own state.
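As a sketch of this derivation, a store might recompute its UI state whenever its application data changes. The property names here are hypothetical:

```javascript
// A store holding both application data and UI state derived
// from it. The view renders straight from this state without
// tracking anything of its own.
const store = {
  state: {
    // Application data...
    first: 'John',
    last: 'Smith',
    tasks: [],
    // ...and UI state derived from it.
    fullName: '',
    hasTasks: false
  },

  setData(data) {
    Object.assign(this.state, data);
    // Derived UI state is recomputed here, in the store, so views
    // always receive render-ready information.
    this.state.fullName = `${this.state.first} ${this.state.last}`;
    this.state.hasTasks = this.state.tasks.length > 0;
  }
};

store.setData({ tasks: ['write chapter'] });
console.log(store.state.fullName, store.state.hasTasks);
// → John Smith true
```

The view never computes fullName or hasTasks itself; it renders whatever the store publishes.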

Tightly coupled transformations

Application data and UI state are tightly coupled together in Flux stores. It only makes sense that the transformations that operate on this data be tightly coupled to the store as well. This makes it easy for us to change the state of the UI based on other application data or based on the state of other stores.

If our business logic code wasn’t in the store, then we’d need to start introducing dependencies to the components containing the logic needed by the store. Sure, this would mean generic business logic that transforms the state, and this could be shared in several stores, but this seldom happens at a high level. Stores are better off keeping their business logic that transforms the state of the store tightly coupled. If we need to reduce repetitive code, we can introduce smaller, more fine-grained utility functions to help with data transformations.

Note

We can get generic with our stores as well. These stores are abstract and don’t directly interface with views. We’ll go into more detail on this advanced topic later in the book.

Feature centric

If the data transformations that change the state of a store are tightly coupled to the store itself, does this mean that the store is tailored for a specific feature? In other words, do we care about stores being reused for other features? Sure, in some cases we have generic data that doesn’t make much sense in repeating several times across stores. But generally speaking, stores are feature specific. Features are synonymous with domains in Flux parlance—everyone divides up the capabilities of their UI in different ways.

This is a departure from other architectures that base their data models on the data model of the API. Then, they use these models to create more specific view models. Any given MV* framework will have loads of features in its model abstractions, things like data bindings and automatic API fetching. Flux stores have none of these; they’re only worried about storing state and publishing notifications when this state changes.

When stores encourage us to create and store new state that’s specific to the UI, we can more easily design for the user. This is the fundamental difference between stores in Flux and models in other architectures—the UI data model comes first. The transformations within stores exist to ensure that the correct state is published to views—everything else is secondary.

Summary

This chapter introduced you to the driving principles of Flux. These should be in the back of your mind as you work on any Flux architecture. We started the chapter off with a brief retrospective of the MV* style architectures that permeate frontend development. Some challenges with this style of architecture include cascading model updates and a lack of data-flow direction. We then looked at the prized concept of Flux—unidirectional data-flow.

Next, we covered how Flux favors explicit actions over implicit abstractions. This makes things easier to comprehend when reading Flux code, because we don’t have to go digging around for the root cause of a state change. We also looked at how Flux utilizes architectural layers to visualize how data flows in one direction through the system.

Finally, we compared application data with state that’s generally considered specific to UI elements. Flux stores tend to focus on state that’s relevant to the features they support, and don’t distinguish between application data and UI state. Now that we have a handle on the principles that drive Flux architectures, it’s time for us to code one. In the next chapter, we’ll implement our skeleton Flux architecture, allowing us to focus on information design.

Chapter 3. Building a Skeleton Architecture

The best way to think in Flux is to write code in Flux. This is why we want to start building a skeleton architecture as early as possible. We call this phase of building our application the skeleton architecture because it isn’t yet the full architecture. It’s missing a lot of key application components, and this is on purpose. The aim of the skeleton is to keep the moving parts to a minimum, allowing us to focus on the information our stores will generate for our views.

We’ll get off the ground with a minimalist structure: small, but requiring little rework to grow into our production code base. Then, we’ll move on to some of the information design goals of the skeleton architecture. Next, we’ll dive into implementing some aspects of our stores.

As we start building, we’ll begin to get a sense of how these stores map to domains—the features our users will interact with. After this, we’ll create some really simple views, which can help us ensure that our data flows are in fact reaching their final destination. Finally, we’ll end the chapter by running through a checklist for each of the Flux architectural layers, to make sure that we’ve validated our skeleton before moving on to other development activities.

General organization

As a first step in building a skeleton Flux architecture, we’ll spend a few minutes getting organized. In this section, we’ll establish a basic directory structure, figure out how we’ll manage our dependencies, and choose our build tools. None of this is set in stone—the idea is to get going quickly, but at the same time, establish some norms so that transforming our skeleton architecture into application code is as seamless as possible.

Directory structure

The directory structure used to start building our skeleton doesn’t need to be fancy. It’s a skeleton architecture, not the complete architecture, so the initial directory structure should follow suit. Having said that, we also don’t want to use a directory structure that’s difficult to evolve into what’s actually used in the product. Let’s take a look at the items that we’ll find in the root of our project directory:
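A plain-text sketch of that layout might look like this (the project directory name here is an assumption):

```
my-flux-app/
├── main.js
├── dispatcher.js
├── actions/
├── stores/
└── views/
```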

Pretty simple, right? Let’s walk through what each of these items represent:

  • main.js: This is the main entry point into the application. This JavaScript module will bootstrap the initial actions of the system.
  • dispatcher.js: This is our dispatcher module. This is where the Flux dispatcher instance is created.
  • actions: This directory contains all our action creator functions and action constants.
  • stores: This directory contains our store modules.
  • views: This directory contains our view modules.

This may not seem like much, and this is by design. The directory layout is reflective of the architectural layers of Flux. Obviously there will be more to the actual application once we move past the skeleton architecture phase, but not a whole lot. We should refrain from adding any extraneous components at this point though, because the skeleton architecture is all about information design.

Dependency management

As a starting point, we’re going to require the basic Facebook Flux dispatcher as a dependency of our skeleton architecture—even if we don’t end up using this dispatcher in our final product. We need to start designing our stores, as this is the most crucial and the most time-consuming aspect of the skeleton architecture; worrying about things like the dispatcher at this juncture simply doesn’t pay off.
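As a point of reference, the dispatcher.js module can stay tiny. The real module would simply export an instance of the Dispatcher class from the flux NPM package; the stand-in below is only a sketch with the same register()/dispatch() surface, useful for following along without installing anything:

```javascript
// A minimal stand-in for dispatcher.js. The real module would
// just create and export an instance of the "Dispatcher" class
// from Facebook's "flux" NPM package.
class MiniDispatcher {
  constructor() {
    // Callbacks registered by stores.
    this.callbacks = [];
  }

  // Stores pass a callback that will receive every action.
  register(callback) {
    this.callbacks.push(callback);
  }

  // Every registered callback sees every dispatched action;
  // each store decides which action types it cares about.
  dispatch(action) {
    this.callbacks.forEach((callback) => callback(action));
  }
}

const dispatcher = new MiniDispatcher();

// In the real dispatcher.js, this instance would be the
// default export of the module.
```

The key design point is that the dispatcher doesn't route actions to specific stores; every store callback sees every action, and the store's own switch statement decides what to handle.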

We need to start somewhere, and the Facebook dispatcher implementation is good enough. The question is, will we need any other packages? In Chapter 1, What is Flux?, we walked through the setup of the Facebook Flux NPM package and used Webpack to build our code. Can this serve as our eventual production build system?

Not having a package manager or a module bundler puts us at a disadvantage, right from the onset of the project. This is why we need to think about dependency management as a first step of the skeleton architecture, even though we don’t have many dependencies at the moment. If this is the first time we’re building an application that has a Flux architecture behind it, the way we handle dependencies will serve as a future blueprint for subsequent Flux projects.

Is it a bad idea to add more module dependencies during the development of our skeleton architecture? Not at all. In fact, it’s better that we use a tool that’s well suited for the job. As we’re implementing the skeleton, we’ll start to see places in our stores where a library would be helpful. For example, if we’re doing a lot of sorting and filtering on data collections and we’re building higher-order functions, using something like lodash for this is perfect.

On the other hand, pulling in something like ReactJS or jQuery at this stage doesn’t make a whole lot of sense because we’re still thinking about the information and not how to present it in the DOM. So that’s the approach we’re going to use in this book—NPM as our package manager and Webpack as our bundler. This is the basic infrastructure we need, without much overhead to distract us.
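To make that build setup concrete, here's a minimal webpack.config.js sketch. The entry point matches the main.js module described earlier; the output names are placeholder assumptions, and a Babel loader would typically be added to handle the ES2015 syntax used in this chapter's examples:

```javascript
// webpack.config.js - a minimal sketch, not a production config.
// The "bundle.js" and "dist" names are placeholder assumptions.
module.exports = {
  // The application entry point - our main.js module.
  entry: './main.js',
  output: {
    filename: 'bundle.js',
    path: __dirname + '/dist'
  }
};
```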

Information design

We know that the skeleton architecture we’re trying to build is specifically focused on getting the right information into the hands of our users. This means that we’re not paying much attention to user interactivity or formatting the information in a user-friendly way. It might help if we set some rough goals for ourselves—how do we know we’re actually getting anywhere with our information design?

In this section, we’ll talk about the negative influence API data models can have on our user interface design. Then, we’ll look at mapping data to what the user sees and how these mappings should be encouraged throughout our stores. Finally, we’ll think about the environment we find ourselves working in.

Users don’t understand models

Our job as user interface programmers is to get the right information to the user at the right time. How do we do this? Conventional wisdom revolves around taking some data that we got from the API and then rendering it as HTML. Apart from semantic markup and some styles, nothing much has changed with the data since it arrived from the API. We’re saying here’s the data we have, let’s make it look nice for the user. Here’s an illustration of this idea:

There’s no data transformation taking place here, which is fine, so long as the user is getting what they need. The problem this picture paints is that the API’s data model has taken UI feature development hostage: we must heed everything that’s sent down to us from the backend, which limits what we can actually do for the user. One thing we can do is have our own models enhance the data that’s sent back from the API. This means that if we’re working on a feature that requires information that isn’t exactly as the API intended it, we can fabricate it in a frontend model, as shown here:

This gets us slightly closer to our goal in the sense that we can create a model of the feature we’re trying to implement and put it in front of the user. So while the API might not deliver exactly what we want to display on the screen, we can use our transformation functions to generate a model of the information we need.
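As a minimal sketch of such a transformation function (the API field names here are assumptions, not from any real endpoint):

```javascript
// A sketch of a store transformation function. The store derives
// what the view needs, rather than passing API data through
// untouched. The "first"/"last"/"lastLogin" field names are
// assumed for illustration.
function transformUser(apiUser) {
  return {
    // The view wants one display string, not two name fields.
    displayName: `${apiUser.first} ${apiUser.last}`,

    // The view wants a simple flag, not a raw timestamp.
    isActive: Date.now() - apiUser.lastLogin < 24 * 60 * 60 * 1000
  };
}

const viewModel = transformUser({
  first: 'Flux',
  last: 'User',
  lastLogin: Date.now() - 1000
});

console.log(viewModel.displayName);
// → "Flux User"
```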

During the skeleton architecture phase of our design process, we should think about stores independently of APIs as much as possible. Not completely independently; we don’t want to go way out into left field and jeopardize the product. But the idea of producing a Flux skeleton architecture is to ensure, first and foremost, that we’re producing the right information. If there’s no way the API can support what we’re trying to do, then we can take the necessary steps before spending a lot of time implementing full-fledged features.

Stores map to what the user sees

State isn’t the only thing encapsulated by the stores in our Flux architecture. There are also the data transformations that map old state to new state. We should spend more time thinking about what the user needs to see and less time thinking about the API data, which means the store transformation functions are essential.

We need to embrace data transformations in Flux stores, because they’re the ultimate determinant of how things change in front of the user’s eyes. Without these transformations, the user would only be able to view static information. Of course, we could aim to design an architecture that only uses the data that’s passed into the system “as-is”, without transforming it. This never works out as we intend, for the simple reason that we’re going to uncover dependencies with other UI components.

What should our early goals be with stores and how we transform their state? Well, the skeleton architecture is all about experimentation, and if we start writing transformation functionality upfront, we’re likely to discover dependencies sooner. Dependencies aren’t necessarily a bad thing, except when we find a lot of them late in the project, well after we’ve completed the skeleton architecture phase. Of course, new features are going to add new dependencies. If we can use state transformations early on to identify potential dependencies, then we can avoid future headaches.

What do we have to work with?

The last thing we’ll need to consider before we roll up our sleeves and start implementing this skeleton Flux architecture is what’s already in place. For example, does this application already have an established API and we’re re-architecting the frontend? Do we need to retain the user experience of an existing UI? Is the project completely greenfield with no API or user experience input?

The following diagram illustrates how these external factors influence the way we treat the implementation of our skeleton architecture:

There’s nothing wrong with having these two factors shape our Flux architecture. In the case of existing APIs, we’ll have a starting point from which we can start writing our state transformation functions, to get the user the information that they need. In the case of keeping an existing user experience, we already know what the shape of our target information looks like, and we can work the transformation functions from a different angle.

When the Flux architecture is completely greenfield, we can let it inform both the user experience and the APIs that need to be implemented. It’s highly unlikely that any of the scenarios in which we find ourselves building a skeleton architecture will be cut-and-dried. These are just the starting points that we may find ourselves in. Having said that, it’s time to start implementing some skeleton stores.

Putting stores into action

In this section, we’re going to implement some stores in our skeleton architecture. They won’t be complete stores capable of supporting end-to-end work-flows. However, we’ll be able to see where the stores fit within the context of our application.

We’ll start with the most basic of all store actions: populating them with data, which is usually fetched via some API. Then, we’ll discuss changing the state of remote API data. Finally, we’ll look at actions that change the state of a store locally, without the use of an API.

Fetching API data

Regardless of whether or not there’s an API with application data ready to consume, we know that eventually this is how we’ll populate our store data. So it makes sense that we think about this as the first design activity of implementing skeleton stores.

Let’s create a basic store for the homepage of our application. The obvious information that the user is going to want to see here is the currently logged-in user, a navigation menu, and perhaps a summarized list of recent events that are relevant to the user. This means that fetching this data is one of the first things our application will have to do. Here’s our first implementation of the store:

// Import the dispatcher, so that the store can
// listen to dispatch events.
import dispatcher from '../dispatcher';

// Our "Home" store.
class HomeStore {
  constructor() {

    // Sets a default state for the store. This is
    // never a bad idea, in case other stores want to
    // iterate over array values - this will break if
    // they're undefined.
    this.state = {
      user: '',
      events: [],
      navigation: []
    };

    // When a "HOME_LOAD" event is dispatched, we
    // can assign "payload" to "state".
    dispatcher.register((e) => {
      switch (e.type) {
        case 'HOME_LOAD':
          Object.assign(this.state, e.payload);
          break;
      }
    });
  }
}

export default new HomeStore();

This is fairly easy to follow, so let’s point out the important pieces. First, we need to import the dispatcher so that we can register our store. When the store is created, the default state is stored in the state property. When the HOME_LOAD action is dispatched, we change the state of the store. Lastly, we export the store instance as the default module member.

As the action name implies, HOME_LOAD is dispatched when data for the store has loaded. Presumably, we’re going to pull this data for the home store from some API endpoints. Let’s go ahead and put this store to use in our main.js module—our application entry point:

// Imports the "dispatcher", and the "homeStore".
import dispatcher from './dispatcher';
import homeStore from './stores/home';

// Logs the default state of the store, before
// any actions are triggered against it.
console.log(`user: "${homeStore.state.user}"`);
// → user: ""

console.log('events:', homeStore.state.events);
// → events: []

console.log('navigation:', homeStore.state.navigation);
// → navigation: []

// Dispatches a "HOME_LOAD" event, which populates the
// "homeStore" with data in the "payload" of the event.
dispatcher.dispatch({
  type: 'HOME_LOAD',
  payload: {
    user: 'Flux',
    events: [
      'Completed chapter 1',
      'Completed chapter 2'
    ],
    navigation: [
      'Home',
      'Settings',
      'Logout'
    ]
  }
});

// Logs the new state of "homeStore", after it's
// been populated with data.
console.log(`user: "${homeStore.state.user}"`);
// → user: "Flux"

console.log('events:', homeStore.state.events);
// → events: ["Completed chapter 1", "Completed chapter 2"]

console.log('navigation:', homeStore.state.navigation);
// → navigation: ["Home", "Settings", "Logout"]

This is some fairly straightforward usage of our home store. We’re logging the default state of the store, dispatching the HOME_LOAD action with some new payload data, and logging the state again to make sure that the state of the store did in fact change. So the question is, what does this code have to do with the API?

This is a good starting point for our skeleton architecture because there are a number of things to think about before we even get to implementing API calls. We haven’t even started implementing actions yet, because they’d just be another distraction at this point. Besides, actions and real API calls are easy to implement once we’ve fleshed out our stores.

The first question that comes to mind about the main.js module is the location of the dispatch() call for HOME_LOAD. Here, we’re bootstrapping data into the store. Is this the right place to do it? When the main.js module runs, will we always require that this store be populated? Is this where we’ll want to bootstrap data into all of our stores? We don’t need immediate answers to these questions; that would likely mean dwelling on one aspect of the architecture for far too long, and there are many other issues to think about.

For example, does the coupling of our store make sense? The home store we just implemented has a navigation array. These are just simple strings right now, but they’ll likely turn into objects. The bigger issue is that the navigation data might not belong in this store at all; several other stores will probably require navigation state data too. Another example is the way we’re setting the new state of the store using the dispatch payload. Using Object.assign() is advantageous because we can dispatch the HOME_LOAD event with a payload containing only one state property, and everything will continue to function the same. Implementing this store took very little time, but we’ve asked some important questions and learned a powerful technique for assigning new store state.
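To make the Object.assign() point concrete, here's a minimal sketch using the same state shape as the home store above:

```javascript
// The default store state, as in the home store.
const state = {
  user: '',
  events: [],
  navigation: []
};

// A partial payload - only "user" is being updated. Object.assign()
// merges it in, leaving the other state properties intact.
Object.assign(state, { user: 'Flux' });

console.log(state.user);
// → "Flux"

console.log(state.events);
// → [] (untouched)
```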

This is the skeleton architecture, and so we’re not concerned with the mechanics of actually fetching the API data. We’re more concerned about the actions that get dispatched as a result of API data arriving in the browser; in this case, it’s HOME_LOAD. It’s the mechanics of information flowing through stores that matters in the context of a skeleton Flux architecture. And on that note, let’s expand the capabilities of our store slightly:

// We need the "dispatcher" to register our store,
// and the "EventEmitter" class so that our store
// can emit "change" events when the state of the
// store changes.
import dispatcher from '../dispatcher';
import { EventEmitter } from 'events';

// Our "Home" store which is an "EventEmitter"
class HomeStore extends EventEmitter {
  constructor() {

    // We always need to call this when extending a class.
    super();

    // Sets a default state for the store. This is
    // never a bad idea, in case other stores want to
    // iterate over array values - this will break if
    // they're undefined.
    this.state = {
      user: '',
      events: [],
      navigation: []
    };

    // When a "HOME_LOAD" event is dispatched, we
    // can assign "payload" to "state", then we can
    // emit a "change" event.
    dispatcher.register((e) => {
      switch (e.type) {
        case 'HOME_LOAD':
          Object.assign(this.state, e.payload);
          this.emit('change', this.state);
          break;
      }
    });
  }
}

export default new HomeStore();

The store still does everything it did before, only now the store class inherits from EventEmitter, and when the HOME_LOAD action is dispatched, it emits a change event using the store state as the event data. This gets us one step closer to having a full work-flow, as views can now listen to the change event to get the new state of the store. Let’s update our main module code to see how this is done:

// Imports the "dispatcher", and the "homeStore".
import dispatcher from './dispatcher';
import homeStore from './stores/home';

// Logs the default state of the store, before
// any actions are triggered against it.
console.log(`user: "${homeStore.state.user}"`);
// → user: ""

console.log('events:', homeStore.state.events);
// → events: []

console.log('navigation:', homeStore.state.navigation);
// → navigation: []

// The "change" event is emitted whenever the state of the
// store changes.
homeStore.on('change', (state) => {
  console.log(`user: "${state.user}"`);
  // → user: "Flux"

  console.log('events:', state.events);
  // → events: ["Completed chapter 1", "Completed chapter 2"]

  console.log('navigation:', state.navigation);
  // → navigation: ["Home", "Settings", "Logout"]
});

// Dispatches a "HOME_LOAD" event, which populates the
// "homeStore" with data in the "payload" of the event.
dispatcher.dispatch({
  type: 'HOME_LOAD',
  payload: {
    user: 'Flux',
    events: [
      'Completed chapter 1',
      'Completed chapter 2'
    ],
    navigation: [
      'Home',
      'Settings',
      'Logout'
    ]
  }
});

This enhancement to the store in our skeleton architecture brings about yet more questions, namely about setting up event listeners on our stores. As you can see, we have to make sure that the handler is actually listening to the store before any actions are dispatched. These are all concerns we’ll need to address, and we’ve only just begun designing our architecture. Let’s move on to changing the state of backend resources.

Changing API resource state

After we’ve set the initial store state by asking the API for some data, we’ll likely end up needing to change the state of that backend resource. This happens in response to user activity. In fact, the common pattern looks like the following diagram:

Let’s think about this pattern in the context of a Flux store. We’ve already seen how to load data into a store. In the skeleton architecture we’re building, we’re not actually making these API calls, even if they exist—we’re focused solely on the information that’s produced by the frontend right now. When we dispatch an action that changes the state of a store, we’ll probably need to update the state of this store in response to successful completion of the API call. The real question is, what does this entail exactly?

For example, does the call we make to change the state of the backend resource actually respond with the updated resource, or does it respond with a mere success indication? These types of API patterns have a dramatic impact on the design of our stores because it means the difference between having to always make a secondary call or having the data in the response.

Let’s look at some code now. First, we have a user store as follows:

import dispatcher from '../dispatcher';
import { EventEmitter } from 'events';

// Our "User" store which is an "EventEmitter"
class UserStore extends EventEmitter {
  constructor() {
    super();
    this.state = {
      id: null,
      first: '',
      last: ''
    };

    dispatcher.register((e) => {
      switch (e.type) {
        // When the "USER_LOAD" action is dispatched, we
        // can assign the payload to this store's state.
        case 'USER_LOAD':
          Object.assign(this.state, e.payload);
          this.emit('change', this.state);
          break;

        // When the "USER_REMOVE" action is dispatched,
        // we need to check if this is the user that was
        // removed. If so, then reset the state.
        case 'USER_REMOVE':
          if (this.state.id === e.payload) {
            Object.assign(this.state, {
              id: null,
              first: '',
              last: ''
            });

            this.emit('change', this.state);
          }

          break;
      }
    });
  }
}

export default new UserStore();

We’ll assume that this singular user store is for a page in our application where only a single user is displayed. Now, let’s implement a store that’s useful for tracking the state of several users:

import dispatcher from '../dispatcher';
import { EventEmitter } from 'events';

// Our "UserList" store which is an "EventEmitter"
class UserListStore extends EventEmitter {
  constructor() {
    super();

    // There's no users in this list by default.
    this.state = []

    dispatcher.register((e) => {
      switch (e.type) {

        // The "USER_ADD" action adds the "payload" to
        // the array state.
        case 'USER_ADD':
          this.state.push(e.payload);
          this.emit('change', this.state);
          break;

        // The "USER_REMOVE" action has a user id as
        // the "payload" - this is used to locate the
        // user in the array and remove it.
        case 'USER_REMOVE':
          let user = this.state.find(
            x => x.id === e.payload);

          if (user) {
            this.state.splice(this.state.indexOf(user), 1);
            this.emit('change', this.state);
          }

          break;
      }
    });
  }
}

export default new UserListStore();

Let’s now create the main.js module that will work with these stores. In particular, we want to see how interacting with the API to change the state of a backend resource will influence the design of our stores:

import dispatcher from './dispatcher';
import userStore from './stores/user';
import userListStore from './stores/user-list';

// Intended to simulate a backend API call that changes the
// state of something. In this case, it's creating
// a new resource. The returned promise will resolve
// with the new resource data.
function createUser() {
  return new Promise((resolve, reject) => {
    setTimeout(() => {
      resolve({
        id: 1,
        first: 'New',
        last: 'User'
      });
    }, 500);
  });
}

// Show the user when the "userStore" changes.
userStore.on('change', (state) => {
  console.log('changed', `"${state.first} ${state.last}"`);
});

// Show how many users there are when the "userListStore"
// changes.
userListStore.on('change', (state) => {
  console.log('users', state.length);
});

// Creates the back-end resource, then dispatches actions
// once the promise has resolved.
createUser().then((user) => {

  // The user has loaded, the "payload" is the resolved data.
  dispatcher.dispatch({
    type: 'USER_LOAD',
    payload: user
  });
  // Adds a user to the "userListStore", using the resolved
  // data.
  dispatcher.dispatch({
    type: 'USER_ADD',
    payload: user
  });

  // We can also remove the user. This impacts both stores.
  dispatcher.dispatch({
    type: 'USER_REMOVE',
    payload: 1
  });
});

Here, we can see that the createUser() function serves as a proxy for the actual API implementation. Remember, this is a skeleton architecture where the chief concern is the information constructed by our stores. Implementing a function that returns a promise is perfectly acceptable here because this is very easy to change later on once we start talking to the real API.

We’re on the lookout for interesting aspects of our stores: their state, how that state changes, and the dependencies between them. In this case, when we create the new user, the API returns the new object, which is dispatched as a USER_LOAD action; our userStore is now populated. We’re also dispatching a USER_ADD action so that the new user data can be added to the user list. Presumably, these two stores service different parts of our application, and yet the same API call that changes backend state is relevant to both.

What can we learn about our architecture from all of this? For starters, we can see that the promise callback is going to have to dispatch multiple actions for multiple stores. This means that we can probably expect more of the same with similar API calls that create resources. What about calls that modify users, would the code look similar?

Something we’re missing here is an action to update the state of a user object within the array of users in userListStore. Alternatively, we could have this store handle the USER_LOAD action as well. Either approach is fine; it’s the exercise of building the skeleton architecture that’s supposed to help us find the approach that best fits our application. For example, we’re dispatching a single USER_REMOVE action here too, and it’s handled easily by both our stores. Maybe this is the approach we’re looking for?
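Here's a minimal sketch of that alternative: the list store handling USER_LOAD as well, so a user updated elsewhere is updated in place in the list. It's written as a plain handler function so the idea stands on its own; in the store, this logic would live inside the dispatcher.register() callback:

```javascript
// A sketch of "USER_LOAD" handling for the user list. If the
// user is already in the list, merge in the new state; otherwise
// treat it like "USER_ADD".
function handleUserLoad(users, payload) {
  const existing = users.find((x) => x.id === payload.id);

  if (existing) {
    // The user is already in the list - update it in place.
    Object.assign(existing, payload);
  } else {
    // Not in the list yet - add it.
    users.push(payload);
  }

  return users;
}

const users = [{ id: 1, first: 'New', last: 'User' }];
handleUserLoad(users, { id: 1, first: 'Updated', last: 'User' });

console.log(users.length);
// → 1

console.log(users[0].first);
// → "Updated"
```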

Local actions

We’ll close the section on store actions with a look at local actions. These are actions that have nothing to do with the API. Local actions are generally dispatched in response to user interactions, and they have a visible effect on the UI. For example, the user wants to toggle the visibility of some component on the page.

The typical application would just execute a jQuery one-liner to locate the element in the DOM and make the appropriate CSS changes. This type of thing doesn’t fly in Flux architectures, and it’s the type of thing we should start thinking about during the skeleton architecture phase of our application. Let’s implement a simple store that handles local actions:

import dispatcher from '../dispatcher';
import { EventEmitter } from 'events';

// Our "Panel" store which is an "EventEmitter"
class PanelStore extends EventEmitter {
  constructor() {

    // We always need to call this when extending a class.
    super();

    // The initial state of the store.
    this.state = {
      visible: true,
      items: [
        { name: 'First', selected: false },
        { name: 'Second', selected: false }
      ]
    };

    dispatcher.register((e) => {
      switch (e.type) {

        // Toggles the visibility of the panel, which is
        // visible by default.
        case 'PANEL_TOGGLE':
          this.state.visible = !this.state.visible;
          this.emit('change', this.state);
          break;

        // Selects an object from "items", but only
        // if the panel is visible.
        case 'ITEM_SELECT':
          let item = this.state.items[e.payload];

          if (this.state.visible && item) {
            item.selected = true;
            this.emit('change', this.state);
          }

          break;
      }
    });
  }
}

export default new PanelStore();

The PANEL_TOGGLE action and the ITEM_SELECT action are two local actions handled by this store. They’re local because they’re likely triggered by the user clicking a button or selecting a checkbox. Let’s dispatch these actions so we can see how our store handles them:

import dispatcher from './dispatcher';
import panelStore from './stores/panel';

// Logs the state of the "panelStore" when it changes.
panelStore.on('change', (state) => {
  console.log('visible', state.visible);
  console.log('selected', state.items.filter(
    x => x.selected));
});

// This will select the first item.
dispatcher.dispatch({
  type: 'ITEM_SELECT',
  payload: 0
});
// → visible true
// → selected [ { name: First, selected: true } ]

// This disables the panel by toggling the "visible"
// property value.
dispatcher.dispatch({ type: 'PANEL_TOGGLE' });
// → visible false
// → selected [ { name: First, selected: true } ]

// Notice the second item isn't actually selected,
// because the panel is disabled. No "change" event
// is emitted here either, because the "visible"
// property is false.
dispatcher.dispatch({
  type: 'ITEM_SELECT',
  payload: 1
});

This example serves as an illustration as to why we should consider all things state-related during the skeleton architecture implementation phase. Just because we’re not implementing actual UI components right now, doesn’t mean we can’t guess at some of the potential states of common building blocks. In this code, we’ve discovered that the ITEM_SELECT action is actually dependent on the PANEL_TOGGLE action. This is because we don’t actually want to select an item and update the view when the panel is disabled.

Building on this idea, should other components be able to dispatch this action in the first place? We’ve just found a potential store dependency, where the dependent store would query the state of panelStore before actually enabling UI elements. All of this from local actions that don’t even talk to APIs, and without actual user interface elements. We’re probably going to find many more items like this throughout the course of our skeleton architecture, but don’t get hung up on finding everything. The idea is to learn what we can, while we have an opportunity to, because once we start implementing real features, things become more complicated.
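A minimal sketch of that dependency check follows. The toolbarStore name is hypothetical, and panelStore here is a stand-in object with the same state shape as the store above:

```javascript
// Stand-in for the panel store - only the state shape matters
// for this sketch.
const panelStore = {
  state: { visible: false }
};

// A hypothetical dependent store: before marking its own UI
// elements as enabled, it queries the panel store's state.
const toolbarStore = {
  state: { enabled: true },

  // Called when this store handles its own actions; the result
  // depends on another store's state.
  update() {
    this.state.enabled = panelStore.state.visible;
    return this.state.enabled;
  }
};

console.log(toolbarStore.update());
// → false, because the panel isn't visible
```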

Stores and feature domains

With more traditional frontend architectures, models that map directly to what’s returned from the API provide a clear and concise data model for our JavaScript components to work with. Flux, as we now know, leans more in the direction of the user, and focuses on the information that they need to see and interact with. This doesn’t need to be a gigantic headache for us, especially if we’re able to decompose our user interface into domains. Think of a domain as a really big feature.

In this section, we’ll talk about identifying the top-level features that form the core of our UI. Then, we’ll work on shedding irrelevant API data from the equation. We’ll finish the section with a look at the structure of our store data, and the role it plays in the design of our skeleton architecture.

Identifying top-level features

During the skeleton architecture phase of our Flux project, we should jump in and start writing store code, just as we’ve done in this chapter. We’ve been thinking about the information the user is going to need and how we can best get this information to the user. Something we didn’t spend a lot of time on upfront was trying to identify the top level features of the application. This is fine, because the exercises we’ve performed so far in this chapter are often a prerequisite for figuring out how to organize our user interface.

However, once we’ve identified how we’re going to implement some of the low-level store mechanisms that get us the information we’re after, we need to start thinking about these top-level features. And there’s a good reason for this—the stores we ultimately maintain will map to these features. When we say top-level, it’s tempting to use the navigation as the point of reference. There’s actually nothing wrong with using the page navigation as a guide; if it’s big enough for the main navigation, it’s probably a top-level feature that’s worthy of its own Flux store.

In addition to being a top-level feature, we need to think about the role of the store—why does it exist? What value does it add for the user? These questions are important because we could end up with six pages that all could have used the same store. So it’s a balance between consolidating value into one store and making sure that the store isn’t too large and general-purpose.

Applications are complex, with lots of moving parts that drive lots of features. Our user interface probably has 50 awesome features, but this is unlikely to require 50 awesome top-level navigation links and 50 Flux stores. Our stores will, at some point, have to represent the complex intricacies of these features in their data. That comes later, though; for now we just need to get a handle on approximately how many stores we’re working with, and how many dependencies we have between them.

Irrelevant API data

Use it or lose it—the mantra of Flux store data. The challenge with API data is that it’s a representation of a backend resource—it’s not going to return data that’s specifically required for our UI. An API exists so that more than one UI can be built on it. However, this means that we often end up with irrelevant data in our stores. For example, if we only need a few properties from an API resource, we don’t want to store 36 properties, especially when some of these can themselves be collections. This is wasteful in terms of memory consumption, and confusing in terms of why the data exists at all. It’s actually the latter point that’s more concerning, because unused data can easily mislead other programmers working on the project.

One potential solution is to exclude these unused values from the API response. Many APIs today support this, by letting us opt-in to the properties we want returned. And this is probably a good idea if it means drastically reduced network bandwidth. However, this approach can also be error-prone because we have to perform this filtering at the ajax call level, instead of at the store level. Let’s look at an example that takes a different approach, by specifying a store record:

import dispatcher from '../dispatcher';
import { EventEmitter } from 'events';

class PlayerStore extends EventEmitter {
  constructor() {
    super();

    // The property keys in the default state are
    // used to determine the allowable properties
    // used to set the state.
    this.state = {
      id: null,
      name: ''
    };

    dispatcher.register((e) => {
      switch (e.type) {
        case 'PLAYER_LOAD':

          // Make sure that we only take payload data
          // that's already a state property, and avoid
          // clobbering defaults when a key is missing.
          for (let key in this.state) {
            if (key in e.payload) {
              this.state[key] = e.payload[key];
            }
          }

          this.emit('change', this.state);
          break;
      }
    });
  }
}

export default new PlayerStore();

In this example, the default state object plays an important role, other than providing default state values. It also provides the store record. In other words, the property keys used by the default state determine the allowable values when the PLAYER_LOAD action is dispatched. Let’s see if this works as expected:

import dispatcher from './dispatcher';
import playerStore from './stores/player';

// Logs the state of the player store when it changes.
playerStore.on('change', (state) => {
  console.log('state', state);
});

// Dispatch a "PLAYER_LOAD" action with more payload
// data than is actually used by the store.
dispatcher.dispatch({
  type: 'PLAYER_LOAD',
  payload: {
    id: 1,
    name: 'Mario',
    score: 527,
    rank: 12
  }
});
// → state {id: 1, name: "Mario"}
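
The filtering loop inside PlayerStore could also be factored into a small reusable helper, so that every store applies the same record rule. This is a sketch of our own; the name pickState is invented and doesn’t appear in the book’s code.

```javascript
// Hypothetical helper (the name "pickState" is ours): copy into
// the state only those payload properties that already exist as
// default state keys, leaving defaults intact for missing keys.
function pickState(state, payload) {
  for (const key of Object.keys(state)) {
    if (key in payload) {
      state[key] = payload[key];
    }
  }
  return state;
}

const state = { id: null, name: '' };
pickState(state, { id: 1, name: 'Mario', score: 527, rank: 12 });
console.log(state); // → { id: 1, name: 'Mario' }
```

Each store’s register() callback can then call pickState(this.state, e.payload) instead of repeating the loop.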

Structuring store data

All of the examples shown so far in this chapter have relatively simple state objects within stores. Once we build the skeleton architecture up, these simple objects will turn into something more complicated. Remember, the state of a store reflects the state of the information that the user is looking at. This includes the state of some of the elements on the page.

This is something we need to keep an eye on. Just because we’re through performing the skeleton architecture exercise doesn’t mean an idea will hold up as we start to implement more elaborate features. In other words, if a store state becomes too large—too nested and deep—then it’s time to consider moving our stores around a little bit.

The idea is that we don’t want too many stores driving our views, because they’re more like models from an MVC architecture at this point. We want the stores to represent a specific feature of the application. This doesn’t always work out, because we could end up having some complex and convoluted state in the store for the feature. In this case, our top-level feature needs to be split somehow.

This will no doubt happen at some point during our time with Flux, and there’s no rule in place that says when it’s time to refactor stores. Instead, if the state data stays at a size where it feels comfortable to work with, you’re probably fine with the store as it is.
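
As a rough illustration of what such a refactoring looks like (the feature names here are invented, not from the book), the split usually follows the state’s own nesting boundaries:

```javascript
// Before: one "settings" store state that has grown too nested,
// mixing two concerns in a single object.
const settingsState = {
  profile: { email: '', avatar: { url: '', size: 0 } },
  privacy: { visibility: 'public', blocked: [] }
};

// After: two stores, each holding a flatter state that maps to
// a narrower feature of the application.
const profileState = { email: '', avatar: { url: '', size: 0 } };
const privacyState = { visibility: 'public', blocked: [] };

console.log(Object.keys(settingsState)); // → [ 'profile', 'privacy' ]
console.log(Object.keys(profileState)); // → [ 'email', 'avatar' ]
```

Views that only cared about one branch of the old state now listen to one smaller store instead.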

Bare bone views

We’ve made enough progress with our skeleton stores that we’re ready to start looking at skeleton views. These are simple classes, much in the same spirit as our skeleton stores, except that we’re not actually rendering anything to the DOM. The idea with these bare bone views is to confirm that the infrastructure of our architecture is sound, and that our view components are in fact getting the information they expect. This is crucial because the views are the final item in the Flux data-flow; if they’re not getting what they need, when they need it, we have to go back and fix our stores.

In this section, we’ll discuss how our bare-boned views can help us more quickly identify when stores are missing a particular piece of information. Then, we’ll look at how these views can help us identify potential actions in our Flux application.

Finding missing data

The first activity we’ll perform with our bare bone views is figuring out whether or not the stores are passing along all the essential information required by the view. By essential, we’re talking about things that would be problematic for the user were they not there. For example, we might be looking at a settings page where a whole section is missing. Or, there’s a list of options to select from, but we don’t actually have the string labels to show because they’re part of some other API.

Once we figure out that these critical pieces of information are missing from the store, the next step is to determine whether the store can provide them at all, because if it can’t, we’ve just avoided spending an inordinate amount of time implementing a full-fledged view. These cases are rare, however. Usually, it isn’t a big deal to go back to the store in question and add the missing transformation that computes and sets the state we’re looking for.

How much time do we need to spend on these bare bone views? Think of it this way—as we start implementing the actual views that render to the DOM, we’ll discover more missing state in the stores. These gaps, however, are superficial and easy to fix. With the bare bone views, we’re more concerned with teasing out the critical parts that are missing. What can we do with these views when we’re done with them? Are they garbage? Not necessarily. Depending on how we want to implement our production views, we could either adjust them to become ReactJS components, or we could embed the actual view inside the bare bone view, making it more of a container.

Identifying actions

As we saw earlier in the chapter, the first set of actions to be dispatched by a given Flux architecture are going to be related to fetching data from the backend API. Ultimately, these are the start of the data-flows that end with the views. Sometimes, these are merely load type actions, where we’re explicitly saying to go fetch the resource and populate our store. Other times, we might have more abstract actions that describe the action taken by the user, resulting in several stores being populated from many different API endpoints.

This gets us to a point where we can start thinking about how the user is going to want to interact with this information. The only way they can do so is by dispatching more actions. Let’s create a view with some action methods. Essentially, the goal is to have access to our views from the browser JavaScript console. This lets us inspect the state information associated with the view at any given point, as well as call methods that dispatch the corresponding actions.

To do this, we need to adjust our Webpack configuration slightly:

output: {
  …
  library: 'views'
}

This one line will export a global views variable in the browser window, and its value will be whatever our main.js module exports. Let’s have a look at this now:

import settingsView from './views/settings';
export { settingsView as settings };

Well, this looks interesting. We’re simply exporting our view as settings. So, as we’re creating our bare bone views in the skeleton architecture, we simply follow this pattern in main.js to keep adding views to the browser console to experiment with. Let’s now take a look at the settings view itself:

import dispatcher from '../dispatcher';
import settingsStore from '../stores/settings';

// This is a "bare bones" view because it's
// not rendering anything to the DOM. We're just
// using it to validate our Flux data-flows and
// to think about potential actions dispatched
// from this view.
class SettingsView {
  constructor() {

    // Logs the state of "settingsStore" when it
    // changes.
    settingsStore.on('change', (state) => {
      console.log('settings', state);
    });

    // The initial state of the store is logged.
    console.log('settings', settingsStore.state);
  }

  // Sets an email value by dispatching
  // a "SET_EMAIL" action. The default argument
  // makes it easy to call from the console.
  setEmail(email = 'foo@bar.com') {
    dispatcher.dispatch({
      type: 'SET_EMAIL',
      payload: email
    });
  }

  // Do all the things!
  doTheThings() {
    dispatcher.dispatch({
      type: 'DO_THE_THINGS',
      payload: true
    });
  }
}

// We don't need more than one of these
// views, so export a new instance.
export default new SettingsView();

The only thing left to do now is to see what’s available in the browser console when we load this page. We should have a global views variable, and this should have each of our view instances as properties. Now, we get to play around with actions dispatched by views as though we’re users clicking around in the DOM. Let’s see how this looks:

views.settings.setEmail()
// → settings {email: "foo@bar.com", allTheThings: false}

views.settings.doTheThings()
// → settings {email: "foo@bar.com", allTheThings: true}


End-to-end scenarios

At some point, we’re going to have to wrap up the skeleton architecture phase of the project and start implementing real features. We don’t want the skeleton phase to drag on for too long because then we’ll start making too many assumptions about the reality of our implementation. At the same time, we’ll probably want to walk through a few end-to-end scenarios before we move on.

The aim of this section is to provide you with a few high-level points to be on the lookout for in each architectural layer. These aren’t strict criteria, but they can certainly help us formulate our own measurements for determining whether or not we’ve adequately answered our questions about the information architecture by building a skeleton. If we’re feeling confident, it’s time to go full steam ahead and flesh out the application in detail—the subsequent chapters in this book dive into the nitty-gritty of implementing Flux.

Action checklist

The following items are worth thinking about when we’re implementing actions:

  • Do our features have actions that bootstrap store data by fetching it from the API?
  • Do we have actions that change the state of backend resources? How are these changes reflected in our frontend Flux stores?
  • Does a given feature have any local actions, and are they distinct from actions that issue API requests?
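
The first bullet can be sketched as follows. Everything here is an invented stand-in—the endpoint, the callback-style fetchJson, and the recording dispatcher—so the flow can be exercised without a backend; only the PLAYER_LOAD action type comes from the earlier store example.

```javascript
// Hypothetical bootstrap action: fetch a resource, then dispatch
// the result so stores can populate their state from it.
function bootstrapPlayer(dispatcher, fetchJson) {
  fetchJson('/api/players/1', (payload) => {
    dispatcher.dispatch({ type: 'PLAYER_LOAD', payload });
  });
}

// Stand-ins: a recording dispatcher and a fake fetch that
// "responds" immediately instead of making a real request.
const dispatched = [];
const dispatcher = { dispatch(action) { dispatched.push(action); } };
const fetchJson = (url, done) => done({ id: 1, name: 'Mario' });

bootstrapPlayer(dispatcher, fetchJson);
console.log(dispatched[0].type); // → PLAYER_LOAD
```

A store like PlayerStore registered with the real dispatcher would pick this action up and populate itself, completing the bootstrap data-flow.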

Store checklist

The following items are worth thinking about when implementing stores:

  • Does the store map to a top-level feature in our application?
  • How well does the data structure of the store meet the needs of the views that use it? Is the structure too complex? If so, can we refactor the store into two stores?
  • Do the stores discard API data that isn’t used?
  • Do the stores map API data to relevant information that the user needs?
  • Is our store structure amenable to change once we start adding more elaborate view functionality?
  • Do we have too many stores? If so, do we need to rethink the way we’ve structured the top-level application features?

View checklist

The following items are worth thinking about when implementing views:

  • Does the view get the information it needs out of the store?
  • Which actions result in the view rendering?
  • Which actions does the view dispatch, in response to user interaction?

Summary

This chapter was about getting started with a Flux architecture by building some skeleton components. The goal was to think about the information architecture without the distraction of other implementation issues. We could find ourselves in a situation where the API is already defined for us, or where the user experience is already in place. Either of these factors will influence the design of our stores, and ultimately the information we present to our users.

The stores we implemented were basic, loading data when the application starts and updating their state in response to an API call. We did, however, learn to ask the pertinent questions about our stores, such as the approach taken with parsing the new data to set as the store’s state, and how this new state will affect other stores.

Then, we thought about the top-level features that form the core of our application. These features give a good indication of the stores that our architecture will need. Toward the end of the skeleton architecture phase, we want to walk through a few end-to-end scenarios to sanity-check our chosen information design. We looked at a few high-level checklist items to help ensure we didn’t leave anything important out. In the following chapter, we’ll take a deeper look at actions and how they’re dispatched.

Chapter 4. Creating Actions

In the previous chapter, we worked on building a skeleton architecture for our Flux application. The actions were directly dispatched by the dispatcher. Now that we have a skeleton Flux architecture under our belts, it’s time to look more deeply into actions, and in particular, how actions are created.

We’ll start by talking about the names we give actions and the constants used to identify the available actions in our system. Then, we’ll implement some action creator functions, and we’ll think about how we can keep these modular. Even though we might be done with implementing our skeleton architecture, we may still have a need to mock some API data—we’ll go over how this is done with action creator functions.

Typical action creator functions are stateless—data in, data out. We’ll cover some scenarios where action creators actually depend on state, such as when long-running connections are involved. We’ll wrap the chapter up with a look at parameterized action creators, allowing us to reuse them for different purposes.
