NDomain, a new framework to simplify DDD, CQRS and Event Sourcing development

In my last post I talked about how I ended up creating a framework to simplify DDD, CQRS and Event Sourcing development, and to help me better understand these concepts at the lower levels.

NDomain

NDomain’s source code repository can be found on GitHub. Here’s what you can expect from NDomain:

  • Robust EventStore implementation, where events can be stored and published using different technologies
  • Base aggregate class, whose state can be rebuilt from stored events or from a snapshot
  • Repository to load/save aggregates, so you get true persistence ignorance in your domain layer
  • Brokerless message bus with transports for multiple technologies including Redis and Azure Queues
  • CommandBus and EventBus built on top of the message bus, as well as Command and Event handlers
  • Integration with your logging and IoC container
  • Fully async to its core, leveraging non-blocking IO operations and keeping resource usage to a minimum
  • Naming-convention based, meaning you don’t need to implement interfaces for each command/event handler; it just works, and it’s fast!
  • No reflection to invoke command/event handlers or to rebuild aggregates; everything is wired up using compiled lambda expression trees created on startup (see the sketch after this list)
  • In-proc implementations for all components, so you can decide to move to a distributed architecture later without having to refactor your whole solution.
  • A straightforward Fluent configuration API, to let you choose the implementation of each component
  • A suite of base unit test classes, so that all different implementations for a given component are tested in the same way
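
To illustrate the last two points, here’s a minimal sketch of the compiled-lambda technique; it is not NDomain’s actual code. Reflection runs only once, at startup, to find a private On(TEvent) method, which is then compiled into a fast, reusable delegate:

using System;
using System.Linq.Expressions;
using System.Reflection;

public static class ApplierCompiler
{
    // Compiles "target.On(ev)" into a delegate, so no reflection happens per call
    public static Action<TTarget, object> Compile<TTarget>(MethodInfo onMethod)
    {
        var target = Expression.Parameter(typeof(TTarget), "target");
        var ev = Expression.Parameter(typeof(object), "ev");

        var call = Expression.Call(
            target,
            onMethod,
            Expression.Convert(ev, onMethod.GetParameters()[0].ParameterType));

        return Expression.Lambda<Action<TTarget, object>>(call, target, ev).Compile();
    }
}

// resolved and compiled once at startup, then reused for every event replay:
// var apply = ApplierCompiler.Compile<SaleState>(onOrderPlacedMethod);
// apply(state, orderPlacedEvent);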

Great, how does it work?

Here are some basics to get you started; you can also check the samples.

Configuring the DomainContext

The DomainContext is NDomain’s container, where all components are accessible and message processors can be started and stopped.


var context = DomainContext.Configure()
                           .EventSourcing(c => c.WithAzureTableStorage(azureAccount, "events"))
                           .Logging(c => c.WithNLog())
                           .IoC(c => c.WithAutofac(container))
                           .Bus(c => c.WithAzureQueues(azureAccount)
                                      .WithRedisSubscriptionStore(redisConnection)
                                      .WithRedisSubscriptionBroker(redisConnection)
                                      .WithProcessor(p => p.Endpoint("background-worker")
                                                           .RegisterHandler<CommandHandlerThatUpdatesSomeAggregate>()
                                                           .RegisterHandler<EventHandlerThatUpdatesAReadModel>()
                                                           .RegisterHandler<EventHandlerThatUpdatesAnotherReadModel>()))
                           .Start();

DomainContext exposes an ICommandBus, IEventBus, IEventStore and IAggregateRepository that you can use either by passing the DomainContext around or, if you use an IoC container, by registering them in the container and depending on them directly.
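
If NDomain’s Autofac integration doesn’t already register these for you, a manual registration could look like the sketch below. The property names on DomainContext are assumptions here, so check the actual API:

using Autofac;

var builder = new ContainerBuilder();

// register the context's components so your classes can take plain
// ICommandBus / IEventBus / IEventStore constructor dependencies
builder.RegisterInstance(context.CommandBus).As<ICommandBus>();  // assumed property name
builder.RegisterInstance(context.EventBus).As<IEventBus>();      // assumed property name
builder.RegisterInstance(context.EventStore).As<IEventStore>();  // assumed property name

builder.Update(container); // the same container instance passed to WithAutofac above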

Creating aggregates

A sample aggregate, enforcing domain rules by checking its state properties and firing state-change events:


public class Sale : Aggregate<SaleState>
{
    public Sale(string id, SaleState state) : base(id, state)  { }

    public bool CanPlaceOrder(Order order)
    {
        return State.AvailableStock >= order.Quantity;
    }

    public void PlaceOrder(Order order)
    {
        if (State.PendingOrders.ContainsKey(order.Id))
        {
            // idempotency
            return;
        }

        if (!CanPlaceOrder(order))
        {
            // return error code or throw exception
            throw new InvalidOperationException("not enough quantity");
        }

        this.On(new OrderPlaced { SaleId = this.Id, Order = order });
    }

    public void CancelOrder(string orderId)
    {
        if (!State.PendingOrders.ContainsKey(orderId))
        {
            // idempotency
            return;
        }

        this.On(new OrderCancelled { SaleId = this.Id, OrderId = orderId });
    }

    // check OpenStore samples for complete example
}

An aggregate’s State changes when the aggregate fires events, and it can be rebuilt by applying all past events loaded from the IEventStore:


public class SaleState : State
{
    public string SellerId { get; set; }
    public Item Item { get; set; }
    public decimal Price { get; set; }
    public int Stock { get; set; }
    public int AvailableStock { get; set; }

    public Dictionary<string, Order> PendingOrders { get; set; }

    public SaleState()
    {
        this.PendingOrders = new Dictionary<string, Order>();
    }

    private void On(SaleCreated ev)
    {
        this.SellerId = ev.SellerId;
        this.Item = ev.Item;
        this.Price = ev.Price;
        this.Stock = this.AvailableStock = ev.Stock;
    }

    private void On(OrderPlaced ev)
    {
        AvailableStock -= ev.Order.Quantity;
        PendingOrders[ev.Order.Id] = ev.Order;
    }

    private void On(OrderCancelled ev)
    {
        AvailableStock += PendingOrders[ev.OrderId].Quantity;
        PendingOrders.Remove(ev.OrderId);
    }

    private void On(OrderCompleted ev)
    {
        var order = PendingOrders[ev.OrderId];
        
        Stock -= order.Quantity;
        PendingOrders.Remove(ev.OrderId);
    }
    
    // check OpenStore samples for complete example
}

Aggregates are loaded and saved by an IAggregateRepository, which persists their state-change events using the IEventStore. As events are persisted, they are also published on the IEventBus.
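
Conceptually, that load/save cycle looks like the sketch below. This is not NDomain’s actual implementation, and LoadEvents, AppendEvents, state.Apply and sale.UncommittedEvents are assumed names, but it shows the idea:

// eventStore stands for an IEventStore-like dependency (field omitted here)
public async Task Update(string id, Action<Sale> change)
{
    // 1. rebuild state by replaying all stored events
    var state = new SaleState();
    foreach (var ev in await eventStore.LoadEvents(id))
        state.Apply(ev); // dispatches to the matching private On(...) method

    // 2. run the domain operation, which fires new state-change events
    var sale = new Sale(id, state);
    change(sale);

    // 3. persist the new events; as they are persisted, they are also
    //    published on the IEventBus for event handlers to react to
    await eventStore.AppendEvents(id, sale.UncommittedEvents);
}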

CQRS handlers and processors

A command handler processes commands sent through the ICommandBus, updating aggregates and persisting state changes:


public class SaleCommandHandler
{
    readonly IAggregateRepository<Sale> repository;
    
    public SaleCommandHandler(IAggregateRepository<Sale> repository)
    {
        this.repository = repository;
    }

    public async Task Handle(ICommand<CreateSale> command)
    {
        var cmd = command.Payload;

        await repository.CreateOrUpdate(cmd.SaleId,
                                        s => s.Create(cmd.SellerId, cmd.Item, cmd.Price, cmd.Stock));
    }

    public async Task Handle(ICommand<PlaceOrder> command)
    {
        var cmd = command.Payload;

        await repository.Update(cmd.SaleId, s => s.PlaceOrder(cmd.Order));
    }

    // other commands
}

An event handler reacts to published events and updates the read models used in your queries:


public class SaleEventHandler
{
    
    public async Task On(IEvent<OrderCompleted> @event)
    {
        var ev = @event.Payload;

        // do something with it
    }

    // .. other events
}

As you can see, NDomain tries to be as unintrusive in your code as possible: you don’t need to implement message handler interfaces, as long as you follow the naming conventions.

Message processing is transactional, so if a message handler fails or times out, the message goes back to the queue to be retried. It is therefore important to design your aggregates, command handlers and event handlers to be idempotent, to avoid side effects from retries.

A processor has an endpoint address (internally, a queue) where you can register message handlers, usually for commands and events, though really any POCO can be used as a message. When you register handlers, message subscriptions are created based on the message’s Type name, and whenever a message is sent, each subscription (in this case, a processor/handler) gets a copy of it.

Your command/event handlers can scale horizontally, as multiple processors using the same endpoint address will process messages from the same input queue in competing-consumers fashion.
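
For instance, using the same fluent API from the configuration example above, you could split command handling and read-model updates into separate endpoints (the endpoint names here are made up) and scale each one by running more instances of the corresponding process:

.Bus(c => c.WithProcessor(p => p.Endpoint("sales-commands")    // command queue, competing consumers
                                .RegisterHandler<SaleCommandHandler>())
           .WithProcessor(p => p.Endpoint("sales-readmodels")  // separate queue for denormalizers
                                .RegisterHandler<SaleEventHandler>()))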

Contributing

If you would like to have support for other technologies, please take a look at the existing implementations and feel free to implement your own and submit a pull request. NDomain’s source code is clean and simple; let’s keep it that way!


My journey on DDD, CQRS and Event Sourcing

This post is not about what DDD, CQRS and Event Sourcing are, but rather how I’ve been using them.

Over the last year I’ve been developing, in my spare time, a collaborative social app (web and mobile) where you can have user groups with activities, polls, discussions, feeds, and more.

As I’m targeting a mostly mobile audience, I wanted to support disconnected clients and let offline users work on cached data; once they’re back online, I can synchronize their changes. This is easier to accomplish with task-based UIs (where user actions map to commands in CQRS), and one user’s action doesn’t really need to be immediately visible to all members of the group, since they’re likely offline and will only see the changes later. However, I wanted to track and list the changes users have made, not just show the final version of the data, giving a better feeling of collaboration even though clients can be disconnected most of the time.

This app could possibly scale to millions of users, and I wanted to keep it free of ads, so I needed the backend to be fast, scalable, cloud hosted and as cheap as possible. I’m currently using Azure, but the original plan was to use AWS. On Azure I can implement my messaging infrastructure on top of Azure Queues and use Table Storage for my EventStore; on AWS I could use SQS for messaging and DynamoDB for my EventStore. The key point is that with the right set of abstractions, my architecture doesn’t get tied to any particular service, database or cloud provider.
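
As a sketch of what I mean by the right set of abstractions, the messaging infrastructure only ever sees an interface like the hypothetical one below; the Azure implementation wraps Azure Queues, and an AWS implementation would wrap SQS:

using System;
using System.Threading.Tasks;

public interface IMessageTransport
{
    Task Send(string endpoint, byte[] message);
    Task<byte[]> Receive(string endpoint, TimeSpan timeout);
}

// AzureQueueTransport and SqsTransport would both implement this; swapping
// clouds means swapping implementations, not rewriting handlers.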

Below is an overview of my current architecture. Non-blue boxes are components that can be hosted in separate processes / machines, but there’s really no obligation to do so. My current setup is one worker role for the API and another worker role for command and event handlers.

[Figure: backend architecture overview]


Architecting Silverlight LOB applications (Part 6) – Building an MVVM Framework

Hello again! In this post I’m going to talk about building an MVVM framework.

As I said in the previous post, this post should be about the OrderView. This view should allow users to pick items from a list of products, add them to the actual order, choose quantities, view the current total price and submit the order. The user should also be able to filter, sort and paginate the product list. I’m sure you’ve seen enough blog posts from other people on this subject: get the selected product, add it to another collection or create an order detail based on it, then update some other data on the UI and finally submit the changes back to the server. The thing is, everyone has their own way of programming, and when you end up in a team you may find that two people coded the same thing in different ways, where one has a bug in situation A and the other has a bug in situation B. Having a good MVVM framework with a well-defined methodology is a must to prevent these situations. In this post I want to talk about the essential components you must have in an MVVM framework. Later, I’ll describe an MVVM framework I’ve been working on, which was built around WCF RIA Services but doesn’t really depend on it.

Since we’re following best practices, we know that with a good MVVM architecture we can come up with a solution whose fetching, filtering, sorting and paging logic is entirely separated from the view, allowing us to have different views for the same view model. For example, we can start by using a DataGrid and a DataPager to display our items, but later provide a new view that uses comboboxes to select the sort options, an album-like listbox to show the items and custom buttons for paging. We should also be able to separate all this logic from the actual data access logic, so we can use mock objects for our model and unit test our viewmodels. That’s not an easy task, but that’s what I want to achieve from now on.

Well, to start, .NET / Silverlight already offers us some classes and interfaces that are very handy for MVVM scenarios.

  • INotifyPropertyChanged – Used to raise an event when a property changes. The WPF / Silverlight binding framework uses this interface to update the view when a property changes (see the sketch after this list).
  • INotifyCollectionChanged – Used to raise an event when an insert, remove, clear or replace operation has been performed on a collection. WPF / Silverlight controls that have an ItemsSource property usually use this interface to create or delete visual items in a container. For example, ListBoxes display new ListBoxItems and DataGrids display new DataGridRows.
  • ICollectionView – Used to provide filter, sort descriptions, group descriptions, and item selection for an IEnumerable collection and have the view display only the filtered items, sorted according to the sort descriptions and highlight the selected item. (Has more features but these are the most relevant for the sake of this post).
  • IPagedCollectionView – Used to provide paging options for an IEnumerable collection. This is used mostly by DataPagers, which call the MoveToPage(int pageIndex) method, and it allows us to register for the PageChanging event and fetch a new page of entities to be displayed.
  • There are other important interfaces, like IEditableObject and IEditableCollectionView, but I’m not going to cover those in this post. They are used to update property values of an object in an atomic fashion.
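
As promised above, here’s what the INotifyPropertyChanged pattern looks like in a view model. ProductListViewModel is a hypothetical example, written the Silverlight-era way:

using System.ComponentModel;

public class ProductListViewModel : INotifyPropertyChanged
{
    private string filter;
    public string Filter
    {
        get { return filter; }
        set { filter = value; RaisePropertyChanged("Filter"); } // bindings refresh here
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void RaisePropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}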


Architecting Silverlight LOB applications (Part 5) – Modules and UI composition

In this fifth post and the next ones I’m going to write about client development and how to create a modular and composite Silverlight application. In this post I’ll focus on UI composition.

When building composite LOB Silverlight apps you should be aware that modularity is an important requirement. As a developer, it should be easy for you to maintain your code and add new features to your application without having to change the rest, and your application’s main XAP file doesn’t grow bigger. Every module is a separate XAP file that can be downloaded on demand, depending on the user’s intent to use a certain functionality.

Below is one possible architecture to achieve what we need.

[Figure: solution structure]


Architecting Silverlight LOB applications (Part 3) – DomainServices and Repository

Hello again,

In the previous post I described an overview of an architecture that can be used when developing modular Silverlight apps. In this post I’ll start with a simple business domain model and create a domain layer around that model.

In this example our app will be a web application for a company that sells products and that can be used by its customers to fulfill orders and its employees to process them. These orders have details which consist of products and quantities. Each product can have a product category. When a customer submits an order fulfillment, an employee of the company may process that order to prepare the items for shipping.

The company has a few departments, and each employee belongs to a department. This means that employees from the Sales department will be able to process orders; employees from the Human Resources department will be able to recruit new employees; and employees from the Financial department will be able to watch charts and print reports with statistics on how sales are evolving over time, sales reports, etc.


Architecting Silverlight LOB applications (Part 2) – Architecture overview

Continuing our series, I’d like to present an architecture that I’ve been working on, which can be used in Silverlight LOB apps that are developed using DDD (Domain-Driven Design) and have N layers.

The following picture describes how an application can be layered into multiple Modules, all hosted within regions of a Shell. These modules can communicate with each other through well-defined messages and contracts defined in the module Infrastructure layer. Furthermore, they retrieve and manipulate data objects through WCF services suited for DDD (WCF RIA Services). These data objects, often called entities, can represent your Domain Model and are persisted in a data store using Data Access Objects that abstract both the data store and the data access technology you’re using. WCF RIA Services provides ways for you to manipulate your entities on the client much as you would on the server when it comes to querying, changing data and submitting changes.

[Figure: architecture overview]

Such applications often rely on server-side infrastructure for authentication and authorization, logging and error management. On the client side they rely on frameworks that help reduce, harmonize and reuse code in module development, both in the presentation logic and in the presentation views. Some layers may have components that depend on components from other layers; in such cases, we reduce these dependencies with IoC containers or similar frameworks like MEF.

Developing in an N-layered architecture can be troublesome when you want to reuse code, such as validation or other business logic, that has to be specified both server-side and client-side. WCF RIA Services relies on metadata that describes how entities are defined, validated and how they relate to each other. Although this technique is very effective, there are related problems that aren’t noticed until you actually hit them, and I’ll try to identify and address some of them.

Notice the mapping layer between the Data entities and the Domain entities. This layer is “optional” and can be useful when you don’t want to expose your data entities as your domain model. This helps achieve persistence ignorance and also allows you to create simple and lightweight classes to transport data. For simplicity, I’m not going to use this layer in the upcoming posts, so my data entities will be my domain entities.

In the following posts I’ll be building an app that respects these layers and will evolve over time. I’ll also build a simple framework aimed for MVVM development to show how effective this pattern can be.

Never soon enough, but see you soon.

Architecting Silverlight LOB applications (Part 1) – Concepts and technologies

In this first part of the series I’ll focus on some open source projects in Microsoft’s Patterns & Practices and other technologies that are available and can help us achieve our goal.

First we have PRISM (Composite Application Guidance for WPF and Silverlight). PRISM consists of guidance and a set of libraries designed to help you more easily build modular WPF and Silverlight apps. It also has built-in components that let you take advantage of some well-known patterns like MVVM, Event Aggregator and Dependency Injection. There are others, but I’ll focus on these three because I find them the most useful.

[Figure: PRISM]

PRISM applies to client development. I think the best feature PRISM provides is the ability to specify your application modules (XAPs in Silverlight) as a catalog and load them on demand. We’ll explore this in detail in other parts of this series when we create a module catalog to be used within the application.

Dependency injection (a specific usage of the Inversion of Control concept) is a prime technique for building loosely coupled applications because it provides ways to handle dependencies between objects. It helps you build testable code because you can easily switch the concrete implementations your objects depend on. PRISM is built on this design principle and uses the Unity container, which implements the DI technique. PRISM also lets you use other DI containers, but I’ll focus on Unity since I don’t have enough experience with the others.

This pattern can be applied in both client and server development.

I didn’t find a nice and simple figure about Dependency Injection, so I’ll link to this post where I used the technique in a WCF service (OrderService) that depended on an IOrderRepository, implemented by the OrderRepository class, with its dependencies injected through its constructor.
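
To make that concrete, here’s a hedged sketch of the same idea with Unity, where OrderService and OrderRepository stand in for the classes from that post:

using Microsoft.Practices.Unity;

var container = new UnityContainer();
container.RegisterType<IOrderRepository, OrderRepository>();

// Unity inspects OrderService's constructor and injects the registered
// IOrderRepository implementation for us
var orderService = container.Resolve<OrderService>();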

The MVVM pattern is becoming the alpha and omega of building WPF and Silverlight apps. It helps you separate concerns when designing your presentation layer and, if it is well thought out, you can create a great set of reusable code for your application and even reuse it in other applications.

[Figure: MVVM pattern]

With MVVM you basically think of your view as a set of classes and properties and not as a set of UI controls. This allows you to easily switch your UI while keeping the same logic it is bound to, and at the same time enables testability. Plus, it allows you to reuse UI logic across lots of views with less effort. This pattern applies to client development.

The Event Aggregator pattern allows you to build your app with an Event-Driven Architecture where components communicate with each other in a decoupled manner. It provides a way to publish and subscribe to events without knowing who is subscribing or who is publishing. The only things you need to know are the event you are publishing or subscribing to and how to react to that event. Usually modules do not depend on each other, and using events as contracts to communicate between them is a good approach. This pattern applies to client development.
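
Here’s a minimal sketch with PRISM’s event aggregator, where OrderSubmittedEvent and RefreshOrders are hypothetical names and eventAggregator is an injected IEventAggregator:

// the event contract both modules share
public class OrderSubmittedEvent : CompositePresentationEvent<string> { }

// publishing module: fire and forget, no knowledge of subscribers
eventAggregator.GetEvent<OrderSubmittedEvent>().Publish(orderId);

// subscribing module: reacts without referencing the publisher
eventAggregator.GetEvent<OrderSubmittedEvent>().Subscribe(id => RefreshOrders(id));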

Next we’ll focus on server development.

For client-server communication in Silverlight, web services are both the enforced mechanism and the recommended approach. Since we are building LOB applications, it makes sense to use services that are good at dealing with data. WCF services sure are good, but are they ideal? What’s the cost of maintaining a complete service layer that lets us expose CRUD operations on our data and also provides the ability to do other queries (like get products by productType), or other business operations? How many times have we done this? And how many times have we done this for different applications? How about validation of the data on the server and client side? How about proxy generation? How about the amount of data sent over the wire just to change a DateTime value in the database?

We all know Visual Studio’s generated code is ugly as hell, and maintaining complete client-side service agents can be troublesome, because you usually need to keep them synchronized with your service contract. In Silverlight it is slightly more complicated to share an assembly with service and data contracts than in WPF, because of type incompatibilities. Also, we usually like our client-side data contracts to implement INotifyPropertyChanged and use ObservableCollections to take advantage of data binding, but on the server we really don’t care about this.

WCF RIA Services aims to solve all these problems and many others. I’ll skip the "all the things RIA Services can do" part and point you to the right place if you still don’t know. The most important features we’ll explore are query composition, data validation, change tracking and unit of work, among others.

[Figure: WCF RIA Services]
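
As a taste of query composition, a domain service exposes an IQueryable and the client’s filter, sort and page expressions are composed on top of it before it executes. A hedged sketch, where CatalogDomainService and IProductRepository are hypothetical:

[EnableClientAccess]
public class CatalogDomainService : DomainService
{
    private readonly IProductRepository products; // hypothetical repository,
                                                  // wired up via a custom factory

    public IQueryable<Product> GetProducts()
    {
        // a client query like GetProductsQuery().Where(p => p.CategoryId == 3)
        // is composed onto this IQueryable, so only the final query executes
        return products.GetAll();
    }
}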

I’ve been using RIA Services for the last 7 months, and one thing I find very useful is that you communicate both with the server and with your model (on the client side) through a very well defined API, which makes it easy to write generic code for reusable viewmodels. This is especially useful when you have multiple people working on the same project and you want to ensure that everyone codes the same way (I will definitely explore this in later posts).

WCF RIA Services’ current release is RC2; it has a go-live license, so we’re good to go.

This doesn’t mean you shouldn’t use WCF as well. There are cases that WCF RIA Services isn’t well suited for, such as file uploading or downloading. In those situations it’s preferable to create a WCF service or an HttpHandler to deal with that particular case.

For our data layer the best options nowadays are still NHibernate (using LINQ to NHibernate) or Entity Framework. Either way, we should encapsulate our data access behind repository objects and not use our data access technology directly in our services; I’ve got a very simple post that talks exactly about this. In this case, we can use either or both. RIA Services generates code on the client based on reflection and other metadata applied to our server-side entities. If you use Entity Framework, RIA Services already has a specific metadata provider that understands relationships between entities and generates them correctly on the client; if you wish to use NHibernate, you will have to specify additional metadata on your entities. For simplicity I’ll use Entity Framework; there are other resources on the web that explain how to use NHibernate.

Last but not least, I’d like to talk about MEF. MEF stands for Managed Extensibility Framework; it ships with .NET 4 (SL4 as well) and is designed to help us compose our apps in a decoupled way, much more easily than using an IoC container for dependency injection. You simply annotate your classes with Import/Export attributes and MEF resolves the dependencies between objects for you. Of course there’s more we can do: you can provide metadata when exporting an object and import only objects that match some criteria. MEF can also be used to download XAPs when an Import must be satisfied and the exported object lives in another XAP, which helps build modular apps. It is said that the new PRISM version will use MEF for its modularity features, which makes it even more interesting.
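
Here’s a minimal MEF sketch of the Import/Export mechanics, with hypothetical ILogger and Shell types:

using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

public interface ILogger { void Log(string message); }

[Export(typeof(ILogger))]
public class DebugLogger : ILogger
{
    public void Log(string message) { System.Diagnostics.Debug.WriteLine(message); }
}

public class Shell
{
    [Import]
    public ILogger Logger { get; set; } // satisfied by a matching [Export]
}

// composing at startup:
var catalog = new AssemblyCatalog(typeof(Shell).Assembly);
var container = new CompositionContainer(catalog);
var shell = new Shell();
container.ComposeParts(shell); // shell.Logger is now a DebugLogger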

In the next post I will start with a simple business scenario and follow a top-down approach, designing an architecture built upon these technologies. I haven’t had much time lately, but I’m trying to catch up, so see you soon!