One of the most common mistakes a junior database architect can make is to design a schema without versioning. It is easy to design a schema, release the corresponding application, and only later realize how difficult it is to maintain that schema, keep versions compatible, and migrate users to new versions.

However, even when the schema includes a concept of a version, work is required to keep the schema in a healthy state, to have a migration procedure, and to have tooling that automates the maintenance tasks.

The FluentMigrator C# library provides the tools needed to solve these problems: a syntax to define a versioned schema, a way to migrate databases from version to version, and tools to automate these tasks during development, deployment, and in the field.


The core concept of FluentMigrator is a migration. A migration is a class which has a version and two methods, Up() and Down(). Up() is responsible for migrating the target database from the previous version to the version defined by the migration. Down() is responsible for the opposite operation: downgrading the database to the previous version.

public class AddNotesTable : Migration
{
      public override void Up()
      {
            Create.Table("Notes").WithColumn("Id").AsInt32().PrimaryKey();
      }

      public override void Down()
      {
            Delete.Table("Notes");
      }
}

Instead of using SQL, migrations are defined using a fluent C# syntax. This approach makes the migrations almost independent of the concrete database engine, hiding the differences between SQL dialects.

A migration's version is defined using MigrationAttribute. The attribute accepts a number, which the migration runner uses to sort all defined migrations and execute them one by one.
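For illustration, a hypothetical migration tagged with its version (the number is arbitrary but must increase; a timestamp-like format is a common convention):

```csharp
// The runner sorts migrations by this number and applies them in order.
[Migration(201606010900)]
public class AddNotesTable : Migration
{
    // Up() and Down() as described above
}
```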

In addition to the schema definition, migrations can also include data seeding.

[Profile("Development")]
public class CreateDevData : Migration
{
      public override void Up()
      {
            Insert.IntoTable("User").Row(new
            {
                  Username = "devuser1",
                  DisplayName = "Dev User1"
            });
      }

      public override void Down()
      {
            // empty, not used
      }
}

This example also demonstrates the idea of profiles: the ability to selectively execute some migrations in order to have, for example, a seeded database for development or testing.


All migrations are usually grouped into one assembly and can be executed using one of the provided tools. FluentMigrator ships CLI, NAnt, MSBuild and Rake migration runners.

Migrate.exe /connection "Data Source=db\db.sqlite;Version=3;" /db sqlite /target migrations.dll

This command uses the CLI tool to execute the migrations from migrations.dll against the database defined by the connection string, using the SQLite driver. The runner automatically detects the current database version and applies only the required migrations.

FluentMigrator is published under the Apache 2.0 license and is available on GitHub and NuGet.

Exploring .NET Open Source ecosystem: logging from netstandard libraries using LibLog

Continuing the discussion of logging, let's talk about LibLog. Unlike other logging libraries, LibLog targets just one specific scenario: logging within reusable libraries.

Why does this scenario need specific treatment? There are two general logging approaches for library developers: one is to choose a logging library and force all library consumers to use the same library; the other is to create a logging façade and ask the consuming application to provide an adapter for its preferred logging library, such as NLog or Serilog. Neither approach is very elegant, and LibLog provides a solution based on an optimized version of the second.

LibLog consists of just one file, which can be added to the project directly from the GitHub repository or via the corresponding NuGet package. With the manual approach, LibLog.cs requires editing its namespace (detailed instructions are provided in the file's header comments).

After that, LibLog's entry point, the LogProvider class, becomes available for use. By design, it is visible only to the assembly which contains LibLog.cs. If the library includes more than one assembly, InternalsVisibleToAttribute can be used to share the logging infrastructure across all components of the library without exposing it to consumers.
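A minimal sketch of that technique (the assembly name is hypothetical):

```csharp
using System.Runtime.CompilerServices;

// In the assembly that contains LibLog.cs, expose internals (including the
// logging infrastructure) to the library's second assembly.
[assembly: InternalsVisibleTo("MyLibrary.Extensions")]
```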

A simple usage scenario looks like this:

public class MyClass
{
        private static readonly ILog Logger = LogProvider.For<MyClass>();

        public void DoSomething()
        {
            Logger.Trace("Method 'DoSomething' in progress");
        }
}

That's it: now the library is ready to automatically pick up the logger used by the consuming application. For example, if Serilog is the selected library, assigning Serilog's static Log.Logger will automatically connect all the moving parts together:

Log.Logger = new LoggerConfiguration()
    .WriteTo.Console()
    .CreateLogger();

Log.Information("Starting...");

var myClass = new MyClass();
myClass.DoSomething();

Log.Information("Finishing...");
The result is

[21:09:18 APP] Starting...
[21:09:18 APP] Method 'DoSomething' in progress
[21:09:18 APP] Finishing... 

Everything described above just works when the library targets the .NET Framework. For a .NET Standard library (at least netstandard1.3), things are a bit more complicated: when LibLog is added to such a library, compilation fails, and two modifications are required to fix it.

First, the LIBLOG_PORTABLE conditional compilation symbol must be defined in the project settings (don't forget to define it in all build configurations in use). Second, two missing NuGet packages must be added: Microsoft.CSharp and System.Dynamic.Runtime. These modifications fix the build and enable LibLog usage.
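For SDK-style projects, defining the symbol could look roughly like this (project.json-based tooling of that era has an equivalent "defines" setting):

```xml
<!-- Appends to the existing list, so it applies to all build configurations -->
<PropertyGroup>
  <DefineConstants>$(DefineConstants);LIBLOG_PORTABLE</DefineConstants>
</PropertyGroup>
```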

Exploring .NET Open Source ecosystem: logging with Serilog

Logging is an important part of every software project. Ideally, developers would prefer to attach a debugger to the failing system and investigate it in situ. In real life, however, systems are rarely available to developers at the moment an issue is happening. This is the reason we rely heavily on logs, which can be analyzed later.

One of the most popular logging libraries for .NET applications is log4net. It is a powerful, flexible library, and the usual workflow looks like this: a developer needs to log an object's state and some action, so they create a human-readable string representing that state and action. The string is passed to log4net and stored in a text file. With the amount of logging data generated by modern software, it is very desirable to process logs automatically, with a software tool. In this case the stored text file must be parsed to extract the object state and analyze it.

When we take the second part of the workflow (parsing and analyzing) into account, it sounds crazy: nicely structured data, available before logging, is transformed into unstructured text only to be parsed and transformed back into a structured state a bit later. What a waste of CPU cycles and time!

Serilog, a structured logging library, aims to stop this craziness. With Serilog, data can be logged in its original form and passed to structured storage (a database, the Windows Event Log, a third-party service) without additional overhead.

var user = new User {Name="Guest", Ip=""};
logger.Information("User {@User} logged", user);

If Serilog is configured to output data to the console, the logged information will be presented as

08:43:12 [Information] User {Name: Guest, Ip:} logged

And for other sinks (like Event Log) it will be captured as JSON

{ "User": {"Name": "Guest", "Ip": ""}}

Serilog supports more than 50 sinks, including console, text files, email, Elasticsearch, RethinkDB and others. Sinks are configured per logger, and more than one sink can be used:

var logger = new LoggerConfiguration()
    .WriteTo.Console()
    .WriteTo.RollingFile("log-{Date}.txt")
    .CreateLogger();

Serilog allows enriching logging data with static information (like a thread id) and defining custom serializers and filters for selective logging.
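For instance, a sketch combining an enricher and a filter (the property name and values are illustrative):

```csharp
using Serilog;
using Serilog.Events;

var logger = new LoggerConfiguration()
    // attach a static property to every logged event
    .Enrich.WithProperty("Application", "DemoApp")
    // selective logging: drop the most verbose events
    .Filter.ByExcluding(e => e.Level == LogEventLevel.Verbose)
    .WriteTo.Console()
    .CreateLogger();
```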

Serilog is published under the Apache 2.0 license and is available on GitHub and NuGet.

Exploring .NET Open Source ecosystem: communicate with NetMQ

A proper introduction to NetMQ requires more than one post and is well beyond the format of this series. Instead, in the spirit of the series, this is an awareness-raising post, intended to bring additional visibility to a great library.

NetMQ is a managed .NET port of the ZeroMQ library. The idea of both libraries is not to compete directly with messaging solutions like RabbitMQ, ActiveMQ or NServiceBus, which provide a rich, high-level feature set out of the box, but to provide a relatively low-level API that allows building complex solutions tailored to specific business and functional needs.

Unlike the solutions mentioned above, NetMQ does not include a central server or broker. NetMQ is built around the idea of sockets: each side of a communication opens a socket and uses it to communicate. NetMQ, following ZeroMQ, supports many different socket types, and each socket type has unique behavior.

The simplest pair of sockets is RequestSocket/ResponseSocket. These two sockets, when connected to each other, allow building synchronous client-server communication.

Server code example

using (var responseSocket = new ResponseSocket("@tcp://*:5555"))
{
    while (true)
    {
        // receive a request message
        var msg = responseSocket.ReceiveFrameString();

        Console.WriteLine("Request received: " + msg);

        // send a canned response
        responseSocket.SendFrame("Response for " + msg);
    }
}

Client code example

using (var requestSocket = new RequestSocket(">tcp://localhost:5555"))
{
    // a request socket must send before it can receive
    requestSocket.SendFrame("Hello");

    var message = requestSocket.ReceiveFrameString();
    Console.WriteLine("requestSocket : Received '{0}'", message);
}


The samples demonstrate creating server and client sockets and sending a request/response between the applications.

The real power of NetMQ comes from the ability to craft your own protocol using multi-frame messages and from the variety of available sockets, including async sockets, pub/sub sockets and in-proc sockets. These features differentiate ZeroMQ and NetMQ and allow building very complex solutions; however, they bring a lot of complexity and require some learning. The ZeroMQ guide is a good start, even if you end up using NetMQ: it helps to understand the main principles and patterns.
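As a taste of the socket variety, here is a rough pub/sub sketch (addresses are arbitrary; note that in real code the subscriber needs time to connect before the publisher sends, the well-known ZeroMQ "slow joiner" issue):

```csharp
using (var pub = new PublisherSocket("@tcp://*:5556"))
using (var sub = new SubscriberSocket(">tcp://localhost:5556"))
{
    sub.Subscribe("weather");   // receive only messages on the "weather" topic

    // multi-frame message: topic frame followed by the payload frame
    pub.SendMoreFrame("weather").SendFrame("sunny");

    var topic = sub.ReceiveFrameString();
    var payload = sub.ReceiveFrameString();
}
```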

Exploring .NET Open Source ecosystem: simplifying object to object mapping with AutoMapper

Well-designed software applications are typically layered to provide maximum isolation between the logical parts of the application, and often require transforming data from layer to layer.

AutoMapper facilitates such transformations by providing a convention-first approach for mapping an object of one type into an object of another type. Convention-first, in this case, means zero configuration when the source and the target classes have properties with the same names. For other cases, AutoMapper provides a rich API to customize the mapping configuration.

Using AutoMapper eliminates routine, repetitive and error-prone lists of copy instructions. AutoMapper allows defining a mapping once and reusing it throughout the code as a one-line transformation. Common use cases include mapping between a Data Transfer Object and a Model, or between a Model and a ViewModel.

When two types are aligned (the properties to be transferred have the same names), configuration is as simple as the following line of code:

Mapper.Initialize(cfg => cfg.CreateMap<Order, OrderDto>());
The usage is simple as well

OrderDto dto = Mapper.Map<OrderDto>(order);

While the usage code stays the same, the configuration may become more complicated. For example, the following code demonstrates an action called before the mapping and a custom mapping rule for one of the class members:

Mapper.Initialize(cfg => cfg.CreateMap<Order, OrderDto>()
                          .BeforeMap((order, dto) => { order.DateTime = DateTime.UtcNow; })
                          .ForMember(o => o.CustomerName, x => x.UseValue("admin")));

AutoMapper supports automatic flattening: mapping between a class hierarchy and a flat property. For example, the value of the Name property of the object referenced by an Order's Customer property can be automatically mapped (if compatible) to the CustomerName property of OrderDto.
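A sketch of the flattening convention, with hypothetical Order/Customer classes:

```csharp
public class Customer { public string Name { get; set; } }
public class Order { public Customer Customer { get; set; } }
public class OrderDto { public string CustomerName { get; set; } }

// Order.Customer.Name flattens to OrderDto.CustomerName by naming convention.
Mapper.Initialize(cfg => cfg.CreateMap<Order, OrderDto>());

var dto = Mapper.Map<OrderDto>(
    new Order { Customer = new Customer { Name = "Alice" } });
// dto.CustomerName is now "Alice"
```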

Any time the conventions do not satisfy the needs of the application, they can be replaced with explicit configuration.

Exploring .NET Open Source ecosystem: making Unit Tests more robust with Moq

To continue the topic of unit testing started in the previous post, let's review Moq, a mocking library for .NET.

While integration tests are easier to develop and allow reaching higher code coverage in a shorter period of time, in the long term low-level, granular unit tests are more valuable. They control the system behavior more precisely and catch any deviation as soon as it occurs. On the other hand, unit tests require more work, and sometimes the application's architecture makes such tests extremely difficult to write. Common examples of such architectures are monolithic applications and tightly coupled components.

If the architecture is fully monolithic, there is no magic: no tool will resolve it for the developer. However, if components are coupled but can be instantiated individually, using some variant of the Inversion of Control (IoC) pattern, Moq can help with testing them. The main idea of Moq is to let the developer use a mock object, configured to behave in a predictable way, instead of the real object. For the code consuming the mock object, there is no difference between the mock and the real object.

For example, an application for parsing logs may include log reader, log analyzer and log visualizer components. If the log reader implements an ILogReader interface, the log analyzer's constructor can accept an instance of the reader as a parameter. In this case, the log reader can be mocked to provide test input to the analyzer, instead of reading files from disk.
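The components described above might be sketched like this (all names, and the simplistic parsing logic, are assumptions for illustration):

```csharp
using System;

public interface ILogReader
{
    string NextEntry();
}

public class LogEntry
{
    public string Message { get; set; }
}

public class LogAnalyzer
{
    private readonly ILogReader _reader;

    // the reader is injected, so a mock can be substituted in tests
    public LogAnalyzer(ILogReader reader)
    {
        _reader = reader;
    }

    public LogEntry AnalyzeNextEntry()
    {
        // simplified: everything after "] " is treated as the message
        var raw = _reader.NextEntry();
        var message = raw.Substring(raw.IndexOf("] ", StringComparison.Ordinal) + 2);
        return new LogEntry { Message = message };
    }
}
```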

A typical Moq usage pattern includes three steps: create a mock, set up the mock's behavior, and call the tested code with the mock object. Here are these steps, using the same log parser example:

 public void Test_one_entry()
 {
     var logReader = new Mock<ILogReader>();
     logReader.Setup(x => x.NextEntry())
         .Returns("2016-05-12 12:01pm [VRB] Initializing visual subsystem...");

     var logAnalyzer = new LogAnalyzer(logReader.Object);
     var entry = logAnalyzer.AnalyzeNextEntry();

     entry.Message.Should().Be("Initializing visual subsystem...");
 }

As with the FluentAssertions library, the possibilities of Moq go far beyond this simple example. A mock object can be configured to react to specific input parameters:

logReader.Setup(x => x.SkipEntriesAndGet(10))
           .Returns("2016-05-12 12:01pm [VRB] Initializing visual subsystem...");

Or even use a placeholder like It.IsAny<int>() when the argument is not important. Moq can handle async calls, callbacks and many other scenarios. Check it out on GitHub.

Exploring .NET Open Source ecosystem: simplifying unit testing with FluentAssertions

Unit tests are useful. Almost every developer with at least some experience in commercial software development will agree with this statement, in general. Unfortunately, far fewer developers agree that unit tests are useful enough to justify the additional time and effort to write them. In this situation, any tool or practice that simplifies unit testing is welcome.

The FluentAssertions library is an example of such a tool. The goal of this library is to simplify the assertion part of a unit test by providing a more expressive way to define assertions and by reporting assertion failures in a friendly way.

Here is a basic example

double result = 19.99;
result.Should().BeInRange(99.99, 199.99, "because we filtered values for this range");

It demonstrates the expressive, highly readable syntax of FluentAssertions. This test will, obviously, fail, and it will do so with the following friendly message:

Expected value to be between 99.99 and 199.99 because we filtered values for this range, but found 19.99

As expected from any assertion library, FluentAssertions can handle exceptions, combinations of conditions and validation of complex objects. It can also validate metadata, events, XML and execution time.
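For instance, exception assertions (using the ShouldThrow syntax of the FluentAssertions 4.x era) look roughly like this:

```csharp
// wrap the call under test in an Action, then assert on what it throws
Action act = () => { throw new InvalidOperationException("connection lost"); };

act.ShouldThrow<InvalidOperationException>()
   .WithMessage("connection lost");
```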

Here is a more complicated sample, where a Customer object is compared to a CustomerDTO. All properties existing in both classes should match; all nested objects should be ignored, as well as all public fields:

customer.ShouldBeEquivalentTo(customerDto,
                options => options.ExcludingFields()
                                  .ExcludingNestedObjects());

This is just a short list of the library's features; to avoid repeating the documentation, visit the FluentAssertions web site.

Exploring .NET Open Source ecosystem: manipulating HTML with HtmlAgilityPack

Based on my experience, the need to parse and manipulate HTML appears surprisingly often. It may be required to clean an HTML file created by tools like Word or FrontPage (these tools are great for end users, but inject lots of unnecessary markup), to parse a webpage, or to construct an HTML page programmatically.

In all these cases, HtmlAgilityPack may be a handy tool. It allows loading, parsing and modifying "real-world" HTML: files which are not necessarily clean and well formatted. Even better, for the parsed files it builds an XML-like DOM which supports XPath and LINQ.
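For example, a hypothetical cleanup pass over a messy file might look like this (file names are illustrative):

```csharp
var doc = new HtmlDocument();
doc.Load("messy.html");          // tolerant of malformed, real-world markup

// XPath works on the parsed DOM
var title = doc.DocumentNode.SelectSingleNode("//title");
if (title != null)
    title.InnerHtml = "Cleaned page";

doc.Save("cleaned.html");
```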

It is easy to learn, and a simple example looks like this:

var doc = new HtmlDocument();
doc.LoadHtml(html); // html is a string containing the page markup

var docNode = doc.DocumentNode;
var content = docNode.Descendants()
                .First(x => x.GetAttributeValue("class", "") == "icon")
                .InnerHtml;

This sample code returns the content of the first element with the "icon" class.

This is a simple but very useful library, so check it out on GitHub.

Exploring .NET Open Source ecosystem: handling exceptions with Polly

It is unusual for a modern application to be disconnected from the outside world. Remote servers, distributed databases, external services: all these technologies enrich the application. However, networks split, databases crash and servers reboot. When introduced without adequate control, these services may become an additional point of failure.

Polly is a .NET library that helps to handle transient errors such as those described above. In .NET applications these issues usually surface as exceptions, and Polly provides a way to define exception handling policies.

For example, HttpClient may throw HttpRequestException when the network is temporarily unavailable. In this case Polly can be configured to retry the request. Here is how.

First, we need to install the Polly NuGet package by executing the following command in the Package Manager Console:

PM> Install-Package Polly

The next step is to define a policy: specify which exceptions to handle and the policy behavior:

var policy = Policy.Handle<HttpRequestException>()
                   .WaitAndRetryAsync(5, i => TimeSpan.FromSeconds(i));

This sample policy instructs Polly to retry the failed operation five times, waiting before each retry with an increasing time interval. After the five retries, any new exception is re-thrown to the caller.

Once a policy is defined, it can be used any number of times to execute similar operations:

var result = await policy.ExecuteAsync(() => httpClient.GetStringAsync(""));

As you can see from the example, Polly supports async/await semantics, but it can also be called in a fully synchronous way if needed. Policies can be nested and support filtering on exception properties.
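A synchronous sketch (the file name is illustrative):

```csharp
using System;
using System.IO;
using Polly;

var syncPolicy = Policy.Handle<IOException>()
                       .WaitAndRetry(3, i => TimeSpan.FromSeconds(i));

// retries File.ReadAllText up to three times on IOException
var text = syncPolicy.Execute(() => File.ReadAllText("data.txt"));
```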

In addition to the simple Retry, WaitAndRetry and RetryForever patterns, Polly also supports more advanced patterns such as Circuit Breaker. This pattern handles real (non-transient) failures and prevents the system from spending cycles on useless retries.
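A Circuit Breaker sketch (httpClient and url are assumed from the earlier example): after two consecutive failures the circuit opens for one minute, and calls fail fast with BrokenCircuitException instead of hitting the failing service.

```csharp
var breaker = Policy.Handle<HttpRequestException>()
                    .CircuitBreakerAsync(
                        exceptionsAllowedBeforeBreaking: 2,
                        durationOfBreak: TimeSpan.FromMinutes(1));

var response = await breaker.ExecuteAsync(() => httpClient.GetStringAsync(url));
```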

The current version of Polly supports .NET 3.0 to 4.6, with .NET Core / .NET Standard support coming shortly.

.NET OSS community and Open Source project of the week

The .NET community was never known as a strong proponent of open source software. Traditionally, it has gravitated towards full-stack solutions provided by Microsoft or by well-known vendors of commercial components.

With the introduction of NuGet, the open sourcing of .NET and Roslyn, and an intentional push towards Open Source from Microsoft, the situation seems to be changing. However, Open Source is still not a way of life for the majority of .NET developers. To verify that, I did a couple of OSS-related presentations for local .NET user groups. I was, unfortunately, not surprised to see that besides some really well-known libraries, not many OSS projects were known to the attendees.

To spread the good word about existing .NET Open Source projects, from time to time I am going to introduce an OSS library which I have found useful.

Project of the week

For the first week, instead of a library I'd like to introduce two initiatives: First Timers Only and Up For Grabs. Both projects aim to help find a project for a first contribution, based on tags assigned by repository maintainers. The idea is to suggest a simple, isolated feature or defect fix to help newcomers learn the submission process.

If you are a project maintainer, check your repository and mark the appropriate issues with these tags. If you have never contributed to an open source project, it's time to try!

Will talk next week.
