Desktop Bridge, previously known as Project Centennial, is a technology that helps bring existing desktop applications to the Universal Windows Platform. It lets you create an .appx package manually, or take an existing installer and convert it into an .appx. It also makes it possible to integrate parts of UWP into a desktop application, facilitating a gradual move from a classic desktop application to a UWP app.

This post contains links to additional resources for my Tech Summit Toronto 2016 presentation:

Slides for my MVP MIX Toronto 2016 talks

Talk: Using NuGet libraries in your application

NuGet is not only a Visual Studio extension or a command line application. It is also a set of libraries which can be used to manipulate NuGet packages programmatically. Do you have a unique CI process, beyond the expected NuGet workflow? Do you need your own way to propagate dependencies between subsystems? Or maybe you want to create a NuGet-based deployment process for end users? During this session you will learn the main NuGet library concepts, see examples of embedded NuGet usage and hear some guidance to help you integrate NuGet with your own application.

The code demonstrated during the presentation is described in the Using the NuGet v3 libraries in your projects post and used in the Dropcraft project.

Slides: http://www.slideshare.net/LunarFrog/using-nuget-libraries-in-your-application

Talk: C# code sharing across the platforms

Portable, shared, .NET Standard libraries – so many options to choose from when you need to share code between platforms. During this talk we will explore all the options and the differences between the library types. After the session you will have a solid understanding of the modern .NET library types and code sharing strategies, which you can apply to your next .NET Core, desktop or Xamarin project.

Slides: http://www.slideshare.net/LunarFrog/c-code-sharing-across-the-platforms

Exploring .NET Open Source ecosystem: logging from netstandard libraries using LibLog

Continuing the discussion of logging, let's talk about LibLog. Unlike other logging libraries, LibLog targets just one specific scenario – logging within reusable libraries.

Why does this scenario need special treatment? There are two general logging approaches for library developers: one is to choose a logging library and force all library consumers to use the same library; the other is to create a logging façade and ask the consuming application to implement an adapter for its preferred logging library, like NLog or Serilog. Neither approach is very elegant, and LibLog provides a solution based on an optimized version of the second approach.

LibLog consists of just one file, which can be added to the project directly from the GitHub repository or via the corresponding NuGet package. With the manual approach, LibLog.cs will require namespace editing (detailed instructions are provided in the file header comments).

After that, LibLog's entry point, the LogProvider class, becomes available for use. By design, it is only visible to the assembly which contains LibLog.cs. If the library includes more than one assembly, InternalsVisibleToAttribute can be used to share the logging infrastructure across all components of the library without exposing it to the consumers.
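As a minimal sketch, the attribute could look like this ("MyLibrary.Components" is a hypothetical second assembly of the same library):

// In the assembly that contains LibLog.cs (e.g. in AssemblyInfo.cs);
// "MyLibrary.Components" is a hypothetical second assembly of the library
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("MyLibrary.Components")]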

A simple usage scenario looks like this:

public class MyClass
{
    private static readonly ILog Logger = LogProvider.For<MyClass>();

    public void DoSomething()
    {
        Logger.Trace("Method 'DoSomething' in progress");
    }
}

That's it, now the library is ready to automatically pick up the logger used by the consuming application. For example, if Serilog is the selected library, assigning Serilog's Log.Logger will automatically connect all the moving parts together:

Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Verbose()
    .WriteTo.LiterateConsole()
    .CreateLogger();

Log.Logger.Verbose("Starting...");

var myClass = new MyClass();
myClass.DoSomething();

Log.Logger.Verbose("Finishing...");
Console.ReadKey();

The result is

[21:09:18 APP] Starting...
[21:09:18 APP] Method 'DoSomething' in progress
[21:09:18 APP] Finishing... 

Everything described above just works when the library targets the .NET Framework. In the case of a .NET Standard library (at least netstandard1.3), things are a bit more complicated. When LibLog is added to such a library, compilation will fail, and two modifications are required to fix it.

First, the LIBLOG_PORTABLE conditional compilation symbol must be defined in the project settings (don't forget to define it in all used build configurations). Second, two missing NuGet packages must be added – Microsoft.CSharp and System.Dynamic.Runtime. These modifications will fix the build and enable LibLog usage.
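As a sketch only: in an MSBuild .csproj, the symbol can be defined unconditionally so it applies to every configuration (adapt this to your actual project format, e.g. project.json, if different):

<!-- Defines LIBLOG_PORTABLE for all build configurations -->
<PropertyGroup>
  <DefineConstants>$(DefineConstants);LIBLOG_PORTABLE</DefineConstants>
</PropertyGroup>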

Exploring .NET Open Source ecosystem: logging with Serilog

Logging is an important part of every software project. Ideally, developers would prefer to attach a debugger to the failing system and investigate it in situ. In real life, however, systems are rarely available to developers at the moment the issue is happening. This is the reason we rely so heavily on logs, which can be analyzed later.

One of the most popular logging libraries for .NET applications is log4net. It is a powerful, flexible library, and the usual workflow looks like this: a developer needs to log an object's state and some action, so they create a human-readable string which represents the state and the action. The string is passed to log4net and stored in a text file. With the amount of logging data generated by modern software, it is very desirable to be able to process the logs automatically, with a software tool. In this case the stored text file has to be parsed to extract the object state and analyze it.

When we take the second part of the workflow (parsing and analyzing) into account, it sounds crazy. Nicely structured data, available before logging, is transformed into unstructured text, only to be parsed and transformed back into a structured state a bit later. What a waste of CPU cycles and time!

Serilog, a structured logging library, aims to stop this craziness. With Serilog, data can be logged in its original form and passed to structured storage (a database, the Windows Event Log, a third-party service) without additional overhead.

var user = new User {Name="Guest", Ip="127.0.0.1"};
logger.Information("User {@User} logged", user);

If Serilog is configured to output data to the console, the logged information will be presented as

08:43:12 [Information] User {Name: Guest, Ip: 127.0.0.1} logged

And for other sinks (like the Event Log) it will be captured as JSON:

{ "User": {"Name": "Guest", "Ip": "127.0.0.1"}}

Serilog supports more than 50 sinks, including console, text files, email, Elasticsearch, RethinkDB and others. Logging sinks are configured for each logger, and more than one sink can be used:

var logger = new LoggerConfiguration()
    .MinimumLevel.Debug()
    .WriteTo.RollingFile("log-{Date}.txt")
    .WriteTo.LiterateConsole()
    .CreateLogger();

Serilog also allows you to enrich logging data with static information (like a thread id) and to define custom serializers and filters for selective logging.
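For example, a minimal sketch using the built-in enricher and filter APIs (the property name and the filter condition are illustrative; LogEventLevel lives in the Serilog.Events namespace):

var logger = new LoggerConfiguration()
    .Enrich.WithProperty("MachineRole", "web-frontend")        // attached to every event
    .Filter.ByExcluding(e => e.Level == LogEventLevel.Verbose) // selective logging
    .WriteTo.LiterateConsole()
    .CreateLogger();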

Serilog is published under the Apache 2.0 license at https://github.com/serilog

Exploring .NET Open Source ecosystem: communicate with NetMQ

A proper introduction to NetMQ requires more than one post and is well beyond the format of this series. Instead, in the spirit of the series, this is an awareness-raising post, to bring additional visibility to a great library.

NetMQ is a managed .NET port of the ZeroMQ library. The idea of both libraries is not to compete directly with messaging solutions like RabbitMQ, ActiveMQ or NServiceBus, which provide a rich, high-level feature set out of the box, but to provide a relatively low-level API which allows you to build complex solutions tailored to specific business and functional needs.

Unlike the solutions mentioned above, NetMQ does not include a central server or broker. NetMQ is built around the idea of sockets. Each side of the communication is required to open a socket and use it to communicate. NetMQ, following ZeroMQ, supports many different types of sockets, and each socket type has a unique behavior.

The simplest pair of sockets is RequestSocket/ResponseSocket. These two sockets, when connected to each other, allow you to build synchronous client-server communication.

Server code example

using (var responseSocket = new ResponseSocket("@tcp://*:5555"))
{
    while (true)
    {
        // receive a request message
        var msg = responseSocket.ReceiveFrameString();

        Console.WriteLine("Request received: " + msg);

        // send a canned response
        responseSocket.SendFrame("Response for " + msg);
    }
}

Client code example

using (var requestSocket = new RequestSocket(">tcp://localhost:5555"))
{
    requestSocket.SendFrame("Hello");
    var message = requestSocket.ReceiveFrameString();
    Console.WriteLine("requestSocket : Received '{0}'", message);

    Console.ReadKey();
}

The samples demonstrate the creation of the server and client sockets and the exchange of a request and a response between the applications.

The real power of NetMQ comes from the ability to craft your own protocol using multi-frame messages, and from the variety of available sockets, which include async sockets, pub/sub sockets and in-proc sockets. These features differentiate ZeroMQ and NetMQ and allow you to build very complex solutions; however, they bring a lot of complexity and require some learning. The ZeroMQ guide may be a good start, even if you end up using NetMQ – it helps you understand the main principles and patterns.
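To give a flavor of the pub/sub sockets, here is a minimal sketch (the topic name and port are arbitrary; in a real application the subscriber must be connected before the publisher sends, otherwise the message is lost):

// Publisher side; "weather" is an arbitrary topic
using (var pub = new PublisherSocket("@tcp://*:5556"))
{
    pub.SendMoreFrame("weather")   // frame 1: the topic
       .SendFrame("It is sunny");  // frame 2: the payload
}

// Subscriber side; receives only messages whose first frame starts with "weather"
using (var sub = new SubscriberSocket(">tcp://localhost:5556"))
{
    sub.Subscribe("weather");
    var topic = sub.ReceiveFrameString();
    var message = sub.ReceiveFrameString();
}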

Exploring .NET Open Source ecosystem: simplifying object to object mapping with AutoMapper

Well-designed software applications are typically layered, to provide maximum isolation between the logical parts of the application, and often require transformation of data as it passes from layer to layer.

AutoMapper facilitates such transformations by providing a convention-first approach for mapping an object of one type into an object of another type. Convention-first, in this case, means zero configuration for cases where the source and target classes have properties with the same names. For other cases, AutoMapper provides a rich API to customize the mapping configuration.

Use of AutoMapper eliminates routine, repetitive and error-prone property-copying code. AutoMapper lets you define a mapping once and reuse it in the code many times as a one-line transformation. Common use cases for the AutoMapper library include mapping between a Data Transfer Object and a Model, or between a Model and a ViewModel.

When two types are aligned (the properties which need to be transferred have the same names), the configuration is as simple as the following line of code

Mapper.Initialize(cfg => cfg.CreateMap<Order, OrderDto>());

The usage is simple as well

OrderDto dto = Mapper.Map<OrderDto>(order);

While the usage code stays the same, the configuration may become more complicated. For example, the following code demonstrates an action which will be called before the mapping, and a custom mapping rule for one of the class members.

Mapper.Initialize(cfg => cfg.CreateMap<Order, OrderDto>()
    .BeforeMap((order, dto) => { order.DateTime = DateTime.UtcNow; })
    .ForMember(o => o.CustomerName, x => x.UseValue("admin")));

AutoMapper supports automatic flattening – mapping the properties of a nested class hierarchy to flat properties. For example, the value of the Name property of an object referenced by the Customer property of Order may be automatically mapped (if compatible) to the CustomerName property of OrderDto.
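For illustration, the classes in that example might be shaped like this (hypothetical definitions, consistent with the snippets above):

public class Customer { public string Name { get; set; } }

public class Order
{
    public Customer Customer { get; set; }
    public DateTime DateTime { get; set; }
}

public class OrderDto
{
    public string CustomerName { get; set; } // flattened from Order.Customer.Name by convention
    public DateTime DateTime { get; set; }
}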

Whenever the conventions do not satisfy the needs of the application, they can be replaced by explicit configuration.

Exploring .NET Open Source ecosystem: making Unit Tests more robust with Moq

To continue the topic of unit testing, started in the previous post, let’s review Moq, a mocking library for .NET.

While integration tests are easier to develop and allow you to reach higher code coverage in a shorter period of time, in the long term low-level, granular unit tests are more valuable. They allow you to control the system behavior more precisely and catch any deviation as soon as it occurs. On the other hand, unit tests require more work and, sometimes, the application's architecture makes such tests extremely difficult to write. Common examples of such architectures are monolithic applications or tightly coupled components.

If the architecture is fully monolithic, there is no magic: no tool will be able to resolve it for the developer. However, if the components are coupled but can be instantiated individually, using some variant of the Inversion of Control (IoC) pattern, Moq can help with testing such components. The main idea of Moq is to allow the developer to use a mock object, configured to behave in a predictable way, instead of the real object. For the code consuming the mock object, there is no difference between the mock and the real object.

For example, an application for parsing logs may include log reader, log analyzer and log visualizer components. If the log reader implements an ILogReader interface, the log analyzer's constructor can accept an instance of the reader as a parameter. In this case, the log reader can be mocked to provide test input to the analyzer, instead of reading files from the disk.
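Assuming hypothetical definitions for this example, the reader's interface could be as simple as:

// Hypothetical interface matching the calls used in the tests below
public interface ILogReader
{
    string NextEntry();
    string SkipEntriesAndGet(int count);
}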

The typical Moq usage pattern includes three steps: create a mock, set up the mock's behavior, and call the tested code with the mock object provided. Here are these steps, using the same Log Parser example:

[Test]
public void Test_one_entry()
{
    var logReader = new Mock<ILogReader>();
    logReader.Setup(x => x.NextEntry())
        .Returns("2016-05-12 12:01pm [VRB] Initializing visual subsystem...");

    var logAnalyzer = new LogAnalyzer(logReader.Object);
    var entry = logAnalyzer.AnalyzeNextEntry();

    entry.Message.Should().Be("Initializing visual subsystem...");
}

As with the FluentAssertions library, the possibilities of Moq go far beyond this simple example. A mock object can be configured to react to specific input parameters:

logReader.Setup(x => x.SkipEntriesAndGet(10))
         .Returns("2016-05-12 12:01pm [VRB] Initializing visual subsystem...");

Or you can even use a placeholder like It.IsAny<int>() when the argument is not important. Moq can handle async calls, callbacks and many other scenarios. Check it out at github.com/Moq/moq4/
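For instance, assuming ILogReader also declared a hypothetical async member Task<string> NextEntryAsync(), Moq's ReturnsAsync and Callback helpers could be used like this:

// Hypothetical async member: Task<string> NextEntryAsync();
logReader.Setup(x => x.NextEntryAsync())
         .ReturnsAsync("2016-05-12 12:02pm [VRB] Visual subsystem initialized");

// A callback invoked whenever the mocked method is called
logReader.Setup(x => x.SkipEntriesAndGet(It.IsAny<int>()))
         .Callback<int>(count => Console.WriteLine("Skipping {0} entries", count))
         .Returns("2016-05-12 12:01pm [VRB] Initializing visual subsystem...");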

Exploring .NET Open Source ecosystem: simplifying unit testing with FluentAssertions

Unit tests are useful. Almost every developer with at least some experience in commercial software development will agree with this statement, in general. Unfortunately, far fewer developers will agree that unit tests are useful enough to spend additional time and effort writing them. In this situation, any tools or practices which can simplify unit testing are welcome.

The FluentAssertions library is an example of such a tool. The goal of this library is to simplify the assertion part of a unit test by providing a more expressive way to define asserts and by reporting assertion failures in a friendly way.

Here is a basic example

double result = 19.99;
result.Should().BeInRange(99.99, 199.99, "because we filtered values for this range");

It demonstrates the expressive, highly readable syntax of FluentAssertions. This test will, obviously, fail, and it will do so with the following friendly message:

Expected value to be between 99.99 and 199.99 because we filtered values for this range, but found 19.99

As expected from any assertion library, FluentAssertions can handle exceptions, combinations of conditions and validation of complex objects. It can also validate metadata, events, XML and execution time.
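For example, an exception assertion might look like this (the parser object is hypothetical; ShouldThrow is the FluentAssertions 4.x syntax):

// "parser" is a hypothetical object used for illustration
Action act = () => parser.Parse(null);
act.ShouldThrow<ArgumentNullException>();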

Here is a more complicated sample, where a Customer object is compared to a CustomerDTO. All properties existing in both classes should match; all nested objects should be ignored, as well as all public fields:

customer.ShouldBeEquivalentTo(customerDto,
    options => options.ExcludingFields()
        .IncludingProperties()
        .ExcludingNestedObjects());

This is just a short list of the library's features; to avoid repeating the documentation, visit the FluentAssertions web site.

Exploring .NET Open Source ecosystem: manipulating HTML with HtmlAgilityPack

In my experience, the need to parse and manipulate HTML appears surprisingly often. You may need to clean an HTML file created by a tool like Word or FrontPage (these tools are great for end users, but they inject lots of unnecessary markup), parse a webpage, or construct an HTML page programmatically.

In all these cases, HtmlAgilityPack may be a handy tool. It allows you to load, parse and modify "real-world" HTML – HTML files which are not necessarily clean and well formatted. Even better, for the parsed files it builds an XML-like DOM which supports XPath and LINQ.

It is easy to learn, and a simple example looks like this:

var doc = new HtmlDocument();
doc.LoadHtml(html);

var docNode = doc.DocumentNode;
var content = docNode.Descendants()
    .First(x => x.GetAttributeValue("class", "").Equals("icon"))
    .InnerText;

This sample code returns the content of the first element with the "icon" class.
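The same lookup can also be expressed with XPath:

// Equivalent lookup using XPath instead of LINQ
var content = docNode.SelectSingleNode("//*[@class='icon']").InnerText;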

This is a simple but very useful library, so check it out at htmlagilitypack.codeplex.com

Exploring .NET Open Source ecosystem: handling exceptions with Polly

It is unusual for a modern application to be disconnected from the outside world. Remote servers, distributed databases, external services – all these technologies enrich an application. However, networks split, databases crash and servers reboot. When consumed without adequate control, these services may become additional points of failure.

Polly is a .NET library that helps to handle transient errors such as those described above. In .NET applications these issues usually manifest as exceptions, and Polly provides a way to define exception handling policies.

For example, HttpClient may throw an HttpRequestException when the network is temporarily unavailable. In this case Polly can be configured to retry the request. Here is how.

First, we need to install the Polly NuGet package by executing the following command in the Package Manager Console

PM> Install-Package Polly

The next step is to define a policy: provide a list of exceptions to handle and the policy behavior

var policy = Polly.Policy.Handle<HttpRequestException>()
                         .WaitAndRetryAsync(5, i => TimeSpan.FromSeconds(i));

This sample policy instructs Polly to retry the failed operation five times, waiting before each retry with an increasing time interval. After the five retries, any new exception will be re-thrown to the caller.

Once the policy is defined, it can be used any number of times to execute similar operations

var result = await policy.ExecuteAsync(() => httpClient.GetStringAsync("http://lunarfrog.com/blog"));

As you can see from the example, Polly supports async/await semantics, but it can also be called in a fully synchronous way, if needed. Policies can be nested and support filtering on exception properties.
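As a sketch, a synchronous policy with a predicate on the exception could look like this (the predicate condition and the DownloadPage helper are illustrative, not part of Polly):

// Retry up to 3 times, but only for exceptions matching the predicate
var syncPolicy = Polly.Policy.Handle<HttpRequestException>(ex => ex.Message.Contains("503"))
                             .Retry(3);

// DownloadPage is a hypothetical synchronous helper
var html = syncPolicy.Execute(() => DownloadPage("http://lunarfrog.com/blog"));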

In addition to the simple Retry, WaitAndRetry and RetryForever patterns, Polly also supports more advanced patterns such as Circuit Breaker. This pattern allows handling situations of real (non-transient) failures and prevents the system from spending cycles on useless retries.
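A minimal circuit breaker sketch (the thresholds are illustrative):

// Break the circuit after 2 consecutive failures and keep it open for 1 minute;
// calls made while the circuit is open fail fast with a BrokenCircuitException
var breaker = Polly.Policy.Handle<HttpRequestException>()
                          .CircuitBreakerAsync(2, TimeSpan.FromMinutes(1));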

The current version of Polly supports .NET 3.0 to 4.6, and .NET Core / .NET Standard support is coming shortly.
