One part of the .NET Framework which did not find its way to .NET Core is the UI frameworks. Neither WinForms nor WPF is part of .NET Standard or supported by .NET Core; both are tightly coupled to Windows and would need significant rework to become available on other platforms. To fill the gap and fight the dominance of JavaScript and Electron in cross-platform development, the .NET community started the Avalonia project – a cross-platform UI framework inspired by WPF and running on top of .NET Core. Last week the Avalonia project announced its beta release, so it is a good time to try it and see what it can do.

Overview

Avalonia is inspired by WPF, but it does not try to stay compatible with WPF or any other XAML stack. The project uses its own dialect of XAML, with the biggest difference being the way it handles styles: it not only drops the idea of resource dictionaries but also adopts a CSS-like concept of selectors for applying styles.

This is an example of how a simple window and a styled control look in Avalonia. It looks familiar, but there are twists that will confuse WPF developers from time to time. Experience with CSS will definitely be helpful for picking up the new concepts.

This code works on top of .NET Core, allowing development of desktop applications for Windows, Linux and macOS. Based on the execution platform, Avalonia picks different platform-specific implementations and rendering engines.
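
The platform detection is visible right in the application entry point. A minimal sketch, assuming the beta-era Avalonia API and the App and MainWindow classes generated by the templates:

using Avalonia;

class Program
{
   static void Main(string[] args)
   {
      // UsePlatformDetect selects the windowing and rendering backends
      // appropriate for the current OS at run time.
      AppBuilder.Configure<App>()
                .UsePlatformDetect()
                .Start<MainWindow>();
   }
}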

Usage

Getting started is extremely easy. There is a Visual Studio extension which adds Avalonia templates to VS and brings a visual designer for Avalonia XAML.

New Avalonia project

An alternative approach is to use the Avalonia templates for .NET Core and create a new application via the dotnet new command.
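
A sketch of that flow; the package name Avalonia.Templates and the short name avalonia.app are assumptions, so check the project's README for the current names:

dotnet new --install Avalonia.Templates
dotnet new avalonia.app -o HelloAvalonia
cd HelloAvalonia
dotnet run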

The build generates a normal .NET Core application which can be deployed like any other .NET application. For a simple application, the development experience is surprisingly smooth. Now it is time to try Avalonia on a larger project to understand the gaps and any issues with running the app on different platforms.

Windows Compatibility Pack for .NET Core

.NET Core 2 is a great release from many angles. By implementing .NET Standard 2.0, it doubles the number of APIs which can be shared between .NET Core and .NET Standard applications. It allows referencing .NET Framework libraries. In some aspects it is even faster than the .NET Framework. But is it enough to allow migration of real-world .NET Framework code to .NET Core?

The results of running the compatibility analyzer on an enterprise-grade code base will be unspectacular. These types of applications heavily use XML configuration and ConfigurationSection, which are not part of .NET Standard; they often depend on WCF; and they use WinForms or WPF for UIs. In my experience, even for UI-less libraries, about 20% of the code is not transferable between .NET Core and .NET Standard.

Windows Compatibility Pack for .NET Core aims to fill this gap.

The compatibility pack is a set of packages which provide the missing APIs and allow more code to be shared between the platforms. UI libraries are still missing, but back-end developers should be happier now.

The compatibility pack solves the problem of missing APIs by combining type forwarding and re-implementation.

When an API exposed by the compatibility pack is used in a .NET Framework execution environment, the type is forwarded and the existing .NET Framework implementation is used. For .NET Core, however, the pack provides a new implementation.

It is important to understand that the Windows Compatibility Pack does not bring all these features to other platforms. It makes the missing APIs available, but some of them work only on Windows, because even the re-implemented code still depends on Windows.
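
The registry APIs are one example of a Windows-only part of the pack. A minimal sketch, guarding the call so the same code can run cross-platform (the Software\MyApp subkey and Setting value are hypothetical):

using System;
using System.Runtime.InteropServices;
using Microsoft.Win32; // provided by the compatibility pack on .NET Core

public static class AppSettings
{
   public static string ReadSetting()
   {
      // Registry access works only on Windows, even with the pack installed.
      if (!RuntimeInformation.IsOSPlatform(OSPlatform.Windows))
         return null;

      using (var key = Registry.CurrentUser.OpenSubKey(@"Software\MyApp"))
      {
         return key?.GetValue("Setting") as string;
      }
   }
}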

The Windows Compatibility Pack for .NET Core is not a permanent solution which should stay in a codebase forever. The intention of the pack is to be a temporary bridge, allowing broader adoption of .NET Core. In the long term, however, the goal stays the same – replace outdated APIs and features of .NET with newer, .NET Standard-compatible alternatives.

Is there a place for DevOps in desktop software development?

Most of the articles, presentations and courses which promote the DevOps approach are focused on cloud and Web projects, and most DevOps success stories come from the Internet industry. While there are many good reasons why this philosophy is so widely accepted by the Web community, desktop software developers can greatly benefit from the same principles.

Continuous deployment

The biggest difference between traditional desktop software development and Web/cloud development processes is the way software is delivered to customers. While the need for an installer, and the user's involvement in the installation process, looks like a conflict with the Ops component of DevOps, it also provides a hint of how desktop software should evolve to stay relevant.

Instead of using heavy monolithic installers, modern desktop applications can use modular approaches which allow partial updates of the software when updates become available. The user's involvement can be limited to an acknowledgment or a restart of the application. Modern browsers, Visual Studio and some other applications provide good examples of this approach.

Technology-wise, implementing continuous deployment for a desktop application is a no-brainer. Frameworks and tools such as the Windows Store, Squirrel or Dropcraft make it easy.
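
For instance, with Squirrel an update check is only a few lines. A minimal sketch, assuming a release feed at a hypothetical URL:

using System.Threading.Tasks;
using Squirrel;

public static class Updater
{
   public static async Task CheckForUpdatesAsync()
   {
      // Download and apply any pending update from the release feed.
      using (var manager = new UpdateManager("https://example.com/releases"))
      {
         await manager.UpdateApp();
      }
   }
}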

Continuous Integration

Continuous integration is part of continuous deployment, and Web applications have no exclusivity there – source control, build and artifact management systems are widely adopted by all types of projects. In reality, most desktop-focused companies which claim to practice DevOps do just continuous integration. They organize DevOps teams, hire DevOps specialists and ask them to maintain source control and build servers - where is the Dev here? It sounds more like a job description for an admin…

Renaming titles is easy but does not achieve the goal – the developer's ultimate responsibility for a feature, from the moment of developing it to building it and deploying it to customers. The DevOps specialists remain outsiders to the development team, somebody who is easy to blame for failed builds or to ask to do unwanted maintenance work.

To follow the DevOps approach, there is no need to organize new teams or budget for new roles – training, mentoring and shared responsibility for the CI pipeline will do the job as well.

Performance and usability monitoring

Monitoring of user activity is a controversial topic for desktop applications, while it is normal practice on the Web. Even though the execution environment is uncontrolled, the hardware varies and customer feedback is hard to obtain – all good reasons to monitor – performance monitoring solutions are still not common in desktop software.

Use of telemetry is one of the DevOps practices underutilized in desktop applications. The level of acceptance for sharing telemetry varies from user to user and from industry to industry, so applications may need to offer different ways of gathering and transferring telemetry data that are acceptable to their end users. Unlike Web applications, where telemetry collection is usually fully automated and hidden from the user, for a desktop application it is a good idea to let the user control when and what is shared. An easy way to opt out, the ability to review data before it is transferred, and a public data usage policy can all increase user acceptance of monitoring.
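
A minimal sketch of such an opt-in guard (all names here are hypothetical, not from any specific telemetry library):

public class TelemetryService
{
   // Off by default; flipped only when the user explicitly opts in.
   public bool IsEnabled { get; set; }

   public void TrackEvent(string name)
   {
      if (!IsEnabled)
         return; // respect the user's choice before anything leaves the machine

      // ... queue the event locally so the user can review it before upload
   }
}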

Is there a place for DevOps in desktop software development?

Yes! DevOps is as relevant for desktop applications as it is for Web and cloud solutions. A properly tuned continuous integration pipeline leads to a stable, deployable build which can be easily pushed to customers and monitored for user experience and performance, guiding product improvements.

Upcoming presentations on .NET Standard 2.0 and .NET Core 2.0

I am really excited about the opportunity to talk about .NET Standard 2.0 and .NET Core 2.0 with the Toronto and Toronto-area .NET community. While .NET Standard 2.0 is a big step ahead for streamlining the compatibility story between the different implementations of the .NET Framework, .NET Core 2.0 brings better performance and simplifications for ASP.NET Core developers.

If you are interested in the topic, here are the details about the upcoming meetups:

Additional resources for the presentation:

Main is allowed to be async!

One of the pain points of the async/await feature from the beginning was the inability to mark a console application's Main method as async and await other methods without workarounds. Not anymore!

With the release of C# 7.1, the Main method can be async as well. This is just syntactic sugar - the compiler rewrites the code to apply the same workarounds as before, but at least for developers the code looks much nicer:

static async Task Main(string[] args)
{
   ...
   await DoWork();
   ...
}
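
For comparison, the compiler-generated entry point is roughly equivalent to the manual workaround used before C# 7.1 (a sketch; DoWork stands in for the real work):

static void Main(string[] args)
{
   // What async Main expands to, conceptually:
   MainAsync(args).GetAwaiter().GetResult();
}

static async Task MainAsync(string[] args)
{
   await DoWork();
}
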
EditorConfig support in Visual Studio 2017

Visual Studio 2017 ships with new tools for managing code style settings. However, even knowing about them, I had never touched the tools, as it was totally unclear how to manage these settings for different projects – my work and my personal projects have different indent styles, and I did not want to mix them.

Recently, I was very surprised to learn that VS 2017 supports .editorconfig out of the box, allowing settings to be controlled per project by committing this file to the repository. Here is an example file:

root = true

[*]
end_of_line = crlf
insert_final_newline = true
trim_trailing_whitespace = true

# 4 space indentation
[*.cs]
indent_style = space
indent_size = 4

These settings are standard not only for VS but also for many other editors, making it easy to share the same settings across different code editing applications.

Naming conventions and code style preferences are stored in the same file, so they too are easily separated between projects. However, these settings are not part of the EditorConfig standard and will be ignored by other applications.
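
For example, two of the documented Visual Studio-specific options look like this (a short sketch; severities can be none, suggestion, warning or error):

[*.cs]
dotnet_style_qualification_for_field = false:suggestion
csharp_style_var_for_built_in_types = true:suggestion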

This feature is great, and I only wish I had learned about it earlier; it is definitely under-communicated.

Exploring .NET Open Source ecosystem: Working with CSV files with CSVHelper

The ClosedXML library, described in the previous post, allows rich manipulation of Excel spreadsheets; however, sometimes all an application needs is to quickly import or export some data. In this case CSV, as a lighter format, may be handier.

CSVHelper is a simple library which allows working with CSV files in a strongly-typed manner and eliminates the need to parse the text manually.

Exporting data to CSV

using System.Collections.Generic;
using System.IO;
using CsvHelper;

public class Animal
{
   public string Name { get; set; }
   public string Color { get; set; }
   public int Age { get; set; }

   public static void ExportToCsv(string fileName, IEnumerable<Animal> animals)
   {
      using (var textWriter = new StreamWriter(fileName))
      using (var csv = new CsvWriter(textWriter))
      {
         csv.WriteRecords(animals);
      }
   }
}

In addition to writing the rows as a batch, the library allows writing them one by one, which is useful for large numbers of rows:

csv.WriteRecord(animal);
csv.NextRecord();

Reading CSV

Reading information from a CSV file is equally simple:

public static IEnumerable<Animal> ImportFromCsv(string fileName)
{
   using (var textReader = new StreamReader(fileName))
   using (var csv = new CsvReader(textReader))
   {
      // GetRecords is lazy, so materialize the records (System.Linq's ToList)
      // before the reader is disposed
      return csv.GetRecords<Animal>().ToList();
   }
}

Similar to writing, reading from CSV can be performed row by row as well.
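
A sketch of the row-by-row variant, using the same reader as above (Read advances to the next row, GetRecord materializes it):

while (csv.Read())
{
   var animal = csv.GetRecord<Animal>();
   // process the row here, one record at a time
}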

Exploring .NET Open Source ecosystem: working with Excel files with ClosedXML

EPPlus is a stable, fully-featured library for working with Excel files. However, it is licensed under LGPL, which is a showstopper for many businesses. In such a situation, the relatively new ClosedXML library may be handy.

It may not provide all the features of EPPlus, but it is capable of handling the core spreadsheet manipulations.

var workbook = new XLWorkbook();
var ws = workbook.AddWorksheet("data");
ws.Cell("A1").Value = 2;
ws.Cell("A2").Value = 2;
ws.Cell("A3").SetFormulaA1("=A1+A2");

Console.WriteLine(ws.Cell("A3").Value);
workbook.SaveAs("calculations.xlsx");
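
Reading a file back is just as compact. A minimal sketch, assuming the calculations.xlsx file produced above:

using (var workbook = new XLWorkbook("calculations.xlsx"))
{
   var ws = workbook.Worksheet("data");
   Console.WriteLine(ws.Cell("A3").Value); // prints the evaluated formula result
}
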
Exploring .NET Open Source ecosystem: handling database schema versioning with FluentMigrator

One of the most common mistakes a junior database architect can make is to skip schema versioning. It is easy to design a schema and release the corresponding application, and then realize how difficult it is to maintain the schema, support compatibility between versions and migrate users to new versions.

However, even when the new schema includes a concept of versioning, work is required to keep the schema in a healthy state, to have a migration procedure and to build some tooling to automate the maintenance tasks.

The FluentMigrator C# library provides all the tools needed to solve these problems: a syntax to define the versioned schema, a way to migrate databases from version to version, and tools to automate these tasks during development, during deployment and in the field.

Schema

The core concept of FluentMigrator is a migration. A migration is a class which has a version and two methods, Up() and Down(). Up() is responsible for migrating the target database from the previous version to the version defined by the migration. Down() is responsible for the opposite operation – downgrading the database to the previous version.

[Migration(10)]
public class AddNotesTable : Migration
{
      public override void Up()
      {
            // WithIdColumn() and WithTimeStamps() are helper extension methods
            // from the FluentMigrator samples, not part of the core API
            Create.Table("Notes")
                  .WithIdColumn()
                  .WithColumn("Body").AsString(4000).NotNullable()
                  .WithTimeStamps()
                  .WithColumn("UserId").AsInt32();
      }

      public override void Down()
      {
            Delete.Table("Notes");
      }
}

Instead of using SQL, migrations are defined using fluent C# syntax. This approach makes the migrations almost independent from the concrete databases, hiding the differences in SQL between them.

The migration version is defined using MigrationAttribute. The attribute accepts a number which the migration runner uses to sort all the defined migrations and execute them one by one.

In addition to the schema definition, migrations can also include data seeding.

[Profile("Development")]
public class CreateDevData: Migration
{
      public override Up()
      {
            Insert.IntoTable("User").Row( new
                  {
                        Username = "devuser1",
                        DisplayName = "Dev User1"
                  });
      }

      public override Down()
      {
            // empty, not using
      }
}

This example also demonstrates the idea of profiles – the ability to selectively execute some migrations in order to have, for example, a seeded database for development or testing.

Execution

All migrations are usually grouped in one assembly and can be executed using one of the provided tools: FluentMigrator ships with CLI, NAnt, MSBuild and Rake migration runners.

Migrate.exe /connection "Data Source=db\db.sqlite;Version=3;" /db sqlite /target migrations.dll

This command uses the CLI tool to execute the migrations from migrations.dll against the database defined by the connection string, using the sqlite driver. The runner automatically detects the current database version and applies only the required migrations.

FluentMigrator is published under Apache 2.0 license and available at GitHub and NuGet.

Configuring TeamCity to run in Docker on Linux and build .NET Core projects

Recently, I needed to set up a build pipeline for a medium-sized .NET Core project and, having had a good previous experience with JetBrains TeamCity, I decided to use it in this case as well. The Professional Edition is free, and its limitations are acceptable for the project – up to three build agents and up to twenty build configurations.

This post provides a step-by-step guide to installing and configuring TeamCity. The starting point is a clean Ubuntu 16.04 LTS server, and the goal is to run the TeamCity server, the build agents and PostgreSQL on this system in Docker containers. Additionally, the server and the agents are configured to support .NET Core project builds. This solution can be deployed equally easily on a local system or in the cloud, such as Azure or AWS.

For use of TeamCity in a production environment, it is recommended to use an external database for storing the configuration data. For the case described here, I use PostgreSQL, running in a Docker container as well. So, the full stack includes five Docker containers: one for PostgreSQL, one for the TeamCity server and three for the build agents. The PostgreSQL database and all the data generated by TeamCity are persisted on a local drive using Docker mounted volumes.

Installing Docker

If you are starting from a clean system, you will need to install Docker and Docker Compose first. Detailed instructions on installing Docker for Ubuntu are available at https://docs.docker.com/engine/installation/linux/ubuntu/, and to install Docker Compose, use apt-get:

sudo apt-get install docker-compose

Folder structure

I use the /srv folder as the root folder for all the data related to TeamCity builds; here is the full hierarchy of folders you will need to create inside /srv:

Folders

When the folders are created, we are ready to define the stack.

Docker containers

Create a docker-compose.yaml file in the /srv/docker folder and paste the following content.
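
A minimal sketch of such a file, assuming the official jetbrains/teamcity-server and jetbrains/teamcity-agent images and showing a single agent for brevity (agent2 and agent3 follow the same pattern; the password is a placeholder):

version: '2'

services:
  db:
    image: postgres:latest
    restart: unless-stopped
    environment:
      POSTGRES_DB: teamcity
      POSTGRES_USER: teamcity
      POSTGRES_PASSWORD: changeit
    volumes:
      - /srv/postgres:/var/lib/postgresql/data

  server:
    image: jetbrains/teamcity-server:latest
    restart: unless-stopped
    ports:
      - "80:8111"
    volumes:
      - /srv/teamcity/data:/data/teamcity_server/datadir
      - /srv/teamcity/logs:/opt/teamcity/logs
    depends_on:
      - db

  agent1:
    image: jetbrains/teamcity-agent:latest
    restart: unless-stopped
    environment:
      SERVER_URL: http://server:8111
    volumes:
      - /srv/teamcity/agent1:/data/teamcity_agent/conf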

It configures the stack of the required containers, and the configuration is mostly self-explanatory.

The only required change to the file is a correct PostgreSQL password. Once it is updated, save the file, close it and start the configured stack by running:

docker-compose up -d

It will download all the required images and start the containers. We are ready to open and configure TeamCity.

TeamCity first start

In a browser, open the TeamCity site. There is nothing special about configuring TeamCity running in Docker compared with a conventional deployment, so these instructions are provided just for the completeness of the guide and are based on TeamCity version 2017.

TeamCity First Start

On the first page, just click Proceed; the data directory is already properly configured.

JDBC drivers needed

Now you need to connect TeamCity to the running instance of PostgreSQL. But first you need the JDBC driver – it is not shipped with TeamCity. In a terminal, open /srv/teamcity/data/lib/jdbc and put the downloaded driver there, for example by executing:

sudo wget https://jdbc.postgresql.org/download/postgresql-42.1.1.jar 

Back in the browser, click Refresh JDBC drivers – TeamCity should detect the newly installed driver and allow you to connect to the database.

Enter database connection information

Provide the required information (use the database name, user name and password defined in the docker-compose file) and click Proceed. If you receive a connection error, verify that the database host name is entered without 'http' and that the host allows access to port 5432 for PostgreSQL (it will most likely be blocked if the instance is hosted in Azure or AWS).

On the next page accept the agreement, create an administrative account and you are ready to use TeamCity.

Using TeamCity for building .NET Core project

After the start, the three build agents should be detected by TeamCity automatically, but they will be marked as Unauthorized; they need to be authorized manually.

Agents

So far, we have managed to configure and launch TeamCity and connect the build agents. The last step before creating a new build project is to install the .NET Core plugin. This step is optional, as you can run .NET Core tasks from the command-line runner, but the plugin simplifies step definitions by adding a dedicated .NET Core runner.

The plugin can be downloaded from plugins.jetbrains.com and installed via the TeamCity UI – just open the Administration\Plugins List page and upload the plugin. To enable the plugin, TeamCity requires a restart and, unfortunately, there is no way to do it from the UI, so you need to use the console again: go to /srv/docker and run

docker-compose stop
docker-compose up -d 

After that, the plugin is installed and the agents are able to use it (see the agent's properties).

Agent properties

That’s it – now you are ready to create a TeamCity project and configure the first build.

.NET Core build step
Configured .NET Core build steps

Conclusion

This guide demonstrated an approach to deploying a Docker-based TeamCity setup for running .NET Core builds. It is based on the free version of TeamCity and allows easy cloud deployment.
