<h1><a href="https://crossprogramming.com/2021/12/23/structured-logging-in-asp-net-core-using-serilog-and-seq">Structured logging in ASP.NET Core using Serilog and Seq</a></h1>
<p>Published: 2021-12-23</p>
<ul>
<li><a href="#context">Context</a></li>
<li><a href="#unstructured-logging">What is unstructured logging?</a></li>
<li><a href="#structured-logging">What is structured logging?</a></li>
<li><a href="#why-structured-logging">Why should I use structured logging?</a></li>
<li><a href="#what-is-serilog">What is Serilog?</a>
<ul>
<li><a href="#serilog-sinks">Serilog sinks</a></li>
<li><a href="#serilog-enrichers">Serilog enrichers</a></li>
<li><a href="#serilog-properties">Serilog properties</a></li>
<li><a href="#serilog-stringification">Serilog stringification</a></li>
<li><a href="#serilog-destructuring">Serilog destructuring</a>
<ul>
<li><a href="#destructuring-operator">Using destructuring operator</a></li>
<li><a href="#destructuring-policies">Using destructuring policies</a></li>
<li><a href="#destructuring-libraries">Using destructuring libraries</a></li>
</ul>
</li>
<li><a href="#configure-serilog">Configure Serilog</a>
<ul>
<li><a href="#configure-serilog-nouns">Configure Serilog nouns</a></li>
<li><a href="#configure-serilog-as-logging-provider">Configure Serilog as an ASP.NET Core logging provider</a></li>
</ul>
</li>
</ul>
</li>
<li><a href="#what-is-seq">What is Seq?</a>
<ul>
<li><a href="#run-seq-using-docker">Run Seq using Docker</a></li>
<li><a href="#query-seq-data">Crash course for querying Seq data</a></li>
</ul>
</li>
<li><a href="#log-application-events">Log application events</a>
<ul>
<li><a href="#logging-providers">Logging providers</a></li>
<li><a href="#message-templates">Message templates</a></li>
<li><a href="#log-scopes">Log scopes</a></li>
</ul>
</li>
<li><a href="#use-cases">Use cases</a>
<ul>
<li><a href="#debugging-use-case">Debugging</a>
<ul>
<li><a href="#identify-error-root-cause">Identify error root cause</a></li>
<li><a href="#fetch-conversation-events">Fetch events from same conversation</a></li>
</ul>
</li>
<li><a href="#analytics-use-case">Analytics</a>
<ul>
<li><a href="#identify-most-used-application-features">Identify most used application features</a></li>
</ul>
</li>
<li><a href="#auditing-use-case">Auditing</a>
<ul>
<li><a href="#audit-user-actions">Audit user actions</a></li>
</ul>
</li>
<li><a href="#performance-use-case">Performance</a>
<ul>
<li><a href="#identify-slowest-sql-queries">Identify slowest SQL queries</a></li>
<li><a href="#identify-slowest-application-features">Identify slowest application features</a></li>
</ul>
</li>
</ul>
</li>
<li><a href="#references">References</a></li>
<li><a href="#conclusion">Conclusion</a></li>
</ul>
<hr />
<!-- markdownlint-disable MD033 -->
<h2 id="context">Context</h2>
<p>Back in 2016 I was part of a team developing an e-care web application on the SAP Hybris platform for a European telco. Among many other things, I was tasked with the initial deployment to the UAT environment, which was supposed to be promoted to production as soon as the client validated that particular release. The web application was running in a Tomcat cluster made up of 8 or 9 Linux servers which I was able to access via SSH only; thus, in order to investigate any issue occurring on that particular environment, I had to run specific Linux commands inside the console to search for relevant lines of text inside application log files - if I remember correctly, we were using the <a href="https://man7.org/linux/man-pages/man1/less.1.html">less</a> command.<br />
One of my colleagues had a MacBook Pro and, with the help of <a href="https://iterm2.com/">iterm2</a>, was able to split his window into one pane per server and run each command against all of them at the same time; unfortunately, I was using a laptop running Windows, so I had to open one console per server and run each command inside each console, which was a very time-consuming and error-prone activity.<br />
There were two particular issues with this approach (besides the lack of productivity caused by dealing with multiple consoles): any real-time investigation was limited by the Linux CLI support for searching text files, and whenever a more advanced offline investigation was needed, we had to ask the client IT department to send us specific log files and use a text editor like <a href="https://notepad-plus-plus.org/">Notepad++</a> to search across several files. These issues are direct consequences of employing <a href="#unstructured-logging">unstructured logging</a> when dealing with application events.</p>
<p>The purpose of this post is to present a way to create and query application events using the <a href="#structured-logging">structured logging</a> mechanism provided by ASP.NET Core, with the help of <a href="https://serilog.net/">Serilog</a> and <a href="https://datalust.co/seq">Seq</a>.</p>
<p>All code fragments found in this post are part of my pet project <a href="https://github.com/satrapu/aspnet-core-logging">aspnet-core-logging</a>; furthermore, I have created a <a href="https://github.com/satrapu/aspnet-core-logging/tree/v20210824">tag</a> to ensure these fragments will remain the same, no matter how this project will evolve in the future.</p>
<h2 id="unstructured-logging">What is unstructured logging?</h2>
<p>Creating an application event is usually done by instantiating a string, maybe using <a href="https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/tokens/interpolated">string interpolation</a>, and then sending it to a console, file or database using a logging library like <a href="https://logging.apache.org/log4net/">Log4Net</a>; an educated developer might even check whether the logging library has been configured to log that particular event before creating it, to avoid wasting time and resources - see more details about such an approach inside the <strong>Performance</strong> section of the article found <a href="https://logging.apache.org/log4net/release/manual/internals.html#performance">here</a>.<br />
This approach is called <strong>logging</strong>, since we keep a <strong>log</strong> of events. As the events have been created using plain text, when we need to search for specific data inside the log, we usually resort to the basic search feature offered by the text editor at hand, or maybe regular expressions. Neither approach is suitable for complex searches - a text editor can only search for words contained inside the events stored in one or more files, while writing a custom regex to fetch specific data is not an easy task; furthermore, searching through <em>all</em> events means you need to access <em>all</em> log files, a feat which might involve terabytes of data or even more. Also, how would you write a regex to answer a question like: <em>What events created during this specific time range contain (or do not contain) these particular pieces of information</em>?<br />
This is <strong>unstructured logging</strong>, since an event is just a line of text which does not have any structure.</p>
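<p>To make this concrete, here is a minimal sketch of unstructured logging (the class, logger and message wording below are illustrative, not taken from any real project): the event is rendered to a single opaque string, so its individual values are lost to any later query.</p>

```csharp
// Unstructured logging sketch using the public Log4Net ILog API.
// The OrderService class and message are hypothetical examples.
using log4net;

public class OrderService
{
    private static readonly ILog Log = LogManager.GetLogger(typeof(OrderService));

    public void PlaceOrder(int orderId, decimal total)
    {
        // Once rendered, "orderId" and "total" are no longer separate fields -
        // answering "which orders exceeded 100?" later means writing a regex.
        Log.Info($"Order {orderId} placed, total: {total}");
    }
}
```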
<h2 id="structured-logging">What is structured logging?</h2>
<p><strong>Structured logging</strong> means creating events having a particular <strong>structure</strong>; such data can then be ingested by another service which offers the means to parse, index and finally query it.<br />
ASP.NET Core was built with structured logging in mind, with the help of <a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/logging/?view=aspnetcore-5.0#logging-providers-1">logging providers</a>, <a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/logging/?view=aspnetcore-5.0#log-message-template-1">message templates</a> and <a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/logging/?view=aspnetcore-5.0#log-scopes-1">log scopes</a>.</p>
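<p>As a minimal sketch (the class and property names are illustrative), the same kind of event expressed through an ASP.NET Core message template keeps each value as a named, queryable property instead of baking it into a string:</p>

```csharp
// Structured logging via Microsoft.Extensions.Logging message templates:
// OrderId and Total become first-class event properties.
using Microsoft.Extensions.Logging;

public class OrderService
{
    private readonly ILogger<OrderService> logger;

    public OrderService(ILogger<OrderService> logger) => this.logger = logger;

    public void PlaceOrder(int orderId, decimal total)
    {
        // Do NOT use string interpolation here - the named template
        // placeholders are exactly what makes the event structured.
        logger.LogInformation("Order {OrderId} placed, total: {Total}", orderId, total);
    }
}
```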
<p>Several years ago I stumbled upon one of Nicholas Blumhardt’s <a href="https://softwareengineering.stackexchange.com/a/312586">answers</a> on the internet which really got me thinking about structured logging. It took me years to finally have the opportunity of using it in a commercial project, but after using it, I strongly believe it’s a game changer!</p>
<h2 id="why-structured-logging">Why should I use structured logging?</h2>
<p>The short answer is: <em>be able to (quickly) answer (almost) any business or technical question about your application behavior, its data and its users</em>.<br />
Read the rest of this post for the longer answer.</p>
<h2 id="what-is-serilog">What is Serilog?</h2>
<p>ASP.NET Core provides several <a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/logging/?view=aspnetcore-5.0#built-in-logging-providers-1">built-in logging providers</a> and when I have initially started using this framework, as I was already familiar with Log4Net, I employed the <a href="https://www.nuget.org/packages/Microsoft.Extensions.Logging.Log4Net.AspNetCore/">Microsoft.Extensions.Logging.Log4Net.AspNetCore</a> NuGet package - that was mid 2018. Several months later, I stumbled upon <a href="https://serilog.net/">Serilog</a> and <a href="https://datalust.co/seq">Seq</a> and I was blown away by this very powerful combination.</p>
<p><strong>Serilog</strong> is a logging framework <em>with powerful structured event data in mind</em> (as stated on its <a href="https://serilog.net/">home page</a>) which was initially released as a <a href="https://www.nuget.org/packages/Serilog/">NuGet package</a> in 2013. One uses Serilog to create events having a given structure and store them in a specific place, like a file or database. Serilog uses several <em>nouns</em>, like: <a href="#serilog-sinks">sinks</a>, <a href="#serilog-enrichers">enrichers</a>, <a href="#serilog-properties">properties</a>, <a href="#serilog-destructuring">destructuring policies</a> and others, while also offering the means to configure its behavior via either code-based or file-based <a href="#configure-serilog">configuration sources</a>.<br />
The community around this library is pretty solid, as seen on the <a href="https://www.nuget.org/packages?q=Tags%3A%22serilog%22%22">NuGet gallery</a> - one more reason to make Serilog my preferred logging framework!</p>
<h3 id="serilog-sinks">Serilog sinks</h3>
<p>A Serilog sink is a component which receives events generated via Serilog and stores them or sends them over to other components which will further process these events. There are plenty of such sinks, ready to support almost any given scenario - see the official list <a href="https://github.com/serilog/serilog/wiki/Provided-Sinks">here</a>. If this list does not cover <em>your</em> scenario, you can always write your own sink by starting from an already implemented one or from <a href="https://github.com/serilog/serilog/wiki/Developing-a-sink">scratch</a>! Of course, there are many more sinks, as one can see by <a href="https://github.com/search?l=C%23&q=serilog+sink&type=Repositories">querying GitHub</a>.</p>
<p>In 2019 I joined a team developing a deal advisory management application running in Azure, where we had 3 web APIs, each one writing events during local development to its own file. Each developer had to search through several files in order to investigate a particular issue or to find a relevant piece of information, which was neither comfortable nor quick.<br />
After discovering Seq, I’m now strongly recommending using it for <strong>local development</strong> via <a href="https://github.com/serilog/serilog-sinks-seq">Serilog.Sinks.Seq</a> sink - this enables sending events from the application to a running Seq instance (most likely <a href="#run-seq-using-docker">a Docker container</a> running on local machine) which will ingest them and then provide the means to perform rather complex queries against <em>all</em> events, thus avoiding the overhead and performance penalty caused by logging to files.</p>
<p>In case the application production environment (or any remote environment, for that matter) is hosted by a cloud provider, I strongly recommend using the Serilog sink which integrates with the logging/monitoring service of that particular provider - e.g. one could pair <a href="https://github.com/serilog/serilog-sinks-applicationinsights">Serilog.Sinks.ApplicationInsights</a> with Azure, <a href="https://github.com/Cimpress-MCP/serilog-sinks-awscloudwatch">Serilog Sink for AWS CloudWatch</a> with AWS and <a href="https://github.com/manigandham/serilog-sinks-googlecloudlogging">Serilog.Sinks.GoogleCloudLogging</a> with GCP, etc.<br />
Outside cloud, one could use Seq for all environments, which comes with the benefit of having to learn only one query language instead of two.</p>
<p>Here’s a fragment found inside <a href="https://github.com/satrapu/aspnet-core-logging/blob/v20210824/Sources/Todo.WebApi/appsettings.Development.json#L14-L33">appsettings.Development.json file</a> configuring Serilog sinks when application runs locally:</p>
<div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nl">"Serilog"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="err">...</span><span class="w">
</span><span class="nl">"Using"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
</span><span class="s2">"Serilog.Sinks.Console"</span><span class="p">,</span><span class="w">
</span><span class="s2">"Serilog.Sinks.Seq"</span><span class="w">
</span><span class="p">],</span><span class="w">
</span><span class="nl">"WriteTo"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
</span><span class="p">{</span><span class="w">
</span><span class="nl">"Name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Console"</span><span class="p">,</span><span class="w">
</span><span class="nl">"Args"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="nl">"theme"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Serilog.Sinks.SystemConsole.Themes.AnsiConsoleTheme::Code, Serilog.Sinks.Console"</span><span class="p">,</span><span class="w">
</span><span class="nl">"outputTemplate"</span><span class="p">:</span><span class="w"> </span><span class="s2">"{Timestamp:HH:mm:ss.fff} {Level:u3} | cid:{ConversationId} fid:{ApplicationFlowName} tid:{ThreadId} | {SourceContext}{NewLine}{Message:lj}{NewLine}{Exception}"</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="p">},</span><span class="w">
</span><span class="p">{</span><span class="w">
</span><span class="nl">"Name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Seq"</span><span class="p">,</span><span class="w">
</span><span class="nl">"Args"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="nl">"serverUrl"</span><span class="p">:</span><span class="w"> </span><span class="s2">"http://localhost:5341"</span><span class="p">,</span><span class="w">
</span><span class="nl">"controlLevelSwitch"</span><span class="p">:</span><span class="w"> </span><span class="s2">"$controlSwitch"</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="p">],</span><span class="w">
</span><span class="err">...</span><span class="w">
</span></code></pre></div></div>
<p>And here’s a fragment found inside <a href="https://github.com/satrapu/aspnet-core-logging/blob/v20210824/Sources/Todo.WebApi/appsettings.DemoInAzure.json#L19-L29">appsettings.DemoInAzure.json file</a> configuring the only Serilog sink used when application runs in Azure:</p>
<div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nl">"Serilog"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="err">...</span><span class="w">
</span><span class="nl">"Using"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
</span><span class="s2">"Serilog.Sinks.ApplicationInsights"</span><span class="w">
</span><span class="p">],</span><span class="w">
</span><span class="nl">"WriteTo"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
</span><span class="p">{</span><span class="w">
</span><span class="nl">"Name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"ApplicationInsights"</span><span class="p">,</span><span class="w">
</span><span class="nl">"Args"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="nl">"telemetryConverter"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Serilog.Sinks.ApplicationInsights.Sinks.ApplicationInsights.TelemetryConverters.TraceTelemetryConverter, Serilog.Sinks.ApplicationInsights"</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="p">],</span><span class="w">
</span><span class="err">...</span><span class="w">
</span></code></pre></div></div>
<p>There are several <strong>important things</strong> worth mentioning:</p>
<ul>
<li>We usually need to install one NuGet package per sink, so just declaring them under the <code class="language-plaintext highlighter-rouge">Using</code> section is not enough</li>
<li>Each sink has its own list of configuration properties which must be declared under the <code class="language-plaintext highlighter-rouge">Args</code> section, but usually the GitHub repo of each Serilog sink states how to configure it, so it shouldn’t be that hard to properly set it up
<ul>
<li>The <code class="language-plaintext highlighter-rouge">Console</code> sink uses several placeholders found under its <code class="language-plaintext highlighter-rouge">Args</code> section:
<ul>
<li><code class="language-plaintext highlighter-rouge">Timestamp</code> represents the date and time when the event was created</li>
<li><code class="language-plaintext highlighter-rouge">Level</code> represents the logging level associated with the event</li>
<li><code class="language-plaintext highlighter-rouge">SourceContext</code> represents the name of the <code class="language-plaintext highlighter-rouge">logger</code> used for creating the event; usually it is the name of the class where event was created</li>
<li><code class="language-plaintext highlighter-rouge">NewLine</code> represents a line break used to split event data across several lines</li>
<li><code class="language-plaintext highlighter-rouge">Message</code> represents the event data</li>
<li><code class="language-plaintext highlighter-rouge">Exception</code> represents a logged exception (which includes its stack trace)</li>
<li>Though not used, there is another very important placeholder, <code class="language-plaintext highlighter-rouge">Properties</code>, which contains, obviously, all properties associated with the event</li>
</ul>
</li>
<li>The <code class="language-plaintext highlighter-rouge">Seq</code> sink uses <code class="language-plaintext highlighter-rouge">serverUrl</code> to point to the running Seq instance which will ingest events; my pet project runs Seq via <a href="https://github.com/satrapu/aspnet-core-logging/blob/v20210824/docker-compose.yml#L73-L87">Docker Compose</a>, thus explaining why the host is the local one</li>
</ul>
</li>
<li>The <code class="language-plaintext highlighter-rouge">Console</code> sink is used to quickly spot any errors which might occur while performing local dev testing</li>
<li>
<p>Several sinks might need extra setup outside the application configuration files - e.g. the <strong>instrumentation key</strong> used by Azure Application Insights is set inside <a href="https://github.com/satrapu/aspnet-core-logging/blob/2cec7a7990a9ef2fdf61011baedfeff9d8da21e8/Sources/Todo.WebApi/Startup.cs#L142-L151">Startup class</a>:</p>
<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">private</span> <span class="k">void</span> <span class="nf">ConfigureApplicationInsights</span><span class="p">(</span><span class="n">IServiceCollection</span> <span class="n">services</span><span class="p">)</span>
<span class="p">{</span>
<span class="k">if</span> <span class="p">(</span><span class="n">IsSerilogApplicationInsightsSinkConfigured</span><span class="p">)</span>
<span class="p">{</span>
<span class="kt">var</span> <span class="n">applicationInsightsOptions</span> <span class="p">=</span> <span class="k">new</span> <span class="nf">ApplicationInsightsOptions</span><span class="p">();</span>
<span class="n">Configuration</span><span class="p">.</span><span class="nf">Bind</span><span class="p">(</span><span class="n">applicationInsightsOptions</span><span class="p">);</span>
<span class="n">services</span><span class="p">.</span><span class="nf">AddApplicationInsightsTelemetry</span><span class="p">(</span><span class="n">applicationInsightsOptions</span><span class="p">.</span><span class="n">InstrumentationKey</span><span class="p">);</span>
<span class="p">}</span>
<span class="p">}</span>
</code></pre></div> </div>
</li>
</ul>
<h3 id="serilog-enrichers">Serilog enrichers</h3>
<p>A Serilog enricher is used to add additional properties (<em>enrichment</em>) to each event generated via Serilog.<br />
Check this <a href="https://github.com/serilog/serilog/wiki/Enrichment">wiki page</a> for more details about pre-built enrichers.<br />
The community has provided many other enrichers, like: <a href="https://github.com/TinyBlueRobots/Serilog.Enrichers.AssemblyName">Serilog.Enrichers.AssemblyName</a> (enriches events with assembly related info) or <a href="https://github.com/yesmarket/Serilog.Enrichers.OpenTracing">Serilog.Enrichers.OpenTracing</a> (enriches events with OpenTracing context). There are many more enrichers, as one can see by <a href="https://github.com/search?l=C%23&q=serilog+enrich&type=repositories">querying GitHub</a>.</p>
<p>My pet project uses <a href="https://github.com/serilog/serilog-enrichers-thread">Serilog.Enrichers.Thread</a> in order to ensure that the thread ID managed by .NET platform is made available to each application event; this enricher enables better understanding of the application when a particular user action is handled via more than just one thread.<br />
Here’s a fragment found inside <a href="https://github.com/satrapu/aspnet-core-logging/blob/v20210824/Sources/Todo.WebApi/appsettings.json#L50-L53">appsettings.json file</a> configuring Serilog enrichment:</p>
<div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="w"> </span><span class="nl">"Serilog"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="err">...</span><span class="w">
</span><span class="nl">"Enrich"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
</span><span class="s2">"FromLogContext"</span><span class="p">,</span><span class="w">
</span><span class="s2">"WithThreadId"</span><span class="w">
</span><span class="p">],</span><span class="w">
</span><span class="err">...</span><span class="w">
</span><span class="p">}</span><span class="w">
</span></code></pre></div></div>
<p>There are several <strong>important things</strong> worth mentioning:</p>
<ul>
<li>We usually need to install one NuGet package per enricher, so just declaring them under the <code class="language-plaintext highlighter-rouge">Enrich</code> section is not enough</li>
<li>Each enricher has its own list of string values which must be declared under the <code class="language-plaintext highlighter-rouge">Enrich</code> section, but usually the GitHub repo of each Serilog enricher states what they are, so it shouldn’t be that hard to properly set it up</li>
</ul>
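<p>If no pre-built enricher fits your needs, writing one is a small task: implement Serilog's <code class="language-plaintext highlighter-rouge">ILogEventEnricher</code> interface. Here is a minimal sketch (the <code class="language-plaintext highlighter-rouge">MachineName</code> property is purely illustrative - Serilog's community packages already cover this case):</p>

```csharp
using System;
using Serilog.Core;
using Serilog.Events;

// A minimal custom enricher: attaches one extra property to every event.
public class MachineNameEnricher : ILogEventEnricher
{
    public void Enrich(LogEvent logEvent, ILogEventPropertyFactory propertyFactory)
    {
        // AddPropertyIfAbsent avoids overwriting a property set elsewhere.
        logEvent.AddPropertyIfAbsent(
            propertyFactory.CreateProperty("MachineName", Environment.MachineName));
    }
}

// Registered via code-based configuration:
// new LoggerConfiguration().Enrich.With<MachineNameEnricher>() ...
```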
<h3 id="serilog-properties">Serilog properties</h3>
<p>Serilog properties are used to attach additional information either to all events or only to particular ones, thus providing more value to the person querying such data. Each token found in a message template will be made available by Serilog as a property which can be used in a query running inside Seq or any other service capable of handling structured events.</p>
<p>My pet project uses properties to enrich events with information about the name of the application which changes based on the current hosting environment (e.g. local, Azure or anything else) and to provide default values to properties which will be populated at run-time only (e.g. the name of the application flow initiated by a user, the ID of the thread used for running the current code or the ID of the conversation used for grouping related events).<br />
Here’s a fragment found inside <a href="https://github.com/satrapu/aspnet-core-logging/blob/v20210824/Sources/Todo.WebApi/appsettings.json#L54-L59">appsettings.json file</a> configuring Serilog properties when application runs in any environment:</p>
<div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nl">"Serilog"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="err">...</span><span class="w">
</span><span class="nl">"Properties"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="nl">"Application"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Todo.WebApi"</span><span class="p">,</span><span class="w">
</span><span class="nl">"ApplicationFlowName"</span><span class="p">:</span><span class="w"> </span><span class="s2">"N/A"</span><span class="p">,</span><span class="w">
</span><span class="nl">"ConversationId"</span><span class="p">:</span><span class="w"> </span><span class="s2">"N/A"</span><span class="p">,</span><span class="w">
</span><span class="nl">"ThreadId"</span><span class="p">:</span><span class="w"> </span><span class="s2">"N/A"</span><span class="w">
</span><span class="p">},</span><span class="w">
</span><span class="err">...</span><span class="w">
</span><span class="p">}</span><span class="w">
</span></code></pre></div></div>
<p>Here are the properties found inside <a href="https://github.com/satrapu/aspnet-core-logging/blob/v20210824/Sources/Todo.WebApi/appsettings.Development.json#L34-L36">appsettings.Development.json</a> used when the application runs locally:</p>
<div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nl">"Serilog"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="err">...</span><span class="w">
</span><span class="nl">"Properties"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="nl">"Application"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Todo.WebApi.Development"</span><span class="w">
</span><span class="p">},</span><span class="w">
</span><span class="err">...</span><span class="w">
</span><span class="p">}</span><span class="w">
</span></code></pre></div></div>
<p>There are several <strong>important things</strong> worth mentioning:</p>
<ul>
<li>The <code class="language-plaintext highlighter-rouge">N/A</code> values will be replaced at run-time by actual meaningful values (via <a href="#log-scopes">log scopes</a>, as described several sections below, to avoid coupling application code with Serilog API), e.g. <code class="language-plaintext highlighter-rouge">ApplicationFlowName</code> property will be populated with values like: <code class="language-plaintext highlighter-rouge">TodoItem/Add</code>, <code class="language-plaintext highlighter-rouge">TodoItems/FetchByQuery</code> or <code class="language-plaintext highlighter-rouge">Security/GenerateJwt</code>, based on what user action took place at a particular moment of time</li>
<li>All of the properties above act as <strong>global</strong> ones, since they will accompany <strong>any</strong> event</li>
<li>Due to the built-in <a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/configuration/?view=aspnetcore-5.0#appsettingsjson-1">configuration override mechanism</a> provided by ASP.NET Core, when application runs in any environment, each event will be accompanied by <code class="language-plaintext highlighter-rouge">Application</code>, <code class="language-plaintext highlighter-rouge">ApplicationFlowName</code>, <code class="language-plaintext highlighter-rouge">ConversationId</code> and <code class="language-plaintext highlighter-rouge">ThreadId</code> properties, but the value of the <code class="language-plaintext highlighter-rouge">Application</code> property will be set to <strong><code class="language-plaintext highlighter-rouge">Todo.WebApi</code></strong> when application runs in <strong>production</strong> environment and will be set to <strong><code class="language-plaintext highlighter-rouge">Todo.WebApi.Development</code></strong> when application runs in <strong>development</strong> environment.</li>
</ul>
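<p>For completeness, the same global properties can also be declared through code-based configuration, should one prefer it over JSON files; the sketch below simply mirrors the values shown above (it is not how the pet project actually wires things up):</p>

```csharp
using Serilog;

// Code-based equivalent of the "Properties" section from appsettings.json:
// every event will carry these global properties, with the "N/A" defaults
// meant to be replaced at run-time via log scopes.
Log.Logger = new LoggerConfiguration()
    .Enrich.WithProperty("Application", "Todo.WebApi")
    .Enrich.WithProperty("ApplicationFlowName", "N/A")
    .Enrich.WithProperty("ConversationId", "N/A")
    .Enrich.WithProperty("ThreadId", "N/A")
    .WriteTo.Console()
    .CreateLogger();
```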
<h3 id="serilog-stringification">Serilog stringification</h3>
<p>Stringification means invoking the <strong>ToString</strong> method of an object used as part of an event; Serilog offers the <a href="https://github.com/serilog/serilog/wiki/Structured-Data#forcing-stringification">$ stringification operator</a> for this purpose, as seen below:</p>
<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kt">var</span> <span class="n">unknown</span> <span class="p">=</span> <span class="k">new</span><span class="p">[]</span> <span class="p">{</span> <span class="m">1</span><span class="p">,</span> <span class="m">2</span><span class="p">,</span> <span class="m">3</span> <span class="p">}</span>
<span class="n">Log</span><span class="p">.</span><span class="nf">Information</span><span class="p">(</span><span class="s">"Received {$Data}"</span><span class="p">,</span> <span class="n">unknown</span><span class="p">);</span>
</code></pre></div></div>
<p>This will render:</p>
<div class="language-text highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Received "System.Int32[]"
</code></pre></div></div>
<p>Though my pet project is not using this operator (yet), I believe it might be useful in some scenarios, thus it’s worth mentioning.</p>
<h3 id="serilog-destructuring">Serilog destructuring</h3>
<p>Destructuring means extracting pieces of information from an object, like a <a href="https://en.wikipedia.org/wiki/Data_transfer_object">DTO</a> or <a href="https://en.wikipedia.org/wiki/Plain_old_CLR_object">POCO</a>, and creating individual properties from their values.</p>
<h4 id="destructuring-operator">Using destructuring operator</h4>
<p>Serilog offers the <a href="https://github.com/serilog/serilog/wiki/Structured-Data#preserving-object-structure">@ destructuring operator</a>.<br />
For instance, my pet project uses this operator in order to log the search criteria used for fetching a list of records from a PostgreSQL database via an Entity Framework Core query.<br />
Here’s a fragment found inside <a href="https://github.com/satrapu/aspnet-core-logging/blob/v20210824/Sources/Todo.Services/TodoItemLifecycleManagement/TodoItemService.cs#L80-L104">TodoService class</a>:</p>
<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">...</span>
<span class="k">private</span> <span class="k">async</span> <span class="n">Task</span><span class="p"><</span><span class="n">IList</span><span class="p"><</span><span class="n">TodoItemInfo</span><span class="p">>></span> <span class="nf">InternalGetByQueryAsync</span><span class="p">(</span><span class="n">TodoItemQuery</span> <span class="n">todoItemQuery</span><span class="p">)</span>
<span class="p">{</span>
<span class="n">logger</span><span class="p">.</span><span class="nf">LogInformation</span><span class="p">(</span><span class="s">"About to fetch items using query {@TodoItemQuery} ..."</span><span class="p">,</span> <span class="n">todoItemQuery</span><span class="p">);</span>
<span class="p">...</span>
<span class="p">}</span>
<span class="p">...</span>
</code></pre></div></div>
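<p>To make the effect of the <code class="language-plaintext highlighter-rouge">@</code> operator concrete, here is a hedged sketch (the anonymous object below is illustrative, not the actual <code class="language-plaintext highlighter-rouge">TodoItemQuery</code> class); without <code class="language-plaintext highlighter-rouge">@</code>, Serilog converts an unknown type to a string via its <code class="language-plaintext highlighter-rouge">ToString</code> method, while with <code class="language-plaintext highlighter-rouge">@</code> it captures the object as a structure whose properties remain individually queryable:</p>
<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code>var query = new { PageIndex = 0, PageSize = 25 };

// Without the destructuring operator, the value is stringified via ToString():
Log.Information("Fetching items using {TodoItemQuery}", query);

// With the @ operator, the object is captured as a structure,
// rendered roughly as: Fetching items using { PageIndex: 0, PageSize: 25 }
Log.Information("Fetching items using {@TodoItemQuery}", query);
</code></pre></div></div>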
<h4 id="destructuring-policies">Using destructuring policies</h4>
<p>In case there is a need to customize the way events are serialized, one can define several <a href="https://github.com/satrapu/aspnet-core-logging/blob/v20210824/Sources/Todo.WebApi/appsettings.json#L60-L86">destructuring policies</a>, like this:</p>
<div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="err">...</span><span class="w">
</span><span class="nl">"Destructure"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
</span><span class="p">{</span><span class="w">
</span><span class="nl">"Name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"With"</span><span class="p">,</span><span class="w">
</span><span class="nl">"Args"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="nl">"policy"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Todo.Integrations.Serilog.Destructuring.DeleteTodoItemInfoDestructuringPolicy, Todo.Integrations.Serilog"</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="p">},</span><span class="w">
</span><span class="p">{</span><span class="w">
</span><span class="nl">"Name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"With"</span><span class="p">,</span><span class="w">
</span><span class="nl">"Args"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="nl">"policy"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Todo.Integrations.Serilog.Destructuring.NewTodoItemInfoDestructuringPolicy, Todo.Integrations.Serilog"</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="p">},</span><span class="w">
</span><span class="err">...</span><span class="w">
</span><span class="p">]</span><span class="err">,</span><span class="w">
</span><span class="err">...</span><span class="w">
</span></code></pre></div></div>
<p>This is what the <a href="https://github.com/satrapu/aspnet-core-logging/blob/v20210824/Sources/Todo.Integrations.Serilog/Destructuring/DeleteTodoItemInfoDestructuringPolicy.cs">DeleteTodoItemInfoDestructuringPolicy</a> class looks like:</p>
<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">public</span> <span class="k">class</span> <span class="nc">DeleteTodoItemInfoDestructuringPolicy</span> <span class="p">:</span> <span class="n">IDestructuringPolicy</span>
<span class="p">{</span>
<span class="k">public</span> <span class="kt">bool</span> <span class="nf">TryDestructure</span><span class="p">(</span><span class="kt">object</span> <span class="k">value</span><span class="p">,</span> <span class="n">ILogEventPropertyValueFactory</span> <span class="n">propertyValueFactory</span><span class="p">,</span>
<span class="k">out</span> <span class="n">LogEventPropertyValue</span> <span class="n">result</span><span class="p">)</span>
<span class="p">{</span>
<span class="n">result</span> <span class="p">=</span> <span class="k">null</span><span class="p">;</span>
<span class="n">DeleteTodoItemInfo</span> <span class="n">deleteTodoItemInfo</span> <span class="p">=</span> <span class="k">value</span> <span class="k">as</span> <span class="n">DeleteTodoItemInfo</span><span class="p">;</span>
<span class="k">if</span> <span class="p">(</span><span class="n">deleteTodoItemInfo</span> <span class="p">==</span> <span class="k">null</span><span class="p">)</span>
<span class="p">{</span>
<span class="k">return</span> <span class="k">false</span><span class="p">;</span>
<span class="p">}</span>
<span class="n">result</span> <span class="p">=</span> <span class="k">new</span> <span class="nf">StructureValue</span><span class="p">(</span><span class="k">new</span> <span class="n">List</span><span class="p"><</span><span class="n">LogEventProperty</span><span class="p">></span>
<span class="p">{</span>
<span class="k">new</span> <span class="nf">LogEventProperty</span><span class="p">(</span><span class="k">nameof</span><span class="p">(</span><span class="n">deleteTodoItemInfo</span><span class="p">.</span><span class="n">Id</span><span class="p">),</span> <span class="k">new</span> <span class="nf">ScalarValue</span><span class="p">(</span><span class="n">deleteTodoItemInfo</span><span class="p">.</span><span class="n">Id</span><span class="p">)),</span>
<span class="k">new</span> <span class="nf">LogEventProperty</span><span class="p">(</span><span class="k">nameof</span><span class="p">(</span><span class="n">deleteTodoItemInfo</span><span class="p">.</span><span class="n">Owner</span><span class="p">),</span>
<span class="k">new</span> <span class="nf">ScalarValue</span><span class="p">(</span><span class="n">deleteTodoItemInfo</span><span class="p">.</span><span class="n">Owner</span><span class="p">.</span><span class="nf">GetNameOrDefault</span><span class="p">()))</span>
<span class="p">});</span>
<span class="k">return</span> <span class="k">true</span><span class="p">;</span>
<span class="p">}</span>
<span class="p">}</span>
</code></pre></div></div>
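<p>As a side note, the same policy can also be registered in code instead of appsettings.json, via the <code class="language-plaintext highlighter-rouge">Destructure.With</code> configuration method - a minimal sketch:</p>
<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code>// Register the custom destructuring policy programmatically:
Logger logger = new LoggerConfiguration()
    .Destructure.With(new DeleteTodoItemInfoDestructuringPolicy())
    .WriteTo.Console()
    .CreateLogger();
</code></pre></div></div>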
<h4 id="destructuring-libraries">Using destructuring libraries</h4>
<p>In case the destructuring operator and policies are not flexible enough, one can use libraries provided by the community, like:</p>
<ul>
<li><a href="https://github.com/destructurama/attributed">Destructurama.Attributed</a> which uses attributes to customize event serialization</li>
<li><a href="https://github.com/destructurama/by-ignoring">Destructurama.ByIgnoring</a> which enables excluding individual properties from events (e.g. log an event representing a user, but exclude any sensitive data, like its <code class="language-plaintext highlighter-rouge">Password</code> property)</li>
<li><a href="https://github.com/destructurama/json-net">Destructurama.JsonNet</a> which enables handling JSON.NET dynamic types like any other event data</li>
</ul>
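<p>As a quick taste of the attribute-based approach, here is a hedged sketch using Destructurama.Attributed (the <code class="language-plaintext highlighter-rouge">UserAccount</code> class is hypothetical); the <code class="language-plaintext highlighter-rouge">NotLogged</code> attribute excludes a property from the captured structure, while <code class="language-plaintext highlighter-rouge">UsingAttributes</code> enables attribute processing:</p>
<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code>public class UserAccount
{
    public string Name { get; set; }

    // This property will not be part of the destructured event:
    [NotLogged]
    public string Password { get; set; }
}

// Tell Serilog to honor the Destructurama attributes:
Logger logger = new LoggerConfiguration()
    .Destructure.UsingAttributes()
    .WriteTo.Console()
    .CreateLogger();
</code></pre></div></div>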
<h3 id="configure-serilog">Configure Serilog</h3>
<p>In order for an ASP.NET Core application to use Serilog, several things need to be set up:</p>
<ul>
<li>Configure the sinks, enrichers and any other Serilog related nouns</li>
<li>Configure the application to use Serilog as a logging provider</li>
</ul>
<h4 id="configure-serilog-nouns">Configure Serilog nouns</h4>
<p>Up until now I have shown the configuration file based way of setting up Serilog, but this library supports code based configuration too. From what I’ve seen so far, almost every GitHub repo hosting anything related to Serilog (e.g. a sink, an enricher, etc.) documents both approaches. I personally favor setting up Serilog via configuration files, since I do not want to redeploy the application each time I need to adjust its setup (i.e. when I need to increase or decrease the current logging level to capture fewer or more events).</p>
<p>Read more about the different ways of configuring Serilog <a href="https://github.com/serilog/serilog/wiki/Configuration-Basics">here</a>, <a href="https://github.com/tsimbalar/serilog-settings-comparison/blob/master/docs/README.md">here</a> and <a href="https://github.com/serilog/serilog-settings-configuration">here</a>.<br />
In case you have to run your application on top of .NET Framework, check <a href="https://github.com/serilog/serilog/wiki/AppSettings">this wiki page</a> to understand how to configure Serilog via the <code class="language-plaintext highlighter-rouge">appSettings</code> XML section found inside the application configuration file.</p>
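<p>For reference, such an appSettings-based setup might look along these lines (the keys below follow the serilog:* convention described in the aforementioned wiki page, while the values are illustrative):</p>
<div class="language-xml highlighter-rouge"><div class="highlight"><pre class="highlight"><code>&lt;appSettings&gt;
  &lt;!-- Minimum logging level --&gt;
  &lt;add key="serilog:minimum-level" value="Information" /&gt;
  &lt;!-- Write events to the console sink --&gt;
  &lt;add key="serilog:write-to:Console" /&gt;
  &lt;!-- Attach a global property to all events --&gt;
  &lt;add key="serilog:enrich:with-property:Application" value="Todo.WebApi" /&gt;
&lt;/appSettings&gt;
</code></pre></div></div>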
<p>The Serilog JSON configuration of my pet project found inside the appsettings.json can be seen <a href="https://github.com/satrapu/aspnet-core-logging/blob/v20210824/Sources/Todo.WebApi/appsettings.json#L22-L86">here</a>; each hosting environment overrides various parts of Serilog configuration, usually the logging level, sinks and global properties, as one can see inside <a href="https://github.com/satrapu/aspnet-core-logging/blob/v20210824/Sources/Todo.WebApi/appsettings.Development.json#L10-L37">appsettings.Development.json</a>, <a href="https://github.com/satrapu/aspnet-core-logging/blob/v20210824/Sources/Todo.WebApi/appsettings.IntegrationTests.json#L9-L36">appsettings.IntegrationTests.json</a> and <a href="https://github.com/satrapu/aspnet-core-logging/blob/v20210824/Sources/Todo.WebApi/appsettings.DemoInAzure.json#L7-L33">appsettings.DemoInAzure.json</a> files.</p>
<p>There are several <strong>important things</strong> worth mentioning:</p>
<ul>
<li>The <code class="language-plaintext highlighter-rouge">LevelSwitches</code> section defines a switch used for controlling the current <a href="https://github.com/serilog/serilog/wiki/Writing-Log-Events#log-event-levels">logging level</a> used by Serilog - see more details <a href="https://github.com/serilog/serilog/wiki/Writing-Log-Events#dynamic-levels">here</a>
<ul>
<li><strong>IMPORTANT:</strong> This switch can be used for reconfiguring Serilog without the need to restart the application - see more details <a href="https://github.com/serilog/serilog-settings-configuration/issues/72">here</a></li>
</ul>
</li>
<li>The <code class="language-plaintext highlighter-rouge">MinimumLevel</code> section defines what gets logged and what does not - e.g. <code class="language-plaintext highlighter-rouge">{ "Override": { "Microsoft": "Warning"} }</code> means that from all events generated by classes found under the <strong>Microsoft</strong> namespace and any of its descendants, Serilog will log only those at <strong>Warning</strong> level or above, discarding the rest; <code class="language-plaintext highlighter-rouge">{ "Override": { "Microsoft.EntityFrameworkCore.Database.Command": "Information"} }</code> means Serilog will log all SQL commands executed by Entity Framework Core</li>
<li>The <code class="language-plaintext highlighter-rouge">Using</code> section declares all Serilog sinks to be used by the application</li>
<li>The <code class="language-plaintext highlighter-rouge">WriteTo</code> section configures each Serilog sink declared inside the <code class="language-plaintext highlighter-rouge">Using</code> section</li>
<li>The <code class="language-plaintext highlighter-rouge">Enrich</code> section declares all Serilog enrichers to be used by the application</li>
<li>The <code class="language-plaintext highlighter-rouge">Properties</code> section declares all Serilog properties which will accompany all application events</li>
<li>The <code class="language-plaintext highlighter-rouge">Destructure</code> section declares all Serilog classes used for serializing particular application events</li>
</ul>
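<p>Putting the sections above together, a minimal Serilog configuration fragment might look like the sketch below (the sink arguments are illustrative, not copied verbatim from my pet project):</p>
<div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code>"Serilog": {
  "LevelSwitches": { "$controlSwitch": "Information" },
  "MinimumLevel": {
    "ControlledBy": "$controlSwitch",
    "Override": {
      "Microsoft": "Warning",
      "Microsoft.EntityFrameworkCore.Database.Command": "Information"
    }
  },
  "Using": [ "Serilog.Sinks.Seq" ],
  "WriteTo": [
    { "Name": "Seq", "Args": { "serverUrl": "http://localhost:5341" } }
  ],
  "Enrich": [ "FromLogContext", "WithThreadId" ],
  "Properties": { "Application": "Todo.WebApi" }
}
</code></pre></div></div>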
<h4 id="configure-serilog-as-logging-provider">Configure Serilog as an ASP.NET Core logging provider</h4>
<p>Until now I have shown how to configure Serilog nouns, so now it’s time to show how to add Serilog as a logging provider to an ASP.NET Core application.<br />
The usual approach is to set things up in two places:</p>
<ul>
<li>
<p><a href="https://github.com/satrapu/aspnet-core-logging/blob/v20210824/Sources/Todo.WebApi/Program.cs#L20-L23">Program class</a>, in order to capture any errors occurring during the host setup phase</p>
<p>I first need to instantiate the Serilog <code class="language-plaintext highlighter-rouge">Logger</code> class:</p>
<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">private</span> <span class="k">static</span> <span class="k">readonly</span> <span class="n">Logger</span> <span class="n">logger</span> <span class="p">=</span>
<span class="k">new</span> <span class="nf">LoggerConfiguration</span><span class="p">()</span>
<span class="p">.</span><span class="n">Enrich</span><span class="p">.</span><span class="nf">FromLogContext</span><span class="p">()</span>
<span class="p">.</span><span class="n">WriteTo</span><span class="p">.</span><span class="nf">Console</span><span class="p">()</span>
<span class="p">.</span><span class="nf">CreateLogger</span><span class="p">();</span>
</code></pre></div> </div>
<p>Then Serilog will be able to log any error:</p>
<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">public</span> <span class="k">static</span> <span class="k">void</span> <span class="nf">Main</span><span class="p">(</span><span class="kt">string</span><span class="p">[]</span> <span class="n">args</span><span class="p">)</span>
<span class="p">{</span>
<span class="k">try</span>
<span class="p">{</span>
<span class="nf">CreateHostBuilder</span><span class="p">(</span><span class="n">args</span><span class="p">).</span><span class="nf">Build</span><span class="p">().</span><span class="nf">Run</span><span class="p">();</span>
<span class="p">}</span>
<span class="k">catch</span> <span class="p">(</span><span class="n">Exception</span> <span class="n">exception</span><span class="p">)</span>
<span class="p">{</span>
<span class="n">logger</span><span class="p">.</span><span class="nf">Fatal</span><span class="p">(</span><span class="n">exception</span><span class="p">,</span> <span class="s">"Todo ASP.NET Core Web API failed to start"</span><span class="p">);</span>
<span class="k">throw</span><span class="p">;</span>
<span class="p">}</span>
<span class="k">finally</span>
<span class="p">{</span>
<span class="n">logger</span><span class="p">.</span><span class="nf">Dispose</span><span class="p">();</span>
<span class="p">}</span>
<span class="p">}</span>
</code></pre></div> </div>
</li>
<li>
<p><a href="https://github.com/satrapu/aspnet-core-logging/blob/2cec7a7990a9ef2fdf61011baedfeff9d8da21e8/Sources/Todo.WebApi/Startup.cs#L168-L196">Startup class</a>, in order to let infrastructure know that it can use Serilog as a logging provider:</p>
<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">services</span><span class="p">.</span><span class="nf">AddLogging</span><span class="p">(</span><span class="n">loggingBuilder</span> <span class="p">=></span>
<span class="p">{</span>
<span class="k">if</span> <span class="p">(</span><span class="n">IsSerilogFileSinkConfigured</span><span class="p">)</span>
<span class="p">{</span>
<span class="kt">string</span> <span class="n">logsHomeDirectoryPath</span> <span class="p">=</span> <span class="n">Environment</span><span class="p">.</span><span class="nf">GetEnvironmentVariable</span><span class="p">(</span><span class="n">LogsHomeEnvironmentVariable</span><span class="p">);</span>
<span class="k">if</span> <span class="p">(</span><span class="kt">string</span><span class="p">.</span><span class="nf">IsNullOrWhiteSpace</span><span class="p">(</span><span class="n">logsHomeDirectoryPath</span><span class="p">)</span> <span class="p">||</span> <span class="p">!</span><span class="n">Directory</span><span class="p">.</span><span class="nf">Exists</span><span class="p">(</span><span class="n">logsHomeDirectoryPath</span><span class="p">))</span>
<span class="p">{</span>
<span class="kt">var</span> <span class="n">currentWorkingDirectory</span> <span class="p">=</span> <span class="k">new</span> <span class="nf">DirectoryInfo</span><span class="p">(</span><span class="n">Directory</span><span class="p">.</span><span class="nf">GetCurrentDirectory</span><span class="p">());</span>
<span class="n">DirectoryInfo</span> <span class="n">logsHomeDirectory</span> <span class="p">=</span> <span class="n">currentWorkingDirectory</span><span class="p">.</span><span class="nf">CreateSubdirectory</span><span class="p">(</span><span class="s">"Logs"</span><span class="p">);</span>
<span class="n">Environment</span><span class="p">.</span><span class="nf">SetEnvironmentVariable</span><span class="p">(</span><span class="n">LogsHomeEnvironmentVariable</span><span class="p">,</span> <span class="n">logsHomeDirectory</span><span class="p">.</span><span class="n">FullName</span><span class="p">);</span>
<span class="p">}</span>
<span class="p">}</span>
<span class="k">if</span> <span class="p">(!</span><span class="n">WebHostingEnvironment</span><span class="p">.</span><span class="nf">IsDevelopment</span><span class="p">())</span>
<span class="p">{</span>
<span class="n">loggingBuilder</span><span class="p">.</span><span class="nf">ClearProviders</span><span class="p">();</span>
<span class="p">}</span>
<span class="n">loggingBuilder</span><span class="p">.</span><span class="nf">AddSerilog</span><span class="p">(</span><span class="k">new</span> <span class="nf">LoggerConfiguration</span><span class="p">()</span>
<span class="p">.</span><span class="n">ReadFrom</span><span class="p">.</span><span class="nf">Configuration</span><span class="p">(</span><span class="n">Configuration</span><span class="p">)</span>
<span class="p">.</span><span class="nf">CreateLogger</span><span class="p">(),</span> <span class="n">dispose</span><span class="p">:</span> <span class="k">true</span><span class="p">);</span>
<span class="p">});</span>
</code></pre></div> </div>
<p>There are several <strong>important things</strong> worth mentioning:</p>
<ul>
<li>In case the current environment has been configured to use <code class="language-plaintext highlighter-rouge">Serilog.Sinks.File</code> sink, then I will ensure the environment variable <code class="language-plaintext highlighter-rouge">%LOGS_HOME%</code> <a href="https://github.com/satrapu/aspnet-core-logging/blob/2cec7a7990a9ef2fdf61011baedfeff9d8da21e8/Sources/Todo.WebApi/appsettings.json#L41">declared</a> under the appropriate <code class="language-plaintext highlighter-rouge">Args</code> section will be correctly populated at run-time, so that the log files can be correctly located in that given location (i.e. the <strong>Logs</strong> directory found under the current working directory)</li>
<li>Any built-in logging providers are removed when the application runs outside the local development environment, to minimize the impact logging has on application performance</li>
<li>I’m configuring Serilog via the <a href="https://github.com/satrapu/aspnet-core-logging/blob/v20210824/Sources/Todo.WebApi/Program.cs#L51-L60">current application configuration</a></li>
<li>There is a downside to my current approach, as the Serilog setup found in the Program.cs file differs from the one found in the Startup.cs file; on the other hand, Nicholas Blumhardt has come up with <a href="https://nblumhardt.com/2020/10/bootstrap-logger/">a solution</a> and I’m itching to experiment with it, as I’m not happy having to maintain two Serilog configurations</li>
<li>I had to add several Serilog related <a href="https://github.com/satrapu/aspnet-core-logging/blob/v20210824/Directory.Build.targets#L31-L35">NuGet packages</a>:
<ul>
<li><a href="https://www.nuget.org/packages/Serilog/">Serilog</a> used for creating <a href="#destructuring-policies">destructuring policies</a></li>
<li><a href="https://www.nuget.org/packages/Serilog.AspNetCore">Serilog.AspNetCore</a> used for adding Serilog as logging provider to the ASP.NET Core application</li>
<li><a href="https://www.nuget.org/packages/Serilog.Enrichers.Thread">Serilog.Enrichers.Thread</a> used for enriching events with the current thread ID</li>
<li><a href="https://www.nuget.org/packages/Serilog.Sinks.ApplicationInsights">Serilog.Sinks.ApplicationInsights</a> used when sending events to an Azure Application Insights instance</li>
<li><a href="https://www.nuget.org/packages/Serilog.Sinks.Seq">Serilog.Sinks.Seq</a> used for sending events to a Seq instance</li>
</ul>
</li>
</ul>
</li>
</ul>
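<p>For the record, the bootstrap logger approach from Nicholas Blumhardt’s post mentioned above boils down to something like the sketch below; it relies on the <code class="language-plaintext highlighter-rouge">CreateBootstrapLogger</code> method shipped with the Serilog.Extensions.Hosting package (also pulled in by Serilog.AspNetCore), which creates a temporary logger that is later replaced by the fully configured one while the host is being built:</p>
<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code>// A minimal logger available before the host configuration has been read:
Log.Logger = new LoggerConfiguration()
    .Enrich.FromLogContext()
    .WriteTo.Console()
    .CreateBootstrapLogger();

// While building the host, swap in the configuration-driven logger:
// Host.CreateDefaultBuilder(args)
//     .UseSerilog((context, services, loggerConfiguration) =&gt;
//         loggerConfiguration.ReadFrom.Configuration(context.Configuration))
</code></pre></div></div>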
<h2 id="what-is-seq">What is Seq?</h2>
<p>Being able to create events with a given structure is not enough when you need to extract relevant data out of them - one needs the means to parse, index and query such data.<br />
<strong>Seq</strong> is <em>machine data, for humans</em> (as stated on its <a href="https://datalust.co/seq">home page</a>) and it’s <em>very</em> well equipped to perform these things.</p>
<p>One of the nice things about Seq is that you can freely use it for both development and production, as long as you’re the only user. If you need more users to access your Seq server, you have to start paying - check pricing <a href="https://datalust.co/pricing">here</a>.</p>
<h3 id="run-seq-using-docker">Run Seq using Docker</h3>
<p>The quickest way of running Seq locally is via Docker. Since my pet project uses PostgreSQL too, it felt natural to run all application dependencies using <a href="https://docs.docker.com/compose/">Docker Compose</a>. The <a href="https://hub.docker.com/r/datalust/seq">instructions</a> found on Docker Hub are pretty easy to follow and adapting them to Docker Compose is not hard:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">version</span><span class="pi">:</span> <span class="s1">'</span><span class="s">3.8'</span>
<span class="na">services</span><span class="pi">:</span>
<span class="s">...</span>
<span class="s">seq</span><span class="pi">:</span>
<span class="na">container_name</span><span class="pi">:</span> <span class="s">seq</span>
<span class="na">image</span><span class="pi">:</span> <span class="s">datalust/seq:2021.2</span>
<span class="na">restart</span><span class="pi">:</span> <span class="s">unless-stopped</span>
<span class="na">volumes</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s">seq_data:/data</span>
<span class="na">ports</span><span class="pi">:</span>
<span class="c1"># Ingestion port</span>
<span class="pi">-</span> <span class="s2">"</span><span class="s">5341:5341/tcp"</span>
<span class="c1"># UI port</span>
<span class="pi">-</span> <span class="s2">"</span><span class="s">8888:80/tcp"</span>
<span class="na">networks</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s">local_seq</span>
<span class="na">environment</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s">ACCEPT_EULA=Y</span>
<span class="na">volumes</span><span class="pi">:</span>
<span class="s">...</span>
<span class="s">seq_data</span><span class="pi">:</span>
<span class="na">external</span><span class="pi">:</span> <span class="no">true</span>
<span class="na">networks</span><span class="pi">:</span>
<span class="s">...</span>
<span class="s">local_seq</span><span class="pi">:</span>
</code></pre></div></div>
<p>There are several <strong>important things</strong> worth mentioning:</p>
<ul>
<li>The ingestion port exposed by Docker to localhost as <strong>5341</strong> matches the port used by <code class="language-plaintext highlighter-rouge">Seq</code> sink (<a href="https://github.com/satrapu/aspnet-core-logging/blob/v20210824/Sources/Todo.WebApi/appsettings.Development.json#L29">remember</a> <code class="language-plaintext highlighter-rouge">"serverUrl": "http://localhost:5341"</code>?)</li>
<li>
<p>Once the Seq Docker container has started, one can access its UI by opening a browser and navigating to <a href="http://localhost:8888/#/events">http://localhost:8888/#/events</a>, as seen below:
<img src="/assets/structured-logging-in-aspnet-core-using-serilog-and-seq/1-seq-events-page.png" alt="seq-events-page" /></p>
</li>
<li>As soon as Seq has started ingesting events, one can expand them in order to see all relevant details:
<img src="/assets/structured-logging-in-aspnet-core-using-serilog-and-seq/2-expanded-event.png" alt="expanded-event" /></li>
</ul>
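<p>One caveat worth mentioning: since the <code class="language-plaintext highlighter-rouge">seq_data</code> volume is declared as <code class="language-plaintext highlighter-rouge">external</code>, Docker Compose expects it to already exist, so it must be created once before starting the stack:</p>
<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Create the external volume referenced by the Docker Compose file
docker volume create seq_data

# Start the Seq service in the background
docker compose up --detach seq
</code></pre></div></div>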
<h3 id="query-seq-data">Crash course for querying Seq data</h3>
<p>Seq uses a SQL-like query language for querying ingested events, which is <em>very</em> well <a href="https://docs.datalust.co/docs/the-seq-query-language">documented</a>; due to its sheer breadth, it cannot be covered in just <em>one</em> post, so I will only show several examples and let the reader consult the official documentation.<br />
Another reason for not writing more about Seq is that you might decide to use a different server for querying structured events, like <a href="https://azure.microsoft.com/en-us/services/monitor/">Azure Monitor</a> and its component <a href="https://docs.microsoft.com/en-us/azure/azure-monitor/app/app-insights-overview">Application Insights</a>, so any Seq related info will not help you at all.</p>
<ul>
<li>
<p>Given a user, what application flows did they execute during the past 24 hours?</p>
<div class="language-sql highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">select</span> <span class="k">distinct</span><span class="p">(</span><span class="n">ApplicationFlowName</span><span class="p">)</span> <span class="k">as</span> <span class="n">FlowName</span>
<span class="k">from</span> <span class="n">stream</span>
<span class="k">where</span>
<span class="o">@</span><span class="nb">Timestamp</span> <span class="o">>=</span> <span class="n">Now</span><span class="p">()</span> <span class="o">-</span> <span class="mi">24</span><span class="n">h</span>
<span class="k">and</span> <span class="n">FlowInitiator</span> <span class="o">=</span> <span class="s1">'c2F0cmFwdQ=='</span>
<span class="k">and</span> <span class="o">@</span><span class="n">MessageTemplate</span> <span class="k">like</span> <span class="s1">'% has finished executing application flow %'</span>
</code></pre></div> </div>
<p><img src="/assets/structured-logging-in-aspnet-core-using-serilog-and-seq/3-flows-executed-by-user-during-past-24h.png" alt="flows-executed-by-user-during-past-24h" /></p>
<p>The <code class="language-plaintext highlighter-rouge">ApplicationFlowName</code> (the name of the application flow which implements a business feature) and <code class="language-plaintext highlighter-rouge">FlowInitiator</code> (the obfuscated name of the current user who initiated the flow) are custom Serilog properties populated at run-time via log scopes, while <code class="language-plaintext highlighter-rouge">@MessageTemplate</code> and <code class="language-plaintext highlighter-rouge">@Timestamp</code> are <a href="https://docs.datalust.co/docs/built-in-properties-and-functions">built-in properties</a> provided by Seq.<br />
The <code class="language-plaintext highlighter-rouge">from stream</code> clause says that events will be extracted from the ingested ones.</p>
</li>
<li>
<p>What kind of messages are logged by this application?</p>
<div class="language-sql highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">select</span> <span class="k">distinct</span><span class="p">(</span><span class="o">@</span><span class="n">MessageTemplate</span><span class="p">)</span> <span class="k">as</span> <span class="n">MessageTemplate</span>
<span class="k">from</span> <span class="n">stream</span>
<span class="k">where</span>
<span class="o">@</span><span class="n">MessageTemplate</span> <span class="k">not</span> <span class="k">like</span> <span class="s1">'--- REQUEST %'</span>
<span class="k">and</span> <span class="o">@</span><span class="n">MessageTemplate</span> <span class="k">not</span> <span class="k">like</span> <span class="s1">'--- RESPONSE %'</span>
<span class="k">order</span> <span class="k">by</span> <span class="n">MessageTemplate</span>
</code></pre></div> </div>
<p><img src="/assets/structured-logging-in-aspnet-core-using-serilog-and-seq/4-ingested-message-templates.png" alt="ingested-message-templates" /></p>
<p>The <code class="language-plaintext highlighter-rouge">@MessageTemplate</code> property represents the <strong>template</strong> used by the application to create an event sent to Seq via a Serilog sink; looking at each such template, one can understand what is being logged and whether this poses any security risk. For instance, a security auditor might check each template to understand whether any sensitive data (e.g. passwords, authentication tokens, etc.) is being logged. If this is the case, the developer will need to patch the code and redeploy the new application version, thus fixing the security issue. Having to manually read the entire code base to figure out whether the application logs sensitive data is a very tedious and error-prone process, so using a Seq query instead is the better approach.</p>
</li>
</ul>
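<p>As one more example in the same spirit (hedged, since the exact results depend on what your application actually logs), the most frequent errors ingested during the past 7 days can be counted using the built-in <code class="language-plaintext highlighter-rouge">@Level</code> and <code class="language-plaintext highlighter-rouge">@MessageTemplate</code> properties:</p>
<div class="language-sql highlighter-rouge"><div class="highlight"><pre class="highlight"><code>select count(*) as ErrorCount
from stream
where
  @Level = 'Error'
  and @Timestamp &gt;= Now() - 7d
group by @MessageTemplate
order by ErrorCount desc
</code></pre></div></div>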
<h2 id="log-application-events">Log application events</h2>
<p>An ASP.NET Core application logs events using <a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.extensions.logging.ilogger?view=dotnet-plat-ext-5.0">Microsoft.Extensions.Logging.ILogger</a> interface provided via <a href="https://www.nuget.org/packages/Microsoft.Extensions.Logging.Abstractions">Microsoft.Extensions.Logging.Abstractions</a> NuGet package.<br />
Any class which needs to log events will require the infrastructure to inject an <code class="language-plaintext highlighter-rouge">ILogger</code> object and will use any of its <code class="language-plaintext highlighter-rouge">LogXYZ</code> overloads to create and send the event to the underlying logging provider, i.e. Serilog.</p>
<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">public</span> <span class="k">class</span> <span class="nc">TodoItemService</span> <span class="p">:</span> <span class="n">ITodoItemService</span>
<span class="p">{</span>
<span class="p">...</span>
<span class="k">private</span> <span class="k">readonly</span> <span class="n">ILogger</span> <span class="n">logger</span><span class="p">;</span>
<span class="k">public</span> <span class="nf">TodoItemService</span><span class="p">(</span><span class="n">TodoDbContext</span> <span class="n">todoDbContext</span><span class="p">,</span> <span class="n">ILogger</span><span class="p"><</span><span class="n">TodoItemService</span><span class="p">></span> <span class="n">logger</span><span class="p">)</span>
<span class="p">{</span>
<span class="p">...</span>
<span class="k">this</span><span class="p">.</span><span class="n">logger</span> <span class="p">=</span> <span class="n">logger</span> <span class="p">??</span> <span class="k">throw</span> <span class="k">new</span> <span class="nf">ArgumentNullException</span><span class="p">(</span><span class="k">nameof</span><span class="p">(</span><span class="n">logger</span><span class="p">));</span>
<span class="p">}</span>
<span class="k">private</span> <span class="k">async</span> <span class="n">Task</span><span class="p"><</span><span class="n">IList</span><span class="p"><</span><span class="n">TodoItemInfo</span><span class="p">>></span> <span class="nf">InternalGetByQueryAsync</span><span class="p">(</span><span class="n">TodoItemQuery</span> <span class="n">todoItemQuery</span><span class="p">)</span>
<span class="p">{</span>
<span class="n">logger</span><span class="p">.</span><span class="nf">LogInformation</span><span class="p">(</span><span class="s">"About to fetch items using query {@TodoItemQuery} ..."</span><span class="p">,</span> <span class="n">todoItemQuery</span><span class="p">);</span>
<span class="p">...</span>
<span class="p">}</span>
<span class="p">...</span>
<span class="p">}</span>
</code></pre></div></div>
<h3 id="logging-providers">Logging providers</h3>
<p>I’ve already mentioned Serilog as an ASP.NET Core <a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/logging/?view=aspnetcore-5.0#logging-providers-1">logging provider</a>. Microsoft offers several built-in logging providers, but there are plenty of <a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/logging/?view=aspnetcore-5.0#third-party-logging-providers-1">third-party ones</a> as well.<br />
If you have already invested in a specific logging framework, such as Log4Net or NLog, the good news is that you can most likely keep using it, as the community has probably provided an integration with ASP.NET Core.<br />
If such an integration is missing, it’s a good opportunity for a developer to make a name for themselves ;)!</p>
<h3 id="message-templates">Message templates</h3>
<p>The <a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.extensions.logging.ilogger?view=dotnet-plat-ext-5.0">Microsoft.Extensions.Logging.ILogger</a> interface comes with several <code class="language-plaintext highlighter-rouge">LogXYZ</code> overloads where the <code class="language-plaintext highlighter-rouge">message</code> parameter is always a string. When I first started using this interface, before adopting structured logging, I used string interpolation, believing that the <code class="language-plaintext highlighter-rouge">message</code> was the actual info to be logged, so my logging code looked like this:</p>
<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kt">var</span> <span class="n">user</span> <span class="p">=</span> <span class="n">service</span><span class="p">.</span><span class="nf">GetUser</span><span class="p">(...);</span>
<span class="kt">var</span> <span class="n">action</span> <span class="p">=</span> <span class="n">service</span><span class="p">.</span><span class="nf">GetAction</span><span class="p">(...);</span>
<span class="n">logger</span><span class="p">.</span><span class="nf">LogInformation</span><span class="p">(</span><span class="s">$"User with name </span><span class="p">{</span><span class="n">user</span><span class="p">.</span><span class="n">UserName</span><span class="p">}</span><span class="s"> has initiated action </span><span class="p">{</span><span class="n">action</span><span class="p">.</span><span class="n">Name</span><span class="p">}</span><span class="s">"</span><span class="p">);</span>
</code></pre></div></div>
<p>The Log4Net-based logging provider I was using at that time would happily write the above string to the currently configured console or file, but that was an <em>unstructured</em> way of logging.<br />
The <em>structured</em> way means treating <code class="language-plaintext highlighter-rouge">message</code> as a <a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/logging/?view=aspnetcore-5.0#log-message-template-1"><strong>message template</strong></a> rather than a plain string. The logging provider knows how to create the event based on this template, but it also gets the chance to promote the values used to replace the placeholders to properties with specific semantics, thus producing a <em>structured</em> event.</p>
<p>Considering all of the above, the <strong>correct</strong> way of logging structured events in ASP.NET Core is:</p>
<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kt">var</span> <span class="n">user</span> <span class="p">=</span> <span class="n">service</span><span class="p">.</span><span class="nf">GetUser</span><span class="p">(...);</span>
<span class="kt">var</span> <span class="n">action</span> <span class="p">=</span> <span class="n">service</span><span class="p">.</span><span class="nf">GetAction</span><span class="p">(...);</span>
<span class="n">logger</span><span class="p">.</span><span class="nf">LogInformation</span><span class="p">(</span><span class="s">"User with name {UserName} has initiated action {ActionName}"</span><span class="p">,</span> <span class="n">user</span><span class="p">.</span><span class="n">UserName</span><span class="p">,</span> <span class="n">action</span><span class="p">.</span><span class="n">Name</span><span class="p">);</span>
</code></pre></div></div>
<p>There are several <strong>important things</strong> worth mentioning:</p>
<ul>
<li>The above code fragment contains:
<ul>
<li>A <strong>message template</strong>: <code class="language-plaintext highlighter-rouge">User with name {UserName} has initiated action {ActionName}</code></li>
<li>Two <strong>placeholders</strong>: <code class="language-plaintext highlighter-rouge">{UserName}</code> and <code class="language-plaintext highlighter-rouge">{ActionName}</code>
<ul>
<li>Please note I’m using <a href="https://techterms.com/definition/pascalcase">Pascal case</a> for their names, as they represent properties I will most probably use inside Seq queries</li>
</ul>
</li>
<li>Two <strong>values</strong> which will replace the placeholders when the logging provider handles the event at run-time: <code class="language-plaintext highlighter-rouge">user.UserName</code> and <code class="language-plaintext highlighter-rouge">action.Name</code>
<ul>
<li>The order of the values <em>is</em> important, since placeholders are matched to values by position, not by name</li>
</ul>
</li>
</ul>
</li>
<li>We no longer need to use string interpolation</li>
</ul>
<p>Because I’m employing message templates and since I’m using Seq, I could run the following query to identify the users who have logged in during the last 24 hours:</p>
<div class="language-sql highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">select</span>
<span class="k">distinct</span><span class="p">(</span><span class="n">UserName</span><span class="p">)</span>
<span class="k">from</span> <span class="n">stream</span>
<span class="k">where</span>
<span class="o">@</span><span class="nb">Timestamp</span> <span class="o">>=</span> <span class="n">now</span><span class="p">()</span> <span class="o">-</span> <span class="mi">24</span><span class="n">h</span>
<span class="k">and</span> <span class="n">ActionName</span> <span class="o">=</span> <span class="s1">'Login'</span>
</code></pre></div></div>
<p>Additionally, I could run a query to identify which users have not logged in during the past 6 months and whose accounts should therefore be deactivated; I could run many other such queries - the only real impediments to getting the most out of the ingested structured events are my imagination and my ability to master the Seq query language!</p>
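<p>For instance, a sketch of such an inactivity query, assuming login events carry the same <code class="language-plaintext highlighter-rouge">UserName</code> and <code class="language-plaintext highlighter-rouge">ActionName</code> properties used above (the exact duration syntax may differ between Seq versions):</p>

```sql
-- Sketch: for each user, find the most recent login event and keep only
-- the users whose last login is older than roughly 6 months (~180 days)
select max(@Timestamp) as LastLogin
from stream
where ActionName = 'Login'
group by UserName
having LastLogin < now() - 180d
```

Note this only finds users who have logged in at least once within the retained event stream; accounts with no login events at all would need to be cross-checked against the user store.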
<h3 id="log-scopes">Log scopes</h3>
<p>What happens if I want to ensure that a particular set of events share the same property? For instance, there is good reason to identify all events generated while processing a particular HTTP request - essentially, we want to <em>group</em> such events by their HTTP request identifier.<br />
ASP.NET Core provides the so-called <a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/logging/?view=aspnetcore-5.0#log-scopes-1"><strong>log scopes</strong></a>, which exist exactly for such grouping purposes.</p>
<p>In order to group events by their HTTP request identifier, one can employ an ASP.NET Core <a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/middleware/?view=aspnetcore-5.0">middleware</a> that generates an identifier (basically a GUID) which will then accompany every event created while processing that particular HTTP request.</p>
<p>The below fragment belongs to <a href="https://github.com/satrapu/aspnet-core-logging/blob/v20210824/Sources/Todo.WebApi/Logging/ConversationIdProviderMiddleware.cs">ConversationIdProviderMiddleware class</a>:</p>
<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">public</span> <span class="k">async</span> <span class="n">Task</span> <span class="nf">Invoke</span><span class="p">(</span><span class="n">HttpContext</span> <span class="n">httpContext</span><span class="p">)</span>
<span class="p">{</span>
<span class="k">if</span> <span class="p">(!</span><span class="n">httpContext</span><span class="p">.</span><span class="n">Request</span><span class="p">.</span><span class="n">Headers</span><span class="p">.</span><span class="nf">TryGetValue</span><span class="p">(</span><span class="n">ConversationId</span><span class="p">,</span> <span class="k">out</span> <span class="n">StringValues</span> <span class="n">conversationId</span><span class="p">)</span>
<span class="p">||</span> <span class="kt">string</span><span class="p">.</span><span class="nf">IsNullOrWhiteSpace</span><span class="p">(</span><span class="n">conversationId</span><span class="p">))</span>
<span class="p">{</span>
<span class="n">conversationId</span> <span class="p">=</span> <span class="n">Guid</span><span class="p">.</span><span class="nf">NewGuid</span><span class="p">().</span><span class="nf">ToString</span><span class="p">(</span><span class="s">"N"</span><span class="p">);</span>
<span class="n">httpContext</span><span class="p">.</span><span class="n">Request</span><span class="p">.</span><span class="n">Headers</span><span class="p">.</span><span class="nf">Add</span><span class="p">(</span><span class="n">ConversationId</span><span class="p">,</span> <span class="n">conversationId</span><span class="p">);</span>
<span class="p">}</span>
<span class="n">httpContext</span><span class="p">.</span><span class="n">Response</span><span class="p">.</span><span class="n">Headers</span><span class="p">.</span><span class="nf">Add</span><span class="p">(</span><span class="n">ConversationId</span><span class="p">,</span> <span class="n">conversationId</span><span class="p">);</span>
<span class="k">using</span> <span class="p">(</span><span class="n">logger</span><span class="p">.</span><span class="nf">BeginScope</span><span class="p">(</span><span class="k">new</span> <span class="n">Dictionary</span><span class="p"><</span><span class="kt">string</span><span class="p">,</span> <span class="kt">object</span><span class="p">></span>
<span class="p">{</span>
<span class="p">[</span><span class="n">ConversationId</span><span class="p">]</span> <span class="p">=</span> <span class="n">conversationId</span><span class="p">.</span><span class="nf">ToString</span><span class="p">()</span>
<span class="p">}))</span>
<span class="p">{</span>
<span class="k">await</span> <span class="nf">nextRequestDelegate</span><span class="p">(</span><span class="n">httpContext</span><span class="p">);</span>
<span class="p">}</span>
<span class="p">}</span>
</code></pre></div></div>
<p>In the lines above I’m checking whether a <code class="language-plaintext highlighter-rouge">ConversationId</code> has already been provided as an HTTP header; if not, I’m creating a new one and adding it to both the HTTP request and the response.<br />
I’m then creating a log scope around a dictionary containing the <code class="language-plaintext highlighter-rouge">ConversationId</code> key - this ensures the key-value pair will accompany <em>all</em> events created during this HTTP operation; I personally believe using key-value pairs makes the code more readable than other ways of setting the scope, but feel free to disagree.<br />
Identifying events belonging to one particular <em>conversation</em> is a matter of running the following Seq query:</p>
<div class="language-sql highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">select</span> <span class="o">*</span>
<span class="k">from</span> <span class="n">stream</span>
<span class="k">where</span> <span class="n">ConversationId</span> <span class="o">=</span> <span class="s1">'340436533dfd467e9659b3f7978981cb'</span>
</code></pre></div></div>
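<p>As a side note, <code class="language-plaintext highlighter-rouge">BeginScope</code> also accepts a message template instead of a dictionary - a minimal sketch of this alternative form (the middleware shown above uses the dictionary-based one):</p>

```cs
// Alternative scope form: the {ConversationId} hole in the message template
// becomes a property attached to every event logged inside the scope
using (logger.BeginScope("Conversation {ConversationId}", conversationId))
{
    await nextRequestDelegate(httpContext);
}
```

Both forms attach the same <code class="language-plaintext highlighter-rouge">ConversationId</code> property to the enclosed events; the dictionary-based form simply makes the key-value intent explicit.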
<p>This query will find several events:
<img src="/assets/structured-logging-in-aspnet-core-using-serilog-and-seq/5-query-events-by-conversation-id.png" alt="query-events-by-conversation-id" /></p>
<h2 id="use-cases">Use cases</h2>
<p>Aggregating application events (and not just this kind of event, as one could also collect events generated by the infrastructure running the application!) into one place and being able to query them using a structured language is very powerful: it can reduce issue-investigation time, lower employee and end-user frustration, increase the chance of making the best possible business decisions, and much more!</p>
<p><strong>IMPORTANT</strong>: I took the liberty of describing several use cases where I personally believe structured logging really shines, but most certainly they are not the only ones!</p>
<h3 id="debugging-use-case">Debugging</h3>
<p>One of the most common purposes we use logging for is <strong>debugging</strong>; since we usually must not attach a debugger to a production environment to investigate an issue, as doing so will most likely cause worse ones, we have to rely on reading the already logged events to figure out why a particular piece of the application behaves the way it does.</p>
<h4 id="identify-error-root-cause">Identify error root cause</h4>
<p>In case the application throws an exception, we usually log it and display a notification telling the end-user that an error occurred while processing their request. We can do better than that: let’s generate an error ID, include it in the message used for logging the exception, and make sure the end-user notification mentions it too, so that any bug report which eventually reaches the developers will include it. It’s way easier to run a query fetching the exception along with all of its relevant details once you know its associated error ID than it is to manually search through all events logged during the period of time mentioned in the bug report (usually the time when the report was created, though a report might be created some time after the bug was spotted).</p>
<p>We need to configure <a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/error-handling?view=aspnetcore-5.0">exception handling</a> inside the <a href="https://github.com/satrapu/aspnet-core-logging/blob/2cec7a7990a9ef2fdf61011baedfeff9d8da21e8/Sources/Todo.WebApi/Startup.cs#L120-L124">Startup.Configure method</a>:</p>
<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">public</span> <span class="k">void</span> <span class="nf">Configure</span><span class="p">(</span><span class="n">IApplicationBuilder</span> <span class="n">applicationBuilder</span><span class="p">,</span> <span class="n">IHostApplicationLifetime</span> <span class="n">hostApplicationLifetime</span><span class="p">,</span> <span class="n">ILogger</span><span class="p"><</span><span class="n">Startup</span><span class="p">></span> <span class="n">logger</span><span class="p">)</span>
<span class="p">{</span>
<span class="p">...</span>
<span class="n">applicationBuilder</span><span class="p">.</span><span class="nf">UseExceptionHandler</span><span class="p">(</span><span class="k">new</span> <span class="n">ExceptionHandlerOptions</span>
<span class="p">{</span>
<span class="n">ExceptionHandler</span> <span class="p">=</span> <span class="n">CustomExceptionHandler</span><span class="p">.</span><span class="n">HandleException</span><span class="p">,</span>
<span class="n">AllowStatusCode404Response</span> <span class="p">=</span> <span class="k">true</span>
<span class="p">});</span>
<span class="p">...</span>
<span class="p">}</span>
</code></pre></div></div>
<p>The <a href="https://github.com/satrapu/aspnet-core-logging/blob/2cec7a7990a9ef2fdf61011baedfeff9d8da21e8/Sources/Todo.WebApi/ExceptionHandling/CustomExceptionHandler.cs#L74-L90">CustomExceptionHandler.ConvertToProblemDetails method</a> converts the caught exception into a <a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.mvc.problemdetails?view=aspnetcore-5.0">ProblemDetails</a> instance allowing for a consistent error response:</p>
<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">private</span> <span class="k">static</span> <span class="n">ProblemDetails</span> <span class="nf">ConvertToProblemDetails</span><span class="p">(</span><span class="n">Exception</span> <span class="n">exception</span><span class="p">,</span> <span class="kt">bool</span> <span class="n">includeDetails</span><span class="p">)</span>
<span class="p">{</span>
<span class="kt">var</span> <span class="n">problemDetails</span> <span class="p">=</span> <span class="k">new</span> <span class="n">ProblemDetails</span>
<span class="p">{</span>
<span class="n">Status</span> <span class="p">=</span> <span class="p">(</span><span class="kt">int</span><span class="p">)</span> <span class="nf">GetHttpStatusCode</span><span class="p">(</span><span class="n">exception</span><span class="p">),</span>
<span class="n">Title</span> <span class="p">=</span> <span class="s">"An unexpected error occurred while trying to process the current request"</span><span class="p">,</span>
<span class="n">Detail</span> <span class="p">=</span> <span class="n">includeDetails</span> <span class="p">?</span> <span class="n">exception</span><span class="p">?.</span><span class="nf">ToString</span><span class="p">()</span> <span class="p">:</span> <span class="kt">string</span><span class="p">.</span><span class="n">Empty</span><span class="p">,</span>
<span class="n">Extensions</span> <span class="p">=</span>
<span class="p">{</span>
<span class="p">{</span><span class="n">ErrorData</span><span class="p">,</span> <span class="n">exception</span><span class="p">?.</span><span class="n">Data</span><span class="p">},</span>
<span class="p">{</span><span class="n">ErrorId</span><span class="p">,</span> <span class="n">Guid</span><span class="p">.</span><span class="nf">NewGuid</span><span class="p">().</span><span class="nf">ToString</span><span class="p">(</span><span class="s">"N"</span><span class="p">)},</span>
<span class="p">{</span><span class="n">ErrorKey</span><span class="p">,</span> <span class="nf">GetErrorKey</span><span class="p">(</span><span class="n">exception</span><span class="p">)}</span>
<span class="p">}</span>
<span class="p">};</span>
<span class="k">return</span> <span class="n">problemDetails</span><span class="p">;</span>
<span class="p">}</span>
</code></pre></div></div>
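<p>The post does not show where the exception itself gets logged together with this ID; a hypothetical sketch of that pairing (the message and the way the ID is passed around are illustrative, not the actual implementation):</p>

```cs
// Hypothetical: log the exception with the same error ID exposed to the
// end-user, so it can later be queried in Seq via the ErrorId property
string errorId = Guid.NewGuid().ToString("N");
logger.LogError(exception, "Error with ID {ErrorId} has occurred", errorId);
```

The key point is that the very same ID ends up in two places: the structured event stream and the error response shown to the end-user.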
<p>The above <code class="language-plaintext highlighter-rouge">ProblemDetails.Extensions</code> dictionary contains an <code class="language-plaintext highlighter-rouge">ErrorId</code> key which points to a plain <code class="language-plaintext highlighter-rouge">Guid</code> - this is our error ID, which allows us to run a query like the one below (given its value is <code class="language-plaintext highlighter-rouge">1d6640cd16974e84b5ef7deacc590a6b</code>):</p>
<div class="language-sql highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">select</span> <span class="o">*</span>
<span class="k">from</span> <span class="n">stream</span>
<span class="k">where</span> <span class="n">ErrorId</span> <span class="o">=</span> <span class="s1">'1d6640cd16974e84b5ef7deacc590a6b'</span>
</code></pre></div></div>
<p>Or you can run the equivalent:</p>
<div class="language-sql highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">select</span> <span class="o">*</span>
<span class="k">from</span> <span class="n">stream</span>
<span class="k">where</span> <span class="n">ProblemDetails</span><span class="p">.</span><span class="n">Extensions</span><span class="p">.</span><span class="n">errorId</span> <span class="o">=</span> <span class="s1">'1d6640cd16974e84b5ef7deacc590a6b'</span>
</code></pre></div></div>
<p>This query will find exactly one event:
<img src="/assets/structured-logging-in-aspnet-core-using-serilog-and-seq/6-query-exception-details-by-error-id.png" alt="query-exception-details-by-error-id" /></p>
<h4 id="fetch-conversation-events">Fetch events from same conversation</h4>
<p>Let’s assume we have received a bug report mentioning the above error ID, <code class="language-plaintext highlighter-rouge">1d6640cd16974e84b5ef7deacc590a6b</code>. We can query Seq to get the exception details, but what happened during that HTTP request up to that moment? To answer this question, I will reuse the aforementioned conversation ID concept. ASP.NET Core has built-in support for grouping requests, as documented <a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/logging/?view=aspnetcore-5.0#automatically-log-scope-with-spanid-traceid-and-parentid">here</a>; on the other hand, as I’m a rather curious person, I’ve implemented my own <a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/middleware/?view=aspnetcore-5.0">middleware</a> to inject the conversation ID via a log scope into each event generated during the same conversation, as already seen in the <a href="https://github.com/satrapu/aspnet-core-logging/blob/2cec7a7990a9ef2fdf61011baedfeff9d8da21e8/Sources/Todo.WebApi/Logging/ConversationIdProviderMiddleware.cs#L29-L47">ConversationIdProviderMiddleware.Invoke method</a>.</p>
<p>Identifying the appropriate <code class="language-plaintext highlighter-rouge">conversation ID</code> when we already know the <code class="language-plaintext highlighter-rouge">error ID</code> can be done via:</p>
<div class="language-sql highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">select</span> <span class="n">ConversationId</span>
<span class="k">from</span> <span class="n">stream</span>
<span class="k">where</span> <span class="n">ProblemDetails</span><span class="p">.</span><span class="n">Extensions</span><span class="p">.</span><span class="n">errorId</span> <span class="o">=</span> <span class="s1">'1d6640cd16974e84b5ef7deacc590a6b'</span>
</code></pre></div></div>
<p>This query will find the conversation ID:
<img src="/assets/structured-logging-in-aspnet-core-using-serilog-and-seq/7-identify-conversation-id-by-error-id.png" alt="identify-conversation-id-by-error-id" /></p>
<p>Fetching all events belonging to the conversation where the exception has occurred can be done via:</p>
<div class="language-sql highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">select</span> <span class="n">ToIsoString</span><span class="p">(</span><span class="o">@</span><span class="nb">Timestamp</span><span class="p">)</span> <span class="k">as</span> <span class="nb">Date</span><span class="p">,</span> <span class="o">@</span><span class="n">Arrived</span><span class="p">,</span> <span class="o">@</span><span class="n">Message</span>
<span class="k">from</span> <span class="n">stream</span>
<span class="k">where</span> <span class="n">ConversationId</span> <span class="o">=</span> <span class="s1">'f3d8ea64b29749d69b898f77ab472c7f'</span>
<span class="k">order</span> <span class="k">by</span> <span class="nb">Date</span> <span class="k">asc</span>
</code></pre></div></div>
<p>The above query projects the Seq built-in property <code class="language-plaintext highlighter-rouge">@Timestamp</code> into a new value and uses it to sort entries in ascending order:
<img src="/assets/structured-logging-in-aspnet-core-using-serilog-and-seq/8-fetch-events-from-given-conversation.png" alt="fetch-events-from-given-conversation" /></p>
<h3 id="analytics-use-case">Analytics</h3>
<p>When running an application in production, we usually want to understand how its end-users are using it in order to better shape it (e.g., investing the most effort into the most used features to make them more appealing and useful, or discarding features which are not used as much as expected). Structured logging can be used as a tool to get such data; this does not mean no other analytics tool should be used, it’s just that employing this one is very easy and can offer good results without much effort.</p>
<h4 id="identify-most-used-application-features">Identify most used application features</h4>
<p>Since the Todo Web API has been built around the concept of <a href="https://github.com/satrapu/aspnet-core-logging/tree/2cec7a7990a9ef2fdf61011baedfeff9d8da21e8/Sources/Todo.ApplicationFlows">application flows</a>, each processed business-related HTTP request will trigger the execution of a particular flow and its name, outcome and execution time, along with the user triggering it, are <a href="https://github.com/satrapu/aspnet-core-logging/blob/2cec7a7990a9ef2fdf61011baedfeff9d8da21e8/Sources/Todo.ApplicationFlows/NonTransactionalBaseApplicationFlow.cs#L48-L76">logged</a> using log scopes:</p>
<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">public</span> <span class="k">async</span> <span class="n">Task</span><span class="p"><</span><span class="n">TOutput</span><span class="p">></span> <span class="nf">ExecuteAsync</span><span class="p">(</span><span class="n">TInput</span> <span class="n">input</span><span class="p">,</span> <span class="n">IPrincipal</span> <span class="n">flowInitiator</span><span class="p">)</span>
<span class="p">{</span>
<span class="k">using</span> <span class="p">(</span><span class="n">logger</span><span class="p">.</span><span class="nf">BeginScope</span><span class="p">(</span><span class="k">new</span> <span class="n">Dictionary</span><span class="p"><</span><span class="kt">string</span><span class="p">,</span> <span class="kt">object</span><span class="p">></span> <span class="p">{</span> <span class="p">[</span><span class="n">ApplicationFlowName</span><span class="p">]</span> <span class="p">=</span> <span class="n">flowName</span> <span class="p">}))</span>
<span class="p">{</span>
<span class="kt">bool</span> <span class="n">isSuccess</span> <span class="p">=</span> <span class="k">false</span><span class="p">;</span>
<span class="n">Stopwatch</span> <span class="n">stopwatch</span> <span class="p">=</span> <span class="n">Stopwatch</span><span class="p">.</span><span class="nf">StartNew</span><span class="p">();</span>
<span class="kt">string</span> <span class="n">flowInitiatorName</span> <span class="p">=</span> <span class="n">flowInitiator</span><span class="p">.</span><span class="nf">GetNameOrDefault</span><span class="p">();</span>
<span class="k">try</span>
<span class="p">{</span>
<span class="n">logger</span><span class="p">.</span><span class="nf">LogInformation</span><span class="p">(</span>
<span class="s">"User [{FlowInitiator}] has started executing application flow [{ApplicationFlowName}] ..."</span><span class="p">,</span>
<span class="n">flowInitiatorName</span><span class="p">,</span> <span class="n">flowName</span><span class="p">);</span>
<span class="n">TOutput</span> <span class="n">output</span> <span class="p">=</span> <span class="k">await</span> <span class="nf">InternalExecuteAsync</span><span class="p">(</span><span class="n">input</span><span class="p">,</span> <span class="n">flowInitiator</span><span class="p">);</span>
<span class="n">isSuccess</span> <span class="p">=</span> <span class="k">true</span><span class="p">;</span>
<span class="k">return</span> <span class="n">output</span><span class="p">;</span>
<span class="p">}</span>
<span class="k">finally</span>
<span class="p">{</span>
<span class="n">stopwatch</span><span class="p">.</span><span class="nf">Stop</span><span class="p">();</span>
<span class="n">logger</span><span class="p">.</span><span class="nf">LogInformation</span><span class="p">(</span>
<span class="s">"User [{FlowInitiator}] has finished executing application flow [{ApplicationFlowName}] "</span>
<span class="p">+</span> <span class="s">"with the outcome: [{ApplicationFlowOutcome}]; "</span>
<span class="p">+</span> <span class="s">"time taken: [{ApplicationFlowDurationAsTimeSpan}] ({ApplicationFlowDurationInMillis}ms)"</span><span class="p">,</span>
<span class="n">flowInitiatorName</span><span class="p">,</span> <span class="n">flowName</span><span class="p">,</span> <span class="n">isSuccess</span> <span class="p">?</span> <span class="s">"success"</span> <span class="p">:</span> <span class="s">"failure"</span><span class="p">,</span> <span class="n">stopwatch</span><span class="p">.</span><span class="n">Elapsed</span><span class="p">,</span>
<span class="n">stopwatch</span><span class="p">.</span><span class="n">ElapsedMilliseconds</span><span class="p">);</span>
<span class="p">}</span>
<span class="p">}</span>
<span class="p">}</span>
</code></pre></div></div>
<p>There are several important things worth mentioning regarding the above code fragment:</p>
<ul>
<li><strong>{FlowInitiator}</strong>: represents the obfuscated name of the user who has triggered the execution of the flow</li>
<li><strong>{ApplicationFlowName}</strong>: pretty obvious</li>
<li><strong>{ApplicationFlowOutcome}</strong>: represents the outcome of the flow: either <strong>success</strong> or <strong>failure</strong></li>
<li><strong>{ApplicationFlowDurationAsTimeSpan}</strong>: a <a href="https://docs.microsoft.com/en-us/dotnet/standard/base-types/standard-timespan-format-strings#the-constant-c-format-specifier">string representation</a> of the time needed to execute the flow</li>
<li><strong>{ApplicationFlowDurationInMillis}</strong>: represents the number of milliseconds spent executing the flow</li>
</ul>
<p>Since these tokens will be available for later querying in Seq, we can see which application flows are the most used, along with their minimum, average and maximum durations, thus allowing business stakeholders to prioritize development work; basically, developers should focus first on reducing the execution time of the most used application flows with the largest average execution durations.<br />
Run the below query to fetch this information:</p>
<div class="language-sql highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">select</span>
<span class="k">count</span><span class="p">(</span><span class="o">*</span><span class="p">)</span> <span class="k">as</span> <span class="n">NumberOfCalls</span>
<span class="p">,</span> <span class="k">min</span><span class="p">(</span><span class="n">ApplicationFlowDurationInMillis</span><span class="p">)</span> <span class="k">as</span> <span class="n">MinDurationInMillis</span>
<span class="p">,</span> <span class="n">mean</span><span class="p">(</span><span class="n">ApplicationFlowDurationInMillis</span><span class="p">)</span> <span class="k">as</span> <span class="n">AvgDurationInMillis</span>
<span class="p">,</span> <span class="k">max</span><span class="p">(</span><span class="n">ApplicationFlowDurationInMillis</span><span class="p">)</span> <span class="k">as</span> <span class="n">MaxDurationInMillis</span>
<span class="k">from</span> <span class="n">stream</span>
<span class="k">where</span> <span class="o">@</span><span class="nb">Timestamp</span> <span class="o">>=</span> <span class="n">Now</span><span class="p">()</span> <span class="o">-</span> <span class="mi">8</span><span class="n">h</span>
<span class="k">and</span> <span class="n">ApplicationFlowName</span> <span class="o"><></span> <span class="s1">'N/A'</span>
<span class="k">group</span> <span class="k">by</span> <span class="n">ApplicationFlowName</span>
<span class="k">having</span> <span class="n">AvgDurationInMillis</span> <span class="o">></span> <span class="mi">5</span>
<span class="k">order</span> <span class="k">by</span> <span class="n">NumberOfCalls</span> <span class="k">desc</span>
</code></pre></div></div>
<p>The above query will discard several categories of events:</p>
<ul>
<li>Older than 8 hours (<code class="language-plaintext highlighter-rouge">@Timestamp >= Now() - 8h</code>)</li>
<li>Not belonging to a specific business-related application flow (<code class="language-plaintext highlighter-rouge">ApplicationFlowName <> 'N/A'</code>)</li>
<li>Belonging to application flows which have taken, on average, at most 5 milliseconds to execute (<code class="language-plaintext highlighter-rouge">having AvgDurationInMillis > 5</code>)</li>
</ul>
<p>The query results look something like this:
<img src="/assets/structured-logging-in-aspnet-core-using-serilog-and-seq/9-fetch-data-for-analytics-purposes.png" alt="fetch-data-for-analytics-purposes" /></p>
<p>Based on the above data, it seems that developers should start looking into optimizing the <code class="language-plaintext highlighter-rouge">Events/ApplicationStarted/NotifyListeners</code>, <code class="language-plaintext highlighter-rouge">Events/ApplicationStarted</code> and <code class="language-plaintext highlighter-rouge">TodoItem/Delete</code> application flows. Since the first two occur when the application starts, deleting data should most likely be optimized first!<br />
The bottom line is that structured logging helps in making business and technical decisions, as long as the relevant data has been properly logged.</p>
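<p>To make such analytics possible, the duration must of course be logged as a structured property in the first place. The sketch below is a hedged illustration of how an application flow event might be emitted using Serilog; the <code class="language-plaintext highlighter-rouge">ApplicationFlowName</code> and <code class="language-plaintext highlighter-rouge">ApplicationFlowDurationInMillis</code> property names match the ones queried above, while the <code class="language-plaintext highlighter-rouge">ApplicationFlowLogger</code> class itself is hypothetical, not part of the actual application:</p>

```csharp
using System;
using System.Diagnostics;
using Serilog;

// Hypothetical helper: wraps a piece of work and emits one structured event
// per application flow. The names between braces in the message template
// ("ApplicationFlowName", "ApplicationFlowDurationInMillis") become
// first-class event properties, which is what the Seq queries above
// filter, group and aggregate on.
public class ApplicationFlowLogger
{
    private readonly ILogger logger;

    public ApplicationFlowLogger(ILogger logger) => this.logger = logger;

    public void Execute(string flowName, Action flow)
    {
        var stopwatch = Stopwatch.StartNew();
        flow();
        stopwatch.Stop();

        logger.Information(
            "Application flow {ApplicationFlowName} has completed in {ApplicationFlowDurationInMillis} ms",
            flowName,
            stopwatch.ElapsedMilliseconds);
    }
}
```

<p>Because the template properties are captured as structured data rather than flattened into text, no parsing is needed on the Seq side to run the aggregation shown above.</p>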
<h3 id="auditing-use-case">Auditing</h3>
<p>Auditing is the process of recording the actions users perform against particular parts of the application.</p>
<h4 id="audit-user-actions">Audit user actions</h4>
<p>Knowing which user has performed a specific action helps stakeholders make the right decision about where to invest the most effort when developing and/or maintaining an application.<br />
Given that the previously mentioned <strong>application flow</strong> concept logs both the flow name and its initiator, we can run the below query to understand who has done what and when:</p>
<div class="language-sql highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">select</span>
<span class="n">ApplicationFlowName</span>
<span class="p">,</span> <span class="n">FlowInitiator</span>
<span class="p">,</span> <span class="n">ToIsoString</span><span class="p">(</span><span class="o">@</span><span class="nb">Timestamp</span><span class="p">)</span> <span class="k">as</span> <span class="nb">Date</span>
<span class="k">from</span> <span class="n">stream</span>
<span class="k">where</span> <span class="o">@</span><span class="nb">Timestamp</span> <span class="o">>=</span> <span class="n">Now</span><span class="p">()</span> <span class="o">-</span> <span class="mi">2</span><span class="n">h</span>
<span class="k">and</span> <span class="n">FlowInitiator</span> <span class="o"><></span> <span class="k">NULL</span>
<span class="k">and</span> <span class="k">not</span> <span class="n">StartsWith</span><span class="p">(</span><span class="n">ApplicationFlowName</span><span class="p">,</span> <span class="s1">'Events/Application'</span><span class="p">)</span>
<span class="k">and</span> <span class="n">ApplicationFlowName</span> <span class="k">not</span> <span class="k">in</span> <span class="p">[</span><span class="s1">'ApplicationFlowServingTestingPurposes'</span><span class="p">,</span> <span class="s1">'Database/RunMigrations'</span><span class="p">,</span> <span class="s1">'N/A'</span><span class="p">]</span>
<span class="k">order</span> <span class="k">by</span> <span class="nb">Date</span> <span class="k">desc</span>
<span class="k">limit</span> <span class="mi">10</span>
</code></pre></div></div>
<p>The above query will discard several categories of events:</p>
<ul>
<li>Older than 2 hours (<code class="language-plaintext highlighter-rouge">@Timestamp >= Now() - 2h</code>)</li>
<li>Not having a user associated with them (<code class="language-plaintext highlighter-rouge">FlowInitiator <> NULL</code>)</li>
<li>Having their names start with the <code class="language-plaintext highlighter-rouge">Events/Application</code> string (<code class="language-plaintext highlighter-rouge">not StartsWith(ApplicationFlowName, 'Events/Application')</code>)</li>
<li>Having their names appear in a given list (<code class="language-plaintext highlighter-rouge">ApplicationFlowName not in ['ApplicationFlowServingTestingPurposes', 'Database/RunMigrations', 'N/A']</code>)</li>
</ul>
<p>Additionally, the query will fetch only the 10 most recent events matching the given search criteria.<br />
The query results look something like this:
<img src="/assets/structured-logging-in-aspnet-core-using-serilog-and-seq/10-fetch-data-for-auditing-user-actions-purposes.png" alt="fetch-data-for-auditing-user-actions-purposes" /></p>
<h3 id="performance-use-case">Performance</h3>
<p>Knowing the hot spots of an application is crucial: end-users are not happy dealing with an application which responds slowly, and stakeholders cannot be happy with ever-increasing bills for a sub-optimal application which consumes way too much CPU, memory and storage.<br />
You get the point: nobody is happy with an under-performant application. On the other hand, making an application performant first requires knowing what is not performant, and here structured logging can help as well, since we can query for events representing the data we care about, like: which operation took the most time to finish, what was the minimum amount of memory used by the application in the last 24 hours, and so on.</p>
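<p>Of course, such metrics can only be queried if they are logged first. As a hedged sketch, the hypothetical background service below shows one way of periodically emitting a memory usage event; the <code class="language-plaintext highlighter-rouge">MemoryUsedInBytes</code> property name is an assumption for illustration purposes, not something ASP.NET Core logs out of the box:</p>

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

// Hypothetical ASP.NET Core background service which periodically logs the
// size of the managed heap as a structured event; a Seq query could then
// compute min/mean/max over the "MemoryUsedInBytes" property.
public class MemoryUsageReporter : BackgroundService
{
    private readonly ILogger<MemoryUsageReporter> logger;

    public MemoryUsageReporter(ILogger<MemoryUsageReporter> logger) => this.logger = logger;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            logger.LogInformation(
                "Application currently uses {MemoryUsedInBytes} bytes of managed memory",
                GC.GetTotalMemory(forceFullCollection: false));

            await Task.Delay(TimeSpan.FromMinutes(1), stoppingToken);
        }
    }
}
```

<p>Registering such a service via <code class="language-plaintext highlighter-rouge">services.AddHostedService&lt;MemoryUsageReporter&gt;()</code> would make these events flow through whatever logging providers are configured, Serilog and Seq included.</p>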
<h4 id="identify-slowest-sql-queries">Identify slowest SQL queries</h4>
<p>I was quite amazed to find out that Entity Framework Core logs the time needed to execute each SQL statement via the <code class="language-plaintext highlighter-rouge">elapsed</code> property accompanying the <code class="language-plaintext highlighter-rouge">Microsoft.EntityFrameworkCore.Database.Command.CommandExecuted</code> events.<br />
Having this knowledge, I can fetch the slowest top 3 SQL commands executed by this ORM during the past 4 hours via:</p>
<div class="language-sql highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">select</span>
<span class="o">@</span><span class="n">Id</span> <span class="k">as</span> <span class="n">ID</span>
<span class="p">,</span> <span class="n">commandText</span> <span class="k">as</span> <span class="n">RawSql</span>
<span class="p">,</span> <span class="k">parameters</span> <span class="k">as</span> <span class="k">Parameters</span>
<span class="p">,</span> <span class="n">ToNumber</span><span class="p">(</span><span class="n">elapsed</span><span class="p">)</span> <span class="k">as</span> <span class="n">ExecutionTimeInMillis</span>
<span class="k">from</span> <span class="n">stream</span>
<span class="k">where</span>
<span class="o">@</span><span class="nb">Timestamp</span> <span class="o">>=</span> <span class="n">Now</span><span class="p">()</span> <span class="o">-</span> <span class="mi">4</span><span class="n">h</span>
<span class="k">and</span> <span class="n">EventId</span><span class="p">.</span><span class="n">Name</span> <span class="o">=</span> <span class="s1">'Microsoft.EntityFrameworkCore.Database.Command.CommandExecuted'</span>
<span class="k">and</span> <span class="n">ExecutionTimeInMillis</span> <span class="o">></span> <span class="mi">5</span>
<span class="k">and</span> <span class="n">commandText</span> <span class="o"><></span> <span class="k">NULL</span>
<span class="k">and</span> <span class="n">commandText</span> <span class="k">NOT</span> <span class="k">LIKE</span> <span class="s1">'%FROM pg_catalog%'</span>
<span class="k">and</span> <span class="n">commandText</span> <span class="k">NOT</span> <span class="k">LIKE</span> <span class="s1">'%EFMigrationsHistory%'</span>
<span class="k">order</span> <span class="k">by</span> <span class="n">ExecutionTimeInMillis</span> <span class="k">desc</span>
<span class="k">limit</span> <span class="mi">3</span>
</code></pre></div></div>
<p>The query results look something like this:
<img src="/assets/structured-logging-in-aspnet-core-using-serilog-and-seq/11-identify-slowest-sql-commands.png" alt="identify-slowest-sql-commands" /></p>
<p>Using the above information, developers, with the help of a capable DBA, will be able to optimize database access by focusing on the slowest queries. Of course, there are other ways of identifying slow queries (e.g. <a href="https://www.cybertec-postgresql.com/en/3-ways-to-detect-slow-queries-in-postgresql/">3 ways to detect slow queries in PostgreSQL</a>), but since we’re already using structured logging, this is one of the easiest ways and will also work with <a href="https://docs.microsoft.com/en-us/ef/core/providers/?tabs=dotnet-core-cli#current-providers">any database</a> supported by Entity Framework Core!</p>
<h4 id="identify-slowest-application-features">Identify slowest application features</h4>
<p>Identifying such features is rather easy, since we’re using the same aforementioned application flow concept, which logs the time spent executing each flow.<br />
We will query the top 3 slowest application flows via:</p>
<div class="language-sql highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">select</span> <span class="o">*</span><span class="p">,</span> <span class="n">ToNumber</span><span class="p">(</span><span class="n">ApplicationFlowDurationInMillis</span><span class="p">)</span> <span class="k">as</span> <span class="n">ExecutionTimeInMillis</span>
<span class="k">from</span> <span class="n">stream</span>
<span class="k">where</span>
<span class="o">@</span><span class="nb">Timestamp</span> <span class="o">>=</span> <span class="n">Now</span><span class="p">()</span> <span class="o">-</span> <span class="mi">4</span><span class="n">h</span>
<span class="k">and</span> <span class="n">Has</span><span class="p">(</span><span class="n">ApplicationFlowOutcome</span><span class="p">)</span>
<span class="k">and</span> <span class="n">ExecutionTimeInMillis</span> <span class="o">></span> <span class="mi">250</span>
<span class="k">order</span> <span class="k">by</span> <span class="n">ExecutionTimeInMillis</span> <span class="k">desc</span>
<span class="k">limit</span> <span class="mi">3</span>
</code></pre></div></div>
<p>The above query will fetch data from the past 4 hours and will ignore flows which took 250 milliseconds or less to complete; the results look something like this:
<img src="/assets/structured-logging-in-aspnet-core-using-serilog-and-seq/12-identify-slowest-application-flows.png" alt="identify-slowest-application-flows" /></p>
<p>The first result (deleting a database row) has a rather unusual execution time (more than 70 seconds), so the developer should start improving the performance of the application by focusing first on optimizing this application flow.<br />
(Well, this execution time is mostly due to debugging a test method on my machine, but <em>that</em> developer doesn’t know this yet)</p>
<h2 id="references">References</h2>
<ul>
<li>Logging in ASP.NET Core
<ul>
<li>Reading
<ul>
<li><a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/logging/?view=aspnetcore-5.0">Logging in .NET Core and ASP.NET Core</a></li>
<li><a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/logging/loggermessage?view=aspnetcore-5.0">High-performance logging with LoggerMessage in ASP.NET Core</a></li>
</ul>
</li>
</ul>
</li>
<li>Serilog
<ul>
<li>Reading
<ul>
<li><a href="https://serilog.net/">Official Site</a></li>
<li><a href="https://github.com/serilog/serilog/wiki">Wiki</a></li>
<li><a href="https://github.com/serilog/serilog/wiki/Debugging-and-Diagnostics">Debugging and Diagnostics</a></li>
<li><a href="https://github.com/serilog/serilog/wiki/Enrichment">Enrichment</a></li>
<li><a href="https://github.com/serilog/serilog/wiki/Formatting-Output">Formatting Output</a></li>
<li><a href="https://benfoster.io/blog/serilog-best-practices/">Serilog Best Practices by Ben Foster</a></li>
</ul>
</li>
<li>Extensions
<ul>
<li><a href="https://github.com/serilog/serilog-expressions">Serilog Expressions</a>
<ul>
<li><a href="https://stackoverflow.com/a/44035241/5786708">Define filter in configuration file</a></li>
</ul>
</li>
<li><a href="https://github.com/serilog/serilog-extensions-hosting">Serilog.Extensions.Hosting</a></li>
<li>Many, many others - just google them</li>
</ul>
</li>
<li>Alternatives
<ul>
<li><a href="https://logging.apache.org/log4net/">Log4Net</a></li>
<li><a href="https://nlog-project.org/">NLog</a></li>
<li>Others</li>
</ul>
</li>
</ul>
</li>
<li>Seq
<ul>
<li>Reading
<ul>
<li><a href="https://datalust.co/seq">Official Site</a></li>
<li><a href="https://docs.datalust.co/docs">Official Documentation</a></li>
<li><a href="https://docs.datalust.co/docs/getting-started-with-docker">Getting Started with Docker</a></li>
<li><a href="https://docs.datalust.co/docs/using-serilog#structured-logging-with-serilog-and-seq">Structured Logging with Serilog and Seq</a></li>
<li><a href="https://github.com/datalust/seq-cheat-sheets">Seq Cheat Sheets</a></li>
</ul>
</li>
<li>Tools
<ul>
<li><a href="https://github.com/datalust/seqcli">seqcli</a></li>
<li><a href="https://github.com/datalust/seq-forwarder">Seq Forwarder</a></li>
<li><a href="https://github.com/datalust/seq-input-healthcheck">Seq Health Check</a></li>
</ul>
</li>
<li>Alternatives
<ul>
<li><a href="https://www.elastic.co/what-is/elk-stack">ELK Stack</a></li>
<li>Others</li>
</ul>
</li>
</ul>
</li>
<li><a href="https://nblumhardt.com/">Nicholas Blumhardt’s blog</a></li>
</ul>
<h2 id="conclusion">Conclusion</h2>
<p>Structured logging is not just for debugging purposes, as it can be used for various other purposes, like: spotting performance bottlenecks, auditing, analytics, distributed tracing and a lot more.<br />
Using structured logging is definitely one of the best practices a developer can employ to help both business and technical stakeholders make better and more informed decisions which positively impact the outcome of a particular software system.<br />
The only downside to structured logging I see right now is that you have to learn a new query language for each server you are going to use for querying events - for instance, you need to learn one language when using Seq and another one when using Azure Application Insights - but I think this price is well worth paying due to the amazing amount of information you can extract.</p>
<p>So what are you waiting for? Go put some structure into your events and query them like a boss!</p>Use Docker Compose when running integration tests with Azure Pipelines2020-09-03T08:18:07+00:002020-09-03T08:18:07+00:00https://crossprogramming.com/2020/09/03/use-docker-compose-when-running-integration-tests-with-azure-pipelines<ul>
<li><a href="#context">Context</a></li>
<li><a href="#motivation">Why should I use Docker Compose?</a></li>
<li><a href="#solution-high-level-view">Solution high-level view</a></li>
<li><a href="#solution-low-level-view">Solution low-level view</a>
<ul>
<li><a href="#install-docker-on-macos">Install Docker on macOS-based agents</a></li>
<li><a href="#run-powershell-script">Run PowerShell script</a></li>
<li><a href="#prepare-compose-environment-variables">Prepare compose environment variables</a></li>
<li><a href="#start-compose-service">Start compose service</a></li>
<li><a href="#identify-compose-service-metadata">Identify compose service metadata</a>
<ul>
<li><a href="#identify-container-id">Identify container ID</a></li>
<li><a href="#identify-compose-service-name">Identify compose service name</a></li>
</ul>
</li>
<li><a href="#wait-for-compose-service">Wait for compose service to become healthy</a></li>
<li><a href="#identify-host-port">Identify compose service host port</a></li>
<li><a href="#expose-host-port-as-pipeline-variable">Expose host port as a pipeline variable</a></li>
<li><a href="#run-integration-tests">Run integration tests</a></li>
</ul>
</li>
<li><a href="#issues">Issues</a>
<ul>
<li><a href="#compose-file-version">Compose file version</a></li>
<li><a href="#docker-compose-writing-to-sdterr">Docker Compose writes to standard error stream</a></li>
<li><a href="#unstable-windows-docker-image">Unstable Windows Docker image</a></li>
</ul>
</li>
<li><a href="#other-use-cases">Other use cases</a></li>
<li><a href="#conclusion">Conclusion</a></li>
</ul>
<hr />
<!-- markdownlint-disable MD033 -->
<h2 id="context">Context</h2>
<p>In my previous Azure DevOps related <a href="https://crossprogramming.com/2019/12/27/use-docker-when-running-integration-tests-with-azure-pipelines.html">post</a> I have presented two approaches for running integration tests targeting a PostgreSQL database hosted in a Docker container:</p>
<ul>
<li><a href="https://crossprogramming.com/2019/12/27/use-docker-when-running-integration-tests-with-azure-pipelines.html#service-containers">Service containers</a> - these containers run on Linux and Windows-based agents only</li>
<li><a href="https://crossprogramming.com/2019/12/27/use-docker-when-running-integration-tests-with-azure-pipelines.html#self-managed-docker-containers">Self-managed containers</a> - these containers run on Linux, macOS and Windows-based agents.</li>
</ul>
<p>This post has several goals:</p>
<ul>
<li>Run the same tests against the same database, but this time using <a href="https://docs.docker.com/compose/">Docker Compose</a> instead of plain Docker containers</li>
<li>Run Docker Compose on Linux, macOS and Windows-based agents</li>
<li>Create a generic solution capable of running various compose workloads</li>
</ul>
<p>The source code used by this post can be found here: <a href="https://github.com/satrapu/aspnet-core-logging/tree/feature/use-docker-compose-for-it">feature/use-docker-compose-for-it</a>.</p>
<h2 id="motivation">Why should I use Docker Compose?</h2>
<p>Using Docker Compose for orchestrating services needed to run integration tests instead of plain Docker containers provides several advantages:</p>
<ul>
<li><strong>Simpler Azure Pipeline</strong>: Docker Compose allows orchestrating several containers using one <a href="https://docs.docker.com/compose/compose-file/">compose file</a>, so I only need one build step in my Azure Pipeline to ensure all services needed to run my tests are up & running, while using plain Docker containers for the same goal means defining one build step per container; additionally, declaring more services in the compose file does not require any extra build steps</li>
<li><strong>Avoid “Works on My Machine” syndrome</strong>: I can run the compose services on my development machine, thus ensuring both developer and Azure Pipeline have the same experience when running integration tests; this can also be achieved using plain Docker containers, but with more effort, since you need to run one <code class="language-plaintext highlighter-rouge">docker container run</code> command per service and optionally setting up other things like: volumes, networks, etc.</li>
<li><strong>Shorter feedback loop</strong>: If I change anything in the compose file, I can quickly run <code class="language-plaintext highlighter-rouge">docker-compose up</code> and verify whether everything still works as expected, without the need to re-run my entire CI pipeline</li>
</ul>
<h2 id="solution-high-level-view">Solution high-level view</h2>
<p>My solution to using Docker Compose when running integration tests with Azure Pipelines consists of <a href="https://github.com/satrapu/aspnet-core-logging/blob/feature/use-docker-compose-for-it/Build/db4it-compose/docker-compose.yml">one compose file</a> (pretty obvious, since I want to run Docker Compose) and <a href="https://github.com/satrapu/aspnet-core-logging/blob/feature/use-docker-compose-for-it/Build/RunComposeServices.ps1">one PowerShell script</a>.<br />
This script <a href="https://github.com/satrapu/aspnet-core-logging/blob/feature/use-docker-compose-for-it/Build/RunComposeServices.ps1#L93-L96">starts</a> the compose service declared inside the compose file and periodically <a href="https://github.com/satrapu/aspnet-core-logging/blob/feature/use-docker-compose-for-it/Build/RunComposeServices.ps1#L160-L162">polls</a> the service to check whether it has reached its <a href="https://github.com/satrapu/aspnet-core-logging/blob/feature/use-docker-compose-for-it/Build/db4it-compose/docker-compose.yml#L9-L19">declared health state</a>. Once the service is healthy (i.e. ready to handle incoming connections to the PostgreSQL database), the script will also <a href="https://github.com/satrapu/aspnet-core-logging/blob/feature/use-docker-compose-for-it/Build/RunComposeServices.ps1#L257-L258">register</a> a variable storing the host port mapped by Docker to the container port (5432 for a PostgreSQL database), so that subsequent build steps have the chance of interacting with the database using this port. When the build step used for running the integration tests starts, it will <a href="https://github.com/satrapu/aspnet-core-logging/blob/feature/use-docker-compose-for-it/Build/azure-pipelines.job-template.yml#L250-L255">pass</a> the connection string (having <a href="http://www.npgsql.org/doc/connection-string-parameters.html">its port</a> set to the previously identified host port) pointing to the database as an environment variable to the <code class="language-plaintext highlighter-rouge">dotnet test</code> command (similar to the approach documented in the <a href="https://crossprogramming.com/2019/12/27/use-docker-when-running-integration-tests-with-azure-pipelines.html#run-integration-tests">previous post</a>), so when the tests run, they will be able to communicate with a running database.</p>
<p><strong>IMPORTANT</strong>: Since my CI pipeline is currently using Docker containers for running tests only, my compose file does not declare any Docker volume! Based on your scenarios, you might need to declare such volumes in your compose file.</p>
<h2 id="solution-low-level-view">Solution low-level view</h2>
<h3 id="install-docker-on-macos">Install Docker on macOS-based agents</h3>
<p>In order to be able to run a compose workload on macOS-based Azure DevOps agents, I only need to install Docker for macOS, as already documented in my <a href="https://crossprogramming.com/2019/12/27/use-docker-when-running-integration-tests-with-azure-pipelines.html#run-docker-on-macos">previous post</a> - the Docker package I’m using includes Docker Compose - sweet!</p>
<h3 id="run-powershell-script">Run PowerShell script</h3>
<p>The first step for starting the compose workload in my pipeline is <a href="https://github.com/satrapu/aspnet-core-logging/blob/feature/use-docker-compose-for-it/Build/azure-pipelines.job-template.yml#L204-L229">running</a> the aforementioned PowerShell script using a <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/utility/powershell?view=azure-devops">PowerShell@2</a> Azure DevOps task:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="pi">-</span> <span class="na">task</span><span class="pi">:</span> <span class="s">PowerShell@2</span>
<span class="s">name</span><span class="pi">:</span> <span class="s1">'</span><span class="s">start_compose_services_used_by_integration_tests'</span>
<span class="na">displayName</span><span class="pi">:</span> <span class="s1">'</span><span class="s">Start</span><span class="nv"> </span><span class="s">compose</span><span class="nv"> </span><span class="s">services</span><span class="nv"> </span><span class="s">used</span><span class="nv"> </span><span class="s">by</span><span class="nv"> </span><span class="s">integration</span><span class="nv"> </span><span class="s">tests'</span>
<span class="na">inputs</span><span class="pi">:</span>
<span class="na">targetType</span><span class="pi">:</span> <span class="s1">'</span><span class="s">filePath'</span>
<span class="na">filePath</span><span class="pi">:</span> <span class="s1">'</span><span class="s">$(Build.SourcesDirectory)/Build/RunComposeServices.ps1'</span>
<span class="na">arguments</span><span class="pi">:</span> <span class="s">...</span>
<span class="na">errorActionPreference</span><span class="pi">:</span> <span class="s1">'</span><span class="s">Continue'</span>
<span class="na">failOnStderr</span><span class="pi">:</span> <span class="s">False</span>
<span class="na">workingDirectory</span><span class="pi">:</span> <span class="s">$(Build.SourcesDirectory)</span>
</code></pre></div></div>
<p>Check the <a href="#docker-compose-writing-to-sdterr">section below</a> in order to understand the reason behind setting the <strong>errorActionPreference</strong> and <strong>failOnStderr</strong> to the particular values from above.</p>
<h3 id="prepare-compose-environment-variables">Prepare compose environment variables</h3>
<p>My compose file looks something like this (some details were omitted for brevity):</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">version</span><span class="pi">:</span> <span class="s2">"</span><span class="s">3.7"</span>
<span class="na">services</span><span class="pi">:</span>
<span class="na">db4it</span><span class="pi">:</span>
<span class="na">image</span><span class="pi">:</span> <span class="s2">"</span><span class="s">${db_docker_image}"</span>
<span class="s">...</span>
<span class="na">environment</span><span class="pi">:</span>
<span class="na">POSTGRES_DB</span><span class="pi">:</span> <span class="s2">"</span><span class="s">${db_name}"</span>
<span class="na">POSTGRES_USER</span><span class="pi">:</span> <span class="s2">"</span><span class="s">${db_username}"</span>
<span class="na">POSTGRES_PASSWORD</span><span class="pi">:</span> <span class="s2">"</span><span class="s">${db_password}"</span>
<span class="na">ports</span><span class="pi">:</span>
<span class="pi">-</span> <span class="m">5432</span>
</code></pre></div></div>
<p>The file above contains one service, <strong>db4it</strong>, which references several variables, like: <strong>${db_docker_image}</strong>, <strong>${db_password}</strong>, etc.; these need to be replaced with actual values before the compose service starts.<br />
In order to replace <strong>${db_docker_image}</strong>, which represents the name of the PostgreSQL Docker image, and since I want to run a compose workload on various Azure DevOps agents which will run PostgreSQL as Linux and Windows containers, I have several options:</p>
<ul>
<li>Create 2 compose files: one for running PostgreSQL as a Linux container and another one for running PostgreSQL as a Windows container</li>
<li>Use a parameterized compose file and replace each parameter with an environment variable at run time</li>
<li>Some other option?</li>
</ul>
<p>Since Docker Compose <a href="https://docs.docker.com/compose/environment-variables/">knows</a> how to handle environment variables and since I’m already using job parameters, I’ve chosen the second option. Another reason (maybe the most important one) is that storing sensitive pieces of information in files put under source control is a security risk, so I’m not going to include the database password inside the compose file, but store it as an Azure DevOps <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#secret-variables">secret variable</a> and pass it to the script used for starting compose services as a <a href="https://github.com/satrapu/aspnet-core-logging/blob/feature/use-docker-compose-for-it/Build/azure-pipelines.yml#L89">parameter</a>.</p>
<p>Before Docker Compose starts running the service, it will search various places in order to find all the values it can use for replacing the appropriate variables. My PowerShell script has two optional parameters which allow specifying such variables; these will be promoted to environment variables, thus enabling Docker Compose to find and use them in the compose file.
One such parameter represents the relative path to a <a href="https://github.com/satrapu/aspnet-core-logging/blob/feature/use-docker-compose-for-it/Build/db4it-compose/.env">.env file</a>, while the second one represents a hash table where variables are provided as key-value pairs; one could use both or just one of them for specifying the compose variables. I have used the .env file for storing non-sensitive key-value pairs, since this file is put under <a href="https://github.com/satrapu/aspnet-core-logging/blob/feature/use-docker-compose-for-it/Build/db4it-compose/.env">source control</a>; on the other hand, I have used the hash table for sensitive ones (e.g. the ${db_password} value).<br />
Please note that the key-value pairs found inside the hash table override the ones found inside the .env file - this is by design.</p>
<p>Passing the relative path to the .env file (to be <a href="https://github.com/satrapu/aspnet-core-logging/blob/feature/use-docker-compose-for-it/Build/RunComposeServices.ps1#L49-L56">resolved</a> considering the current script path as base path) and the key-value pairs as parameters to this script is done via:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="pi">-</span> <span class="na">task</span><span class="pi">:</span> <span class="s">PowerShell@2</span>
<span class="s">name</span><span class="pi">:</span> <span class="s1">'</span><span class="s">start_compose_services_used_by_integration_tests'</span>
<span class="na">displayName</span><span class="pi">:</span> <span class="s1">'</span><span class="s">Start</span><span class="nv"> </span><span class="s">compose</span><span class="nv"> </span><span class="s">services</span><span class="nv"> </span><span class="s">used</span><span class="nv"> </span><span class="s">by</span><span class="nv"> </span><span class="s">integration</span><span class="nv"> </span><span class="s">tests'</span>
<span class="na">inputs</span><span class="pi">:</span>
<span class="s">...</span>
<span class="s">arguments</span><span class="pi">:</span> <span class="pi">>-</span>
<span class="s">-ComposeProjectName '${{ parameters.integrationTests.composeProjectName }}' `</span>
<span class="s">-RelativePathToComposeFile './db4it-compose/docker-compose.yml' `</span>
<span class="s">-RelativePathToEnvironmentFile './db4it-compose/.env' `</span>
<span class="s">-ExtraEnvironmentVariables `</span>
<span class="s">@{ `</span>
<span class="s">'db_docker_image'='${{ parameters.integrationTests.databaseDockerImage }}'; `</span>
<span class="s">'db_name'='${{ parameters.integrationTests.databaseName }}'; `</span>
<span class="s">'db_username'='${{ parameters.integrationTests.databaseUsername }}'; `</span>
<span class="s">'db_password'='${{ parameters.integrationTests.databasePassword }}'; `</span>
<span class="s">}</span>
<span class="s">...</span>
</code></pre></div></div>
<p>I’m passing the hash table as a PowerShell parameter using the <code class="language-plaintext highlighter-rouge">@{key1 = value1; key2 = value2; ...}</code> construct; please note that I’ve used the ` (backtick) symbol to keep each key-value pair on a separate line in order to increase code readability.<br />
See more about working with hash tables in PowerShell <a href="https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_hash_tables?view=powershell-7">here</a>.</p>
<p>Declaring an environment variable in PowerShell is as simple as this:</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">[</span><span class="n">System.Environment</span><span class="p">]::</span><span class="n">SetEnvironmentVariable</span><span class="p">(</span><span class="nv">$EnvironmentVariableName</span><span class="p">,</span><span class="w"> </span><span class="nv">$EnvironmentVariableValue</span><span class="p">,</span><span class="w"> </span><span class="s1">'Process'</span><span class="p">)</span><span class="w">
</span></code></pre></div></div>
<p>Please note the <code class="language-plaintext highlighter-rouge">'Process'</code> string passed as the 3rd parameter - this means the key-value pairs will be <a href="https://github.com/satrapu/aspnet-core-logging/blob/feature/use-docker-compose-for-it/Build/RunComposeServices.ps1#L58-L84">visible</a> to the compose workload, which is started as a standalone process further down in the aforementioned PowerShell script.</p>
<p>See more about working with environment variables in PowerShell <a href="https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_environment_variables?view=powershell-7">here</a> and see more about Docker Compose .env files <a href="https://docs.docker.com/compose/environment-variables/#the-env-file">here</a>.</p>
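<p>Putting these pieces together, the PowerShell script can iterate over the hash table received from the pipeline and promote each key-value pair to a process-scoped environment variable. Below is a minimal sketch; the <code class="language-plaintext highlighter-rouge">$ExtraEnvironmentVariables</code> parameter name matches the one used in the pipeline fragment above, while the rest is illustrative:</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Param (
    # Hash table mapping environment variable names to their values,
    # e.g. @{ 'db_name' = 'db4it'; 'db_username' = 'some-user' }
    [hashtable] $ExtraEnvironmentVariables
)

foreach ($EnvironmentVariableName in $ExtraEnvironmentVariables.Keys)
{
    # 'Process' scope: the variables are visible to this process and to
    # its child processes - i.e. the docker-compose process started later
    [System.Environment]::SetEnvironmentVariable($EnvironmentVariableName, `
        $ExtraEnvironmentVariables[$EnvironmentVariableName], `
        'Process')
}
</code></pre></div></div>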
<h3 id="start-compose-service">Start compose service</h3>
<p>Once the environment variables have been set up, starting the compose service is done using the <a href="https://docs.docker.com/compose/reference/up/">docker-compose up</a> command:</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">docker-compose</span><span class="w"> </span><span class="nt">--file</span><span class="o">=</span><span class="s2">"</span><span class="nv">$ComposeFilePath</span><span class="s2">"</span><span class="w"> </span><span class="err">`</span><span class="w">
</span><span class="nt">--project-name</span><span class="o">=</span><span class="s2">"</span><span class="nv">$ComposeProjectName</span><span class="s2">"</span><span class="w"> </span><span class="err">`</span><span class="w">
</span><span class="n">up</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">--detach</span><span class="w">
</span></code></pre></div></div>
<p>The <strong>--file</strong> argument represents the full path to the compose file, which has been <a href="https://github.com/satrapu/aspnet-core-logging/blob/feature/use-docker-compose-for-it/Build/RunComposeServices.ps1#L40-L47">calculated</a> by combining the full path to the current script with the relative path to the compose file passed as a <a href="https://github.com/satrapu/aspnet-core-logging/blob/feature/use-docker-compose-for-it/Build/azure-pipelines.job-template.yml#L213">parameter</a>; see more about this argument <a href="https://docs.docker.com/compose/reference/overview/#specifying-a-path-to-a-single-compose-file">here</a>.<br />
The <strong>--project-name</strong> argument separates this particular compose workload from others running on the same Docker host; think of this project name as a namespace in C# or a package in Java; see more about this argument <a href="https://docs.docker.com/compose/reference/overview/#use--p-to-specify-a-project-name">here</a>.<br />
The <strong>--detach</strong> argument ensures the compose service runs in the background, since the following build step will run the tests against it.</p>
<h3 id="identify-compose-service-metadata">Identify compose service metadata</h3>
<p>In order to correctly determine whether the compose service has reached its healthy state, I need to identify both the container ID assigned by Docker and the compose service name declared inside the compose file.</p>
<p>You might say: <em>Hey, but I already know the name of the compose service, since it’s found inside the compose file!</em> and you would be right, but remember that one of the goals of this post is to <em>create a generic solution capable of running various compose workloads</em>. For this reason, the PowerShell script cannot assume any service names; it has to resort to several Docker commands to identify these two pieces of information, with extra help from the labels Docker automatically adds when creating a container for a Docker Compose service.</p>
<h4 id="identify-container-id">Identify container ID</h4>
<p>At this point, I know that the compose service is running, so I can request the container ID from Docker using the <a href="https://docs.docker.com/engine/reference/commandline/container_ls/">docker container ls</a> command, since each compose service is in fact a Docker container. However, if my pipeline runs several Docker containers, I cannot tell which one I’m interested in, so I need to <a href="https://docs.docker.com/engine/reference/commandline/ps/#filtering">filter</a> the output of this command; this is the reason I used the <strong>--project-name</strong> argument when starting the compose service.<br />
Using filters, I can limit the search to only those Docker containers belonging to <a href="https://github.com/satrapu/aspnet-core-logging/blob/feature/use-docker-compose-for-it/Build/azure-pipelines.job-template.yml#L212">my compose project</a>:</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$LsCommandOutput</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">docker</span><span class="w"> </span><span class="nx">container</span><span class="w"> </span><span class="nx">ls</span><span class="w"> </span><span class="nt">-a</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">--filter</span><span class="w"> </span><span class="s2">"label=com.docker.compose.project=</span><span class="nv">$ComposeProjectName</span><span class="s2">"</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">--format</span><span class="w"> </span><span class="s2">"{{ .ID }}"</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">Out-String</span><span class="w">
</span></code></pre></div></div>
<p>The command above will return only the ID of the Docker container running the PostgreSQL database targeted by my integration tests. If my compose workload contained more than one service, the command would return one container ID per service.</p>
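<p>Since <code class="language-plaintext highlighter-rouge">docker container ls</code> prints one container ID per line, a generic script can split the output and process each container individually; a short sketch, assuming the <code class="language-plaintext highlighter-rouge">$LsCommandOutput</code> variable from above:</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Split the multi-line output into individual container IDs
$ContainerIds = $LsCommandOutput.Split([System.Environment]::NewLine, `
    [System.StringSplitOptions]::RemoveEmptyEntries)

foreach ($ContainerId in $ContainerIds)
{
    # Each ID identifies one container belonging to $ComposeProjectName
    $ContainerId = $ContainerId.Trim()
    Write-Output "Found container: $ContainerId"
}
</code></pre></div></div>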
<h4 id="identify-compose-service-name">Identify compose service name</h4>
<p>In order to identify the name of the compose service as declared inside the compose file (db4it), I have to extract the value of the <strong>com.docker.compose.service</strong> label attached to the Docker container whose ID I already know.<br />
I first need to ask Docker for all labels using <a href="https://docs.docker.com/engine/reference/commandline/container_inspect/">docker container inspect</a> command:</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ComposeServiceLabels</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">docker</span><span class="w"> </span><span class="nx">container</span><span class="w"> </span><span class="nx">inspect</span><span class="w"> </span><span class="nt">--format</span><span class="w"> </span><span class="s1">'{{ json .Config.Labels }}'</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nv">$ContainerId</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">Out-String</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">ConvertFrom-Json</span><span class="w">
</span></code></pre></div></div>
<p>The <strong>$ComposeServiceLabels</strong> variable will store a dictionary which looks similar to this:</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="w"> </span><span class="p">@{</span><span class="w">
</span><span class="nx">com</span><span class="err">.</span><span class="nx">docker</span><span class="err">.</span><span class="nx">compose</span><span class="err">.</span><span class="nx">config</span><span class="err">-</span><span class="nx">hash</span><span class="o">=</span><span class="nx">f19c28a66cb2fc63a18618d87ae96b532f582acd310b297a2647d2e92c7ab34d</span><span class="p">;</span><span class="w">
</span><span class="nx">com</span><span class="err">.</span><span class="nx">docker</span><span class="err">.</span><span class="nx">compose</span><span class="err">.</span><span class="nx">container</span><span class="err">-</span><span class="nx">number</span><span class="o">=</span><span class="mi">1</span><span class="p">;</span><span class="w">
</span><span class="nx">com</span><span class="err">.</span><span class="nx">docker</span><span class="err">.</span><span class="nx">compose</span><span class="err">.</span><span class="nx">oneoff</span><span class="o">=</span><span class="nx">False</span><span class="p">;</span><span class="w">
</span><span class="nx">com</span><span class="err">.</span><span class="nx">docker</span><span class="err">.</span><span class="nx">compose</span><span class="err">.</span><span class="nx">project</span><span class="o">=</span><span class="nx">aspnet</span><span class="err">-</span><span class="nx">core</span><span class="err">-</span><span class="nx">logging</span><span class="p">;</span><span class="w">
</span><span class="nx">com</span><span class="err">.</span><span class="nx">docker</span><span class="err">.</span><span class="nx">compose</span><span class="err">.</span><span class="nx">project</span><span class="err">.</span><span class="nx">config_files</span><span class="o">=</span><span class="nx">docker</span><span class="err">-</span><span class="nx">compose</span><span class="err">.</span><span class="nx">yml</span><span class="p">;</span><span class="w">
</span><span class="nx">com</span><span class="err">.</span><span class="nx">docker</span><span class="err">.</span><span class="nx">compose</span><span class="err">.</span><span class="nx">project</span><span class="err">.</span><span class="nx">working_dir</span><span class="o">=</span><span class="err">/</span><span class="nx">mnt</span><span class="err">/</span><span class="nx">c</span><span class="err">/</span><span class="nx">Dev</span><span class="err">/</span><span class="nx">Projects</span><span class="err">/</span><span class="nx">aspnet</span><span class="err">-</span><span class="nx">core</span><span class="err">-</span><span class="nx">logging</span><span class="p">;</span><span class="w">
</span><span class="nx">com</span><span class="err">.</span><span class="nx">docker</span><span class="err">.</span><span class="nx">compose</span><span class="err">.</span><span class="nx">service</span><span class="o">=</span><span class="nx">aspnet</span><span class="err">-</span><span class="nx">core</span><span class="err">-</span><span class="nx">logging</span><span class="err">-</span><span class="nx">dev</span><span class="p">;</span><span class="w">
</span><span class="nx">com</span><span class="err">.</span><span class="nx">docker</span><span class="err">.</span><span class="nx">compose</span><span class="err">.</span><span class="nx">version</span><span class="o">=</span><span class="mf">1.26.2</span><span class="p">;</span><span class="w">
</span><span class="nx">desktop</span><span class="err">.</span><span class="nx">docker</span><span class="err">.</span><span class="nx">io</span><span class="err">/</span><span class="nx">wsl</span><span class="err">-</span><span class="nx">distro</span><span class="o">=</span><span class="nx">Ubuntu</span><span class="w">
</span><span class="p">}</span><span class="w">
</span></code></pre></div></div>
<p>The output above is just an example; my compose service will have different values for these keys.<br />
Extracting the name of the compose service from the above dictionary is as simple as:</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ComposeServices</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="p">[</span><span class="n">System.Collections.Generic.List</span><span class="p">[</span><span class="n">psobject</span><span class="p">]]::</span><span class="n">new</span><span class="p">()</span><span class="w">
</span><span class="o">...</span><span class="w">
</span><span class="nv">$ComposeServiceNameLabel</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s1">'com.docker.compose.service'</span><span class="w">
</span><span class="nv">$ComposeServiceName</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nv">$ComposeServiceLabels</span><span class="o">.</span><span class="nv">$ComposeServiceNameLabel</span><span class="w">
</span><span class="nv">$ComposeService</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">New-Object</span><span class="w"> </span><span class="nx">PSObject</span><span class="w"> </span><span class="nt">-Property</span><span class="w"> </span><span class="p">@{</span><span class="w">
</span><span class="nx">ContainerId</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nv">$ContainerId</span><span class="w">
</span><span class="nx">ServiceName</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nv">$ComposeServiceName</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="nv">$ComposeServices</span><span class="o">.</span><span class="nf">Add</span><span class="p">(</span><span class="nv">$ComposeService</span><span class="p">)</span><span class="w">
</span></code></pre></div></div>
<p>The above PowerShell script fragment puts both the container ID and the service name into a custom object, which is then stored in a list ($ComposeServices) for later use.</p>
<h3 id="wait-for-compose-service">Wait for compose service to become healthy</h3>
<p>Once I know the ID of the Docker container running the PostgreSQL database, I can check whether the container has reached its healthy state using something like this:</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$IsServiceHealthy</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">docker</span><span class="w"> </span><span class="nx">container</span><span class="w"> </span><span class="nx">inspect</span><span class="w"> </span><span class="s2">"</span><span class="si">$(</span><span class="nv">$ComposeService</span><span class="o">.</span><span class="nf">ContainerId</span><span class="si">)</span><span class="s2">"</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">--format</span><span class="w"> </span><span class="s2">"{{.State.Health.Status}}"</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">Select-String</span><span class="w"> </span><span class="nt">-Pattern</span><span class="w"> </span><span class="s1">'healthy'</span><span class="w"> </span><span class="nt">-SimpleMatch</span><span class="w"> </span><span class="nt">-Quiet</span><span class="w">
</span></code></pre></div></div>
<p>If the value of the <strong>$IsServiceHealthy</strong> PowerShell variable is <strong>$true</strong>, then the compose service is healthy.<br />
The logic of checking for healthy state is more complex than the above script fragment, but you can always inspect the full version <a href="https://github.com/satrapu/aspnet-core-logging/blob/feature/use-docker-compose-for-it/Build/RunComposeServices.ps1#L151-L194">here</a>.</p>
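<p>A simplified sketch of such a polling loop is shown below; the number of retries and the sleep interval are illustrative values, while the full implementation linked above handles additional edge cases:</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$MaxNumberOfTries = 30
$SleepingTimeInSeconds = 1
$IsServiceHealthy = $false

for ($TryIndex = 1; $TryIndex -le $MaxNumberOfTries; $TryIndex++)
{
    $IsServiceHealthy = docker container inspect "$($ComposeService.ContainerId)" `
                               --format "{{.State.Health.Status}}" `
                        | Select-String -Pattern 'healthy' -SimpleMatch -Quiet

    if ($IsServiceHealthy -eq $true)
    {
        break
    }

    Start-Sleep -Seconds $SleepingTimeInSeconds
}

if ($IsServiceHealthy -ne $true)
{
    # Give up and fail the build step if the compose service
    # did not become healthy in the allotted time
    exit 1
}
</code></pre></div></div>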
<h3 id="identify-host-port">Identify compose service host port</h3>
<p>At this point, I know that my compose service is healthy, so now I only need to identify the host port Docker has allocated to the container running PostgreSQL, which has <a href="https://github.com/satrapu/aspnet-core-logging/blob/feature/use-docker-compose-for-it/Build/db4it-compose/docker-compose.yml#L26-L27">exposed</a> port 5432.</p>
<p>Considering that a pipeline might run several compose workloads, I recommend avoiding <a href="https://docs.docker.com/compose/compose-file/#ports">specifying the host port</a> and letting Docker <a href="https://docs.docker.com/network/links/#connect-using-network-port-mapping">allocate</a> an ephemeral one. Once I know the container ID, finding the host port mapped to a container port is not complicated.</p>
<p>In order to find all port mappings for a given compose service, I’m going to use <a href="https://docs.docker.com/engine/reference/commandline/container_port/">docker container port</a> command:</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$PortCommandOutput</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">docker</span><span class="w"> </span><span class="nx">container</span><span class="w"> </span><span class="nx">port</span><span class="w"> </span><span class="s2">"</span><span class="si">$(</span><span class="nv">$ComposeService</span><span class="o">.</span><span class="nf">ContainerId</span><span class="si">)</span><span class="s2">"</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">Out-String</span><span class="w">
</span></code></pre></div></div>
<p>Since my compose service <strong>db4it</strong> only exposes one port, the command output will only contain one port mapping, but since the PowerShell script is generic, let’s assume my compose service exposes 5 ports: 5432, 6677, 7788, 8899 and 9900 - in this case, the command above will return 5 mappings as a multi-line string:</p>
<div class="language-text highlighter-rouge"><div class="highlight"><pre class="highlight"><code>5432/tcp -> 0.0.0.0:32772
6677/tcp -> 0.0.0.0:32771
7788/tcp -> 0.0.0.0:32770
8899/tcp -> 0.0.0.0:32769
9900/tcp -> 0.0.0.0:32768
</code></pre></div></div>
<p>Identifying the host port from each of the above mappings is just a matter of splitting the command output using the appropriate delimiters:</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$RawPortMappings</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nv">$PortCommandOutput</span><span class="o">.</span><span class="nf">Split</span><span class="p">([</span><span class="n">System.Environment</span><span class="p">]::</span><span class="nx">NewLine</span><span class="p">,</span><span class="w"> </span><span class="p">[</span><span class="n">System.StringSplitOptions</span><span class="p">]::</span><span class="nx">RemoveEmptyEntries</span><span class="p">)</span><span class="w">
</span><span class="kr">foreach</span><span class="w"> </span><span class="p">(</span><span class="nv">$RawPortMapping</span><span class="w"> </span><span class="kr">in</span><span class="w"> </span><span class="nv">$RawPortMappings</span><span class="p">)</span><span class="w">
</span><span class="p">{</span><span class="w">
</span><span class="nv">$RawPortMappingParts</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nv">$RawPortMapping</span><span class="o">.</span><span class="nf">Split</span><span class="p">(</span><span class="s1">' -> '</span><span class="p">,</span><span class="w"> </span><span class="p">[</span><span class="n">System.StringSplitOptions</span><span class="p">]::</span><span class="nx">RemoveEmptyEntries</span><span class="p">)</span><span class="w">
</span><span class="nv">$RawContainerPort</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nv">$RawPortMappingParts</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span><span class="w">
</span><span class="nv">$RawHostPort</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nv">$RawPortMappingParts</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span><span class="w">
</span><span class="nv">$ContainerPort</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nv">$RawContainerPort</span><span class="o">.</span><span class="nf">Split</span><span class="p">(</span><span class="s1">'/'</span><span class="p">,</span><span class="w"> </span><span class="p">[</span><span class="n">System.StringSplitOptions</span><span class="p">]::</span><span class="nx">RemoveEmptyEntries</span><span class="p">)[</span><span class="mi">0</span><span class="p">]</span><span class="w">
</span><span class="bp">$Host</span><span class="n">Port</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nv">$RawHostPort</span><span class="o">.</span><span class="nf">Split</span><span class="p">(</span><span class="s1">':'</span><span class="p">,</span><span class="w"> </span><span class="p">[</span><span class="n">System.StringSplitOptions</span><span class="p">]::</span><span class="nx">RemoveEmptyEntries</span><span class="p">)[</span><span class="mi">1</span><span class="p">]</span><span class="w">
</span><span class="p">}</span><span class="w">
</span></code></pre></div></div>
<p>For instance, after processing the first entry, <em>5432/tcp -> 0.0.0.0:32772</em>, the Docker container port is <strong>5432</strong>, while the host port is <strong>32772</strong>.</p>
<h3 id="expose-host-port-as-pipeline-variable">Expose host port as a pipeline variable</h3>
<p>In order to connect to the PostgreSQL database running in a Docker container, I need to pass the aforementioned host port to the next build step of my pipeline, and the natural way to do this is via a <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#user-defined-variables">user-defined variable</a>. Since my aim is to create a generic solution for running Docker Compose in an Azure DevOps pipeline, I will create one such variable per host port, using the following naming convention: <code class="language-plaintext highlighter-rouge">compose.project.<COMPOSE_PROJECT_NAME>.service.<COMPOSE_SERVICE_NAME>.port.<CONTAINER_PORT></code>.</p>
<table>
<thead>
<tr>
<th>Token</th>
<th>Description</th>
<th>Sample value</th>
</tr>
</thead>
<tbody>
<tr>
<td><COMPOSE_PROJECT_NAME></td>
<td>Represents the compose project name</td>
<td>integration-test-prerequisites</td>
</tr>
<tr>
<td><COMPOSE_SERVICE_NAME></td>
<td>Represents the compose service name as declared in compose file</td>
<td>db4it</td>
</tr>
<tr>
<td><CONTAINER_PORT></td>
<td>Represents the Docker container port as declared in compose file</td>
<td>5432</td>
</tr>
</tbody>
</table>
<p>Given that my compose service is named <strong>db4it</strong>, that it has been started using <strong>integration-test-prerequisites</strong> as the compose project name, and that it exposes container port <strong>5432</strong>, the variable storing its host port will be named <strong>compose.project.integration-test-prerequisites.service.db4it.port.5432</strong>.</p>
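<p>Creating such a pipeline variable from PowerShell boils down to writing an Azure DevOps <strong>task.setvariable</strong> logging command to standard output; a sketch, assuming the <code class="language-plaintext highlighter-rouge">$ComposeProjectName</code>, <code class="language-plaintext highlighter-rouge">$ComposeService</code>, <code class="language-plaintext highlighter-rouge">$ContainerPort</code> and <code class="language-plaintext highlighter-rouge">$HostPort</code> variables computed earlier are in scope:</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Build the variable name following the naming convention described above
$VariableName = "compose.project.$($ComposeProjectName).service.$($ComposeService.ServiceName).port.$($ContainerPort)"

# Azure DevOps intercepts this logging command and creates (or updates)
# a pipeline variable visible to the subsequent build steps
Write-Output "##vso[task.setvariable variable=$VariableName]$HostPort"
</code></pre></div></div>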
<p>Assuming my compose service exposes 5 ports: 5432, 6677, 7788, 8899 and 9900, then I would end up with 5 variables in my pipeline:</p>
<table>
<thead>
<tr>
<th>Variable Name</th>
<th>Container port</th>
<th>Host port</th>
</tr>
</thead>
<tbody>
<tr>
<td>compose.project.integration-test-prerequisites.service.db4it.port.5432</td>
<td>5432</td>
<td>32772</td>
</tr>
<tr>
<td>compose.project.integration-test-prerequisites.service.db4it.port.6677</td>
<td>6677</td>
<td>32771</td>
</tr>
<tr>
<td>compose.project.integration-test-prerequisites.service.db4it.port.7788</td>
<td>7788</td>
<td>32770</td>
</tr>
<tr>
<td>compose.project.integration-test-prerequisites.service.db4it.port.8899</td>
<td>8899</td>
<td>32769</td>
</tr>
<tr>
<td>compose.project.integration-test-prerequisites.service.db4it.port.9900</td>
<td>9900</td>
<td>32768</td>
</tr>
</tbody>
</table>
<p>Please note the above host ports might not be the ones you’ll see if you run this pipeline, since Docker might allocate different ephemeral host ports on each run.</p>
<h3 id="run-integration-tests">Run integration tests</h3>
<p>At this point, the Azure DevOps pipeline has started a compose service which is healthy and its host port is now stored in a variable. The next build step is to run the integration tests and let them know how they can reach the PostgreSQL database running in a Docker container:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="pi">-</span> <span class="na">script</span><span class="pi">:</span> <span class="pi">>-</span>
<span class="s">dotnet test $(Build.SourcesDirectory)/Todo.sln</span>
<span class="s">--no-build</span>
<span class="s">--no-restore</span>
<span class="s">--configuration ${{ parameters.build.configuration }}</span>
<span class="s">--test-adapter-path "."</span>
<span class="s">--logger "nunit"</span>
<span class="s">/p:CollectCoverage=True</span>
<span class="s">/p:CoverletOutputFormat=opencover</span>
<span class="s">/p:Include="[Todo.*]*"</span>
<span class="s">/p:Exclude=\"[Todo.*.*Tests]*,[Todo.Persistence]*.TodoDbContextModelSnapshot\"</span>
<span class="s">-- NUnit.Where="cat == IntegrationTests"</span>
<span class="na">name</span><span class="pi">:</span> <span class="s1">'</span><span class="s">run_integration_tests'</span>
<span class="na">displayName</span><span class="pi">:</span> <span class="s1">'</span><span class="s">Run</span><span class="nv"> </span><span class="s">integration</span><span class="nv"> </span><span class="s">tests'</span>
<span class="na">env</span><span class="pi">:</span>
<span class="na">CONNECTIONSTRINGS__TODOFORINTEGRATIONTESTS</span><span class="pi">:</span> <span class="pi">>-</span>
<span class="s">Host=${{ parameters.integrationTests.databaseHost }};</span>
<span class="s">Port=$(compose.project.${{ parameters.integrationTests.composeProjectName }}.service.db4it.port.5432);</span>
<span class="s">Database=${{ parameters.integrationTests.databaseName }};</span>
<span class="s">Username=${{ parameters.integrationTests.databaseUsername }};</span>
<span class="s">Password=${{ parameters.integrationTests.databasePassword }};</span>
<span class="na">GENERATEJWT__SECRET</span><span class="pi">:</span> <span class="s">$(IntegrationTests.GenerateJwt.Secret)</span>
</code></pre></div></div>
<p>Please note how the <strong>Port</strong> property of the PostgreSQL connection string has been set to the aforementioned variable.</p>
<h2 id="issues">Issues</h2>
<h3 id="compose-file-version">Compose file version</h3>
<p>I had to use compose file version 3.7 and not a newer one since macOS-based Azure DevOps agents cannot run Docker versions compatible with v3.8+.<br />
I’m still waiting to be able to install a newer version of Docker on this kind of agent, but until then, I have to resort to an <a href="https://crossprogramming.com/2019/12/27/use-docker-when-running-integration-tests-with-azure-pipelines.html#run-docker-on-macos">older version</a>.</p>
<h3 id="docker-compose-writing-to-sdterr">Docker Compose writes to standard error stream</h3>
<p>Docker Compose commands write to the standard error stream, thus tricking Azure DevOps into thinking the PowerShell script running the compose service has failed, which isn’t the case. Due to this <a href="https://github.com/docker/compose/issues/5590">known limitation</a>, I need to rely on the <a href="https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_automatic_variables?view=powershell-7#section-1"><code class="language-plaintext highlighter-rouge">$?</code> automatic variable</a> in my script to detect failures. Concretely, I set the <code class="language-plaintext highlighter-rouge">errorActionPreference</code> property of the PowerShell@2 Azure DevOps task to <code class="language-plaintext highlighter-rouge">Continue</code> and the <code class="language-plaintext highlighter-rouge">failOnStderr</code> property to <code class="language-plaintext highlighter-rouge">False</code> to avoid failing the build step, and then manually handle the outcome of each command inside the script, like below:</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="o">...</span><span class="w">
</span><span class="nv">$LsCommandOutput</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">docker</span><span class="w"> </span><span class="nx">container</span><span class="w"> </span><span class="nx">ls</span><span class="w"> </span><span class="nt">-a</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">--filter</span><span class="w"> </span><span class="s2">"label=com.docker.compose.project=</span><span class="nv">$ComposeProjectName</span><span class="s2">"</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">--format</span><span class="w"> </span><span class="s2">"{{ .ID }}"</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">Out-String</span><span class="w">
</span><span class="kr">if</span><span class="w"> </span><span class="p">((</span><span class="o">!</span><span class="bp">$?</span><span class="p">)</span><span class="w"> </span><span class="o">-or</span><span class="w"> </span><span class="p">(</span><span class="nv">$LsCommandOutput</span><span class="o">.</span><span class="nf">Length</span><span class="w"> </span><span class="o">-eq</span><span class="w"> </span><span class="mi">0</span><span class="p">))</span><span class="w">
</span><span class="p">{</span><span class="w">
</span><span class="n">Write-Output</span><span class="w"> </span><span class="s2">"##vso[task.LogIssue type=error;]Failed to identify compose services for project: </span><span class="nv">$ComposeProjectName</span><span class="s2">"</span><span class="w">
</span><span class="n">Write-Output</span><span class="w"> </span><span class="s2">"##vso[task.complete result=Failed;]"</span><span class="w">
</span><span class="kr">exit</span><span class="w"> </span><span class="mi">4</span><span class="p">;</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="o">...</span><span class="w">
</span></code></pre></div></div>
<h3 id="unstable-windows-docker-image">Unstable Windows Docker image</h3>
<p>The <a href="https://hub.docker.com/r/stellirin/postgres-windows">stellirin/postgres-windows</a> Docker image I’m using when running Docker Compose on Windows-based agents works <em>almost</em> every time, but not <em>every</em> time, so the jobs running on this agent might fail and I need to re-run them. I’m truly thankful that such an image exists and, until I find a better alternative, re-running the jobs from time to time seems like a small price to pay.<br />
Unfortunately, finding such an alternative might become crucial in the not-so-distant future, since the GitHub repository backing this Docker image has been marked as <a href="https://github.com/stellirin/docker-postgres-windows#this-repository-is-archived">archived</a>, as the author no longer needs PostgreSQL as a Windows container.</p>
<h2 id="other-use-cases">Other use cases</h2>
<p>Other use cases for running Docker Compose in an Azure DevOps pipeline might be:</p>
<ul>
<li>Run several versions of the same database, for instance when trying to test whether your application is compatible with the latest version of SQL Server, but is also backward compatible with older versions, like SQL Server 2005 or 2008 R2</li>
<li>Restore a database backup before running tests against that particular database, where we need one service for the database and another service tasked with (downloading and) restoring the backup</li>
<li>Run functional tests, where we need to start the application with all of its dependencies and optionally run each supported browser in a separate Docker container with the help of a tool like <a href="https://github.com/SeleniumHQ/docker-selenium/blob/trunk/docker-compose-v3.yml">Selenium Hub</a></li>
<li>Provision more services needed to run the tests, e.g. out-of-process cache</li>
<li>Run mock services which simulate the activity of expensive/hard to create and/or use services (e.g. payment provider, credit card validation, etc.)</li>
<li>Any use case where you’re running at least one Docker container ;)</li>
</ul>
<h2 id="conclusion">Conclusion</h2>
<p>I believe using Docker Compose for running various workloads instead of plain Docker containers is the better choice, since it’s easier to use and more flexible. On the other hand, using Docker Compose means sharing the agent resources (CPU, RAM and disk) between the build and the compose services. For lightweight workloads, like the one presented in this post, this is not an issue, but if you want to run heavier workloads, you’ll need a more powerful container orchestrator like <a href="https://kubernetes.io/">Kubernetes</a> and run the containers outside Azure DevOps agents. This approach lets Azure DevOps agents use their resources for running builds, but you’ll need extra machines to host your <a href="https://kubernetes.io/docs/concepts/workloads/pods/">Kubernetes pods</a>, thus paying more, but getting more too.</p>
<hr />
<p>Use Docker when running integration tests with Azure Pipelines (2019-12-27) - <a href="https://crossprogramming.com/2019/12/27/use-docker-when-running-integration-tests-with-azure-pipelines">https://crossprogramming.com/2019/12/27/use-docker-when-running-integration-tests-with-azure-pipelines</a></p><ul>
<li><a href="#context">Context</a></li>
<li><a href="#db-for-azure-pipelines">Provide a database for Azure Pipelines</a></li>
<li><a href="#setup-ef-core-provider">Setup EF Core provider</a></li>
<li><a href="#docker-in-azure-pipelines">Use Docker in Azure Pipelines</a>
<ul>
<li><a href="#service-containers">Service containers</a>
<ul>
<li><a href="#declare-containers">Declare Docker containers</a></li>
<li><a href="#mapping-a-service">Map a service to a Docker container</a></li>
<li><a href="#using-container-port">Use Docker container published port</a></li>
</ul>
</li>
<li><a href="#self-managed-docker-containers">Use self-managed Docker containers</a>
<ul>
<li><a href="#run-docker-on-macos">Install and run Docker on macOS</a></li>
<li><a href="#run-dockerized-database">Run dockerized database</a></li>
<li><a href="#check-database-ready-state">Check database ready-state</a>
<ul>
<li><a href="#check-database-ready-state-using-log-polling">Check database ready-state using log-polling</a></li>
<li><a href="#check-database-ready-state-using-healthcheck">Check database ready-state using Docker healthcheck</a></li>
</ul>
</li>
<li><a href="#identify-database-port">Identify database port</a></li>
<li><a href="#update-database-connection-string">Update database connection string</a></li>
</ul>
</li>
<li><a href="#comparison">Service containers vs. self-managed containers</a></li>
</ul>
</li>
<li><a href="#run-integration-tests">Run integration tests</a></li>
<li><a href="#other-challenges">Other challenges</a>
<ul>
<li><a href="#run-postgresql-locally-using-docker">Run PostgreSQL database locally using Docker</a></li>
<li><a href="#publish-test-results">Fix publishing test results</a></li>
<li><a href="#publish-raw-test-results-as-artifacts">Publish test results raw files as pipeline artifacts</a></li>
</ul>
</li>
<li><a href="#next-steps">Next steps</a></li>
<li><a href="#conclusion">Conclusion</a></li>
</ul>
<hr />
<!-- markdownlint-disable MD033 -->
<h2 id="context">Context</h2>
<!-- markdownlint-disable MD033 -->
<p>In my previous Azure DevOps related <a href="https://crossprogramming.com/2019/03/17/build-asp-net-core-app-using-azure-pipelines.html#run-automated-tests">article</a> I was saying that since I’m using the in-memory EF Core <a href="https://docs.microsoft.com/en-us/ef/core/providers/in-memory/">provider</a>, my integration tests were kind of lame, as they were not targeting a real database. On the other hand, this was the perfect opportunity for me to dive deeper into the capabilities of Azure Pipelines and discover a solution for this problem. Thus, the purpose of this post is to present several approaches for provisioning a relational database using Docker when running integration tests with Azure Pipelines.</p>
<!-- markdownlint-disable MD033 -->
<h2 id="db-for-azure-pipelines">Provide a database for Azure Pipelines</h2>
<!-- markdownlint-disable MD033 -->
<p>Since Azure Pipelines is running in the cloud, one could use a database running also in the cloud (AWS, Azure, etc.); a different approach is to run the database inside a Docker container managed by the pipeline - this is the approach presented by this post.<br />
Docker is a good choice since both the developer and Azure Pipelines can use the same Docker image, thus ensuring the outcome of running the integration tests on the developer machine and in Azure Pipelines will be the same. Another reason for using Docker is simplicity: you do not need to install a database server on your development machine, you just run a Docker container.<br />
On the other hand, using Docker does pose its own challenges, as I need to pick a relational database for which I can find Docker images for running Linux containers <em>and</em> Windows containers, as the <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/hosted?view=azure-devops#use-a-microsoft-hosted-agent">windows-2019</a> hosted agent I use for running builds targeting Windows OS cannot run Linux containers, only Windows ones.<br />
A different challenge is to find small enough Docker images so that pulling them will not (greatly) impact the build time. This particular challenge can be resolved by using <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/agents?view=azure-devops#install">self-hosted agents</a>, since the agent can be provisioned with the appropriate Docker images, thus eliminating the need to pull them during the build. On the other hand, since I’m not using such agents, I need to rely on the classic image-pulling approach and thus accept longer build times.<br />
After doing some research, I have found the following Docker images which can be used to run both Linux and Windows containers:</p>
<table>
<thead>
<tr>
<th>Database Server</th>
<th>Container Type</th>
<th>Docker Image*</th>
<th>Image Size (MB)**</th>
</tr>
</thead>
<tbody>
<tr>
<td>PostgreSQL</td>
<td>Linux</td>
<td><a href="https://hub.docker.com/_/postgres/">postgres:12-alpine</a></td>
<td>146</td>
</tr>
<tr>
<td>PostgreSQL</td>
<td>Windows</td>
<td><a href="https://hub.docker.com/r/stellirin/postgres-windows">stellirin/postgres-windows:12</a></td>
<td>452</td>
</tr>
<tr>
<td>SQL Server</td>
<td>Linux</td>
<td><a href="https://hub.docker.com/_/microsoft-mssql-server">mcr.microsoft.com/mssql/server:2017-latest-ubuntu</a>***</td>
<td>1333</td>
</tr>
<tr>
<td>SQL Server</td>
<td>Windows</td>
<td><a href="https://hub.docker.com/r/microsoft/mssql-server-windows-developer/">microsoft/mssql-server-windows-developer:1709</a></td>
<td>10800</td>
</tr>
</tbody>
</table>
<p>* Latest version at the time of the investigation (December 2019)<br />
** Rounded values based on the output of the <code class="language-plaintext highlighter-rouge">docker images</code> command<br />
*** I could not find any SQL Server 2019 Docker image for Windows, so I had to stick with SQL Server 2017</p>
<p>SQL Server Docker images are <em>very</em> large when compared to PostgreSQL ones, especially the Windows ones. Considering this aspect and since the purpose of this post is more generic and not dependent on a particular database server, I have decided to use PostgreSQL Docker images for running a database to be targeted by the integration tests.</p>
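<p>For local experimentation, the Linux image from the table can be started with a single <code class="language-plaintext highlighter-rouge">docker run</code> command. A minimal sketch follows; the container name, host port and <code class="language-plaintext highlighter-rouge">&lt;username&gt;</code>/<code class="language-plaintext highlighter-rouge">&lt;password&gt;</code> placeholders are mine, not values from this post:</p>

```shell
# Print (rather than execute) the `docker run` command that would start the
# postgres:12-alpine image from the table above; printing keeps this sketch
# runnable even on machines without Docker. Credentials are placeholders.
build_postgres_run_command() {
  host_port="$1"
  echo "docker run --detach --name db4it" \
    "--publish ${host_port}:5432" \
    "--env POSTGRES_DB=aspnet-core-logging-dev" \
    "--env POSTGRES_USER=<username>" \
    "--env POSTGRES_PASSWORD=<password>" \
    "postgres:12-alpine"
}

build_postgres_run_command 5432
```

<p>Piping the printed command to <code class="language-plaintext highlighter-rouge">sh</code> (after replacing the placeholders) would actually start the container.</p>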
<!-- markdownlint-disable MD033 -->
<h2 id="setup-ef-core-provider">Setup EF Core provider</h2>
<!-- markdownlint-disable MD033 -->
<p>Since I have chosen PostgreSQL as my database server, I have picked <a href="http://www.npgsql.org/efcore/">Npgsql</a> as my EF Core provider because it’s on the <a href="https://docs.microsoft.com/en-us/ef/core/providers/">official provider list</a> and it’s free.<br />
This provider was specified inside my <a href="https://github.com/satrapu/aspnet-core-logging/blob/cb97b1604549b1519b31cfa6f42c33d545564924/Sources/Todo.WebApi/Startup.cs#L48">Startup class</a> as:</p>
<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">...</span>
<span class="k">public</span> <span class="k">void</span> <span class="nf">ConfigureServices</span><span class="p">(</span><span class="n">IServiceCollection</span> <span class="n">services</span><span class="p">)</span>
<span class="p">{</span>
<span class="c1">// Other services</span>
<span class="n">services</span><span class="p">.</span><span class="n">AddDbContext</span><span class="p"><</span><span class="n">TodoDbContext</span><span class="p">>((</span><span class="n">serviceProvider</span><span class="p">,</span> <span class="n">dbContextOptionsBuilder</span><span class="p">)</span> <span class="p">=></span>
<span class="p">{</span>
<span class="kt">var</span> <span class="n">connectionString</span> <span class="p">=</span> <span class="n">Configuration</span><span class="p">.</span><span class="nf">GetConnectionString</span><span class="p">(</span><span class="s">"Todo"</span><span class="p">);</span>
<span class="n">dbContextOptionsBuilder</span><span class="p">.</span><span class="nf">UseNpgsql</span><span class="p">(</span><span class="n">connectionString</span><span class="p">)</span>
<span class="p">.</span><span class="nf">EnableSensitiveDataLogging</span><span class="p">()</span>
<span class="p">.</span><span class="nf">UseLoggerFactory</span><span class="p">(</span><span class="n">serviceProvider</span><span class="p">.</span><span class="n">GetRequiredService</span><span class="p"><</span><span class="n">ILoggerFactory</span><span class="p">>());</span>
<span class="p">});</span>
<span class="c1">// Other services</span>
<span class="p">}</span>
<span class="p">...</span>
</code></pre></div></div>
<p>As a side note, the <a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.entityframeworkcore.dbcontextoptionsbuilder.enablesensitivedatalogging?view=efcore-2.2">EnableSensitiveDataLogging</a> method should be used with care, as logging SQL statements with their actual parameter values may leak passwords or other sensitive data - see the aforementioned documentation for more details.<br />
The application expects to find a <a href="https://www.connectionstrings.com/npgsql/">connection string</a> named <strong>Todo</strong> pointing to a PostgreSQL database inside its configuration. This means the developer might choose to add a connection string inside the <a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/configuration/?view=aspnetcore-2.2#default-configuration">appsettings.json</a> file, like this:</p>
<div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span><span class="w">
</span><span class="err">//</span><span class="w"> </span><span class="err">Other</span><span class="w"> </span><span class="err">sections</span><span class="w">
</span><span class="nl">"ConnectionStrings"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="nl">"Todo"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Server=127.0.0.1;Port=5432;Database=aspnet-core-logging-dev;Username=...;Password=...;"</span><span class="w">
</span><span class="p">},</span><span class="w">
</span><span class="err">//</span><span class="w"> </span><span class="err">Other</span><span class="w"> </span><span class="err">section</span><span class="w">
</span><span class="p">}</span><span class="w">
</span></code></pre></div></div>
<p>This is the easy way, but storing credentials in a file kept under source control is <strong>not</strong> a good idea, as you risk leaking sensitive data, so my appsettings.json file looks like this:</p>
<div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span><span class="w">
</span><span class="err">//</span><span class="w"> </span><span class="err">Other</span><span class="w"> </span><span class="err">sections</span><span class="w">
</span><span class="nl">"ConnectionStrings"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="nl">"Todo"</span><span class="p">:</span><span class="w"> </span><span class="s2">"<DO_NOT_STORE_SENSITIVE_DATA_HERE>"</span><span class="w">
</span><span class="p">},</span><span class="w">
</span><span class="err">//</span><span class="w"> </span><span class="err">Other</span><span class="w"> </span><span class="err">section</span><span class="w">
</span><span class="p">}</span><span class="w">
</span></code></pre></div></div>
<p>In order to run the application and integration tests on my local machine, I have defined an environment variable storing the connection string which points to a PostgreSQL database running in a local Docker container (Linux container):</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># Display the contents of the "ConnectionStrings__Todo"</span><span class="w">
</span><span class="c"># environment variable via a PowerShell command.</span><span class="w">
</span><span class="c"># The user name and password below have been intentionally replaced with dots.</span><span class="w">
</span><span class="n">Get-ChildItem</span><span class="w"> </span><span class="nx">Env:ConnectionStrings__Todo</span><span class="w">
</span><span class="n">Name</span><span class="w"> </span><span class="nx">Value</span><span class="w">
</span><span class="o">----</span><span class="w"> </span><span class="o">-----</span><span class="w">
</span><span class="n">ConnectionStrings__Todo</span><span class="w"> </span><span class="nx">Host</span><span class="o">=</span><span class="n">localhost</span><span class="p">;</span><span class="n">Port</span><span class="o">=</span><span class="mi">5432</span><span class="p">;</span><span class="n">Database</span><span class="o">=</span><span class="n">aspnet-core-logging-dev</span><span class="p">;</span><span class="n">Username</span><span class="o">=...</span><span class="p">;</span><span class="n">Password</span><span class="o">=...</span><span class="p">;</span><span class="w">
</span></code></pre></div></div>
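<p>On macOS or Linux the same value can be supplied through an environment variable, since the ASP.NET Core configuration system maps the double underscore in <code class="language-plaintext highlighter-rouge">ConnectionStrings__Todo</code> to the <code class="language-plaintext highlighter-rouge">ConnectionStrings:Todo</code> configuration key. A sketch, with the credentials intentionally left as placeholders:</p>

```shell
# The "__" separator is translated by the ASP.NET Core configuration system
# into ":", so this environment variable overrides the "ConnectionStrings:Todo"
# key from appsettings.json. Username and password are placeholders.
export ConnectionStrings__Todo="Host=localhost;Port=5432;Database=aspnet-core-logging-dev;Username=<username>;Password=<password>;"

echo "$ConnectionStrings__Todo"
```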
<p>The Azure Pipelines task used for running the integration tests will use a similar approach for providing the connection string, thus avoiding leaking sensitive data.<br />
When running the application in production, one might store the connection string (and other sensitive data) using <a href="https://docs.microsoft.com/en-us/aspnet/core/security/key-vault-configuration?view=aspnetcore-2.2">Key Vault</a>, <a href="https://docs.microsoft.com/en-us/aspnet/core/security/data-protection/introduction?view=aspnetcore-2.2">Data Protection</a> or something else which meets the application security needs.</p>
<!-- markdownlint-disable MD033 -->
<h2 id="docker-in-azure-pipelines">Use Docker in Azure Pipelines</h2>
<!-- markdownlint-disable MD033 -->
<p>The good news is that Azure Pipelines offer support for running Docker containers out-of-the-box via <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/process/service-containers?view=azure-devops&tabs=yaml">service containers</a>, the bad news is that they do not work on agents running macOS, only on those running Linux or Windows.<br />
Since my aim is to run integration tests on all 3 operating systems, I had to find a different approach - and I did, as seen <a href="#self-managed-docker-containers">below</a>; on the other hand, I have documented the official one too.</p>
<!-- markdownlint-disable MD033 -->
<h3 id="service-containers">Service containers</h3>
<!-- markdownlint-disable MD033 -->
<blockquote>
<p>A service container enables you to automatically create, network, and manage the lifecycle of your containerized service.</p>
</blockquote>
<p>I believe the quote above, taken from the official documentation <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/process/service-containers?view=azure-devops&tabs=yaml">page</a>, is pretty clear: Azure Pipelines will manage the containers; you just have to declare them inside the pipeline YAML file in a <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/yaml-schema?view=azure-devops&tabs=schema#container-resource">resources</a> section, then one or more jobs will declare a <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/process/service-containers?view=azure-devops&tabs=yaml#single-container-job">service</a> mapped to this container.</p>
<p><strong>IMPORTANT</strong>: The source code used by this approach can be found inside the <a href="https://github.com/satrapu/aspnet-core-logging/tree/feature/use-service-containers">feature/use-service-containers</a> branch.</p>
<h4 id="declare-containers">Declare Docker containers</h4>
<p>The YAML file representing the pipeline needs to declare 2 containers, each pointing to a Docker image targeting a particular operating system (one for Linux, one for Windows), as explained in the table <a href="#db-for-azure-pipelines">above</a>.</p>
<p>Declaring the containers with their images, ports et al. is done like this:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1"># Fragment found in "azure-pipelines.yml" file</span>
<span class="nn">...</span>
<span class="na">resources</span><span class="pi">:</span>
<span class="na">containers</span><span class="pi">:</span>
<span class="pi">-</span> <span class="na">container</span><span class="pi">:</span> <span class="s1">'</span><span class="s">postgres_linux_container_for_integration_tests'</span>
<span class="na">image</span><span class="pi">:</span> <span class="s1">'</span><span class="s">postgres:11.3-alpine'</span>
<span class="na">ports</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s">9999:5432/tcp</span>
<span class="na">env</span><span class="pi">:</span>
<span class="na">POSTGRES_DB</span><span class="pi">:</span> <span class="s">$(IntegrationTests.Database.Todo.Name)</span>
<span class="na">POSTGRES_USER</span><span class="pi">:</span> <span class="s">$(IntegrationTests.Database.Todo.Username)</span>
<span class="na">POSTGRES_PASSWORD</span><span class="pi">:</span> <span class="s">$(IntegrationTests.Database.Todo.Password)</span>
<span class="pi">-</span> <span class="na">container</span><span class="pi">:</span> <span class="s1">'</span><span class="s">postgres_windows_container_for_integration_tests'</span>
<span class="na">image</span><span class="pi">:</span> <span class="s1">'</span><span class="s">stellirin/postgres-windows:11.3'</span>
<span class="na">ports</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s">5432/tcp</span>
<span class="na">env</span><span class="pi">:</span>
<span class="na">POSTGRES_DB</span><span class="pi">:</span> <span class="s">$(IntegrationTests.Database.Todo.Name)</span>
<span class="na">POSTGRES_USER</span><span class="pi">:</span> <span class="s">$(IntegrationTests.Database.Todo.Username)</span>
<span class="na">POSTGRES_PASSWORD</span><span class="pi">:</span> <span class="s">$(IntegrationTests.Database.Todo.Password)</span>
</code></pre></div></div>
<p>Please notice the 2 different ways the PostgreSQL port is exposed to the Docker host - see <a href="#using-container-port">this section</a> for the full explanation.<br />
The environment variables accompanying the containers point to variables declared inside the <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml">variables</a> section from the pipeline file, as seen <a href="https://github.com/satrapu/aspnet-core-logging/blob/232a3d12fe71f427221f0d2f602d41c4bd93ac2b/Build/azure-pipelines.yml#L108">here</a>.</p>
<p>This is how the <a href="https://crossprogramming.com/2019/03/17/build-asp-net-core-app-using-azure-pipelines.html#use-templates">template jobs</a> know about the Docker ports:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1"># Fragment found in "azure-pipelines.yml" file</span>
<span class="nn">...</span>
<span class="na">jobs</span><span class="pi">:</span>
<span class="pi">-</span> <span class="na">template</span><span class="pi">:</span> <span class="s1">'</span><span class="s">./azure-pipelines.job-template.yml'</span>
<span class="na">parameters</span><span class="pi">:</span>
<span class="na">job</span><span class="pi">:</span>
<span class="na">name</span><span class="pi">:</span> <span class="s1">'</span><span class="s">linux'</span>
<span class="na">displayName</span><span class="pi">:</span> <span class="s1">'</span><span class="s">Build</span><span class="nv"> </span><span class="s">on</span><span class="nv"> </span><span class="s">Linux'</span>
<span class="na">pool</span><span class="pi">:</span>
<span class="c1"># Need a VM capable of running Linux containers</span>
<span class="na">vmImage</span><span class="pi">:</span> <span class="s1">'</span><span class="s">ubuntu-16.04'</span>
<span class="na">services</span><span class="pi">:</span>
<span class="na">db4it</span><span class="pi">:</span>
<span class="na">containerName</span><span class="pi">:</span> <span class="s1">'</span><span class="s">postgres_linux_container_for_integration_tests'</span>
<span class="na">databaseConnectionString</span><span class="pi">:</span> <span class="pi">>-</span>
<span class="s">Host=localhost;</span>
<span class="s">Port=9999;</span>
<span class="s">Database=$(IntegrationTests.Database.Todo.Name);</span>
<span class="s">Username=$(IntegrationTests.Database.Todo.Username);</span>
<span class="s">Password=$(IntegrationTests.Database.Todo.Password);</span>
<span class="s">...</span>
<span class="pi">-</span> <span class="na">template</span><span class="pi">:</span> <span class="s1">'</span><span class="s">./azure-pipelines.job-template.yml'</span>
<span class="na">parameters</span><span class="pi">:</span>
<span class="na">job</span><span class="pi">:</span>
<span class="na">name</span><span class="pi">:</span> <span class="s1">'</span><span class="s">windows'</span>
<span class="na">displayName</span><span class="pi">:</span> <span class="s1">'</span><span class="s">Build</span><span class="nv"> </span><span class="s">on</span><span class="nv"> </span><span class="s">Windows'</span>
<span class="na">pool</span><span class="pi">:</span>
<span class="c1"># Need a VM capable of running Windows containers</span>
<span class="na">vmImage</span><span class="pi">:</span> <span class="s1">'</span><span class="s">windows-2019'</span>
<span class="na">services</span><span class="pi">:</span>
<span class="na">db4it</span><span class="pi">:</span>
<span class="na">containerName</span><span class="pi">:</span> <span class="s1">'</span><span class="s">postgres_windows_container_for_integration_tests'</span>
<span class="na">databaseConnectionString</span><span class="pi">:</span> <span class="pi">>-</span>
<span class="s">Host=localhost;</span>
<span class="s">Port=$(Agent.Services.db4it.Ports.5432);</span>
<span class="s">Database=$(IntegrationTests.Database.Todo.Name);</span>
<span class="s">Username=$(IntegrationTests.Database.Todo.Username);</span>
<span class="s">Password=$(IntegrationTests.Database.Todo.Password);</span>
<span class="s">...</span>
<span class="nn">...</span>
</code></pre></div></div>
<p>The connection string stored inside <strong>databaseConnectionString</strong> job parameter follows <a href="https://www.npgsql.org/doc/connection-string-parameters.html">Npgsql format</a>.<br />
The database host is set to <strong>localhost</strong> since each build agent will start its own Docker container running a PostgreSQL database.<br />
Once again, please notice the 2 different ways the database port is referenced - see <a href="#using-container-port">this section</a> for the full explanation.</p>
<h4 id="mapping-a-service">Map a service to a Docker container</h4>
<p>The template job will make use of a service named <strong>db4it</strong> which is mapped to one of the previously defined containers:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1"># Fragment found in "azure-pipelines.job-template" file</span>
<span class="na">parameters</span><span class="pi">:</span>
<span class="na">job</span><span class="pi">:</span>
<span class="na">name</span><span class="pi">:</span> <span class="s1">'</span><span class="s">'</span>
<span class="na">displayName</span><span class="pi">:</span> <span class="s1">'</span><span class="s">'</span>
<span class="na">pool</span><span class="pi">:</span> <span class="s1">'</span><span class="s">'</span>
<span class="na">services</span><span class="pi">:</span>
<span class="na">db4it</span><span class="pi">:</span>
<span class="na">containerName</span><span class="pi">:</span> <span class="s1">'</span><span class="s">'</span>
<span class="na">databaseConnectionString</span><span class="pi">:</span> <span class="s1">'</span><span class="s">'</span>
<span class="na">build</span><span class="pi">:</span>
<span class="na">configuration</span><span class="pi">:</span> <span class="s1">'</span><span class="s">Release'</span>
<span class="s">...</span>
<span class="na">jobs</span><span class="pi">:</span>
<span class="pi">-</span> <span class="na">job</span><span class="pi">:</span> <span class="s">${{ parameters.job.name }}</span>
<span class="na">displayName</span><span class="pi">:</span> <span class="s">${{ parameters.job.displayName }}</span>
<span class="na">continueOnError</span><span class="pi">:</span> <span class="s">False</span>
<span class="na">pool</span><span class="pi">:</span> <span class="s">${{ parameters.pool }}</span>
<span class="na">workspace</span><span class="pi">:</span>
<span class="na">clean</span><span class="pi">:</span> <span class="s">all</span>
<span class="na">services</span><span class="pi">:</span>
<span class="c1"># The actual service name is provided as a parameter - see the above YAML fragment</span>
<span class="na">db4it</span><span class="pi">:</span> <span class="s">${{ parameters.services.db4it.containerName }}</span>
<span class="na">steps</span><span class="pi">:</span>
<span class="s">...</span>
</code></pre></div></div>
<h4 id="using-container-port">Use Docker container published port</h4>
<p>In order for the integration tests to be able to access the containerized database, I need to <a href="https://docs.docker.com/config/containers/container-networking/#published-ports">publish</a> the database port <strong>5432</strong> from within the Docker container to the Docker host (the virtual machine on which the job is being executed). Of course, you can publish more than one port per container.<br />
Publishing a port can be done in one of the following ways:</p>
<ul>
<li>Bind container port to a <strong>static host port</strong> (as done <a href="#declare-containers">above</a> when declaring the Linux container)
<ul>
<li>Pros
<ul>
<li>Very easy to configure</li>
<li>Very easy to refer to this port from within a pipeline job</li>
</ul>
</li>
<li>Cons
<ul>
<li>Possible conflicts, as the port may already be in use by a different process running on the host (e.g. a different containerized PostgreSQL database whose port 5432 has already been published)</li>
</ul>
</li>
<li>Usage: You bind the container port (e.g. 5432) to a static host port (e.g. 9999) and use the host port anywhere you need to interact with the service container</li>
</ul>
</li>
<li>Bind container port to a <strong>dynamic host port</strong> (as done <a href="#declare-containers">above</a> when declaring the Windows container)
<ul>
<li>Pros
<ul>
<li>Very easy to configure</li>
<li>No more conflicts, as the Docker engine will automatically pick an available host port</li>
</ul>
</li>
<li>Cons
<ul>
<li>Not so easy to refer to this port from within a pipeline job</li>
</ul>
</li>
<li>Usage: You bind the container port (e.g. 5432) to a dynamic host port and use a <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/process/service-containers?view=azure-devops&tabs=yaml#ports">special way</a> of referencing the host port anywhere you need to interact with the service container
<ul>
<li>Example #1: Based on the aforementioned naming convention, since my service container is named <strong>db4it</strong> and the container port is <strong>5432</strong>, the build variable pointing to the dynamic host port will be named: <strong>Agent.Services.db4it.Ports.5432</strong></li>
<li>Example #2: Assuming I declared a SQL Server based service container named <strong>mssqldb4it</strong> and since the default port for this containerized database is <strong>1433</strong>, the build variable would be named: <strong>Agent.Services.mssqldb4it.Ports.1433</strong></li>
</ul>
</li>
</ul>
</li>
</ul>
<p>For the sake of both learning and teaching, I have used <em>both</em> approaches; on the other hand, I recommend always choosing <em>binding to a dynamic host port</em> to avoid any port conflicts.</p>
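<p>The naming convention behind the dynamic-port build variable, illustrated by the two examples above, can be captured in a tiny helper. The function name below is mine; the <code class="language-plaintext highlighter-rouge">Agent.Services.&lt;service&gt;.Ports.&lt;port&gt;</code> scheme is the one documented for service containers:</p>

```shell
# Derive the name of the build variable Azure Pipelines creates for a
# service container port bound to a dynamic host port:
#   Agent.Services.<serviceName>.Ports.<containerPort>
service_port_variable() {
  service_name="$1"
  container_port="$2"
  printf 'Agent.Services.%s.Ports.%s\n' "$service_name" "$container_port"
}

service_port_variable db4it 5432        # Agent.Services.db4it.Ports.5432
service_port_variable mssqldb4it 1433   # Agent.Services.mssqldb4it.Ports.1433
```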
<!-- markdownlint-disable MD033 -->
<h3 id="self-managed-docker-containers">Use self-managed Docker containers</h3>
<!-- markdownlint-disable MD033 -->
<p>As stated <a href="#docker-in-azure-pipelines">above</a>, service containers work only on Linux and Windows based agents, so in order to run a Docker container on a macOS based agent, I had to find a different approach.
I remembered seeing that <a href="https://github.com/actions/virtual-environments/blob/master/images/linux/Ubuntu1604-README.md">Linux</a> and <a href="https://github.com/actions/virtual-environments/blob/master/images/win/Windows2019-Readme.md">Windows</a> based agents come with Docker pre-installed, and that made me think: if I succeed in installing Docker on a macOS-based agent, I’ll be able to manually pull a Docker image and run a container using some hand-crafted scripts.</p>
<p><strong>IMPORTANT</strong>: The source code used by this approach can be found inside the <a href="https://github.com/satrapu/aspnet-core-logging/tree/feature/use-self-managed-docker-containers">feature/use-self-managed-docker-containers</a> branch.</p>
<!-- markdownlint-disable MD033 -->
<h4 id="run-docker-on-macos">Install and run Docker on macOS</h4>
<!-- markdownlint-disable MD033 -->
<p>Due to license constraints, as explained inside a GitHub issue <a href="https://github.com/microsoft/azure-pipelines-image-generation/issues/738#issuecomment-516481268">here</a> and <a href="https://github.com/microsoft/azure-pipelines-image-generation/issues/738#issuecomment-519571491">here</a>, Microsoft cannot provision macOS based agents with Docker. However, the same GitHub issue provides a <a href="https://github.com/microsoft/azure-pipelines-image-generation/issues/738#issuecomment-496211237">nice little script</a> one can use to install Docker on such agents. That script has its own issues and doesn’t always work, as I have <a href="https://github.com/microsoft/azure-pipelines-image-generation/issues/738#issuecomment-506980416">discovered</a>, but <a href="https://github.com/microsoft/azure-pipelines-image-generation/issues/738#issuecomment-527013065">this one</a> seems to work with a higher rate of success, even though it installs an older version of Docker, 2.0.0.3, released on January 16th 2019 (the latest version at the time of writing this post is 2.1.0.5, released on November 18th 2019).<br />
Since Docker is already present on Linux and Windows agents, I need to run the aforementioned script only when the current agent runs macOS:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="pi">-</span> <span class="na">script</span><span class="pi">:</span> <span class="pi">|</span>
<span class="s">chmod +x $(Build.SourcesDirectory)/Build/start-docker-on-macOS.sh</span>
<span class="s">$(Build.SourcesDirectory)/Build/start-docker-on-macOS.sh</span>
<span class="na">name</span><span class="pi">:</span> <span class="s">install_and_start_docker</span>
<span class="na">displayName</span><span class="pi">:</span> <span class="s">Install and start Docker</span>
<span class="na">condition</span><span class="pi">:</span> <span class="pi">|</span>
<span class="s">and</span>
<span class="s">(</span>
<span class="s">succeeded()</span>
<span class="s">, eq( variables['Agent.OS'], 'Darwin')</span>
<span class="s">)</span>
</code></pre></div></div>
<p>Please note the <em>eq( variables[‘Agent.OS’], ‘Darwin’)</em> condition used for ensuring the <a href="https://github.com/satrapu/aspnet-core-logging/blob/8176a9569da56934f83e01b37648d58300198d1e/Build/start-docker-on-macOS.sh#L1">start-docker-on-macOS.sh</a> script runs on macOS only.<br />
This shell script uses the <a href="https://brew.sh/">Homebrew</a> package manager for macOS to download a <a href="https://github.com/Homebrew/homebrew-cask/blob/8ce4e89d10716666743b28c5a46cd54af59a9cc2/Casks/docker.rb">particular version</a> of the <a href="https://formulae.brew.sh/cask/docker">Docker Desktop Community Edition cask</a>; after downloading the installation media file, the script installs the Docker service in unattended mode, starts it, and then periodically polls its status to check whether it has started. Polling is performed every 5 seconds, up to 30 times, before concluding that the Docker service hasn’t started and thus failing this build step and the entire Azure DevOps pipeline.</p>
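<p>The install-and-poll logic described above boils down to a generic retry loop. Below is a minimal shell sketch of such a loop; the <em>check command</em> is a placeholder for the actual probe of the Docker service status, and the function name is illustrative:</p>

```shell
#!/usr/bin/env sh
# Generic readiness polling: run a check command until it succeeds,
# sleeping between attempts, and give up after a maximum number of tries.
# The check command is a placeholder for the real probe, e.g. querying
# the Docker service status on the macOS agent.
wait_until_ready() {
  check_command="$1"
  sleep_seconds="$2"
  max_tries="$3"
  attempt=1
  while [ "$attempt" -le "$max_tries" ]; do
    if eval "$check_command" >/dev/null 2>&1; then
      echo "Ready after $attempt attempt(s)"
      return 0
    fi
    sleep "$sleep_seconds"
    attempt=$((attempt + 1))
  done
  echo "Not ready after $max_tries attempts" >&2
  return 1
}

# Example: a probe that succeeds immediately.
wait_until_ready "true" 0 3
```

<p>In the actual build step, the sleeping time would be 5 seconds and the maximum number of tries 30, matching the behavior described above.</p>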
<!-- markdownlint-disable MD033 -->
<h4 id="run-dockerized-database">Run dockerized database</h4>
<!-- markdownlint-disable MD033 -->
<p>Running the database to be targeted by the integration tests in a Docker container requires several steps:</p>
<ul>
<li>Pulling the appropriate Docker image</li>
<li>Starting a Docker container based on this image</li>
<li>Checking that the database is ready to accept incoming connections</li>
<li>Identifying the Docker host port mapped to the container port (the default PostgreSQL port 5432)</li>
<li>Ensuring the PostgreSQL connection string used by the integration tests knows about this host port</li>
</ul>
<p>Once my Azure DevOps pipeline has finished running the integration tests, there’s no need to remove the Docker container and its image, since I’m using <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/agents?view=azure-devops&tabs=browser#microsoft-hosted-agents">Microsoft-hosted agents</a>, which have their state refreshed before each build.<br />
In case of using <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/agents?view=azure-devops&tabs=browser#install">self-hosted agents</a>, I would remove only the container and not the image, to ensure subsequent builds don’t pay the price of pulling the Docker image again; additionally, one could periodically run a script on such agents to remove old Docker images so that disk space is not wasted.</p>
<!-- markdownlint-disable MD033 -->
<h4 id="check-database-ready-state">Check database ready-state</h4>
<!-- markdownlint-disable MD033 -->
<p>The Docker server has no way of knowing what’s inside each container it runs, so it cannot wait for the dockerized database to reach its <em>ready state</em>, i.e. the point where it can handle incoming connections. If one runs the integration tests right after starting the dockerized database, the tests will most likely fail, since the database bootstrapping process hasn’t completed yet and there is no database to connect to. To avoid this issue, one must ensure the tests run <em>after</em> the database has reached its ready state.<br />
There are several ways of achieving this goal; this post presents two of them: <a href="#check-database-ready-state-using-log-polling">using log-polling</a> and <a href="#check-database-ready-state-using-healthcheck">using Docker healthcheck</a>.</p>
<!-- markdownlint-disable MD033 -->
<h4 id="check-database-ready-state-using-log-polling">Check database ready-state using log-polling</h4>
<!-- markdownlint-disable MD033 -->
<p>This approach means starting the dockerized database and periodically checking its logs for a particular line which is printed once the database has reached its ready state.<br />
To my surprise, each PostgreSQL Docker image I used in this post prints a different log message to signal that the database has reached its ready state:</p>
<ul>
<li>Linux container prints <code class="language-plaintext highlighter-rouge">database system is ready to accept connections</code></li>
<li>Windows container prints <code class="language-plaintext highlighter-rouge">PostgreSQL init process complete; ready for start up</code></li>
</ul>
<p>This difference doesn’t pose much of a challenge, though: since I’m using job templates, I can easily parameterize the log message when invoking the script used for log-polling.<br />
The <a href="https://github.com/satrapu/aspnet-core-logging/blob/65f8f8d8b81432c45197dea5dbdf0bb4711dd5dd/Build/azure-pipelines.job-template.yml#L168">azure-pipelines.job-template.yml</a> file contains a build step which invokes a PowerShell script for checking container logs:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1"># Runs a PowerShell script to start a Docker container hosting the database</span>
<span class="c1"># to be targeted by the integration tests.</span>
<span class="c1"># Checking whether the database is ready for processing incoming queries is done</span>
<span class="c1"># using Docker logs command (https://docs.docker.com/engine/reference/commandline/logs/).</span>
<span class="pi">-</span> <span class="na">task</span><span class="pi">:</span> <span class="s">PowerShell@2</span>
<span class="na">name</span><span class="pi">:</span> <span class="s">provision_db4it_docker_container_using_log_polling</span>
<span class="na">displayName</span><span class="pi">:</span> <span class="s">Provision db4it Docker container using log-polling</span>
<span class="na">inputs</span><span class="pi">:</span>
<span class="na">targetType</span><span class="pi">:</span> <span class="s1">'</span><span class="s">filePath'</span>
<span class="na">filePath</span><span class="pi">:</span> <span class="s1">'</span><span class="s">$(Build.SourcesDirectory)/Build/Provision-Docker-container-using-log-polling.ps1'</span>
<span class="na">arguments</span><span class="pi">:</span> <span class="pi">>-</span>
<span class="s">-DockerImageName '${{ parameters.db4it.dockerImage }}'</span>
<span class="s">-DockerImageTag '${{ parameters.db4it.dockerImageTag }}'</span>
<span class="s">-ContainerName '${{ parameters.db4it.dockerContainerName }}'</span>
<span class="s">-PortMapping '${{ parameters.db4it.dockerPortMapping }}'</span>
<span class="s">-DockerHostPortBuildVariableName '${{ parameters.db4it.dockerHostPortBuildVariableName}}'</span>
<span class="s">-ContainerEnvironmentVariables '${{ parameters.db4it.dockerContainerEnvironmentVariables }}'</span>
<span class="s">-ContainerLogPatternForDatabaseReady '${{ parameters.db4it.dockerContainerLogPatternForDatabaseReady }}'</span>
<span class="s">-SleepingTimeInMillis 250</span>
<span class="s">-MaxNumberOfTries 120</span>
<span class="na">errorActionPreference</span><span class="pi">:</span> <span class="s1">'</span><span class="s">stop'</span>
<span class="na">failOnStderr</span><span class="pi">:</span> <span class="s">True</span>
<span class="na">workingDirectory</span><span class="pi">:</span> <span class="s">$(Build.SourcesDirectory)</span>
<span class="na">condition</span><span class="pi">:</span> <span class="pi">|</span>
<span class="s">and</span>
<span class="s">(</span>
<span class="s">succeeded()</span>
<span class="s">, eq( '${{ parameters.db4it.databaseReadinessStrategy }}', 'log-polling')</span>
<span class="s">)</span>
</code></pre></div></div>
<p>Please note that this build step runs only if the job parameter <strong>parameters.db4it.databaseReadinessStrategy</strong> is set to the <strong>log-polling</strong> value.<br />
The <a href="https://github.com/satrapu/aspnet-core-logging/blob/feature/use-self-managed-docker-containers/Build/Provision-Docker-container-using-log-polling.ps1">PowerShell script</a> checks the container logs via the <a href="https://docs.docker.com/engine/reference/commandline/logs/">docker logs</a> command:</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="o">...</span><span class="w">
</span><span class="nv">$isDatabaseReady</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">docker</span><span class="w"> </span><span class="nx">logs</span><span class="w"> </span><span class="nt">--tail</span><span class="w"> </span><span class="nx">50</span><span class="w"> </span><span class="nv">$ContainerName</span><span class="w"> </span><span class="nx">2</span><span class="err">></span><span class="o">&</span><span class="nx">1</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">Select-String</span><span class="w"> </span><span class="nt">-Pattern</span><span class="w"> </span><span class="nv">$ContainerLogPatternForDatabaseReady</span><span class="w"> </span><span class="nt">-SimpleMatch</span><span class="w"> </span><span class="nt">-Quiet</span><span class="w">
</span><span class="kr">if</span><span class="w"> </span><span class="p">(</span><span class="nv">$isDatabaseReady</span><span class="w"> </span><span class="o">-eq</span><span class="w"> </span><span class="bp">$true</span><span class="p">)</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="n">Write-Output</span><span class="w"> </span><span class="s2">"</span><span class="se">`n`n</span><span class="s2">Database running inside container ""</span><span class="nv">$ContainerName</span><span class="s2">"" is ready to accept incoming connections"</span><span class="w">
</span><span class="o">...</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="o">...</span><span class="w">
</span></code></pre></div></div>
<p>In the above PowerShell script, I check the last 50 lines of the Docker container log to see whether they contain the expected line signaling that the database has reached its ready state. Of course, the script does a lot more, but these lines are the most important ones with regard to the log-polling approach.</p>
<!-- markdownlint-disable MD033 -->
<h4 id="check-database-ready-state-using-healthcheck">Check database ready-state using Docker healthcheck</h4>
<!-- markdownlint-disable MD033 -->
<p>This approach means starting the dockerized database and periodically checking its Docker health state, which relies either on the <a href="https://docs.docker.com/engine/reference/builder/#healthcheck">HEALTHCHECK instruction</a> added inside the <a href="https://docs.docker.com/engine/reference/builder/">Dockerfile</a> used for building the database Docker image, or on the <a href="https://docs.docker.com/engine/reference/run/#healthcheck">health checking command</a> used when starting the Docker container. Health checks are available starting with <a href="https://docs.docker.com/release-notes/docker-engine/#1120-2016-07-28">Docker v1.12</a>.<br />
Since neither the PostgreSQL <a href="https://github.com/docker-library/postgres/blob/0d0485cb02e526f5a240b7740b46c35404aaf13f/12/Dockerfile">Linux Dockerfile</a> nor the <a href="https://github.com/stellirin/docker-postgres-windows/blob/e41cdc60ca318ec218168e5cb1c51fa33e8be8e0/Dockerfile">Windows Dockerfile</a> makes use of the <strong>HEALTHCHECK</strong> instruction, I can use the health check command instead and consider that the container has reached its healthy state based on the outcome of the PostgreSQL <a href="https://www.postgresql.org/docs/12/app-pg-isready.html">pg_isready</a> utility:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1"># Fragment taken from azure-pipelines.yml file.</span>
<span class="c1"># See more here: https://github.com/satrapu/aspnet-core-logging/blob/65f8f8d8b81432c45197dea5dbdf0bb4711dd5dd/Build/azure-pipelines.yml#L152.</span>
<span class="nn">...</span>
<span class="pi">-</span> <span class="na">template</span><span class="pi">:</span> <span class="s1">'</span><span class="s">./azure-pipelines.job-template.yml'</span>
<span class="na">parameters</span><span class="pi">:</span>
<span class="na">job</span><span class="pi">:</span>
<span class="na">name</span><span class="pi">:</span> <span class="s1">'</span><span class="s">macOS'</span>
<span class="na">displayName</span><span class="pi">:</span> <span class="s1">'</span><span class="s">Run</span><span class="nv"> </span><span class="s">on</span><span class="nv"> </span><span class="s">macOS'</span>
<span class="na">pool</span><span class="pi">:</span>
<span class="na">vmImage</span><span class="pi">:</span> <span class="s1">'</span><span class="s">macOS-10.14'</span>
<span class="na">db4it</span><span class="pi">:</span>
<span class="na">dockerImage</span><span class="pi">:</span> <span class="s1">'</span><span class="s">postgres'</span>
<span class="na">dockerImageTag</span><span class="pi">:</span> <span class="s1">'</span><span class="s">12-alpine'</span>
<span class="s">...</span>
<span class="na">dockerContainerHealthcheckCommand</span><span class="pi">:</span> <span class="pi">>-</span>
<span class="s">pg_isready</span>
<span class="s">--host=localhost</span>
<span class="s">--port=5432</span>
<span class="s">--dbname=$(IntegrationTests.Database.Todo.Name)</span>
<span class="s">--username=$(IntegrationTests.Database.Todo.Username)</span>
<span class="s">--quiet</span>
<span class="s">...</span>
</code></pre></div></div>
<p>The <strong>pg_isready</strong> utility checks whether the PostgreSQL server running on <em>localhost</em> (the database server is running as the top-most process inside the Docker container) and listening on port <em>5432</em> (the default PostgreSQL port) is ready to accept incoming connections.<br />
The <a href="https://github.com/satrapu/aspnet-core-logging/blob/65f8f8d8b81432c45197dea5dbdf0bb4711dd5dd/Build/azure-pipelines.job-template.yml#L198">azure-pipelines.job-template.yml</a> file contains a build step which invokes a PowerShell script for checking the container health state:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1"># Runs a PowerShell script to start a Docker container hosting the database</span>
<span class="c1"># to be targeted by the integration tests.</span>
<span class="c1"># Checking whether the database is ready for processing incoming queries is done</span>
<span class="c1"># using Docker healthcheck support (https://docs.docker.com/engine/reference/run/#healthcheck).</span>
<span class="pi">-</span> <span class="na">task</span><span class="pi">:</span> <span class="s">PowerShell@2</span>
<span class="na">name</span><span class="pi">:</span> <span class="s">provision_db4it_docker_container_using_healthcheck</span>
<span class="na">displayName</span><span class="pi">:</span> <span class="s">Provision db4it Docker container using healthcheck</span>
<span class="na">inputs</span><span class="pi">:</span>
<span class="na">targetType</span><span class="pi">:</span> <span class="s1">'</span><span class="s">filePath'</span>
<span class="na">filePath</span><span class="pi">:</span> <span class="s1">'</span><span class="s">$(Build.SourcesDirectory)/Build/Provision-Docker-container-using-healthcheck.ps1'</span>
<span class="na">arguments</span><span class="pi">:</span> <span class="pi">>-</span>
<span class="s">-DockerImageName '${{ parameters.db4it.dockerImage }}'</span>
<span class="s">-DockerImageTag '${{ parameters.db4it.dockerImageTag }}'</span>
<span class="s">-ContainerName '${{ parameters.db4it.dockerContainerName }}'</span>
<span class="s">-PortMapping '${{ parameters.db4it.dockerPortMapping }}'</span>
<span class="s">-DockerHostPortBuildVariableName '${{ parameters.db4it.dockerHostPortBuildVariableName}}'</span>
<span class="s">-ContainerEnvironmentVariables '${{ parameters.db4it.dockerContainerEnvironmentVariables }}'</span>
<span class="s">-HealthCheckCommand '${{ parameters.db4it.dockerContainerHealthcheckCommand }}'</span>
<span class="s">-HealthCheckIntervalInMilliseconds 250</span>
<span class="s">-MaxNumberOfTries 120</span>
<span class="na">errorActionPreference</span><span class="pi">:</span> <span class="s1">'</span><span class="s">stop'</span>
<span class="na">failOnStderr</span><span class="pi">:</span> <span class="s">True</span>
<span class="na">workingDirectory</span><span class="pi">:</span> <span class="s">$(Build.SourcesDirectory)</span>
<span class="na">condition</span><span class="pi">:</span> <span class="pi">|</span>
<span class="s">and</span>
<span class="s">(</span>
<span class="s">succeeded()</span>
<span class="s">, eq( '${{ parameters.db4it.databaseReadinessStrategy }}', 'healthcheck')</span>
<span class="s">)</span>
</code></pre></div></div>
<p>Please note that this build step runs only if the job parameter <strong>parameters.db4it.databaseReadinessStrategy</strong> is set to the <strong>healthcheck</strong> value.<br />
The <a href="https://github.com/satrapu/aspnet-core-logging/blob/65f8f8d8b81432c45197dea5dbdf0bb4711dd5dd/Build/Provision-Docker-container-using-healthcheck.ps1#L55">PowerShell script</a> used for running the Docker container needs to specify the aforementioned health check command when invoking the <code class="language-plaintext highlighter-rouge">docker container run</code> command:</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="o">...</span><span class="w">
</span><span class="n">Write-Output</span><span class="w"> </span><span class="s2">"Starting Docker container '</span><span class="nv">$ContainerName</span><span class="s2">' ..."</span><span class="w">
</span><span class="n">Invoke-Expression</span><span class="w"> </span><span class="nt">-Command</span><span class="w"> </span><span class="s2">"docker container run --name </span><span class="nv">$ContainerName</span><span class="s2"> --health-cmd '</span><span class="nv">$HealthCheckCommand</span><span class="s2">' --health-interval </span><span class="nv">${healthCheckIntervalInSeconds}</span><span class="s2">s --detach --publish </span><span class="nv">${PortMapping}</span><span class="s2"> </span><span class="nv">$ContainerEnvironmentVariables</span><span class="s2"> </span><span class="nv">${DockerImageName}</span><span class="s2">:</span><span class="nv">${DockerImageTag}</span><span class="s2">"</span><span class="w"> </span><span class="nx">1</span><span class="err">></span><span class="bp">$null</span><span class="w">
</span><span class="n">Write-Output</span><span class="w"> </span><span class="s2">"Docker container '</span><span class="nv">$ContainerName</span><span class="s2">' has been started"</span><span class="w">
</span><span class="o">...</span><span class="w">
</span></code></pre></div></div>
<p>Checking the container health state is done via <a href="https://docs.docker.com/engine/reference/commandline/inspect/">docker inspect</a> command:</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="o">...</span><span class="w">
</span><span class="nv">$isDatabaseReady</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">docker</span><span class="w"> </span><span class="nx">inspect</span><span class="w"> </span><span class="nv">$ContainerName</span><span class="w"> </span><span class="nt">--format</span><span class="w"> </span><span class="s2">"{{.State.Health.Status}}"</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">Select-String</span><span class="w"> </span><span class="nt">-Pattern</span><span class="w"> </span><span class="s1">'healthy'</span><span class="w"> </span><span class="nt">-SimpleMatch</span><span class="w"> </span><span class="nt">-Quiet</span><span class="w">
</span><span class="kr">if</span><span class="w"> </span><span class="p">(</span><span class="nv">$isDatabaseReady</span><span class="w"> </span><span class="o">-eq</span><span class="w"> </span><span class="bp">$true</span><span class="p">)</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="n">Write-Output</span><span class="w"> </span><span class="s2">"</span><span class="se">`n`n</span><span class="s2">Database running inside container ""</span><span class="nv">$ContainerName</span><span class="s2">"" is ready to accept incoming connections"</span><span class="w">
</span><span class="o">...</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="o">...</span><span class="w">
</span></code></pre></div></div>
<p>In the above PowerShell script, I inspect the Docker container low-level information, extract just the <strong>State.Health.Status</strong> property, and check whether its value is the <em>healthy</em> string; if it is, the container has entered the healthy state, meaning the database has reached its ready state. Of course, the script does a lot more, but these lines are the most important ones with regard to the health-checking approach.</p>
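<p>For completeness: had the PostgreSQL Dockerfiles declared a health check themselves, no health check command would be needed at container start time. A hypothetical <strong>HEALTHCHECK</strong> instruction based on the same <em>pg_isready</em> probe might look like this (the interval, timeout, and retry values are illustrative):</p>

```dockerfile
# Hypothetical addition to a PostgreSQL Dockerfile: probe database readiness
# every 5 seconds; mark the container as unhealthy after 30 failed retries.
HEALTHCHECK --interval=5s --timeout=3s --retries=30 \
  CMD pg_isready --host=localhost --port=5432 || exit 1
```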
<!-- markdownlint-disable MD033 -->
<h4 id="identify-database-port">Identify database port</h4>
<!-- markdownlint-disable MD033 -->
<p>Service containers <a href="#using-container-port">offer</a> the means of identifying the Docker host ports associated with a container, but when using self-managed containers, one must identify these ports oneself.<br />
Both aforementioned PowerShell scripts used for running Docker containers also include the logic for identifying the Docker host ports by making use of the <a href="https://docs.docker.com/engine/reference/commandline/port/">docker port</a> command.<br />
This command takes the container name and the Docker container port and returns the host port:</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">docker</span><span class="w"> </span><span class="nx">port</span><span class="w"> </span><span class="nx">db4it</span><span class="w"> </span><span class="nx">5432/tcp</span><span class="w">
</span><span class="c"># 0.0.0.0:50108</span><span class="w">
</span></code></pre></div></div>
<p>In the above command, I’m asking the Docker server to provide the host port allocated for the container named <em>db4it</em> which processes <em>TCP</em> packets received on port <em>5432</em> - the response is <em>50108</em>.<br />
This means any process running on that Docker host needs to communicate with the dockerized process via port 50108 - thus, the <code class="language-plaintext highlighter-rouge">dotnet test</code> command used for running the integration tests needs to use a <a href="https://www.npgsql.org/doc/connection-string-parameters.html">PostgreSQL connection string</a> where the port is set to <strong>50108</strong>:</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">Host</span><span class="o">=</span><span class="n">localhost</span><span class="p">;</span><span class="w"> </span><span class="n">Port</span><span class="o">=</span><span class="mi">50108</span><span class="p">;</span><span class="w"> </span><span class="n">Database</span><span class="o">=</span><span class="n">db4it</span><span class="p">;</span><span class="w"> </span><span class="n">Username</span><span class="o">=</span><span class="n">satrapu</span><span class="p">;</span><span class="w"> </span><span class="n">Password</span><span class="o">=***</span><span class="p">;</span><span class="w">
</span></code></pre></div></div>
<p>Both log-polling and healthcheck related PowerShell scripts make use of a parameter named <em>$PortMapping</em> used for <a href="https://docs.docker.com/config/containers/container-networking/#published-ports">publishing ports</a>; knowing the Docker container port (e.g. 5432), one can identify the Docker host port:</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="o">...</span><span class="w">
</span><span class="nv">$dockerContainerPort</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nv">$PortMapping</span><span class="w">
</span><span class="kr">if</span><span class="w"> </span><span class="p">(</span><span class="nv">$PortMapping</span><span class="w"> </span><span class="o">-like</span><span class="w"> </span><span class="s1">'*:*'</span><span class="p">)</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="nv">$dockerContainerPort</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nv">$PortMapping</span><span class="w"> </span><span class="o">-split</span><span class="w"> </span><span class="s1">':'</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">Select-Object</span><span class="w"> </span><span class="nt">-Skip</span><span class="w"> </span><span class="nx">1</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="nv">$dockerHostPort</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">docker</span><span class="w"> </span><span class="nx">port</span><span class="w"> </span><span class="nv">$ContainerName</span><span class="w"> </span><span class="nv">$dockerContainerPort</span><span class="w">
</span><span class="nv">$dockerHostPort</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nv">$dockerHostPort</span><span class="w"> </span><span class="o">-split</span><span class="w"> </span><span class="s1">':'</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">Select-Object</span><span class="w"> </span><span class="nt">-Skip</span><span class="w"> </span><span class="nx">1</span><span class="w">
</span><span class="n">Write-Output</span><span class="w"> </span><span class="s2">"##vso[task.setvariable variable=</span><span class="nv">$DockerHostPortBuildVariableName</span><span class="s2">]</span><span class="nv">$dockerHostPort</span><span class="s2">"</span><span class="w">
</span><span class="o">...</span><span class="w">
</span></code></pre></div></div>
<p>Since the <code class="language-plaintext highlighter-rouge">docker port</code> command accepts various forms of Docker container ports, the above script must handle all of them:</p>
<table>
<thead>
<tr>
<th>$PortMapping Format</th>
<th>$PortMapping Example</th>
<th>Docker Command Example</th>
</tr>
</thead>
<tbody>
<tr>
<td>&lt;JUST_THE_CONTAINER_PORT&gt;</td>
<td>5432</td>
<td>docker port db4it <em>5432</em></td>
</tr>
<tr>
<td>&lt;CONTAINER_PORT_AND_NETWORK_PROTOCOL&gt;</td>
<td>5432/tcp</td>
<td>docker port db4it <em>5432/tcp</em></td>
</tr>
<tr>
<td>&lt;HOST_PORT&gt;:&lt;CONTAINER_PORT_AND_NETWORK_PROTOCOL&gt;</td>
<td>9876:5432/tcp</td>
<td>docker port db4it <em>5432/tcp</em></td>
</tr>
</tbody>
</table>
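<p>The container-port extraction shown above can also be sketched in plain shell (the original scripts are PowerShell; the function name below is illustrative). If the mapping contains a colon, everything after the first colon is the container port; otherwise the mapping itself is the container port:</p>

```shell
#!/usr/bin/env sh
# Extract the container port (optionally suffixed with the network protocol)
# from a Docker port mapping, mirroring the PowerShell logic above:
# keep everything after the first colon, if any.
container_port_from_mapping() {
  port_mapping="$1"
  case "$port_mapping" in
    *:*) echo "${port_mapping#*:}" ;;
    *)   echo "$port_mapping" ;;
  esac
}

container_port_from_mapping "5432"          # -> 5432
container_port_from_mapping "5432/tcp"      # -> 5432/tcp
container_port_from_mapping "9876:5432/tcp" # -> 5432/tcp
```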
<p><strong>IMPORTANT:</strong> The above script fragment makes use of Azure DevOps <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/scripts/logging-commands?view=azure-devops&tabs=powershell">logging commands</a> to <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/scripts/logging-commands?view=azure-devops&tabs=powershell#setvariable-initialize-or-modify-the-value-of-a-variable">set a build variable</a> to the Docker host port so that subsequent build steps can use it - for instance, the build step which replaces the port placeholder found inside the database connection string with an actual port value. More precisely, the build variable whose name is stored inside the script parameter <em>$DockerHostPortBuildVariableName</em> will be set to the actual Docker host port (e.g. 50108).<br />
Please note that each build agent will map Docker container port 5432 to a different value!</p>
<!-- markdownlint-disable MD033 -->
<h4 id="update-database-connection-string">Update database connection string</h4>
<!-- markdownlint-disable MD033 -->
<p>Now that the Docker host port allocated to the dockerized database is known, I can update the database connection string so that its port is correctly set, by replacing the placeholder <em>__DockerHostPort__</em> with the value of the appropriate build variable:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nn">...</span>
<span class="c1"># Runs a PowerShell script to ensure the connection string pointing to the database</span>
<span class="c1"># to be targeted by the integration tests uses the appropriate port.</span>
<span class="pi">-</span> <span class="na">task</span><span class="pi">:</span> <span class="s">PowerShell@2</span>
<span class="na">inputs</span><span class="pi">:</span>
<span class="na">targetType</span><span class="pi">:</span> <span class="s1">'</span><span class="s">inline'</span>
<span class="na">errorActionPreference</span><span class="pi">:</span> <span class="s1">'</span><span class="s">stop'</span>
<span class="na">script</span><span class="pi">:</span> <span class="pi">|</span>
<span class="s">Write-Output "The Docker host port mapped to container '${{ parameters.db4it.dockerContainerName }}' is: $(${{ parameters.db4it.dockerHostPortBuildVariableName }})"</span>
<span class="s">$normalizedDatabaseConnectionString = "${{ parameters.db4it.databaseConnectionString.value }}" -replace '${{ parameters.db4it.databaseConnectionString.portPlaceholder }}', $(${{ parameters.db4it.dockerHostPortBuildVariableName }})</span>
<span class="s">Write-Output "##vso[task.setvariable variable=DatabaseConnectionStrings.Todo]$normalizedDatabaseConnectionString"</span>
<span class="s">Write-Output "The normalized database connection string is: $normalizedDatabaseConnectionString"</span>
<span class="na">name</span><span class="pi">:</span> <span class="s">normalize_db_connection_string_pointing_to_db4it</span>
<span class="na">displayName</span><span class="pi">:</span> <span class="s">Normalize database connection string pointing to db4it Docker container</span>
<span class="na">enabled</span><span class="pi">:</span> <span class="s">True</span>
<span class="nn">...</span>
</code></pre></div></div>
<p>The above <strong>$(${{ parameters.db4it.dockerHostPortBuildVariableName }})</strong> notation points to a build variable whose name is given by the job parameter <strong>parameters.db4it.dockerHostPortBuildVariableName</strong>. The same job parameter was used as an input parameter to the PowerShell scripts which run the dockerized databases, ensuring the Docker host port is passed from one build step to another without hard-coding a build variable name (yes, I know this looks like over-engineering, but I <em>really</em> wanted to experiment with Azure DevOps pipelines, especially with their build variables).</p>
<p>The above inline PowerShell script creates a new build variable, <em>DatabaseConnectionStrings.Todo</em>, and sets its value to the updated database connection string, e.g. <code class="language-plaintext highlighter-rouge">Host=localhost; Port=50108; Database=db4it; Username=satrapu; Password=***;</code>; this variable is then used by the next build step, which runs the integration tests.</p>
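<p>The normalization step itself boils down to a plain string replacement; a minimal Python equivalent (using an illustrative template and port value) looks like this:</p>

```python
# Connection string as stored by the pipeline, with the port placeholder
# still present (host, database and credentials below are illustrative).
template = "Host=localhost; Port=__DockerHostPort__; Database=db4it; Username=satrapu; Password=***;"

# Replace the placeholder with the Docker host port discovered earlier.
normalized = template.replace("__DockerHostPort__", "50108")
print(normalized)
```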
<!-- markdownlint-disable MD033 -->
<h2 id="run-integration-tests">Run integration tests</h2>
<!-- markdownlint-disable MD033 -->
<p>In order to run integration tests, I need to ensure the connection string pointing to the containerized PostgreSQL database is available as an environment variable under the name the code expects - and that is <strong>Todo</strong>, as seen below:</p>
<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">...</span>
<span class="n">services</span><span class="p">.</span><span class="n">AddDbContext</span><span class="p"><</span><span class="n">TodoDbContext</span><span class="p">>((</span><span class="n">serviceProvider</span><span class="p">,</span> <span class="n">dbContextOptionsBuilder</span><span class="p">)</span> <span class="p">=></span>
<span class="p">{</span>
<span class="kt">var</span> <span class="n">connectionString</span> <span class="p">=</span> <span class="n">Configuration</span><span class="p">.</span><span class="nf">GetConnectionString</span><span class="p">(</span><span class="s">"Todo"</span><span class="p">);</span>
<span class="n">dbContextOptionsBuilder</span><span class="p">.</span><span class="nf">UseNpgsql</span><span class="p">(</span><span class="n">connectionString</span><span class="p">)</span>
<span class="p">.</span><span class="nf">EnableSensitiveDataLogging</span><span class="p">()</span>
<span class="p">.</span><span class="nf">UseLoggerFactory</span><span class="p">(</span><span class="n">serviceProvider</span><span class="p">.</span><span class="n">GetRequiredService</span><span class="p"><</span><span class="n">ILoggerFactory</span><span class="p">>());</span>
<span class="p">});</span>
<span class="p">...</span>
</code></pre></div></div>
<p>The pipeline will run integration tests by using a <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/scripts/cross-platform-scripting?view=azure-devops&tabs=yaml">script</a> accompanied by the appropriate environment variables, among them being <strong>CONNECTIONSTRINGS__TODO</strong>:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="pi">-</span> <span class="na">script</span><span class="pi">:</span> <span class="pi">>-</span>
<span class="s">dotnet test $(Build.SourcesDirectory)/Todo.sln</span>
<span class="s">--no-build</span>
<span class="s">--configuration ${{ parameters.build.configuration }}</span>
<span class="s">--filter "Category=IntegrationTests"</span>
<span class="s">--test-adapter-path "."</span>
<span class="s">--logger "nunit"</span>
<span class="s">/p:CollectCoverage=True</span>
<span class="s">/p:CoverletOutputFormat=opencover</span>
<span class="s">/p:Include="[Todo.*]*"</span>
<span class="s">/p:Exclude="[Todo.*.*Tests]*"</span>
<span class="na">name</span><span class="pi">:</span> <span class="s">run_integration_tests</span>
<span class="na">displayName</span><span class="pi">:</span> <span class="s">Run integration tests</span>
<span class="na">enabled</span><span class="pi">:</span> <span class="s">True</span>
<span class="na">env</span><span class="pi">:</span>
<span class="na">DOTNET_SKIP_FIRST_TIME_EXPERIENCE</span><span class="pi">:</span> <span class="s">$(DotNetSkipFirstTimeExperience)</span>
<span class="na">DOTNET_CLI_TELEMETRY_OPTOUT</span><span class="pi">:</span> <span class="s">$(DotNetCliTelemetryOptOut)</span>
<span class="na">COREHOST_TRACE</span><span class="pi">:</span> <span class="s">$(CoreHostTrace)</span>
<span class="na">CONNECTIONSTRINGS__TODO</span><span class="pi">:</span> <span class="s">$(DatabaseConnectionStrings.Todo)</span>
</code></pre></div></div>
<p>In case you’re wondering why the environment variable is named <strong>CONNECTIONSTRINGS__TODO</strong> while the application expects a connection string named <strong>Todo</strong>: this is an ASP.NET Core naming convention, as detailed in the <a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/configuration/?view=aspnetcore-2.2#keys">official documentation</a>.</p>
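<p>The convention can be illustrated with a tiny Python sketch: the double underscore acts as a hierarchy separator, so the environment variable maps to the <code class="language-plaintext highlighter-rouge">ConnectionStrings:Todo</code> configuration key, which is exactly what <code class="language-plaintext highlighter-rouge">GetConnectionString("Todo")</code> reads (configuration keys are case-insensitive in ASP.NET Core):</p>

```python
def config_key_from_env_var(name: str) -> str:
    # ASP.NET Core replaces the double-underscore separator with a colon
    # when mapping environment variables to configuration keys.
    return name.replace("__", ":")

key = config_key_from_env_var("CONNECTIONSTRINGS__TODO")
# The resulting key matches "ConnectionStrings:Todo" case-insensitively.
print(key)
```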
<!-- markdownlint-disable MD033 -->
<h3 id="comparison">Service containers vs. self-managed containers</h3>
<!-- markdownlint-disable MD033 -->
<p>OK, so now we can use either <a href="#service-containers">service containers</a> or <a href="#self-managed-docker-containers">self-managed containers</a>, but when and why should we favor one over the other?<br />
Below I have assembled several scenarios and my personal view on the recommended solution for each, along with its trade-offs.</p>
<table>
<thead>
<tr>
<th>Scenario</th>
<th>Service Containers</th>
<th>Self-Managed Containers</th>
</tr>
</thead>
<tbody>
<tr>
<td>Use Microsoft-hosted agents</td>
<td>Recommended</td>
<td>Recommended*</td>
</tr>
<tr>
<td>Use self-hosted agents</td>
<td>Recommended**</td>
<td>Recommended</td>
</tr>
<tr>
<td>Favor simplicity</td>
<td>Recommended</td>
<td>Not recommended</td>
</tr>
<tr>
<td>Favor more control</td>
<td>Not recommended</td>
<td>Recommended</td>
</tr>
<tr>
<td>Favor fail fast builds***</td>
<td>Not recommended</td>
<td>Recommended</td>
</tr>
<tr>
<td>The build <strong>must</strong> run on macOS</td>
<td>Not supported</td>
<td>Recommended</td>
</tr>
<tr>
<td>I only use Windows based agents</td>
<td>Recommended</td>
<td>Recommended*</td>
</tr>
<tr>
<td>I only use Linux based agents</td>
<td>Recommended</td>
<td>Recommended*</td>
</tr>
<tr>
<td>I use both Linux and Windows based agents</td>
<td>Recommended</td>
<td>Recommended*</td>
</tr>
<tr>
<td>Whatever, just run some containers during the build</td>
<td>Recommended</td>
<td>Recommended*</td>
</tr>
</tbody>
</table>
<p>* Needs extra setup effort, as explained above, but keep in mind that using Docker Compose will greatly simplify things!<br />
** Linux and Windows based agents only<br />
*** A fail-fast build means I will not first pull a Docker image (usually a slow build step), like service containers do, only to then watch unit tests fail (usually a very fast build step); instead, I will run the unit tests and, <em>if</em> they pass, only then pull the Docker image and run the integration tests</p>
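<p>The fail-fast ordering from the third note can be sketched in Python; the step functions below are hypothetical placeholders for the real build steps:</p>

```python
def run_build(run_unit_tests, pull_db_image, run_integration_tests) -> str:
    # Fail fast: run the cheap unit tests first; only when they pass is
    # the slow Docker image pull plus integration test run worth doing.
    if not run_unit_tests():
        return "failed fast: unit tests"
    pull_db_image()
    if not run_integration_tests():
        return "failed: integration tests"
    return "succeeded"
```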
<!-- markdownlint-disable MD033 -->
<h2 id="other-challenges">Other challenges</h2>
<!-- markdownlint-disable MD033 -->
<p>While developing the Azure DevOps pipeline presented in this post, I encountered several challenges, and I believe documenting them might help others too.</p>
<!-- markdownlint-disable MD033 -->
<h3 id="run-postgresql-locally-using-docker">Run PostgreSQL database locally using Docker</h3>
<!-- markdownlint-disable MD033 -->
<p>As mentioned <a href="#db-for-azure-pipelines">earlier</a>, <em>“the developer and Azure Pipelines can use the same Docker image”</em>, so I ran the following command to start a Docker container hosting PostgreSQL 12:</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">docker</span><span class="w"> </span><span class="nx">container</span><span class="w"> </span><span class="nx">run</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">--name</span><span class="w"> </span><span class="nx">db4it</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">-d</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">--restart</span><span class="w"> </span><span class="nx">unless-stopped</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">-e</span><span class="w"> </span><span class="nx">POSTGRES_DB</span><span class="o">=</span><span class="n">aspnet-core-logging-dev</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">-e</span><span class="w"> </span><span class="nx">POSTGRES_USER</span><span class="o">=</span><span class="n">satrapu</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">-e</span><span class="w"> </span><span class="nx">POSTGRES_PASSWORD</span><span class="o">=</span><span class="n">F</span><span class="o">*</span><span class="nx">ZMNJDWfr4</span><span class="o">%</span><span class="nx">RFM</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">-p</span><span class="w"> </span><span class="nx">9876:5432</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">-v</span><span class="w"> </span><span class="nx">E:\Satrapu\Programming\docker-volumes\db4it_data:/var/lib/postgresql/data</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nx">postgres:12-alpine</span><span class="w">
</span></code></pre></div></div>
<p>I was sure this command would start the database but, to my surprise, the database was not accessible. Checking the container log, I stumbled upon the following lines:</p>
<div class="language-text highlighter-rouge"><div class="highlight"><pre class="highlight"><code>...
creating configuration files ... ok
2019-12-27 15:29:34.575 UTC [50] FATAL: data directory "/var/lib/postgresql/data" has wrong ownership
2019-12-27 15:29:34.575 UTC [50] HINT: The server must be started by the user that owns the data directory.
child process exited with exit code 1
initdb: removing contents of data directory "/var/lib/postgresql/data"
</code></pre></div></div>
<p>This is a <a href="https://github.com/docker-library/postgres/issues/435">known issue</a> and the fix is to use a Docker volume created via the <a href="https://docs.docker.com/engine/reference/commandline/volume_create/">docker volume create</a> command and then start the container with this newly created volume, like this:</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">docker</span><span class="w"> </span><span class="nx">volume</span><span class="w"> </span><span class="nx">create</span><span class="w"> </span><span class="nx">db4it_data</span><span class="p">;</span><span class="w"> </span><span class="err">`</span><span class="w">
</span><span class="n">docker</span><span class="w"> </span><span class="nx">container</span><span class="w"> </span><span class="nx">run</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">--name</span><span class="w"> </span><span class="nx">db4it</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">-d</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">--restart</span><span class="w"> </span><span class="nx">unless-stopped</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">-e</span><span class="w"> </span><span class="nx">POSTGRES_DB</span><span class="o">=</span><span class="n">aspnet-core-logging-dev</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">-e</span><span class="w"> </span><span class="nx">POSTGRES_USER</span><span class="o">=</span><span class="n">satrapu</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">-e</span><span class="w"> </span><span class="nx">POSTGRES_PASSWORD</span><span class="o">=</span><span class="n">F</span><span class="o">*</span><span class="nx">ZMNJDWfr4</span><span class="o">%</span><span class="nx">RFM</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">-p</span><span class="w"> </span><span class="nx">9876:5432</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">-v</span><span class="w"> </span><span class="nx">db4it_data:/var/lib/postgresql/data</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nx">postgres:12-alpine</span><span class="w">
</span></code></pre></div></div>
<p>I haven’t encountered this issue while running the build on Azure DevOps, since I’m not using any Docker volumes there, having decided to simplify my Docker container setup. On the other hand, if I were to use self-hosted agents and reuse persistent data between builds (e.g. run a Docker container to populate the database schema once per build and then run one container per test using the volume with the already created schema), I would definitely use Docker volumes!</p>
<!-- markdownlint-disable MD033 -->
<h3 id="publish-test-results">Fix publishing test results</h3>
<!-- markdownlint-disable MD033 -->
<p>After refactoring the application to replace the in-memory EF Core provider with Npgsql, I was stunned to see that, even though I had added more tests and thus increased the code coverage percentage, Sonar complained that my changes were below the expected code coverage threshold of 80%. After investigating for a while, I discovered that the xUnit test result files for the unit tests did not contain any test methods and all looked like this:</p>
<div class="language-xml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="cp"><?xml version="1.0" encoding="utf-8"?></span>
<span class="nt"><test-run</span> <span class="na">id=</span><span class="s">"2"</span> <span class="na">duration=</span><span class="s">"0"</span> <span class="na">testcasecount=</span><span class="s">"0"</span> <span class="na">total=</span><span class="s">"0"</span>
<span class="na">passed=</span><span class="s">"0"</span> <span class="na">failed=</span><span class="s">"0"</span> <span class="na">inconclusive=</span><span class="s">"0"</span> <span class="na">skipped=</span><span class="s">"0"</span> <span class="na">result=</span><span class="s">"Passed"</span>
<span class="na">start-time=</span><span class="s">"2019-06-09T 18:39:42Z"</span>
<span class="na">end-time=</span><span class="s">"2019-06-09T 18:39:45Z"</span> <span class="nt">/></span>
</code></pre></div></div>
<p>Initially I thought xUnit was the culprit, so I refactored all my tests to use <a href="https://nunit.org/">NUnit</a> instead, but to no avail. By the way, this is the reason why the <a href="https://crossprogramming.com/2019/03/17/build-asp-net-core-app-using-azure-pipelines.html#run-automated-tests">previous article</a> mentions xUnit, while this one mentions NUnit.<br />
To keep the story short, I <a href="https://twitter.com/satrapu/status/1137795889085538304">contacted</a> the Azure DevOps Twitter account and got to the bottom of the issue: since I was running two test sessions, one for unit tests and another for integration tests, my publish test results <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/test/publish-test-results?view=azure-devops&tabs=yaml">task</a> would only publish the results of the <strong>last</strong> session - the one containing the integration tests. The fix was pretty easy: call this task after <strong>each</strong> test session:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nn">...</span>
<span class="c1"># Run unit tests</span>
<span class="pi">-</span> <span class="na">script</span><span class="pi">:</span> <span class="pi">>-</span>
<span class="s">dotnet test $(Build.SourcesDirectory)/Todo.sln</span>
<span class="s">--no-build</span>
<span class="s">--configuration ${{ parameters.build.configuration }}</span>
<span class="s">--filter "Category=UnitTests"</span>
<span class="s">...</span>
<span class="c1"># Publish unit tests results</span>
<span class="pi">-</span> <span class="na">task</span><span class="pi">:</span> <span class="s">PublishTestResults@2</span>
<span class="na">displayName</span><span class="pi">:</span> <span class="s">Publish unit test results</span>
<span class="na">name</span><span class="pi">:</span> <span class="s">publish_unit_test_results</span>
<span class="na">condition</span><span class="pi">:</span> <span class="s">succeededOrFailed()</span>
<span class="na">enabled</span><span class="pi">:</span> <span class="s">True</span>
<span class="na">inputs</span><span class="pi">:</span>
<span class="na">testResultsFormat</span><span class="pi">:</span> <span class="s1">'</span><span class="s">NUnit'</span>
<span class="na">testResultsFiles</span><span class="pi">:</span> <span class="s1">'</span><span class="s">**/UnitTests/**/TestResults/*'</span>
<span class="na">mergeTestResults</span><span class="pi">:</span> <span class="s">True</span>
<span class="na">buildConfiguration</span><span class="pi">:</span> <span class="s">${{ parameters.build.configuration }}</span>
<span class="na">publishRunAttachments</span><span class="pi">:</span> <span class="s">True</span>
<span class="c1"># Run integration tests</span>
<span class="pi">-</span> <span class="na">script</span><span class="pi">:</span> <span class="pi">>-</span>
<span class="s">dotnet test $(Build.SourcesDirectory)/Todo.sln</span>
<span class="s">--no-build</span>
<span class="s">--configuration ${{ parameters.build.configuration }}</span>
<span class="s">--filter "Category=IntegrationTests"</span>
<span class="s">...</span>
<span class="c1"># Publish integration tests results</span>
<span class="pi">-</span> <span class="na">task</span><span class="pi">:</span> <span class="s">PublishTestResults@2</span>
<span class="na">displayName</span><span class="pi">:</span> <span class="s">Publish integration test results</span>
<span class="na">name</span><span class="pi">:</span> <span class="s">publish_integration_test_results</span>
<span class="na">condition</span><span class="pi">:</span> <span class="s">succeededOrFailed()</span>
<span class="na">enabled</span><span class="pi">:</span> <span class="s">True</span>
<span class="na">inputs</span><span class="pi">:</span>
<span class="na">testResultsFormat</span><span class="pi">:</span> <span class="s1">'</span><span class="s">NUnit'</span>
<span class="na">testResultsFiles</span><span class="pi">:</span> <span class="s1">'</span><span class="s">**/IntegrationTests/**/TestResults/*'</span>
<span class="na">mergeTestResults</span><span class="pi">:</span> <span class="s">True</span>
<span class="na">buildConfiguration</span><span class="pi">:</span> <span class="s">${{ parameters.build.configuration }}</span>
<span class="na">publishRunAttachments</span><span class="pi">:</span> <span class="s">True</span>
<span class="nn">...</span>
</code></pre></div></div>
<p>Moral of the story: do not waste too much time looking for a solution on your own; show some courage and ask for professional help.<br />
Many thanks to <a href="https://twitter.com/AzureDevOps">AzureDevOps</a> for their quick help!</p>
<!-- markdownlint-disable MD033 -->
<h3 id="publish-raw-test-results-as-artifacts">Publish test results raw files as pipeline artifacts</h3>
<!-- markdownlint-disable MD033 -->
<p>As mentioned above, after adding more tests, my code coverage was below the expected minimum threshold, so I decided to check the raw XML files containing the test results, which are generated when running the <code class="language-plaintext highlighter-rouge">dotnet test</code> command with the appropriate parameters.<br />
In order to access these files, I had to publish them as pipeline artifacts using the approach described <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/artifacts/pipeline-artifacts?view=azure-devops&tabs=yaml-task#publish-a-pipeline-artifact">here</a> - since in the future I might want to publish other build artifacts as well, I’ve decided to document this process for future reference.<br />
The pipeline YAML file contains a <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/artifacts/pipeline-artifacts?view=azure-devops&tabs=yaml-task#publish-a-pipeline-artifact">task</a> which handles publishing artifacts:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nn">...</span>
<span class="pi">-</span> <span class="na">task</span><span class="pi">:</span> <span class="s">PublishPipelineArtifact@1</span>
<span class="s">displayName</span><span class="pi">:</span> <span class="s">Publish test results as pipeline artifacts</span>
<span class="s">name</span><span class="pi">:</span> <span class="s">publish_test_results_as_pipeline_artifacts</span>
<span class="s">condition</span><span class="pi">:</span> <span class="pi">|</span>
<span class="s">and</span>
<span class="s">(</span>
<span class="s">succeededOrFailed()</span>
<span class="s">, eq( ${{ parameters.publishPipelineArtifacts }}, True)</span>
<span class="s">)</span>
<span class="na">inputs</span><span class="pi">:</span>
<span class="na">artifact</span><span class="pi">:</span> <span class="s1">'</span><span class="s">test-results-$(Agent.OS)-$(Agent.OSArchitecture)'</span>
<span class="na">path</span><span class="pi">:</span> <span class="s1">'</span><span class="s">$(Build.SourcesDirectory)/Tests'</span>
<span class="nn">...</span>
</code></pre></div></div>
<p>The artifacts would be named like this (see more here: <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/build/variables?view=azure-devops&tabs=yaml#agent-variables">Agent variables</a>):</p>
<table>
<thead>
<tr>
<th>OS</th>
<th>Name</th>
</tr>
</thead>
<tbody>
<tr>
<td>Linux</td>
<td>test-results-Linux-X64.zip</td>
</tr>
<tr>
<td>macOS</td>
<td>test-results-Darwin-X64.zip</td>
</tr>
<tr>
<td>Windows</td>
<td>test-results-Windows_NT-X64.zip</td>
</tr>
</tbody>
</table>
<p>Since the tests might fail, I first ensured the raw test result files will always be published by specifying the <em><a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/process/expressions?view=azure-devops#succeededorfailed">succeededOrFailed()</a></em> condition.<br />
Secondly, I had to specify the exact files to publish by using an <a href="https://github.com/satrapu/aspnet-core-logging/blob/cb97b1604549b1519b31cfa6f42c33d545564924/Tests/.artifactignore#L1">.artifactignore</a> file placed under the <a href="https://github.com/satrapu/aspnet-core-logging/tree/master/Tests">Tests</a> folder:</p>
<pre><code class="language-gitignore">**/*
!**/TestResults/*
!**/coverage.opencover.xml
</code></pre>
<p>This file instructs Azure Pipelines to ignore <em>all</em> files and to publish only those located inside a <strong>TestResults</strong> folder or named <strong>coverage.opencover.xml</strong>, as I wanted to check the Coverlet output too.<br />
See more about the .artifactignore file <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/artifacts/pipeline-artifacts?view=azure-devops&tabs=yaml-task#limiting-which-files-are-included">here</a>.</p>
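<p>A rough Python model of these matching rules - an approximation of the .gitignore-style semantics, good enough to reason about which paths survive publishing - could look like this (the sample paths are illustrative):</p>

```python
def is_published(path: str) -> bool:
    # The first rule ("**/*") ignores everything; the two negated rules
    # then re-include files under any TestResults folder plus the
    # Coverlet coverage reports named coverage.opencover.xml.
    return "/TestResults/" in path or path.endswith("/coverage.opencover.xml")

print(is_published("Tests/UnitTests/TestResults/results.xml"))
print(is_published("Tests/IntegrationTests/bin/Release/Todo.WebApi.dll"))
```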
<p><strong>IMPORTANT</strong>: Using this approach, one can publish <em>any</em> kind of file as a pipeline artifact, as long as it is located under the current workspace, no matter whether it is static (part of the checked-out repository) or dynamic (generated during the build).</p>
<!-- markdownlint-disable MD033 -->
<h2 id="next-steps">Next steps</h2>
<!-- markdownlint-disable MD033 -->
<p>As seen above, running just <em>one</em> Docker container requires a non-trivial amount of effort, so what happens when the build needs more containers?<br />
The answer is to use an <em>orchestration engine</em> - and what better option than <a href="https://docs.docker.com/compose/">Docker Compose</a>?<br />
The Linux and Windows based agents already come with Docker Compose installed, while Docker Desktop for Mac <a href="https://docs.docker.com/compose/install/#install-compose">includes</a> it too. Docker Compose will greatly simplify the whole container setup, as starting several containers will be reduced to something as simple as <code class="language-plaintext highlighter-rouge">docker-compose up</code>.<br />
This post is already <em>very</em> long, so most probably I will demonstrate using Docker Compose inside an Azure DevOps pipeline in a future post.</p>
<!-- markdownlint-disable MD033 -->
<h2 id="conclusion">Conclusion</h2>
<!-- markdownlint-disable MD033 -->
<p>Ensuring your application behaves as expected is crucial to everybody involved in developing it - be it a developer, tester, business analyst, you name it - and writing automated tests is just one step towards this goal. You still need other types of testing, like exploratory, security and performance testing, and let’s not forget about deploying the application to several environments (e.g. integration, test, pre-production) and running smoke tests on each one of them before finally pushing the release to production and running the smoke tests there too.<br />
Being able to run integration tests as part of the build which validates each commit is a <strong>must</strong> and employing Docker to host all your test dependencies is easier and cheaper than using virtual machines or other external resources and lowers the integration tests adoption barrier for all team members. On the other hand, this doesn’t eliminate the need of testing the application in a close-to or cloned production environment, but will provide important feedback about its behavior way before reaching that point.</p>Context Provide a database for Azure Pipelines Setup EF Core provider Use Docker in Azure Pipelines Service containers Declare Docker containers Map a service to a Docker container Use Docker container published port Use self-managed Docker containers Install and run Docker on macOS Run dockerized database Check database ready-state Check database ready-state using log-polling Check database ready-state using Docker healthcheck Identify database port Update database connection string Service containers vs. self-managed containers Run integration tests Other challenges Run PostgreSQL database locally using Docker Fix publishing test results Publish test results raw files as pipeline artifacts Next steps ConclusionBuild an ASP.NET Core application using Azure Pipelines2019-03-17T20:32:13+00:002019-03-17T20:32:13+00:00https://crossprogramming.com/2019/03/17/build-asp-net-core-app-using-azure-pipelines<ul>
<li><a href="#context">Context</a></li>
<li><a href="#setup-pipeline">Setup pipeline</a>
<ul>
<li><a href="#sign-up-for-azure-devops">Sign up for Azure DevOps</a></li>
<li><a href="#create-an-azure-devops-organization">Create an Azure DevOps organization</a></li>
<li><a href="#create-a-public-project">Create a public project</a></li>
<li><a href="#create-pipeline">Create pipeline</a></li>
<li><a href="#paths-in-pipeline">Paths in pipeline</a></li>
<li><a href="#use-yaml-block-chomping-indicator">Use YAML block chomping indicator</a></li>
<li><a href="#run-jobs-on-different-operating-systems">Run jobs on different operating systems</a></li>
<li><a href="#use-templates">Use templates</a></li>
<li><a href="#use-variables-and-variable-groups">Use variables and variable groups</a></li>
<li><a href="#use-secrets">Use secrets</a></li>
</ul>
</li>
<li><a href="#build-application">Build application</a>
<ul>
<li><a href="#install-net-core-sdk">Install .NET Core SDK</a></li>
<li><a href="#compile-source-code">Compile source code</a></li>
</ul>
</li>
<li><a href="#run-automated-tests">Run automated tests</a>
<ul>
<li><a href="#setup-test-logger">Setup test logger</a></li>
<li><a href="#run-unit-tests">Run unit tests</a></li>
<li><a href="#run-integration-tests">Run integration tests</a></li>
<li><a href="#publish-test-results">Publish test results</a></li>
</ul>
</li>
<li><a href="#code-coverage-using-coverlet">Code coverage using Coverlet</a>
<ul>
<li><a href="#collect-code-coverage-data">Collect code coverage data</a></li>
<li><a href="#install-reportgenerator-net-core-tool">Install ReportGenerator .NET Core tool</a></li>
<li><a href="#generate-code-coverage-html-report-using-reportgenerator">Generate code coverage HTML report using ReportGenerator</a></li>
<li><a href="#publish-code-coverage-report">Publish code coverage report</a></li>
</ul>
</li>
<li><a href="#static-code-analysis-using-sonarqube">Static code analysis using SonarQube</a>
<ul>
<li><a href="#sonarcloud">SonarCloud</a></li>
<li><a href="#setup-sonarcloud-account">Setup SonarCloud account</a></li>
<li><a href="#use-sonarcloud-token-during-build">Use SonarCloud token during build</a></li>
<li><a href="#install-dotnet-sonarscanner-tool">Install dotnet-sonarscanner tool</a></li>
<li><a href="#run-dotnet-sonarscanner-tool">Run dotnet-sonarscanner tool</a></li>
<li><a href="#upload-static-code-analysis-report">Upload static code analysis report</a></li>
<li><a href="#use-sonarlint">Use SonarLint</a></li>
<li><a href="#use-sonarqube-build-breaker">Use SonarQube build breaker</a></li>
</ul>
</li>
<li><a href="#badges">Badges</a>
<ul>
<li><a href="#azure-pipeline-status-badge">Azure Pipeline status badge</a></li>
<li><a href="#sonar-quality-gate-badge">Sonar quality gate badge</a></li>
</ul>
</li>
<li><a href="#conclusion">Conclusion</a></li>
<li><a href="#references">References</a></li>
</ul>
<hr />
<!-- markdownlint-disable MD022 -->
<!-- markdownlint-disable MD002 -->
<h2 id="context">Context</h2>
<!-- markdownlint-disable MD002 -->
<!-- markdownlint-disable MD022 -->
<p>Developing free and open-source software (aka <a href="https://en.wikipedia.org/wiki/Free_and_open-source_software">FOSS</a>) and hosting it on <a href="https://help.github.com/articles/create-a-repo/">GitHub</a> is fun and provides the freedom to learn and experiment on your own terms, but at the same time this activity should be done carefully, as the source code may be viewed by lots of people and its quality should be as high as possible (though not necessarily as high as a closed-source product’s, since companies have far more resources than your ordinary FOSS developer) - so what better way of reaching this level than using an automated build? What’s better than that? Free automated builds!<br />
I was very happy to learn about Microsoft Azure Pipelines offering free builds to open source projects - see the original announcement <a href="https://azure.microsoft.com/en-us/blog/announcing-azure-pipelines-with-unlimited-ci-cd-minutes-for-open-source/">here</a>. The icing on the cake was that the same announcement mentioned that GitHub <a href="https://github.com/marketplace/azure-pipelines">integrates</a> with this service.</p>
<p>The purpose of this article is to present how to build a .NET Core application <a href="https://github.com/satrapu/aspnet-core-logging">hosted on GitHub</a> with Azure Pipelines. I have first talked about this .NET Core application in my previous post, <a href="https://crossprogramming.com/2018/12/27/logging-http-context-in-asp-net-core.html">Logging HTTP context in ASP.NET Core</a>.</p>
<h2 id="setup-pipeline">Setup pipeline</h2>
<p>The term <strong>pipeline</strong> used throughout this post means an Azure Pipelines instance made out of different jobs, each job containing one or more steps.
The pipeline can be created using either a visual designer or a YAML file - Microsoft <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/get-started-designer?view=azure-devops&tabs=new-nav">recommends</a> using the latter approach, and, just coincidentally, so do I.<br />
I would use the visual designer approach to perform quick experiments and to discover the YAML fragment equivalent of a particular step.<br />
On the other hand, the YAML file offers several benefits over the visual designer:</p>
<ul>
<li>The file can be put under source control
<ul>
<li>We’re now able to understand who’s done what and - more importantly! - <em>why</em></li>
<li>The changes can go through an official code-review process before they impact the build</li>
<li>We can also quickly roll back to a specific version in case of a bug requiring extensive fixing</li>
<li>The code can be easily shared via a link to the hosted YAML file</li>
</ul>
</li>
<li>Coolness factor - we’re developers, so we get to <em>write code</em> to <em>build code</em>!</li>
</ul>
<h3 id="sign-up-for-azure-devops">Sign up for Azure DevOps</h3>
<p>In case you already have signed up for Azure DevOps, skip this section; otherwise, follow <a href="https://docs.microsoft.com/en-us/azure/devops/user-guide/sign-up-invite-teammates?view=azure-devops">these steps</a> to sign up for Azure DevOps.</p>
<h3 id="create-an-azure-devops-organization">Create an Azure DevOps organization</h3>
<p>In case you already have access to such an organization, skip this section; otherwise, follow <a href="https://docs.microsoft.com/en-us/azure/devops/organizations/accounts/create-organization?view=azure-devops">these steps</a> to create a new organization.</p>
<h3 id="create-a-public-project">Create a public project</h3>
<p>In case you already have such a project, skip this section; otherwise, follow <a href="https://docs.microsoft.com/en-us/azure/devops/organizations/public/create-public-project?view=azure-devops">these steps</a> to create a new public project.</p>
<h3 id="create-pipeline">Create pipeline</h3>
<p>Follow <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/get-started-yaml?view=azure-devops">these steps</a> to create a YAML file based pipeline or follow <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/get-started-designer?view=azure-devops&tabs=new-nav">these ones</a> to create a pipeline using the visual designer.</p>
<h3 id="paths-in-pipeline">Paths in pipeline</h3>
<p>In order to correctly reference a file or folder found inside the repository or generated during the current build, use one of the following <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/build/variables?view=azure-devops&tabs=yaml">predefined variables</a>:</p>
<ul>
<li><strong>$(Build.SourcesDirectory)</strong>: the local path on the agent where your source code is downloaded</li>
<li><strong>$(Agent.BuildDirectory)</strong>: the local path on the agent where all folders for a given build pipeline are created</li>
</ul>
<p>For instance, the Visual Studio solution file is hosted on GitHub at this path: <a href="https://github.com/satrapu/aspnet-core-logging/blob/master/Todo.sln">https://github.com/satrapu/aspnet-core-logging/blob/master/Todo.sln</a>; the pipeline steps referencing this file should then use: <strong>$(Build.SourcesDirectory)/Todo.sln</strong>.<br />
Similarly, the <strong>.sonarqube</strong> folder, which contains the static code analysis related artifacts and is generated inside the solution root folder, should be referenced using: <strong>$(Agent.BuildDirectory)/.sonarqube</strong>.</p>
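<p>As a sketch, a script step combining both predefined variables might look like this (the step and display names below are made up, not taken from the actual pipeline):</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Hypothetical step showing both predefined variables in use
- script: dotnet build $(Build.SourcesDirectory)/Todo.sln
  name: 'example_step'
  displayName: 'Example: referencing repository and build folders'
  workingDirectory: $(Agent.BuildDirectory)
</code></pre></div></div>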
<h3 id="use-yaml-block-chomping-indicator">Use YAML block chomping indicator</h3>
<p>Using .NET Core CLI tools sometimes requires specifying several parameters which may lead to long lines in the Azure Pipeline YAML files, like this one:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">script</span><span class="pi">:</span>
<span class="s">dotnet test $(Build.SourcesDirectory)/Todo.sln --no-build --configuration $ --filter "FullyQualifiedName~UnitTests" --test-adapter-path "." --logger "xunit;LogFilePath=TodoWebApp.UnitTests.xunit.xml" /p:CollectCoverage=True /p:CoverletOutputFormat=opencover /p:CoverletOutput="TodoWebApp.UnitTests.opencover.xml" /p:Include="[TodoWebApp]*"</span>
</code></pre></div></div>
<p>Having to scroll horizontally while reading someone else’s code is no fun, so the YAML folded block scalar with the strip chomping indicator (<strong>&gt;-</strong>) is a real life saver - see more <a href="https://stackoverflow.com/a/3790497">here</a>.<br />
The above line becomes:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">script</span><span class="pi">:</span> <span class="pi">>-</span>
<span class="s">dotnet test $(Build.SourcesDirectory)/Todo.sln</span>
<span class="s">--no-build</span>
<span class="s">--configuration $</span>
<span class="s">--filter "FullyQualifiedName~UnitTests"</span>
<span class="s">--test-adapter-path "."</span>
<span class="s">--logger "xunit;LogFilePath=TodoWebApp.UnitTests.xunit.xml"</span>
<span class="s">/p:CollectCoverage=True</span>
<span class="s">/p:CoverletOutputFormat=opencover</span>
<span class="s">/p:CoverletOutput="TodoWebApp.UnitTests.opencover.xml"</span>
<span class="s">/p:Include="[TodoWebApp]*"</span>
</code></pre></div></div>
<h3 id="run-jobs-on-different-operating-systems">Run jobs on different operating systems</h3>
<p>Azure Pipelines can run the jobs of a pipeline in parallel on different operating systems: Linux, macOS and Windows.
Each pipeline job must declare a <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/yaml-schema?view=azure-devops&tabs=schema#pool">pool</a> with one specific virtual machine image; Microsoft provides several images, as documented <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/hosted?view=azure-devops&tabs=yaml#use-a-microsoft-hosted-agent">here</a>.</p>
<p>Here is a YAML fragment declaring several jobs to be run on the aforementioned operating systems:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nn">...</span>
<span class="pi">-</span> <span class="na">job</span><span class="pi">:</span> <span class="s1">'</span><span class="s">build-on-linux'</span>
<span class="na">displayName</span><span class="pi">:</span> <span class="s1">'</span><span class="s">Build</span><span class="nv"> </span><span class="s">on</span><span class="nv"> </span><span class="s">Linux'</span>
<span class="na">pool</span><span class="pi">:</span>
<span class="na">vmImage</span><span class="pi">:</span> <span class="s1">'</span><span class="s">ubuntu-16.04'</span>
<span class="s">...</span>
<span class="pi">-</span> <span class="na">job</span><span class="pi">:</span> <span class="s1">'</span><span class="s">build-on-mac'</span>
<span class="na">displayName</span><span class="pi">:</span> <span class="s1">'</span><span class="s">Build</span><span class="nv"> </span><span class="s">on</span><span class="nv"> </span><span class="s">macOS'</span>
<span class="na">pool</span><span class="pi">:</span>
<span class="na">vmImage</span><span class="pi">:</span> <span class="s1">'</span><span class="s">macOS-10.13'</span>
<span class="s">...</span>
<span class="pi">-</span> <span class="na">job</span><span class="pi">:</span> <span class="s1">'</span><span class="s">build-on-windows'</span>
<span class="na">displayName</span><span class="pi">:</span> <span class="s1">'</span><span class="s">Build</span><span class="nv"> </span><span class="s">on</span><span class="nv"> </span><span class="s">Windows'</span>
<span class="na">pool</span><span class="pi">:</span>
<span class="na">vmImage</span><span class="pi">:</span> <span class="s1">'</span><span class="s">vs2017-win2016'</span>
<span class="s">...</span>
</code></pre></div></div>
<h3 id="use-templates">Use templates</h3>
<p>Since I’m building a .NET Core application, I would like to ensure it will run on each operating system supported by .NET Core: Linux, macOS and Windows; on the other hand, I would like to avoid creating 3 pipelines with the only difference between them being the virtual machine image they use. The good news is that I can employ the concept of <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/yaml-schema?view=azure-devops&tabs=example&viewFallbackFrom=vsts#job-templates">job template</a>, a way of reusing code.<br />
Instead of having to create and maintain 3 almost identical YAML files (one per pipeline per OS), I’m authoring just 2 files:</p>
<ul>
<li>the <a href="https://github.com/satrapu/aspnet-core-logging/blob/master/Build/azure-pipelines.yml">pipeline YAML file</a>, dealing with the repository hosting the application source code, pipeline triggers and variables</li>
<li>the <a href="https://github.com/satrapu/aspnet-core-logging/blob/master/Build/azure-pipelines.job-template.yml">job templates YAML file</a>, containing the <strong>parameterizable</strong> reusable code used for building the .NET Core application</li>
</ul>
<p>The pipeline file references the job templates file(s) which must be located inside the same repository.
The <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/process/templates?view=azure-devops#passing-parameters">parameters</a> declared by the job templates file can be set to default values, so when the pipeline file includes the job templates one, it doesn’t have to provide them all. The parameters are referenced inside the job templates file using a JSON-like notation, as seen in the example below.<br />
Pipeline file fragment:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">jobs</span><span class="pi">:</span>
<span class="pi">-</span> <span class="na">template</span><span class="pi">:</span> <span class="s1">'</span><span class="s">./azure-pipelines.job-template.yml'</span>
<span class="na">parameters</span><span class="pi">:</span>
<span class="na">job</span><span class="pi">:</span>
<span class="na">name</span><span class="pi">:</span> <span class="s1">'</span><span class="s">linux'</span>
<span class="na">displayName</span><span class="pi">:</span> <span class="s1">'</span><span class="s">Build</span><span class="nv"> </span><span class="s">on</span><span class="nv"> </span><span class="s">Linux'</span>
<span class="na">pool</span><span class="pi">:</span>
<span class="na">vmImage</span><span class="pi">:</span> <span class="s1">'</span><span class="s">ubuntu-16.04'</span>
<span class="na">sonar</span><span class="pi">:</span>
<span class="na">enabled</span><span class="pi">:</span> <span class="s">False</span>
<span class="na">buildBreaker</span><span class="pi">:</span>
<span class="na">enabled</span><span class="pi">:</span> <span class="s">False</span>
<span class="pi">-</span> <span class="na">template</span><span class="pi">:</span> <span class="s1">'</span><span class="s">./azure-pipelines.job-template.yml'</span>
<span class="na">parameters</span><span class="pi">:</span>
<span class="na">job</span><span class="pi">:</span>
<span class="na">name</span><span class="pi">:</span> <span class="s1">'</span><span class="s">macOS'</span>
<span class="na">displayName</span><span class="pi">:</span> <span class="s1">'</span><span class="s">Build</span><span class="nv"> </span><span class="s">on</span><span class="nv"> </span><span class="s">macOS'</span>
<span class="na">pool</span><span class="pi">:</span>
<span class="na">vmImage</span><span class="pi">:</span> <span class="s1">'</span><span class="s">macOS-10.13'</span>
<span class="na">sonar</span><span class="pi">:</span>
<span class="na">enabled</span><span class="pi">:</span> <span class="s">False</span>
<span class="na">buildBreaker</span><span class="pi">:</span>
<span class="na">enabled</span><span class="pi">:</span> <span class="s">False</span>
<span class="pi">-</span> <span class="na">template</span><span class="pi">:</span> <span class="s1">'</span><span class="s">./azure-pipelines.job-template.yml'</span>
<span class="na">parameters</span><span class="pi">:</span>
<span class="na">job</span><span class="pi">:</span>
<span class="na">name</span><span class="pi">:</span> <span class="s1">'</span><span class="s">windows'</span>
<span class="na">displayName</span><span class="pi">:</span> <span class="s1">'</span><span class="s">Build</span><span class="nv"> </span><span class="s">on</span><span class="nv"> </span><span class="s">Windows'</span>
<span class="na">pool</span><span class="pi">:</span>
<span class="na">vmImage</span><span class="pi">:</span> <span class="s1">'</span><span class="s">vs2017-win2016'</span>
</code></pre></div></div>
<p>Job templates file fragment:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">parameters</span><span class="pi">:</span>
<span class="na">job</span><span class="pi">:</span>
<span class="na">name</span><span class="pi">:</span> <span class="s1">'</span><span class="s">'</span>
<span class="na">displayName</span><span class="pi">:</span> <span class="s1">'</span><span class="s">'</span>
<span class="na">pool</span><span class="pi">:</span> <span class="s1">'</span><span class="s">'</span>
<span class="na">build</span><span class="pi">:</span>
<span class="na">configuration</span><span class="pi">:</span> <span class="s1">'</span><span class="s">Release'</span>
<span class="na">sonar</span><span class="pi">:</span>
<span class="na">enabled</span><span class="pi">:</span> <span class="s">True</span>
<span class="na">buildBreaker</span><span class="pi">:</span>
<span class="na">enabled</span><span class="pi">:</span> <span class="s">True</span>
<span class="na">jobs</span><span class="pi">:</span>
<span class="pi">-</span> <span class="na">job</span><span class="pi">:</span> <span class="s">${{ parameters.job.name }}</span>
<span class="na">displayName</span><span class="pi">:</span> <span class="s">${{ parameters.job.displayName }}</span>
<span class="na">continueOnError</span><span class="pi">:</span> <span class="s">False</span>
<span class="na">pool</span><span class="pi">:</span> <span class="s">${{ parameters.pool }}</span>
<span class="na">workspace</span><span class="pi">:</span>
<span class="na">clean</span><span class="pi">:</span> <span class="s">all</span>
</code></pre></div></div>
<p>The concept of job template is applicable to steps too, as documented <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/process/templates?view=azure-devops#step-re-use">here</a>.</p>
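<p>A minimal sketch of such a step template (the file name and parameter name below are hypothetical, not taken from the repository):</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code># azure-pipelines.step-template.yml - hypothetical reusable steps file
parameters:
  solution: ''
steps:
- script: dotnet restore ${{ parameters.solution }}
  displayName: 'Restore NuGet packages'

# Consuming the step template from a job:
# steps:
# - template: './azure-pipelines.step-template.yml'
#   parameters:
#     solution: '$(Build.SourcesDirectory)/Todo.sln'
</code></pre></div></div>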
<h3 id="use-variables-and-variable-groups">Use variables and variable groups</h3>
<p>A pipeline can declare and reference variables inside the YAML file or by importing external ones; these variables can be associated into groups.<br />
Variables and variable groups are declared in this way:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1"># Pipeline YAML file</span>
<span class="na">variables</span><span class="pi">:</span>
<span class="pi">-</span> <span class="na">group</span><span class="pi">:</span> <span class="s1">'</span><span class="s">GlobalVariables'</span>
<span class="pi">-</span> <span class="na">group</span><span class="pi">:</span> <span class="s1">'</span><span class="s">SonarQube'</span>
<span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s1">'</span><span class="s">DotNetSkipFirstTimeExperience'</span>
<span class="na">value</span><span class="pi">:</span> <span class="m">1</span>
<span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s1">'</span><span class="s">DotNetCliTelemetryOptOut'</span>
<span class="na">value</span><span class="pi">:</span> <span class="m">1</span>
<span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s1">'</span><span class="s">CoreHostTrace'</span>
<span class="na">value</span><span class="pi">:</span> <span class="m">0</span>
</code></pre></div></div>
<p>And are referenced like this:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1"># Job templates YAML file</span>
<span class="c1"># The "env" property denotes the environment variables passed</span>
<span class="c1"># to the "dotnet build" command</span>
<span class="pi">-</span> <span class="na">script</span><span class="pi">:</span> <span class="pi">>-</span>
<span class="s">dotnet build $(Build.SourcesDirectory)/Todo.sln</span>
<span class="s">--configuration $</span>
<span class="na">name</span><span class="pi">:</span> <span class="s1">'</span><span class="s">build_sources'</span>
<span class="na">displayName</span><span class="pi">:</span> <span class="s1">'</span><span class="s">Build</span><span class="nv"> </span><span class="s">sources'</span>
<span class="na">enabled</span><span class="pi">:</span> <span class="s">True</span>
<span class="na">env</span><span class="pi">:</span>
<span class="na">DOTNET_SKIP_FIRST_TIME_EXPERIENCE</span><span class="pi">:</span> <span class="s">$(DotNetSkipFirstTimeExperience)</span>
<span class="na">DOTNET_CLI_TELEMETRY_OPTOUT</span><span class="pi">:</span> <span class="s">$(DotNetCliTelemetryOptOut)</span>
<span class="na">COREHOST_TRACE</span><span class="pi">:</span> <span class="s">$(CoreHostTrace)</span>
</code></pre></div></div>
<p>The variable groups are declared outside the YAML files; follow <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/library/variable-groups?view=azure-devops&tabs=yaml">these steps</a> to add a new group.<br />
In order to use this variable group in your pipeline, you have to link it:</p>
<ul>
<li>Go to Azure DevOps project home page (e.g. <a href="https://dev.azure.com/satrapu/aspnet-core-logging">https://dev.azure.com/satrapu/aspnet-core-logging</a>)</li>
<li>Click <strong>Pipelines</strong> menu item from the left side</li>
<li>Click <strong>Builds</strong> menu item under <strong>Pipelines</strong></li>
<li>Select the appropriate pipeline and click the top right <strong>Edit</strong> button</li>
<li>Inside the pipeline editor, click the <strong>…</strong> button and then <strong>Pipeline settings</strong> button</li>
<li>Go to <strong>Variables</strong> tab and click <strong>Link variable groups</strong> button</li>
<li>Choose the appropriate group and click <strong>Link</strong> button</li>
</ul>
<p>Once a variable group has been linked to your pipeline and declared inside the pipeline YAML file, use its variables just like ordinary ones.</p>
<h3 id="use-secrets">Use secrets</h3>
<p>A variable group may contain variables marked as secret (click the lock icon on the right side of the appropriate variable editor found under Pipelines -> Library -> Variable Groups menu).<br />
These variables may contain sensitive data like passwords, tokens, etc.; they may also be mapped to secrets stored in Azure KeyVault, as documented <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/library/variable-groups?view=azure-devops&tabs=yaml#link-secrets-from-an-azure-key-vault">here</a>.</p>
<p>The pipeline presented in this article uses a variable marked as secret for storing the <a href="#use-sonarcloud-token-during-build">token</a> used for authenticating against the SonarCloud project.</p>
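<p>Unlike regular variables, secret variables are not automatically exposed to script steps as environment variables, so they must be mapped explicitly via the <strong>env</strong> property. A sketch, assuming a secret variable named <strong>SonarToken</strong> inside a linked group (both the variable name and the project key below are hypothetical):</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Secret variables are not mapped to environment variables by default;
# pass them explicitly to the steps that need them
- script: >-
    dotnet sonarscanner begin
    /k:"some-project-key"
    /d:sonar.login="$SONAR_TOKEN"
  displayName: 'Begin static code analysis'
  env:
    SONAR_TOKEN: $(SonarToken)
</code></pre></div></div>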
<h2 id="build-application">Build application</h2>
<p>Building this .NET Core application means compiling its source code, running automated tests with code coverage, publishing the test results and the code coverage report, performing static code analysis and publishing its results, and finally (and debatably) checking whether the quality gate has been passed.</p>
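<p>At a high level, each job runs steps along these lines (an outline only - the actual, complete step definitions appear in the following sections):</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code>steps:
- task: DotNetCoreInstaller@0   # install the .NET Core SDK
- script: dotnet build ...      # compile the sources
- script: dotnet test ...       # run unit tests with code coverage
- script: dotnet test ...       # run integration tests with code coverage
- task: PublishTestResults@2    # publish test results
# ... followed by generating and publishing the code coverage report
# and by the static code analysis related steps
</code></pre></div></div>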
<h3 id="install-net-core-sdk">Install .NET Core SDK</h3>
<p>It’s always a good idea to use the same tools on both your development machine and the CI server to avoid the “Works on my machine!” syndrome, so installing the same version of the .NET Core SDK is a good start. Azure Pipelines provides a task for this purpose - check its documentation <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/tool/dotnet-core-tool-installer?view=azure-devops">here</a>.<br />
The example below installs version <strong><a href="https://github.com/dotnet/core/blob/master/release-notes/2.2/2.2.101-SDK/2.2.101.md">2.2.101</a></strong>:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">variables</span><span class="pi">:</span>
<span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s1">'</span><span class="s">DotNetCore_SDK_Version'</span>
<span class="na">value</span><span class="pi">:</span> <span class="s1">'</span><span class="s">2.2.101'</span>
<span class="nn">...</span>
<span class="pi">-</span> <span class="na">task</span><span class="pi">:</span> <span class="s">DotNetCoreInstaller@0</span>
<span class="na">name</span><span class="pi">:</span> <span class="s">install_dotnetcore_sdk</span>
<span class="na">displayName</span><span class="pi">:</span> <span class="s">Install .NET Core SDK</span>
<span class="na">enabled</span><span class="pi">:</span> <span class="s">True</span>
<span class="na">inputs</span><span class="pi">:</span>
<span class="na">packageType</span><span class="pi">:</span> <span class="s1">'</span><span class="s">sdk'</span>
<span class="na">version</span><span class="pi">:</span> <span class="s">$(DotNetCore_SDK_Version)</span>
</code></pre></div></div>
<p>An additional reason for installing a particular .NET Core SDK version is fixing an issue occurring when trying to install a .NET Core tool, like ReportGenerator - see more details <a href="https://github.com/Microsoft/azure-pipelines-tasks/issues/8291">here</a>.</p>
<h3 id="compile-source-code">Compile source code</h3>
<p>Compiling .NET Core source code is done using the <a href="https://docs.microsoft.com/en-us/dotnet/core/tools/dotnet-build?tabs=netcore2x">dotnet build</a> command invoked from a <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/scripts/cross-platform-scripting?view=azure-devops&tabs=yaml">cross-platform script</a> task:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="pi">-</span> <span class="na">script</span><span class="pi">:</span> <span class="pi">>-</span>
<span class="s">dotnet build $(Build.SourcesDirectory)/Todo.sln</span>
<span class="s">--configuration ${{ parameters.build.configuration }}</span>
<span class="na">name</span><span class="pi">:</span> <span class="s1">'</span><span class="s">build_sources'</span>
<span class="na">displayName</span><span class="pi">:</span> <span class="s1">'</span><span class="s">Build</span><span class="nv"> </span><span class="s">sources'</span>
<span class="na">enabled</span><span class="pi">:</span> <span class="s">True</span>
</code></pre></div></div>
<p>Please note the parameter reference above, <strong>${{ parameters.build.configuration }}</strong> - I could’ve used <strong>Release</strong> value instead, but it’s always a good idea to use parameters instead of hard-coded values.</p>
<h2 id="run-automated-tests">Run automated tests</h2>
<p>Running unit and integration tests on each commit is <strong>crucial</strong>, as this is one of the most important ways of spotting bugs well before they reach production.<br />
I have used <a href="https://xunit.github.io/">xUnit.net framework</a> for writing these tests, but at the moment, my application uses the <a href="https://docs.microsoft.com/en-us/ef/core/providers/in-memory/">in-memory Entity Framework Core provider</a>, so the integration tests are kind of lame; on the other hand, this is a good reason to explore in the near future how Azure Pipelines can be used to start a relational database to be targeted by the integration tests and then blog about it.</p>
<p>Azure Pipelines provides the <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/test/vstest?view=azure-devops">VSTest@2 task</a> for running tests, but since I have encountered several issues using it and since I also wanted more control over this operation, I have decided to use the aforementioned <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/scripts/cross-platform-scripting?view=azure-devops&tabs=yaml">cross-platform script</a> task for calling the <a href="https://docs.microsoft.com/en-us/dotnet/core/tools/dotnet-test?tabs=netcore21">dotnet test</a> command.</p>
<h3 id="setup-test-logger">Setup test logger</h3>
<p>Azure Pipelines displays test results generated by various frameworks (e.g. NUnit, xUnit, etc.) in a dedicated tab. The <strong>dotnet test</strong> command can be configured to generate such results via the <strong>--logger</strong> parameter. Since my tests have been written using xUnit.net framework, I have used <a href="https://github.com/spekt/xunit.testlogger">xUnit Test Logger</a> by adding a reference to the appropriate <a href="https://www.nuget.org/packages/XunitXml.TestLogger/">NuGet package</a>.</p>
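<p>Adding the logger to a test project boils down to a package reference in its project file; a sketch (the version number below is illustrative only - use whatever the NuGet package page currently lists):</p>
<div class="language-xml highlighter-rouge"><div class="highlight"><pre class="highlight"><code>&lt;!-- Test project (.csproj) file fragment --&gt;
&lt;ItemGroup&gt;
  &lt;PackageReference Include="XunitXml.TestLogger" Version="2.0.0" /&gt;
&lt;/ItemGroup&gt;
</code></pre></div></div>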
<h3 id="run-unit-tests">Run unit tests</h3>
<p>The unit test related classes reside in a separate folder, <a href="https://github.com/satrapu/aspnet-core-logging/tree/master/Tests/UnitTests">Tests/UnitTests</a>, so when using the <em>dotnet test</em> command, I need to specify a <a href="https://docs.microsoft.com/en-us/dotnet/core/testing/selective-unit-tests#xunit">filter</a> to ensure only this kind of test is run.<br />
The command contains other parameters as well, since I’m also collecting code coverage data.</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="pi">-</span> <span class="na">script</span><span class="pi">:</span> <span class="pi">>-</span>
<span class="s">dotnet test $(Build.SourcesDirectory)/Todo.sln</span>
<span class="s">--no-build</span>
<span class="s">--configuration ${{ parameters.build.configuration }}</span>
<span class="s">--filter "FullyQualifiedName~UnitTests"</span>
<span class="s">--test-adapter-path "."</span>
<span class="s">--logger "xunit;LogFilePath=TodoWebApp.UnitTests.xunit.xml"</span>
<span class="s">/p:CollectCoverage=True</span>
<span class="s">/p:CoverletOutputFormat=opencover</span>
<span class="s">/p:CoverletOutput="TodoWebApp.UnitTests.opencover.xml"</span>
<span class="s">/p:Include="[TodoWebApp]*"</span>
<span class="na">name</span><span class="pi">:</span> <span class="s">run_unit_tests</span>
<span class="na">displayName</span><span class="pi">:</span> <span class="s">Run unit tests</span>
<span class="na">enabled</span><span class="pi">:</span> <span class="s">True</span>
</code></pre></div></div>
<p>The <strong>--logger</strong> parameter specifies that the test result files will be generated using the xUnit format; the <strong>LogFilePath</strong> property specifies the name of the test results file, which will be used when publishing these results. Each solution project located under the UnitTests folder will generate a test results file named <strong>TodoWebApp.UnitTests.xunit.xml</strong>.</p>
<h3 id="run-integration-tests">Run integration tests</h3>
<p>Running the integration tests residing inside the <a href="https://github.com/satrapu/aspnet-core-logging/tree/master/Tests/IntegrationTests">Tests/IntegrationTests</a> folder is done in a similar way, the only differences being the filter and the result file names:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="pi">-</span> <span class="na">script</span><span class="pi">:</span> <span class="pi">>-</span>
<span class="s">dotnet test $(Build.SourcesDirectory)/Todo.sln</span>
<span class="s">--no-build</span>
<span class="s">--configuration ${{ parameters.build.configuration }}</span>
<span class="s">--filter "FullyQualifiedName~IntegrationTests"</span>
<span class="s">--test-adapter-path "."</span>
<span class="s">--logger "xunit;LogFilePath=TodoWebApp.IntegrationTests.xunit.xml"</span>
<span class="s">/p:CollectCoverage=True</span>
<span class="s">/p:CoverletOutputFormat=opencover</span>
<span class="s">/p:CoverletOutput="TodoWebApp.IntegrationTests.opencover.xml"</span>
<span class="s">/p:Include="[TodoWebApp]*"</span>
<span class="na">name</span><span class="pi">:</span> <span class="s">run_integration_tests</span>
<span class="na">displayName</span><span class="pi">:</span> <span class="s">Run integration tests</span>
<span class="na">enabled</span><span class="pi">:</span> <span class="s">True</span>
</code></pre></div></div>
<h3 id="publish-test-results">Publish test results</h3>
<p>Once the tests have been run, the pipeline publishes their results via the <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/test/publish-test-results?view=azure-devops&tabs=yaml">PublishTestResults@2 task</a>:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="pi">-</span> <span class="na">task</span><span class="pi">:</span> <span class="s">PublishTestResults@2</span>
<span class="na">displayName</span><span class="pi">:</span> <span class="s">Publish test results</span>
<span class="na">name</span><span class="pi">:</span> <span class="s">publish_test_results</span>
<span class="na">enabled</span><span class="pi">:</span> <span class="s">True</span>
<span class="na">inputs</span><span class="pi">:</span>
<span class="na">testResultsFormat</span><span class="pi">:</span> <span class="s1">'</span><span class="s">xUnit'</span>
<span class="na">testResultsFiles</span><span class="pi">:</span> <span class="s1">'</span><span class="s">$(Build.SourcesDirectory)/Tests/**/*.xunit.xml'</span>
<span class="na">mergeTestResults</span><span class="pi">:</span> <span class="s">True</span>
<span class="na">buildConfiguration</span><span class="pi">:</span> <span class="s">${{ parameters.build.configuration }}</span>
<span class="na">publishRunAttachments</span><span class="pi">:</span> <span class="s">True</span>
</code></pre></div></div>
<p>Both unit and integration test results files are located under the <strong>Tests</strong> folder and their names end in <strong>.xunit.xml</strong>, thus the need to set the <strong>testResultsFiles</strong> YAML property to the above expression.</p>
<h2 id="code-coverage-using-coverlet">Code coverage using Coverlet</h2>
<p>Last year I have read <a href="https://www.hanselman.com/blog/NETCoreCodeCoverageAsAGlobalToolWithCoverlet.aspx">an article</a> by Scott Hanselman about code coverage using <a href="https://github.com/tonerdo/coverlet">Coverlet</a> and this really got under my skin, so I <em>had</em> to use this tool in my next .NET Core project!</p>
<p>Before describing how Coverlet can be integrated with Azure Pipelines, I have to say this: code coverage should <strong>not</strong> be used as a quality metric in a project, since reaching a high percentage of coverage does not necessarily mean your code is bug-free; on the other hand, coverage can help you identify those parts of your application which are <strong>not</strong> tested.</p>
<p>Coverlet’s <a href="https://github.com/tonerdo/coverlet#usage">GitHub project page</a> states:</p>
<blockquote>
<p>Coverlet can be used either as a .NET Core global tool that can be invoked from a terminal or as a NuGet package that integrates with the MSBuild system of your test project.</p>
</blockquote>
<p>Considering the above statement, I have chosen to <a href="https://github.com/tonerdo/coverlet/blob/master/Documentation/MSBuildIntegration.md#coverlet-integration-with-msbuild">integrate Coverlet with MSBuild</a> by adding a reference to the <a href="https://www.nuget.org/packages/coverlet.msbuild/">coverlet.msbuild</a> NuGet package and setting specific MSBuild properties to the appropriate values when running <a href="#run-unit-tests">unit</a> and <a href="#run-integration-tests">integration</a> tests.</p>
<h3 id="collect-code-coverage-data">Collect code coverage data</h3>
<p>In order to collect code coverage data, one must set several MSBuild properties:</p>
<ul>
<li><a href="https://github.com/tonerdo/coverlet/blob/master/Documentation/MSBuildIntegration.md#code-coverage">CollectCoverage</a> - enables or disables collecting coverage data</li>
<li><a href="https://github.com/tonerdo/coverlet/blob/master/Documentation/MSBuildIntegration.md#coverage-output">CoverletOutputFormat</a> - specifies the format of the coverage data (e.g. OpenCover, Cobertura, etc.)</li>
<li>CoverletOutput - specifies the path where the coverage data file will be generated</li>
<li><a href="https://github.com/tonerdo/coverlet/blob/master/Documentation/MSBuildIntegration.md#filters">Include</a> - specifies the assemblies and classes for which to collect coverage data</li>
<li>and many others</li>
</ul>
<p>I have chosen OpenCover as the coverage data format since it’s <a href="https://docs.sonarqube.org/pages/viewpage.action?pageId=6389770">supported by SonarQube</a>, a code quality tool also used by my pipeline:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="pi">-</span> <span class="na">script</span><span class="pi">:</span> <span class="pi">>-</span>
<span class="s">dotnet test $(Build.SourcesDirectory)/Todo.sln</span>
<span class="s">...</span>
<span class="s">/p:CollectCoverage=True</span>
<span class="s">/p:CoverletOutputFormat=opencover</span>
<span class="s">/p:CoverletOutput="TodoWebApp.UnitTests.opencover.xml"</span>
<span class="s">/p:Include="[TodoWebApp]*"</span>
</code></pre></div></div>
<h3 id="install-reportgenerator-net-core-tool">Install ReportGenerator .NET Core tool</h3>
<p>Azure Pipelines provides a task for publishing code coverage, <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/test/publish-code-coverage-results?view=azure-devops">PublishCodeCoverageResults@1</a>, but since this task only supports coverage data files in the Cobertura or JaCoCo formats, I had to use <a href="https://github.com/danielpalme/ReportGenerator">ReportGenerator</a> to convert files from the OpenCover format to Cobertura. This tool can be installed as a <a href="https://docs.microsoft.com/en-us/dotnet/core/tools/global-tools">.NET Core global tool</a>, so it was easy to integrate it with my pipeline:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">variables</span><span class="pi">:</span>
<span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s1">'</span><span class="s">ReportGenerator_Version'</span>
<span class="na">value</span><span class="pi">:</span> <span class="s1">'</span><span class="s">4.0.7'</span>
<span class="nn">...</span>
<span class="pi">-</span> <span class="na">script</span><span class="pi">:</span> <span class="pi">>-</span>
<span class="s">dotnet tool install dotnet-reportgenerator-globaltool</span>
<span class="s">--global</span>
<span class="s">--version $(ReportGenerator_Version)</span>
<span class="na">name</span><span class="pi">:</span> <span class="s">install_code_coverage_report_generator</span>
<span class="na">displayName</span><span class="pi">:</span> <span class="s">Install code coverage report generator tool</span>
<span class="na">enabled</span><span class="pi">:</span> <span class="s">True</span>
</code></pre></div></div>
<h3 id="generate-code-coverage-html-report-using-reportgenerator">Generate code coverage HTML report using ReportGenerator</h3>
<p>ReportGenerator is capable of converting coverage data files in the OpenCover format into several other formats in a single run:</p>
<ul>
<li>Cobertura: used for calculating coverage metrics</li>
<li>HTML optimized for Azure Pipelines: used for displaying coverage results as HTML</li>
</ul>
<p>Once ReportGenerator has been installed as a .NET Core global tool, it can be invoked from the command line like this:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="pi">-</span> <span class="na">script</span><span class="pi">:</span> <span class="pi">>-</span>
<span class="s">reportgenerator</span>
<span class="s">"-reports:$(Build.SourcesDirectory)/Tests/**/*.opencover.xml"</span>
<span class="s">"-targetdir:$(Build.SourcesDirectory)/.CoverageResults/Report"</span>
<span class="s">"-reporttypes:Cobertura;HtmlInline_AzurePipelines"</span>
<span class="na">name</span><span class="pi">:</span> <span class="s">generate_code_coverage_report</span>
<span class="na">displayName</span><span class="pi">:</span> <span class="s">Generate code coverage report</span>
<span class="na">enabled</span><span class="pi">:</span> <span class="s">True</span>
</code></pre></div></div>
<p>The tool scans all test projects for coverage data files in the OpenCover format and generates both Cobertura and HTML files, with <strong>.CoverageResults/Report</strong> as the output folder.<br />
This folder contains a Cobertura.xml file storing all coverage metrics, as well as several HTML files containing the source code with coverage-related line highlighting:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>│ Cobertura.xml
│ index.htm
│ index.html
│ TodoWebApp_LoggingMiddleware.htm
│ TodoWebApp_LoggingMiddlewareExtensions.htm
│ TodoWebApp_LoggingService.htm
│ TodoWebApp_Program.htm
│ TodoWebApp_Startup.htm
│ TodoWebApp_StreamExtensions.htm
│ TodoWebApp_TodoController.htm
│ TodoWebApp_TodoDbContext.htm
│ TodoWebApp_TodoItem.htm
│ TodoWebApp_TodoService.htm
│
└───summary236
Cobertura.xml
</code></pre></div></div>
<p>Click <a href="https://satrapu.visualstudio.com/2407d56f-dabc-4301-8ac1-cab15e9e9b20/_apis/build/builds/236/artifacts?artifactName=Code%20Coverage%20Report_236&api-version=5.1-preview.5&%24format=zip">here</a> to download a sample.</p>
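<p>The Cobertura root element carries the aggregated metrics as attributes, so the overall coverage percentage can be read without opening the HTML report. The sketch below extracts the <code>line-rate</code> attribute with standard shell tools; the file content is a hypothetical minimal example, not an actual ReportGenerator output:</p>

```shell
# Sketch: read the overall line coverage from a Cobertura report.
# The XML below is a hypothetical minimal file; a real Cobertura.xml
# generated by ReportGenerator exposes the same root attributes.
report=$(mktemp)
cat > "$report" <<'EOF'
<?xml version="1.0" encoding="utf-8"?>
<coverage line-rate="0.85" branch-rate="0.7" lines-covered="170" lines-valid="200">
</coverage>
EOF

# Capture the value of the line-rate attribute (a fraction between 0 and 1)
line_rate=$(sed -n 's/.*line-rate="\([0-9.]*\)".*/\1/p' "$report" | head -n 1)
echo "Overall line coverage: $line_rate"

rm -f "$report"
```

<p>This kind of one-liner can be handy for a quick sanity check on a build agent, while the PublishCodeCoverageResults@1 task remains the proper way to surface the metrics in Azure Pipelines.</p>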
<h3 id="publish-code-coverage-report">Publish code coverage report</h3>
<p>Once the code coverage report has been generated, the pipeline will use the <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/test/publish-code-coverage-results?view=azure-devops">PublishCodeCoverageResults@1 task</a> to publish it:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="pi">-</span> <span class="na">task</span><span class="pi">:</span> <span class="s">PublishCodeCoverageResults@1</span>
<span class="na">name</span><span class="pi">:</span> <span class="s">publish_code_coverage_report</span>
<span class="na">displayName</span><span class="pi">:</span> <span class="s">Publish code coverage report</span>
<span class="na">enabled</span><span class="pi">:</span> <span class="s">True</span>
<span class="na">inputs</span><span class="pi">:</span>
<span class="na">codeCoverageTool</span><span class="pi">:</span> <span class="s1">'</span><span class="s">Cobertura'</span>
<span class="na">summaryFileLocation</span><span class="pi">:</span> <span class="s1">'</span><span class="s">$(Build.SourcesDirectory)/.CoverageResults/Report/Cobertura.xml'</span>
<span class="na">reportDirectory</span><span class="pi">:</span> <span class="s1">'</span><span class="s">$(Build.SourcesDirectory)/.CoverageResults/Report'</span>
</code></pre></div></div>
<h2 id="static-code-analysis-using-sonarqube">Static code analysis using SonarQube</h2>
<p><a href="https://www.sonarqube.org/">SonarQube</a> is a product developed by <a href="https://www.sonarsource.com/">SonarSource</a> which helps developers write higher quality code. This tool supports many programming languages, C# being one of them. SonarQube can be deployed on-premises, but it can also be used as a service and since my pipeline runs on Azure, I have used the latter model.</p>
<p>Doing static code analysis against a .NET Core solution usually consists of:</p>
<ul>
<li>Install a specific .NET Core tool which performs the static analysis</li>
<li>Begin the analysis</li>
<li>Build the solution</li>
<li>Run automated tests with code coverage</li>
<li>Upload the analysis results to SonarCloud</li>
<li>Check the quality gate</li>
</ul>
<h3 id="sonarcloud">SonarCloud</h3>
<p><a href="https://sonarcloud.io/about">SonarCloud</a> is <em>SonarQube as a Service</em> and it’s <a href="https://sonarcloud.io/about/pricing">free</a> for open source projects, like <a href="https://github.com/satrapu/aspnet-core-logging">mine</a>.<br />
Any SonarCloud project comes with a predefined quality gate which is read-only, but you can use it as a template to create your own, as documented <a href="https://sonarcloud.io/documentation/user-guide/quality-gates/">here</a>.<br />
SonarCloud is an actively developed product, so expect new features to pop up almost weekly, like this one: <a href="https://community.sonarsource.com/t/pull-requests-get-a-real-quality-gate-status/7814">Pull Requests get a real Quality Gate status</a>.</p>
<h3 id="setup-sonarcloud-account">Setup SonarCloud account</h3>
<p>Creating an account is as simple as navigating to the SonarCloud <a href="https://sonarcloud.io/sessions/new">home page</a> and picking your login account type - in my case, GitHub.</p>
<h3 id="use-sonarcloud-token-during-build">Use SonarCloud token during build</h3>
<p>In order to have SonarCloud analyze the code quality report generated by the pipeline, one has to provide a token which acts as both username and password - follow <a href="https://docs.sonarqube.org/latest/user-guide/user-token/">these steps</a> to create one.<br />
Treat this token as you would a password: don’t share it and don’t store it as plain text in your source code. The pipeline will use it via Azure Pipelines support for <a href="#use-secrets">secrets</a>:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="pi">-</span> <span class="na">script</span><span class="pi">:</span> <span class="pi">>-</span>
<span class="s">dotnet-sonarscanner ...</span>
<span class="s">/d:sonar.login="$(CurrentProject.Sonar.Token)"</span>
<span class="s">...</span>
<span class="na">name</span><span class="pi">:</span> <span class="s">prepare_sonarqube_analysis</span>
<span class="na">displayName</span><span class="pi">:</span> <span class="s">Prepare SonarQube analysis</span>
<span class="na">enabled</span><span class="pi">:</span> <span class="s">${{ parameters.sonar.enabled }}</span>
</code></pre></div></div>
<p>The <strong>/d:sonar.login=”$(CurrentProject.Sonar.Token)”</strong> parameter instructs the static code analyzer to use the token passed as a variable reference.</p>
<h3 id="install-dotnet-sonarscanner-tool">Install dotnet-sonarscanner tool</h3>
<p><a href="https://docs.sonarqube.org/display/SCAN/Analyzing+with+SonarQube+Scanner+for+MSBuild">This page</a> states:</p>
<blockquote>
<p>The SonarScanner for MSBuild is the recommended way to launch a SonarQube or SonarCloud analysis for projects/solutions using MSBuild or dotnet command as build tool.</p>
</blockquote>
<p>In other words, there is a .NET Core tool, <a href="https://www.nuget.org/packages/dotnet-sonarscanner">dotnet-sonarscanner</a>, which can be used for doing static code analysis from within a pipeline:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">variables</span><span class="pi">:</span>
<span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s1">'</span><span class="s">SonarScanner_Version'</span>
<span class="na">value</span><span class="pi">:</span> <span class="s1">'</span><span class="s">4.5.0'</span>
<span class="nn">...</span>
<span class="pi">-</span> <span class="na">script</span><span class="pi">:</span> <span class="pi">>-</span>
<span class="s">dotnet tool install dotnet-sonarscanner</span>
<span class="s">--global</span>
<span class="s">--version $(SonarScanner_Version)</span>
<span class="na">name</span><span class="pi">:</span> <span class="s">install_sonarscanner</span>
<span class="na">displayName</span><span class="pi">:</span> <span class="s">Install SonarQube static code analyzing CLI tool</span>
<span class="na">enabled</span><span class="pi">:</span> <span class="s">${{ parameters.sonar.enabled }}</span>
</code></pre></div></div>
<h3 id="run-dotnet-sonarscanner-tool">Run dotnet-sonarscanner tool</h3>
<p>The static code analysis must be initiated via the <strong>begin</strong> verb of the dotnet-sonarscanner tool:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="pi">-</span> <span class="na">script</span><span class="pi">:</span> <span class="pi">>-</span>
<span class="s">dotnet-sonarscanner begin</span>
<span class="s">/k:"$(CurrentProject.Sonar.ProjectKey)"</span>
<span class="s">/v:"$(CurrentProject.Version)"</span>
<span class="s">/s:"$(Build.SourcesDirectory)/Build/SonarQubeAnalysis.xml"</span>
<span class="s">/d:sonar.login="$(CurrentProject.Sonar.Token)"</span>
<span class="s">/d:sonar.branch.name="$(Build.SourceBranchName)"</span>
<span class="na">name</span><span class="pi">:</span> <span class="s">prepare_sonarqube_analysis</span>
<span class="na">displayName</span><span class="pi">:</span> <span class="s">Prepare SonarQube analysis</span>
<span class="na">enabled</span><span class="pi">:</span> <span class="s">${{ parameters.sonar.enabled }}</span>
</code></pre></div></div>
<p>The command above uses several parameters:</p>
<ul>
<li><strong>/k:”$(CurrentProject.Sonar.ProjectKey)”</strong> specifies the key of the project currently being analyzed
<ul>
<li>The project key is represented by the value of the <strong>id</strong> query string parameter present on the <a href="https://sonarcloud.io/projects">SonarCloud projects page</a></li>
<li>For <a href="https://sonarcloud.io/dashboard?id=aspnet-core-logging">my project</a>, the project key is <strong>aspnet-core-logging</strong></li>
</ul>
</li>
<li><strong>/v:”$(CurrentProject.Version)”</strong> specifies the version to be associated with the current analysis
<ul>
<li>This means you can track the quality history of your project over a period of time; visit this history on the SonarCloud <a href="https://sonarcloud.io/project/activity?id=aspnet-core-logging">activity page</a></li>
</ul>
</li>
<li><strong>/s:”$(Build.SourcesDirectory)/Build/SonarQubeAnalysis.xml”</strong> specifies the XML settings file used to customize the current analysis
<ul>
<li>See the settings applicable to my project <a href="https://github.com/satrapu/aspnet-core-logging/blob/master/Build/SonarQubeAnalysis.xml">here</a></li>
<li>Most analysis parameters can be found <a href="https://docs.sonarqube.org/latest/analysis/analysis-parameters/">here</a></li>
</ul>
</li>
<li><strong>/d:sonar.login=”$(CurrentProject.Sonar.Token)”</strong> represents the authentication token</li>
<li><strong>/d:sonar.branch.name=”$(Build.SourceBranchName)”</strong> represents the source control branch containing the code currently being analyzed
<ul>
<li><strong>$(Build.SourceBranchName)</strong> is an Azure Pipeline built-in variable</li>
</ul>
</li>
</ul>
<h3 id="upload-static-code-analysis-report">Upload static code analysis report</h3>
<p>Once the analysis has been done, the data collected locally during this operation must be uploaded to the cloud using the <strong>end</strong> verb of the dotnet-sonarscanner tool:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="pi">-</span> <span class="na">script</span><span class="pi">:</span> <span class="pi">>-</span>
<span class="s">dotnet-sonarscanner end </span>
<span class="s">/d:sonar.login="$(CurrentProject.Sonar.Token)"</span>
<span class="na">name</span><span class="pi">:</span> <span class="s">upload_sonarqube_report</span>
<span class="na">displayName</span><span class="pi">:</span> <span class="s">Upload SonarQube report</span>
<span class="na">enabled</span><span class="pi">:</span> <span class="s">${{ parameters.sonar.enabled }}</span>
</code></pre></div></div>
<p>Only the <strong>token</strong> must be specified at this point.</p>
<p>Since the pipeline runs on several operating systems, I have disabled the Sonar analysis on both Linux and macOS, so the report to be uploaded will contain data collected on Windows only. As a direct consequence, the build running on Windows will take significantly longer than the builds running on Linux and macOS.</p>
<h3 id="use-sonarlint">Use SonarLint</h3>
<p>What if you’d like to know whether your changes will pass the quality gate <em>before</em> committing them? Enter <a href="https://www.sonarlint.org/">SonarLint</a>. This <strong>free</strong> tool is installed in your favorite IDE and can be connected to your SonarCloud project. For instance, check <a href="https://www.sonarlint.org/visualstudio/">this page</a> for the steps needed to integrate SonarLint with Visual Studio.</p>
<h3 id="use-sonarqube-build-breaker">Use SonarQube build breaker</h3>
<p>By <em>build breaker</em> I mean the ability to fail the pipeline in case the SonarQube quality gate did not pass due to issues like duplicated code or a security flaw. Such a feature looks very appealing, but it seems there is a catch: starting with <a href="https://blog.sonarsource.com/sonarqube-5-2-in-screenshots/">version 5.2</a>, SonarQube asynchronously analyzes the report it receives from a scanner. Such an analysis can take a while, so if a build polls the SonarQube server for the results, some resources may be blocked (e.g. the machine running the build), as stated <a href="https://blog.sonarsource.com/breaking-the-sonarqube-analysis-with-jenkins-pipelines/">here</a>.</p>
<p>Anyway, for the sake of experimenting and out of curiosity, I have investigated how to implement such a build breaker and stumbled upon a <a href="https://github.com/michaelcostabr/SonarQubeBuildBreaker/blob/master/SonarQubeBuildBreaker.ps1">PowerShell script</a>; after some tweaking, <a href="https://github.com/satrapu/aspnet-core-logging/blob/master/Build/SonarBuildBreaker.ps1">I was able</a> to query the SonarCloud server for the status of the quality gate and break the build if the gate did not pass:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="pi">-</span> <span class="na">task</span><span class="pi">:</span> <span class="s">PowerShell@2</span>
<span class="na">name</span><span class="pi">:</span> <span class="s">sonar_build_breaker</span>
<span class="na">displayName</span><span class="pi">:</span> <span class="s">Run Sonar build breaker</span>
<span class="na">condition</span><span class="pi">:</span> <span class="pi">|</span>
<span class="s">and</span>
<span class="s">(</span>
<span class="s">eq( ${{ parameters.sonar.enabled }}, True),</span>
<span class="s">eq( ${{ parameters.sonar.buildBreaker.enabled }}, True)</span>
<span class="s">)</span>
<span class="na">inputs</span><span class="pi">:</span>
<span class="na">targetType</span><span class="pi">:</span> <span class="s1">'</span><span class="s">filePath'</span>
<span class="na">filePath</span><span class="pi">:</span> <span class="s1">'</span><span class="s">$(Build.SourcesDirectory)/Build/SonarBuildBreaker.ps1'</span>
<span class="na">arguments</span><span class="pi">:</span> <span class="pi">>-</span>
<span class="s">-SonarToken "$(CurrentProject.Sonar.Token)"</span>
<span class="s">-DotSonarQubeFolder "$(Agent.BuildDirectory)/.sonarqube"</span>
<span class="na">errorActionPreference</span><span class="pi">:</span> <span class="s">stop</span>
<span class="na">failOnStderr</span><span class="pi">:</span> <span class="s">True</span>
<span class="na">workingDirectory</span><span class="pi">:</span> <span class="s">$(Build.SourcesDirectory)</span>
</code></pre></div></div>
<p>The task above, <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/utility/powershell?view=azure-devops">PowerShell@2</a>, queries the SonarCloud server only if both the SonarQube analysis and build breaker are enabled; for this purpose, I had to resort to custom expressions, like the ones documented <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/process/conditions?view=azure-devops&tabs=yaml#examples">here</a>.</p>
<p>Feel free to disable this task if you’re not OK with this build breaker!</p>
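<p>At its core, the build-breaker logic boils down to calling the SonarQube web API (<code>api/qualitygates/project_status</code>) and returning a non-zero exit code when the reported status is not <code>OK</code>. The PowerShell script linked above does this over HTTPS; the shell sketch below mimics only the decision step, against a hypothetical API response, without performing the actual network call:</p>

```shell
# Sketch: the decision step of a Sonar build breaker. The JSON passed in is a
# hypothetical api/qualitygates/project_status response; the real script
# fetches it from the SonarCloud server using the Sonar token.
check_quality_gate() {
  # Extract the value of the "status" field (e.g. OK, ERROR) from the JSON
  status=$(printf '%s' "$1" | sed -n 's/.*"status":[[:space:]]*"\([A-Z]*\)".*/\1/p' | head -n 1)
  if [ "$status" = "OK" ]; then
    echo "Quality gate passed"
    return 0
  fi
  echo "Quality gate failed with status: $status" >&2
  return 1   # a non-zero exit code fails the pipeline step
}

# Demo call with a hypothetical passing response
check_quality_gate '{"projectStatus":{"status":"OK","conditions":[]}}'
```

<p>A production version should of course use a proper JSON parser and handle network errors, but the exit-code contract is what makes the pipeline step fail or pass.</p>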
<h2 id="badges">Badges</h2>
<h3 id="azure-pipeline-status-badge">Azure Pipeline status badge</h3>
<p>Follow <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/get-started-yaml?view=azure-devops#get-the-status-badge">these steps</a> to display a build status badge on your GitHub README.md file.</p>
<h3 id="sonar-quality-gate-badge">Sonar quality gate badge</h3>
<p>In order to display the quality gate badge on your GitHub README.md file, go to your SonarCloud project dashboard (e.g. <a href="https://sonarcloud.io/dashboard?id=aspnet-core-logging">https://sonarcloud.io/dashboard?id=aspnet-core-logging</a>), click the <strong>Get project badges</strong> button at the bottom right and choose one of the many available badges; the <em>quality gate</em> Markdown fragment looks like this:</p>
<div class="language-markdown highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">[</span><span class="nv">![Quality Gate Status</span><span class="p">](</span><span class="sx">https://sonarcloud.io/api/project_badges/measure?project=aspnet-core-logging&metric=alert_status</span><span class="p">)</span>](https://sonarcloud.io/dashboard?id=aspnet-core-logging)
</code></pre></div></div>
<h2 id="conclusion">Conclusion</h2>
<p>Setting up my first instance of Azure Pipelines was not easy, but now that I’ve reached the point where each of my code changes triggers an automated build, I know there are so many ways to extend it:</p>
<ul>
<li>Use Docker for running a database to be targeted by some real integration tests</li>
<li>Run a code quality tool like <a href="https://www.jetbrains.com/help/resharper/InspectCode.html">InspectCode</a></li>
<li>Run a security scanner like <a href="https://snyk.io/">Snyk</a></li>
<li>Run a license management tool like <a href="https://www.whitesourcesoftware.com/">Whitesource</a></li>
<li>Integrate many other tools to ensure my OSS code is top of the line</li>
</ul>
<p>Azure Pipelines is not the only way of achieving CI for your OSS project, <a href="https://www.appveyor.com/">AppVeyor</a> being one of the alternatives, but being able to build my code against all major operating systems and having access to <a href="https://azure.microsoft.com/en-us/services/devops/pipelines/">so many features</a> is really nice, so I will most definitely invest more in learning and experimenting with Azure Pipelines!</p>
<h2 id="references">References</h2>
<ul>
<li><a href="https://azure.microsoft.com/en-us/services/devops/">Azure DevOps</a>
<ul>
<li><a href="https://docs.microsoft.com/en-us/azure/devops/?view=azure-devops">Documentation</a></li>
<li><a href="https://azure.microsoft.com/en-us/pricing/details/devops/azure-pipelines/">Pricing for Azure DevOps</a>, including a free plan</li>
</ul>
</li>
<li><a href="https://azure.microsoft.com/en-us/services/devops/pipelines/">Azure Pipelines</a>
<ul>
<li><a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/index?view=azure-devops">Documentation</a></li>
<li><a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/yaml-schema?view=azure-devops&tabs=schema">YAML schema reference</a></li>
<li>Other people’s pipelines
<ul>
<li><a href="https://github.com/dotnet/BenchmarkDotNet/blob/master/azure-pipelines.Windows.yml">BenchmarkDotNet</a></li>
<li><a href="https://github.com/dotnet/roslyn/blob/master/azure-pipelines.yml">Roslyn</a></li>
<li><a href="https://github.com/danielpalme/ReportGenerator/blob/master/azure-pipelines.yml">ReportGenerator</a></li>
</ul>
</li>
</ul>
</li>
<li><a href="https://marketplace.visualstudio.com/azuredevops/">Azure DevOps Extensions</a>
<ul>
<li><a href="https://marketplace.visualstudio.com/items?itemName=ms-azure-devops.azure-pipelines">Azure Pipelines for Visual Studio Code</a></li>
</ul>
</li>
<li><a href="https://www.sonarsource.com/">SonarSource</a>
<ul>
<li><a href="https://www.sonarsource.com/plans-and-pricing/">Plans & Pricing</a></li>
<li><a href="https://sonarcloud.io/documentation/">SonarCloud Documentation</a></li>
</ul>
</li>
</ul>

Logging HTTP context in ASP.NET Core
2018-12-27T18:15:24+00:00
https://crossprogramming.com/2018/12/27/logging-http-context-in-asp-net-core

<ul>
<li><a href="#context">Context</a></li>
<li><a href="#logging-in-aspnet-core">Logging support in ASP.NET Core</a>
<ul>
<li><a href="#general-info">General information</a></li>
<li><a href="#built-in-logging-providers">Built-in logging providers</a></li>
<li><a href="#configure-logging-provider">Configure a logging provider</a></li>
<li><a href="#using-an-ilogger">Using an ILogger</a></li>
<li><a href="#correlation-id">Correlation identifier</a></li>
</ul>
</li>
<li><a href="#logging-the-current-http-context">Logging the current HTTP context</a></li>
<li><a href="#implementation">Implementation</a>
<ul>
<li><a href="#when-to-log">When to log</a></li>
<li><a href="#what-to-log">What to log</a></li>
<li><a href="#how-to-log">How to log</a></li>
</ul>
</li>
<li><a href="#conclusion">Conclusion</a></li>
</ul>
<hr />
<!-- markdownlint-disable MD033 -->
<h2 id="context">Context</h2>
<p>Logging is an important feature each application <strong>must</strong> have. When deploying a multi-threaded and multi-user application in production, logging becomes <strong>crucial</strong>, as it tends to be the go-to approach for understanding what has happened in case of a production error - I’m not saying this is the <em>best</em> way, just the most <em>common</em> one I’ve seen. To have a complete picture of an application running in production, logging should be accompanied by <strong>monitoring</strong> (e.g. <a href="https://prometheus.io/">Prometheus</a>, etc.) and a <strong>centralized log aggregation solution</strong> (e.g. <a href="https://www.elastic.co/elk-stack">ELK stack</a>, <a href="https://www.digitalocean.com/community/tutorials/elasticsearch-fluentd-and-kibana-open-source-log-search-and-visualization">EFK stack</a>, etc.).</p>
<p>The purpose of this article is to explain how to log the HTTP requests and responses handled by an ASP.NET Core web application using Log4Net. The application is based on one of the official ASP.NET Core tutorials, <a href="https://docs.microsoft.com/en-us/aspnet/core/tutorials/first-web-api?view=aspnetcore-2.2&tabs=visual-studio">Create a web API with ASP.NET Core MVC</a>.<br />
The source code is hosted on <a href="https://github.com/satrapu/aspnet-core-logging">GitHub</a>, while the automatic builds are provided by <a href="https://dev.azure.com/satrapu/aspnet-core-logging/_build?definitionId=2&_a=summary">Azure DevOps</a>.</p>
<p>Development environment</p>
<ul>
<li>.NET Core SDK v2.2.101</li>
<li>Visual Studio 2017 Community Edition v15.9.4</li>
<li>Windows 10 Pro x64, version 1809</li>
</ul>
<p>When running the <em>dotnet --info</em> command from any terminal, I see:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>dotnet <span class="nt">--info</span>
.NET Core SDK <span class="o">(</span>reflecting any global.json<span class="o">)</span>:
Version: 2.2.101
Commit: 236713b0b7
Runtime Environment:
OS Name: Windows
OS Version: 10.0.17763
OS Platform: Windows
RID: win10-x64
Base Path: C:<span class="se">\P</span>rogram Files<span class="se">\d</span>otnet<span class="se">\s</span>dk<span class="se">\2</span>.2.101<span class="se">\</span>
Host <span class="o">(</span>useful <span class="k">for </span>support<span class="o">)</span>:
Version: 2.2.0
Commit: 1249f08fed
...
</code></pre></div></div>
<h2 id="logging-in-aspnet-core">Logging support in ASP.NET Core</h2>
<h3 id="general-info">General information</h3>
<p>ASP.NET Core provides logging support integrated with its dependency injection mechanism via several NuGet packages, the important ones being:</p>
<ul>
<li><a href="https://www.nuget.org/packages/Microsoft.Extensions.Logging.Abstractions">Microsoft.Extensions.Logging.Abstractions</a>
<ul>
<li>Contains logging infrastructure, like: <a href="https://github.com/aspnet/Extensions/blob/master/src/Logging/Logging.Abstractions/src/ILogger.cs">ILogger</a>, <a href="https://github.com/aspnet/Extensions/blob/master/src/Logging/Logging.Abstractions/src/LoggerExtensions.cs">LoggerExtensions</a> or <a href="https://github.com/aspnet/Extensions/blob/master/src/Logging/Logging.Abstractions/src/NullLogger.cs">NullLogger</a></li>
</ul>
</li>
<li><a href="https://www.nuget.org/packages/Microsoft.Extensions.Logging">Microsoft.Extensions.Logging</a>
<ul>
<li>Contains default implementations, like: <a href="https://github.com/aspnet/Extensions/blob/master/src/Logging/Logging/src/LoggerFactory.cs">LoggerFactory</a></li>
</ul>
</li>
</ul>
<p>Microsoft has done a pretty good job of documenting .NET Core and ASP.NET Core, and logging is no exception - see more here: <a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/logging/?view=aspnetcore-2.2">Logging in ASP.NET Core</a>.<br />
A good companion for the aforementioned link is this one: <a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/logging/loggermessage?view=aspnetcore-2.2">High-performance logging with LoggerMessage in ASP.NET Core</a>.</p>
<h3 id="built-in-logging-providers">Built-in logging providers</h3>
<p>By default, when instantiating a web host builder via the <a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.webhost.createdefaultbuilder?view=aspnetcore-2.1">WebHost.CreateDefaultBuilder</a> method in order to start a web application, ASP.NET Core will configure several of its <a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/logging/?view=aspnetcore-2.2#built-in-logging-providers">built-in logging providers</a>:</p>
<ul>
<li>Console - found inside <a href="https://www.nuget.org/packages/Microsoft.Extensions.Logging.Console">Microsoft.Extensions.Logging.Console</a> NuGet package</li>
<li>Debug - found inside <a href="https://www.nuget.org/packages/Microsoft.Extensions.Logging.Debug">Microsoft.Extensions.Logging.Debug</a> NuGet package</li>
<li>Event Source - found inside <a href="https://www.nuget.org/packages/Microsoft.Extensions.Logging.EventSource">Microsoft.Extensions.Logging.EventSource</a> NuGet package</li>
</ul>
<p>Here’s the code fragment used for configuring these logging providers (as seen inside the decompiled code):</p>
<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">namespace</span> <span class="nn">Microsoft.AspNetCore</span>
<span class="p">{</span>
<span class="k">public</span> <span class="k">static</span> <span class="k">class</span> <span class="nc">WebHost</span>
<span class="p">{</span>
<span class="p">...</span>
<span class="k">public</span> <span class="k">static</span> <span class="n">IWebHostBuilder</span> <span class="nf">CreateDefaultBuilder</span><span class="p">(</span><span class="kt">string</span><span class="p">[]</span> <span class="n">args</span><span class="p">)</span>
<span class="p">{</span>
<span class="p">...</span><span class="nf">ConfigureLogging</span><span class="p">((</span><span class="n">Action</span><span class="p"><</span><span class="n">WebHostBuilderContext</span><span class="p">,</span> <span class="n">ILoggingBuilder</span><span class="p">>)</span> <span class="p">((</span><span class="n">hostingContext</span><span class="p">,</span> <span class="n">logging</span><span class="p">)</span> <span class="p">=></span>
<span class="p">{</span>
<span class="n">logging</span><span class="p">.</span><span class="nf">AddConfiguration</span><span class="p">((</span><span class="n">IConfiguration</span><span class="p">)</span> <span class="n">hostingContext</span><span class="p">.</span><span class="n">Configuration</span><span class="p">.</span><span class="nf">GetSection</span><span class="p">(</span><span class="s">"Logging"</span><span class="p">));</span>
<span class="n">logging</span><span class="p">.</span><span class="nf">AddConsole</span><span class="p">();</span>
<span class="n">logging</span><span class="p">.</span><span class="nf">AddDebug</span><span class="p">();</span>
<span class="n">logging</span><span class="p">.</span><span class="nf">AddEventSourceLogger</span><span class="p">();</span>
<span class="p">}))...</span>
<span class="p">}</span>
<span class="p">...</span>
<span class="p">}</span>
</code></pre></div></div>
<p>The <strong>console</strong> logging provider will display log messages inside the terminal used for running the web application, so if you start the web app from Visual Studio, this provider is of little use.<br />
Here are several ways of starting an ASP.NET Core web app from the CLI (they assume you are inside the solution root folder and have opened a non-admin PowerShell terminal):</p>
<ul>
<li>Using compiled solution:</li>
</ul>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="w"> </span><span class="n">dotnet</span><span class="w"> </span><span class="o">.</span><span class="nx">\Sources\TodoWebApp\bin\Debug\netcoreapp2.2\TodoWebApp.dll</span><span class="w">
</span></code></pre></div></div>
<ul>
<li>Using source code:</li>
</ul>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="w"> </span><span class="n">dotnet</span><span class="w"> </span><span class="nx">run</span><span class="w"> </span><span class="nt">--project</span><span class="w"> </span><span class="o">.</span><span class="nx">\Sources\TodoWebApp\TodoWebApp.csproj</span><span class="w">
</span></code></pre></div></div>
<ul>
<li>Using published output:</li>
</ul>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># Generate a self-contained deployment for any Windows OS running on 64 bits</span><span class="w">
</span><span class="n">dotnet</span><span class="w"> </span><span class="nx">publish</span><span class="w"> </span><span class="o">.</span><span class="nx">\Sources\TodoWebApp\TodoWebApp.csproj</span><span class="w"> </span><span class="nt">--self-contained</span><span class="w"> </span><span class="nt">--runtime</span><span class="w"> </span><span class="nx">win-x64</span><span class="w">
</span><span class="c"># Run the native EXE</span><span class="w">
</span><span class="o">.</span><span class="n">\Sources\TodoWebApp\bin\Debug\netcoreapp2.2\win-x64\publish\TodoWebApp.exe</span><span class="w">
</span></code></pre></div></div>
<p>The <strong>debug</strong> logging provider will display log messages inside Visual Studio when starting the web app in debug mode (by default, by pressing the F5 key). I suggest configuring your IDE to redirect all output to the <strong>Immediate Window</strong> by going to Visual Studio menu -> Tools -> Options -> Debugging -> General and then checking the <em>Redirect all Output Window text to the Immediate Window</em> option.</p>
<p>I haven’t used the <strong>event source</strong> logging provider directly so far, so please read more about it in its dedicated section of the official documentation page, <a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/logging/?view=aspnetcore-2.2#eventsource-provider">Logging in ASP.NET Core</a>.</p>
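<p>If you do want to experiment with it, the events published through the <strong>Microsoft-Extensions-Logging</strong> EventSource can be captured with newer tooling such as the <code>dotnet-trace</code> global tool (this requires a .NET Core 3.0+ runtime on the target process; the process id below is a placeholder - this is a sketch, not part of the sample application):</p>

```shell
# Install the dotnet-trace global tool (one time only)
dotnet tool install --global dotnet-trace

# Capture logging events emitted by a running .NET Core process;
# replace 1234 with the actual process id of your web app
dotnet trace collect --process-id 1234 --providers Microsoft-Extensions-Logging
```

<p>The resulting trace file can then be inspected with PerfView or Visual Studio.</p>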
<p>In case you don’t need these logging providers, just call the <a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.extensions.logging.loggingbuilderextensions.clearproviders?view=aspnetcore-2.1">ClearProviders method</a> before configuring your own, e.g. Log4Net:</p>
<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">public</span> <span class="k">void</span> <span class="nf">ConfigureServices</span><span class="p">(</span><span class="n">IServiceCollection</span> <span class="n">services</span><span class="p">)</span>
<span class="p">{</span>
<span class="c1">// Configure logging</span>
<span class="n">services</span><span class="p">.</span><span class="nf">AddLogging</span><span class="p">(</span><span class="n">loggingBuilder</span> <span class="p">=></span>
<span class="p">{</span>
<span class="c1">// Ensure the built-in logging providers are no longer in use</span>
<span class="n">loggingBuilder</span><span class="p">.</span><span class="nf">ClearProviders</span><span class="p">();</span>
<span class="c1">// Configure Log4Net logging provider</span>
<span class="n">loggingBuilder</span><span class="p">.</span><span class="nf">AddLog4Net</span><span class="p">();</span>
<span class="p">});</span>
<span class="p">...</span>
<span class="p">}</span>
</code></pre></div></div>
<h3 id="configure-logging-provider">Configure a logging provider</h3>
<p>The .NET community has provided many logging providers, like: <a href="https://github.com/huorswords/Microsoft.Extensions.Logging.Log4Net.AspNetCore">Log4Net</a>, <a href="https://github.com/NLog/NLog.Web">NLog</a>, <a href="https://github.com/serilog/serilog-aspnetcore">Serilog</a>, etc.<br />
Pick your favorite one, install the appropriate NuGet package(s), then configure it by adding the proper code in one of <strong>several</strong> places:</p>
<!-- markdownlint-disable MD029 -->
<!-- markdownlint-disable MD031 -->
<!-- markdownlint-disable MD032 -->
<ol>
<li>Program class, which builds the host running the web application:
<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">public</span> <span class="k">static</span> <span class="k">class</span> <span class="nc">Program</span>
<span class="p">{</span>
<span class="k">public</span> <span class="k">static</span> <span class="k">void</span> <span class="nf">Main</span><span class="p">(</span><span class="kt">string</span><span class="p">[]</span> <span class="n">args</span><span class="p">)</span>
<span class="p">{</span>
<span class="nf">CreateWebHostBuilder</span><span class="p">(</span><span class="n">args</span><span class="p">).</span><span class="nf">Build</span><span class="p">()</span>
<span class="p">.</span><span class="nf">Run</span><span class="p">();</span>
<span class="p">}</span>
<span class="k">public</span> <span class="k">static</span> <span class="n">IWebHost</span> <span class="nf">CreateWebHostBuilder</span><span class="p">(</span><span class="kt">string</span><span class="p">[]</span> <span class="n">args</span><span class="p">)</span> <span class="p">=></span>
<span class="n">WebHost</span><span class="p">.</span><span class="nf">CreateDefaultBuilder</span><span class="p">(</span><span class="n">args</span><span class="p">)</span>
<span class="p">.</span><span class="n">UseStartup</span><span class="p"><</span><span class="n">Startup</span><span class="p">>()</span>
<span class="p">.</span><span class="nf">ConfigureLogging</span><span class="p">((</span><span class="n">hostingContext</span><span class="p">,</span> <span class="n">logging</span><span class="p">)</span> <span class="p">=></span>
<span class="p">{</span>
<span class="c1">// Logging provider setup goes here</span>
<span class="p">})</span>
<span class="p">.</span><span class="nf">Build</span><span class="p">();</span>
<span class="p">}</span>
</code></pre></div> </div>
</li>
<li><a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/startup?view=aspnetcore-2.2">Startup class</a><br />
2.1 <a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/startup?view=aspnetcore-2.2#the-configureservices-method">ConfigureServices method</a>:
<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">public</span> <span class="k">void</span> <span class="nf">ConfigureServices</span><span class="p">(</span><span class="n">IServiceCollection</span> <span class="n">services</span><span class="p">)</span>
<span class="p">{</span>
<span class="n">services</span><span class="p">.</span><span class="nf">AddLogging</span><span class="p">(</span><span class="n">loggingBuilder</span> <span class="p">=></span>
<span class="p">{</span>
<span class="c1">// Logging provider setup goes here</span>
<span class="p">});</span>
<span class="p">...</span>
<span class="p">}</span>
</code></pre></div> </div>
<p>2.2 <a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/startup?view=aspnetcore-2.2#the-configure-method">Configure method</a>:</p>
<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">public</span> <span class="k">void</span> <span class="nf">Configure</span><span class="p">(</span><span class="n">IApplicationBuilder</span> <span class="n">applicationBuilder</span>
<span class="p">,</span> <span class="n">IHostingEnvironment</span> <span class="n">environment</span>
<span class="p">,</span> <span class="n">ILoggerFactory</span> <span class="n">loggerFactory</span><span class="p">)</span>
<span class="p">{</span>
<span class="c1">// Use extensions methods against "loggerFactory" parameter</span>
<span class="c1">// to setup your logging provider</span>
<span class="p">...</span>
<span class="p">}</span>
</code></pre></div> </div>
</li>
</ol>
<p>I would personally use either the <strong>ConfigureServices</strong> or the <strong>Configure</strong> method of the Startup class for configuring logging, since they allow very complex customizations; on the other hand, I would use the Program class for basic setup, optionally enhancing it via the aforementioned methods.</p>
<p>Here’s an example how to configure <strong>Log4Net</strong> via the <strong>Startup.ConfigureServices</strong> method:</p>
<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">public</span> <span class="k">void</span> <span class="nf">ConfigureServices</span><span class="p">(</span><span class="n">IServiceCollection</span> <span class="n">services</span><span class="p">)</span>
<span class="p">{</span>
<span class="c1">// Configure logging</span>
<span class="n">services</span><span class="p">.</span><span class="nf">AddLogging</span><span class="p">(</span><span class="n">loggingBuilder</span> <span class="p">=></span>
<span class="p">{</span>
<span class="kt">var</span> <span class="n">log4NetProviderOptions</span> <span class="p">=</span> <span class="n">Configuration</span><span class="p">.</span><span class="nf">GetSection</span><span class="p">(</span><span class="s">"Log4NetCore"</span><span class="p">)</span>
<span class="p">.</span><span class="n">Get</span><span class="p"><</span><span class="n">Log4NetProviderOptions</span><span class="p">>();</span>
<span class="n">loggingBuilder</span><span class="p">.</span><span class="nf">AddLog4Net</span><span class="p">(</span><span class="n">log4NetProviderOptions</span><span class="p">);</span>
<span class="n">loggingBuilder</span><span class="p">.</span><span class="nf">SetMinimumLevel</span><span class="p">(</span><span class="n">LogLevel</span><span class="p">.</span><span class="n">Debug</span><span class="p">);</span>
<span class="p">});</span>
<span class="p">...</span>
<span class="p">}</span>
</code></pre></div></div>
<p>The code above will read the section named <strong>Log4NetCore</strong> from the current configuration and will map it to a class, <a href="https://github.com/huorswords/Microsoft.Extensions.Logging.Log4Net.AspNetCore/blob/develop/src/Microsoft.Extensions.Logging.Log4Net.AspNetCore/Log4NetProviderOptions.cs">Log4NetProviderOptions</a>, in order to gain access to the most important Log4Net configuration properties, like the path to the <a href="https://logging.apache.org/log4net/release/manual/configuration.html">XML file</a> declaring loggers and appenders.<br />
Then, the <strong>loggingBuilder.AddLog4Net</strong> method call will instruct ASP.NET Core to use <a href="https://github.com/huorswords/Microsoft.Extensions.Logging.Log4Net.AspNetCore/blob/develop/src/Microsoft.Extensions.Logging.Log4Net.AspNetCore/Log4NetProvider.cs">Log4NetProvider</a> class from now on as a factory for all needed <a href="https://github.com/aspnet/Extensions/blob/master/src/Logging/Logging.Abstractions/src/ILogger.cs">ILogger</a> objects.<br />
The <strong>loggingBuilder.SetMinimumLevel</strong> method call is explained <a href="https://github.com/huorswords/Microsoft.Extensions.Logging.Log4Net.AspNetCore#net-core-20---logging-debug-level-messages">here</a>.</p>
<h3 id="using-an-ilogger">Using an ILogger</h3>
<p>When a class needs to log a message, it has to declare a dependency upon the <strong>ILogger<T></strong> interface; the built-in ASP.NET Core dependency injection will resolve it automatically:</p>
<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">public</span> <span class="k">class</span> <span class="nc">TodoService</span> <span class="p">:</span> <span class="n">ITodoService</span>
<span class="p">{</span>
<span class="k">private</span> <span class="k">readonly</span> <span class="n">TodoDbContext</span> <span class="n">todoDbContext</span><span class="p">;</span>
<span class="k">private</span> <span class="k">readonly</span> <span class="n">ILogger</span> <span class="n">logger</span><span class="p">;</span>
<span class="k">public</span> <span class="nf">TodoService</span><span class="p">(</span><span class="n">TodoDbContext</span> <span class="n">todoDbContext</span><span class="p">,</span> <span class="n">ILogger</span><span class="p"><</span><span class="n">TodoService</span><span class="p">></span> <span class="n">logger</span><span class="p">)</span>
<span class="p">{</span>
<span class="k">this</span><span class="p">.</span><span class="n">todoDbContext</span> <span class="p">=</span> <span class="n">todoDbContext</span> <span class="p">??</span> <span class="k">throw</span> <span class="k">new</span> <span class="nf">ArgumentNullException</span><span class="p">(</span><span class="k">nameof</span><span class="p">(</span><span class="n">todoDbContext</span><span class="p">));</span>
<span class="k">this</span><span class="p">.</span><span class="n">logger</span> <span class="p">=</span> <span class="n">logger</span> <span class="p">??</span> <span class="k">throw</span> <span class="k">new</span> <span class="nf">ArgumentNullException</span><span class="p">(</span><span class="k">nameof</span><span class="p">(</span><span class="n">logger</span><span class="p">));</span>
<span class="p">}</span>
<span class="p">...</span>
<span class="p">}</span>
</code></pre></div></div>
<p>Then, the class can use its <strong>logger</strong> field to log a message:</p>
<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">public</span> <span class="n">IList</span><span class="p"><</span><span class="n">TodoItem</span><span class="p">></span> <span class="nf">GetAll</span><span class="p">()</span>
<span class="p">{</span>
<span class="k">if</span> <span class="p">(</span><span class="n">logger</span><span class="p">.</span><span class="nf">IsEnabled</span><span class="p">(</span><span class="n">LogLevel</span><span class="p">.</span><span class="n">Debug</span><span class="p">))</span>
<span class="p">{</span>
<span class="n">logger</span><span class="p">.</span><span class="nf">LogDebug</span><span class="p">(</span><span class="s">"GetAll() - BEGIN"</span><span class="p">);</span>
<span class="p">}</span>
<span class="kt">var</span> <span class="n">result</span> <span class="p">=</span> <span class="n">todoDbContext</span><span class="p">.</span><span class="n">TodoItems</span><span class="p">.</span><span class="nf">ToList</span><span class="p">();</span>
<span class="k">if</span> <span class="p">(</span><span class="n">logger</span><span class="p">.</span><span class="nf">IsEnabled</span><span class="p">(</span><span class="n">LogLevel</span><span class="p">.</span><span class="n">Debug</span><span class="p">))</span>
<span class="p">{</span>
<span class="n">logger</span><span class="p">.</span><span class="nf">LogDebug</span><span class="p">(</span><span class="s">"GetAll() - END"</span><span class="p">);</span>
<span class="p">}</span>
<span class="k">return</span> <span class="n">result</span><span class="p">;</span>
<span class="p">}</span>
</code></pre></div></div>
<h2 id="logging-the-current-http-context">Logging the current HTTP context</h2>
<p>One of the best ways offered by ASP.NET Core to inspect and manipulate the current HTTP context is via a <a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/middleware/?view=aspnetcore-2.2">middleware</a>.<br />
The <a href="https://github.com/satrapu/aspnet-core-logging/blob/master/Sources/TodoWebApp/Logging/LoggingMiddleware.cs">middleware</a> found inside this particular web application checks whether a given HTTP context must be logged and what exactly to log via two of its dependencies - to be more precise, 2 interfaces - since deciding <a href="https://github.com/satrapu/aspnet-core-logging/blob/master/Sources/TodoWebApp/Logging/IHttpContextLoggingHandler.cs">when to log</a> should not depend on <a href="https://github.com/satrapu/aspnet-core-logging/blob/master/Sources/TodoWebApp/Logging/IHttpObjectConverter.cs">what and how to log</a>.<br />
The middleware constructor looks like this:</p>
<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">public</span> <span class="nf">LoggingMiddleware</span><span class="p">(</span><span class="n">RequestDelegate</span> <span class="n">nextRequestDelegate</span>
<span class="p">,</span> <span class="n">IHttpContextLoggingHandler</span> <span class="n">httpContextLoggingHandler</span>
<span class="p">,</span> <span class="n">IHttpObjectConverter</span> <span class="n">httpObjectConverter</span>
<span class="p">,</span> <span class="n">ILogger</span><span class="p"><</span><span class="n">LoggingMiddleware</span><span class="p">></span> <span class="n">logger</span><span class="p">)</span>
<span class="p">{</span>
<span class="k">this</span><span class="p">.</span><span class="n">nextRequestDelegate</span> <span class="p">=</span> <span class="n">nextRequestDelegate</span> <span class="p">??</span> <span class="k">throw</span> <span class="k">new</span> <span class="nf">ArgumentNullException</span><span class="p">(</span><span class="k">nameof</span><span class="p">(</span><span class="n">nextRequestDelegate</span><span class="p">));</span>
<span class="k">this</span><span class="p">.</span><span class="n">httpContextLoggingHandler</span> <span class="p">=</span> <span class="n">httpContextLoggingHandler</span> <span class="p">??</span> <span class="k">throw</span> <span class="k">new</span> <span class="nf">ArgumentNullException</span><span class="p">(</span><span class="k">nameof</span><span class="p">(</span><span class="n">httpContextLoggingHandler</span><span class="p">));</span>
<span class="k">this</span><span class="p">.</span><span class="n">httpObjectConverter</span> <span class="p">=</span> <span class="n">httpObjectConverter</span> <span class="p">??</span> <span class="k">throw</span> <span class="k">new</span> <span class="nf">ArgumentNullException</span><span class="p">(</span><span class="k">nameof</span><span class="p">(</span><span class="n">httpObjectConverter</span><span class="p">));</span>
<span class="k">this</span><span class="p">.</span><span class="n">logger</span> <span class="p">=</span> <span class="n">logger</span> <span class="p">??</span> <span class="k">throw</span> <span class="k">new</span> <span class="nf">ArgumentNullException</span><span class="p">(</span><span class="k">nameof</span><span class="p">(</span><span class="n">logger</span><span class="p">));</span>
<span class="p">}</span>
</code></pre></div></div>
<p>The middleware logic looks like this:</p>
<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code>
<span class="k">public</span> <span class="k">async</span> <span class="n">Task</span> <span class="nf">Invoke</span><span class="p">(</span><span class="n">HttpContext</span> <span class="n">httpContext</span><span class="p">)</span>
<span class="p">{</span>
<span class="k">if</span> <span class="p">(</span><span class="n">httpContextLoggingHandler</span><span class="p">.</span><span class="nf">ShouldLog</span><span class="p">(</span><span class="n">httpContext</span><span class="p">))</span>
<span class="p">{</span>
<span class="k">await</span> <span class="nf">Log</span><span class="p">(</span><span class="n">httpContext</span><span class="p">);</span>
<span class="p">}</span>
<span class="k">else</span>
<span class="p">{</span>
<span class="k">await</span> <span class="nf">nextRequestDelegate</span><span class="p">(</span><span class="n">httpContext</span><span class="p">);</span>
<span class="p">}</span>
<span class="p">}</span>
<span class="k">private</span> <span class="k">async</span> <span class="n">Task</span> <span class="nf">Log</span><span class="p">(</span><span class="n">HttpContext</span> <span class="n">httpContext</span><span class="p">)</span>
<span class="p">{</span>
<span class="c1">// Code needed to log the given httpContext object</span>
<span class="p">}</span>
</code></pre></div></div>
<p>To enable this middleware, I have created an <a href="https://github.com/satrapu/aspnet-core-logging/blob/master/Sources/TodoWebApp/Logging/LoggingMiddlewareExtensions.cs">extension method</a> and called it from the <a href="https://github.com/satrapu/aspnet-core-logging/blob/master/Sources/TodoWebApp/Startup.cs">Startup class</a>:</p>
<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">public</span> <span class="k">void</span> <span class="nf">Configure</span><span class="p">(</span><span class="n">IApplicationBuilder</span> <span class="n">applicationBuilder</span><span class="p">,</span> <span class="n">IHostingEnvironment</span> <span class="n">environment</span><span class="p">)</span>
<span class="p">{</span>
<span class="c1">// Ensure logging middleware is invoked as early as possible</span>
<span class="n">applicationBuilder</span><span class="p">.</span><span class="nf">UseHttpLogging</span><span class="p">();</span>
<span class="p">...</span>
<span class="p">}</span>
</code></pre></div></div>
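<p>The <strong>UseHttpLogging</strong> extension method itself is not shown above; a minimal sketch of how such a method is typically implemented looks like this (treat the exact shape as an assumption - the authoritative version lives in the linked repository):</p>

```csharp
using Microsoft.AspNetCore.Builder;

public static class LoggingMiddlewareExtensions
{
    // Registers LoggingMiddleware in the request processing pipeline;
    // its remaining dependencies are resolved from the DI container
    public static IApplicationBuilder UseHttpLogging(this IApplicationBuilder applicationBuilder)
    {
        return applicationBuilder.UseMiddleware<LoggingMiddleware>();
    }
}
```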
<p>To keep things simple, I have written just one service class implementing <strong>IHttpContextLoggingHandler</strong> (when to log) and <strong>IHttpObjectConverter</strong> (what & how to log) interfaces - see <a href="https://github.com/satrapu/aspnet-core-logging/blob/master/Sources/TodoWebApp/Logging/LoggingService.cs">LoggingService class</a>.</p>
<p>The <a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.http.httpcontext?view=aspnetcore-2.1">HttpContext class</a> contains the <a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.http.httpcontext.request?view=aspnetcore-2.1">Request</a> and <a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.http.httpcontext.response?view=aspnetcore-2.1">Response</a> properties which will be converted to log messages by LoggingService class.<br />
The log message generated for the request contains the path, query string, headers and the body, while the log message generated for the response contains the headers and the body.</p>
<p>An <strong>HTTP request</strong> log message looks like this:</p>
<div class="language-text highlighter-rouge"><div class="highlight"><pre class="highlight"><code>2018-12-26 18:18:57,895 [11] DEBUG TodoWebApp.Logging.LoggingMiddleware
--- REQUEST 0HLJ82DDDSHQS: BEGIN ---
POST /api/todo HTTP/2.0
Host: localhost
Content-Type: application/json; charset=utf-8
{"Id":-100,"Name":"todo-item-4-testing-ceaa5446ff7940138b8da9a9b7c52b9d","IsComplete":false}
--- REQUEST 0HLJ82DDDSHQS: END ---
</code></pre></div></div>
<p>The accompanying <strong>HTTP response</strong> log message looks like this:</p>
<div class="language-text highlighter-rouge"><div class="highlight"><pre class="highlight"><code>2018-12-26 18:18:58,323 [11] DEBUG TodoWebApp.Logging.LoggingMiddleware
--- RESPONSE 0HLJ82DDDSHQS: BEGIN ---
HTTP/2.0 400 BadRequest
Content-Type: application/json; charset=utf-8
{"Id":["The field Id must be between 1 and 9.22337203685478E+18."]}
--- RESPONSE 0HLJ82DDDSHQS: END ---
</code></pre></div></div>
<h3 id="correlation-id">Correlation identifier</h3>
<p>The <strong>0HLJ82DDDSHQS</strong> string represents the so-called <em><a href="https://www.enterpriseintegrationpatterns.com/patterns/messaging/CorrelationIdentifier.html">correlation identifier</a></em>, which helps in understanding the user journeys inside the application.<br />
An HTTP request and its accompanying response will use the same correlation identifier; furthermore, any log message related to this pair should use this identifier too.<br />
ASP.NET Core offers such correlation identifiers via the <a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.http.httpcontext.traceidentifier?view=aspnetcore-2.1">HttpContext.TraceIdentifier</a> property, available on both the current HTTP request (the <strong>HttpRequest.HttpContext.TraceIdentifier</strong> property) and response (the <strong>HttpResponse.HttpContext.TraceIdentifier</strong> property).</p>
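<p>As an illustration (this is a hypothetical class, not code from the linked repository), a middleware could capture this identifier once and attach it to every related log message:</p>

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;

public class CorrelationLoggingMiddleware
{
    private readonly RequestDelegate nextRequestDelegate;
    private readonly ILogger logger;

    public CorrelationLoggingMiddleware(RequestDelegate nextRequestDelegate,
        ILogger<CorrelationLoggingMiddleware> logger)
    {
        this.nextRequestDelegate = nextRequestDelegate;
        this.logger = logger;
    }

    public async Task Invoke(HttpContext httpContext)
    {
        // The same value is exposed by both httpContext.Request.HttpContext.TraceIdentifier
        // and httpContext.Response.HttpContext.TraceIdentifier
        string correlationId = httpContext.TraceIdentifier;

        logger.LogDebug("--- REQUEST {CorrelationId}: BEGIN ---", correlationId);
        await nextRequestDelegate(httpContext);
        logger.LogDebug("--- REQUEST {CorrelationId}: END ---", correlationId);
    }
}
```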
<h2 id="implementation">Implementation</h2>
<h3 id="when-to-log">When to log</h3>
<p>My approach to deciding when to log is based on whether the HTTP request is text-based, expects a text-based response, or has a path starting with a given string. The HTTP protocol allows the request to signal its body content type via the <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Type">Content-Type</a> header, while the expected content type is signaled via the <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Accept">Accept</a> header.<br />
For instance, in case the <em>Accept</em> or <em>Content-Type</em> header is set to <em>text/plain</em>, <em>application/json</em> or <em>application/xml</em>, the request is seen as <em>text-based</em> and thus will be logged; otherwise, the request path is checked and if it starts with <em>/api/</em> (e.g. <em>/api/todo</em>), the request will also be seen as <em>text-based</em> and will be logged.</p>
<p>Here’s the simplified version of my implementation for when to log:</p>
<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">public</span> <span class="k">class</span> <span class="nc">LoggingService</span> <span class="p">:</span> <span class="n">IHttpContextLoggingHandler</span><span class="p">,</span> <span class="n">IHttpObjectConverter</span>
<span class="p">{</span>
<span class="p">...</span>
<span class="k">private</span> <span class="k">static</span> <span class="k">readonly</span> <span class="kt">string</span><span class="p">[]</span> <span class="n">textBasedHeaderNames</span> <span class="p">=</span> <span class="p">{</span> <span class="s">"Accept"</span><span class="p">,</span> <span class="s">"Content-Type"</span> <span class="p">};</span>
<span class="k">private</span> <span class="k">static</span> <span class="k">readonly</span> <span class="kt">string</span><span class="p">[]</span> <span class="n">textBasedHeaderValues</span> <span class="p">=</span> <span class="p">{</span> <span class="s">"application/json"</span><span class="p">,</span> <span class="s">"application/xml"</span><span class="p">,</span> <span class="s">"text/"</span> <span class="p">};</span>
<span class="k">private</span> <span class="k">const</span> <span class="kt">string</span> <span class="n">ACCEPTABLE_REQUEST_URL_PREFIX</span> <span class="p">=</span> <span class="s">"/api/"</span><span class="p">;</span>

<span class="k">public</span> <span class="kt">bool</span> <span class="nf">ShouldLog</span><span class="p">(</span><span class="n">HttpContext</span> <span class="n">httpContext</span><span class="p">)</span>
<span class="p">{</span>
    <span class="k">return</span> <span class="nf">IsTextBased</span><span class="p">(</span><span class="n">httpContext</span><span class="p">.</span><span class="n">Request</span><span class="p">);</span>
<span class="p">}</span>

<span class="k">private</span> <span class="k">static</span> <span class="kt">bool</span> <span class="nf">IsTextBased</span><span class="p">(</span><span class="n">HttpRequest</span> <span class="n">httpRequest</span><span class="p">)</span>
<span class="p">{</span>
    <span class="k">return</span> <span class="n">textBasedHeaderNames</span><span class="p">.</span><span class="nf">Any</span><span class="p">(</span><span class="n">headerName</span> <span class="p">=></span> <span class="nf">IsTextBased</span><span class="p">(</span><span class="n">httpRequest</span><span class="p">,</span> <span class="n">headerName</span><span class="p">))</span>
        <span class="p">||</span> <span class="n">httpRequest</span><span class="p">.</span><span class="n">Path</span><span class="p">.</span><span class="nf">ToUriComponent</span><span class="p">().</span><span class="nf">StartsWith</span><span class="p">(</span><span class="n">ACCEPTABLE_REQUEST_URL_PREFIX</span><span class="p">);</span>
<span class="p">}</span>

<span class="k">private</span> <span class="k">static</span> <span class="kt">bool</span> <span class="nf">IsTextBased</span><span class="p">(</span><span class="n">HttpRequest</span> <span class="n">httpRequest</span><span class="p">,</span> <span class="kt">string</span> <span class="n">headerName</span><span class="p">)</span>
<span class="p">{</span>
    <span class="k">return</span> <span class="n">httpRequest</span><span class="p">.</span><span class="n">Headers</span><span class="p">.</span><span class="nf">TryGetValue</span><span class="p">(</span><span class="n">headerName</span><span class="p">,</span> <span class="k">out</span> <span class="kt">var</span> <span class="n">headerValues</span><span class="p">)</span>
        <span class="p">&&</span> <span class="n">textBasedHeaderValues</span><span class="p">.</span><span class="nf">Any</span><span class="p">(</span><span class="n">textBasedHeaderValue</span> <span class="p">=></span> <span class="n">headerValues</span><span class="p">.</span><span class="nf">Any</span><span class="p">(</span><span class="n">headerValue</span> <span class="p">=></span> <span class="n">headerValue</span><span class="p">.</span><span class="nf">StartsWith</span><span class="p">(</span><span class="n">textBasedHeaderValue</span><span class="p">)));</span>
<span class="p">}</span>
<span class="p">...</span>
<span class="p">}</span>
</code></pre></div></div>
<p>Please note the logic above is just for demo purposes; most probably <em>your</em> production-grade application is very different from this one, so make sure you provide your own implementation deciding when to log <em>before</em> deploying to production!</p>
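<p>As a starting point for such a custom implementation, here is a minimal, self-contained sketch (hypothetical, not part of the original repository) which only allows logging for requests targeting the API prefix and carrying a reasonably small declared body:</p>

```csharp
using System;

public static class LogFilter
{
    // Hypothetical filter: log only requests under the "/api/" prefix
    // whose declared content length stays below a fixed threshold.
    public static bool ShouldLog(string path, long? contentLength)
    {
        const string apiPrefix = "/api/";
        const long maxBodyBytes = 64 * 1024;

        return path.StartsWith(apiPrefix, StringComparison.OrdinalIgnoreCase)
            && (contentLength ?? 0) <= maxBodyBytes;
    }

    public static void Main()
    {
        Console.WriteLine(ShouldLog("/api/todo", 128));
        Console.WriteLine(ShouldLog("/health", 128));
    }
}
```

<p>In a real middleware, the path and content length would come from the current <code>HttpContext</code>; the point is that the decision logic stays a pure, easily testable function.</p>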
<h3 id="what-to-log">What to log</h3>
<p>The <strong>LoggingService</strong> class converts both the HTTP request and the HTTP response following the official HTTP message structure, as documented <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Messages">here</a>.<br />
Converting an HTTP request looks like this:</p>
<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">private</span> <span class="k">const</span> <span class="kt">int</span> <span class="n">REQUEST_SIZE</span> <span class="p">=</span> <span class="m">1000</span><span class="p">;</span>
<span class="p">...</span>
<span class="k">public</span> <span class="kt">string</span> <span class="nf">ToLogMessage</span><span class="p">(</span><span class="n">HttpRequest</span> <span class="n">httpRequest</span><span class="p">)</span>
<span class="p">{</span>
    <span class="k">if</span> <span class="p">(</span><span class="n">httpRequest</span> <span class="p">==</span> <span class="k">null</span><span class="p">)</span>
    <span class="p">{</span>
        <span class="k">throw</span> <span class="k">new</span> <span class="nf">ArgumentNullException</span><span class="p">(</span><span class="k">nameof</span><span class="p">(</span><span class="n">httpRequest</span><span class="p">));</span>
    <span class="p">}</span>

    <span class="k">if</span> <span class="p">(</span><span class="n">logger</span><span class="p">.</span><span class="nf">IsEnabled</span><span class="p">(</span><span class="n">LogLevel</span><span class="p">.</span><span class="n">Debug</span><span class="p">))</span>
    <span class="p">{</span>
        <span class="n">logger</span><span class="p">.</span><span class="nf">LogDebug</span><span class="p">(</span><span class="s">$"Converting HTTP request </span><span class="p">{</span><span class="n">httpRequest</span><span class="p">.</span><span class="n">HttpContext</span><span class="p">.</span><span class="n">TraceIdentifier</span><span class="p">}</span><span class="s"> ..."</span><span class="p">);</span>
    <span class="p">}</span>

    <span class="kt">var</span> <span class="n">stringBuilder</span> <span class="p">=</span> <span class="k">new</span> <span class="nf">StringBuilder</span><span class="p">(</span><span class="n">REQUEST_SIZE</span><span class="p">);</span>
    <span class="n">stringBuilder</span><span class="p">.</span><span class="nf">AppendLine</span><span class="p">(</span><span class="s">$"--- REQUEST </span><span class="p">{</span><span class="n">httpRequest</span><span class="p">.</span><span class="n">HttpContext</span><span class="p">.</span><span class="n">TraceIdentifier</span><span class="p">}</span><span class="s">: BEGIN ---"</span><span class="p">);</span>
    <span class="n">stringBuilder</span><span class="p">.</span><span class="nf">AppendLine</span><span class="p">(</span><span class="s">$"</span><span class="p">{</span><span class="n">httpRequest</span><span class="p">.</span><span class="n">Method</span><span class="p">}</span><span class="s"> </span><span class="p">{</span><span class="n">httpRequest</span><span class="p">.</span><span class="n">Path</span><span class="p">}{</span><span class="n">httpRequest</span><span class="p">.</span><span class="n">QueryString</span><span class="p">.</span><span class="nf">ToUriComponent</span><span class="p">()}</span><span class="s"> </span><span class="p">{</span><span class="n">httpRequest</span><span class="p">.</span><span class="n">Protocol</span><span class="p">}</span><span class="s">"</span><span class="p">);</span>

    <span class="k">if</span> <span class="p">(</span><span class="n">httpRequest</span><span class="p">.</span><span class="n">Headers</span><span class="p">.</span><span class="nf">Any</span><span class="p">())</span>
    <span class="p">{</span>
        <span class="k">foreach</span> <span class="p">(</span><span class="kt">var</span> <span class="n">header</span> <span class="k">in</span> <span class="n">httpRequest</span><span class="p">.</span><span class="n">Headers</span><span class="p">)</span>
        <span class="p">{</span>
            <span class="n">stringBuilder</span><span class="p">.</span><span class="nf">AppendLine</span><span class="p">(</span><span class="s">$"</span><span class="p">{</span><span class="n">header</span><span class="p">.</span><span class="n">Key</span><span class="p">}</span><span class="s">: </span><span class="p">{</span><span class="n">header</span><span class="p">.</span><span class="n">Value</span><span class="p">}</span><span class="s">"</span><span class="p">);</span>
        <span class="p">}</span>
    <span class="p">}</span>

    <span class="n">stringBuilder</span><span class="p">.</span><span class="nf">AppendLine</span><span class="p">();</span>
    <span class="n">stringBuilder</span><span class="p">.</span><span class="nf">AppendLine</span><span class="p">(</span><span class="n">httpRequest</span><span class="p">.</span><span class="n">Body</span><span class="p">.</span><span class="nf">ReadAndReset</span><span class="p">());</span>
    <span class="n">stringBuilder</span><span class="p">.</span><span class="nf">AppendLine</span><span class="p">(</span><span class="s">$"--- REQUEST </span><span class="p">{</span><span class="n">httpRequest</span><span class="p">.</span><span class="n">HttpContext</span><span class="p">.</span><span class="n">TraceIdentifier</span><span class="p">}</span><span class="s">: END ---"</span><span class="p">);</span>

    <span class="kt">var</span> <span class="n">result</span> <span class="p">=</span> <span class="n">stringBuilder</span><span class="p">.</span><span class="nf">ToString</span><span class="p">();</span>
    <span class="k">return</span> <span class="n">result</span><span class="p">;</span>
<span class="p">}</span>
</code></pre></div></div>
<p>Converting an HTTP response looks like this:</p>
<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">private</span> <span class="k">const</span> <span class="kt">int</span> <span class="n">RESPONSE_SIZE</span> <span class="p">=</span> <span class="m">1000</span><span class="p">;</span>
<span class="p">...</span>
<span class="k">public</span> <span class="kt">string</span> <span class="nf">ToLogMessage</span><span class="p">(</span><span class="n">HttpResponse</span> <span class="n">httpResponse</span><span class="p">)</span>
<span class="p">{</span>
    <span class="k">if</span> <span class="p">(</span><span class="n">httpResponse</span> <span class="p">==</span> <span class="k">null</span><span class="p">)</span>
    <span class="p">{</span>
        <span class="k">throw</span> <span class="k">new</span> <span class="nf">ArgumentNullException</span><span class="p">(</span><span class="k">nameof</span><span class="p">(</span><span class="n">httpResponse</span><span class="p">));</span>
    <span class="p">}</span>

    <span class="k">if</span> <span class="p">(</span><span class="n">logger</span><span class="p">.</span><span class="nf">IsEnabled</span><span class="p">(</span><span class="n">LogLevel</span><span class="p">.</span><span class="n">Debug</span><span class="p">))</span>
    <span class="p">{</span>
        <span class="n">logger</span><span class="p">.</span><span class="nf">LogDebug</span><span class="p">(</span><span class="s">$"Converting HTTP response </span><span class="p">{</span><span class="n">httpResponse</span><span class="p">.</span><span class="n">HttpContext</span><span class="p">.</span><span class="n">TraceIdentifier</span><span class="p">}</span><span class="s"> ..."</span><span class="p">);</span>
    <span class="p">}</span>

    <span class="kt">var</span> <span class="n">stringBuilder</span> <span class="p">=</span> <span class="k">new</span> <span class="nf">StringBuilder</span><span class="p">(</span><span class="n">RESPONSE_SIZE</span><span class="p">);</span>
    <span class="n">stringBuilder</span><span class="p">.</span><span class="nf">AppendLine</span><span class="p">(</span><span class="s">$"--- RESPONSE </span><span class="p">{</span><span class="n">httpResponse</span><span class="p">.</span><span class="n">HttpContext</span><span class="p">.</span><span class="n">TraceIdentifier</span><span class="p">}</span><span class="s">: BEGIN ---"</span><span class="p">);</span>
    <span class="n">stringBuilder</span><span class="p">.</span><span class="nf">AppendLine</span><span class="p">(</span><span class="s">$"</span><span class="p">{</span><span class="n">httpResponse</span><span class="p">.</span><span class="n">HttpContext</span><span class="p">.</span><span class="n">Request</span><span class="p">.</span><span class="n">Protocol</span><span class="p">}</span><span class="s"> </span><span class="p">{</span><span class="n">httpResponse</span><span class="p">.</span><span class="n">StatusCode</span><span class="p">}</span><span class="s"> </span><span class="p">{((</span><span class="n">HttpStatusCode</span><span class="p">)</span><span class="n">httpResponse</span><span class="p">.</span><span class="n">StatusCode</span><span class="p">).</span><span class="nf">ToString</span><span class="p">()}</span><span class="s">"</span><span class="p">);</span>

    <span class="k">if</span> <span class="p">(</span><span class="n">httpResponse</span><span class="p">.</span><span class="n">Headers</span><span class="p">.</span><span class="nf">Any</span><span class="p">())</span>
    <span class="p">{</span>
        <span class="k">foreach</span> <span class="p">(</span><span class="kt">var</span> <span class="n">header</span> <span class="k">in</span> <span class="n">httpResponse</span><span class="p">.</span><span class="n">Headers</span><span class="p">)</span>
        <span class="p">{</span>
            <span class="n">stringBuilder</span><span class="p">.</span><span class="nf">AppendLine</span><span class="p">(</span><span class="s">$"</span><span class="p">{</span><span class="n">header</span><span class="p">.</span><span class="n">Key</span><span class="p">}</span><span class="s">: </span><span class="p">{</span><span class="n">header</span><span class="p">.</span><span class="n">Value</span><span class="p">}</span><span class="s">"</span><span class="p">);</span>
        <span class="p">}</span>
    <span class="p">}</span>

    <span class="n">stringBuilder</span><span class="p">.</span><span class="nf">AppendLine</span><span class="p">();</span>
    <span class="n">stringBuilder</span><span class="p">.</span><span class="nf">AppendLine</span><span class="p">(</span><span class="n">httpResponse</span><span class="p">.</span><span class="n">Body</span><span class="p">.</span><span class="nf">ReadAndReset</span><span class="p">());</span>
    <span class="n">stringBuilder</span><span class="p">.</span><span class="nf">AppendLine</span><span class="p">(</span><span class="s">$"--- RESPONSE </span><span class="p">{</span><span class="n">httpResponse</span><span class="p">.</span><span class="n">HttpContext</span><span class="p">.</span><span class="n">TraceIdentifier</span><span class="p">}</span><span class="s">: END ---"</span><span class="p">);</span>

    <span class="kt">var</span> <span class="n">result</span> <span class="p">=</span> <span class="n">stringBuilder</span><span class="p">.</span><span class="nf">ToString</span><span class="p">();</span>
    <span class="k">return</span> <span class="n">result</span><span class="p">;</span>
<span class="p">}</span>
</code></pre></div></div>
<h3 id="how-to-log">How to log</h3>
<p>The <strong>LoggingMiddleware</strong> class must read both the HTTP request and the HTTP response in order to log them, and it must do so without affecting any subsequent middleware which might also need to read them.
Both <a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.http.httprequest.body?view=aspnetcore-2.1">HttpRequest.Body</a> and <a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.http.httpresponse.body?view=aspnetcore-2.1">HttpResponse.Body</a> properties are streams which, once read, cannot be reset to their initial position; this means that once a middleware has read the stream behind either of these properties, the following middleware will have nothing left to read.<br />
To work around this problem, one can call the <a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.http.internal.bufferinghelper.enablerewind?view=aspnetcore-2.1">EnableRewind method</a> on the HTTP request and replace the response body stream with a seekable one.</p>
<p>Enable rewinding the HTTP request stream:</p>
<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">httpContext</span><span class="p">.</span><span class="n">Request</span><span class="p">.</span><span class="nf">EnableRewind</span><span class="p">();</span>
</code></pre></div></div>
<p>Replacing the HTTP response stream with a seekable one:</p>
<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">private</span> <span class="k">const</span> <span class="kt">int</span> <span class="n">RESPONSE_BUFFER_SIZE_IN_BYTES</span> <span class="p">=</span> <span class="m">1024</span> <span class="p">*</span> <span class="m">1024</span><span class="p">;</span>
<span class="p">...</span>
<span class="kt">var</span> <span class="n">originalResponseBodyStream</span> <span class="p">=</span> <span class="n">httpContext</span><span class="p">.</span><span class="n">Response</span><span class="p">.</span><span class="n">Body</span><span class="p">;</span>

<span class="k">using</span> <span class="p">(</span><span class="kt">var</span> <span class="n">stream</span> <span class="p">=</span> <span class="k">new</span> <span class="nf">MemoryStream</span><span class="p">(</span><span class="n">RESPONSE_BUFFER_SIZE_IN_BYTES</span><span class="p">))</span>
<span class="p">{</span>
    <span class="c1">// Replace the response body stream with a seekable one, like a MemoryStream, to allow logging it</span>
    <span class="n">httpContext</span><span class="p">.</span><span class="n">Response</span><span class="p">.</span><span class="n">Body</span> <span class="p">=</span> <span class="n">stream</span><span class="p">;</span>

    <span class="c1">// Process the current request</span>
    <span class="k">await</span> <span class="nf">nextRequestDelegate</span><span class="p">(</span><span class="n">httpContext</span><span class="p">);</span>

    <span class="c1">// Log the current HTTP response</span>
    <span class="kt">var</span> <span class="n">httpResponseAsLogMessage</span> <span class="p">=</span> <span class="n">httpObjectConverter</span><span class="p">.</span><span class="nf">ToLogMessage</span><span class="p">(</span><span class="n">httpContext</span><span class="p">.</span><span class="n">Response</span><span class="p">);</span>
    <span class="n">logger</span><span class="p">.</span><span class="nf">LogDebug</span><span class="p">(</span><span class="n">httpResponseAsLogMessage</span><span class="p">);</span>

    <span class="c1">// Copy the buffered response body back to the original stream so it reaches the client</span>
    <span class="k">await</span> <span class="n">stream</span><span class="p">.</span><span class="nf">CopyToAsync</span><span class="p">(</span><span class="n">originalResponseBodyStream</span><span class="p">);</span>
<span class="p">}</span>
</code></pre></div></div>
<p>Both code fragments above can be found inside the <a href="https://github.com/satrapu/aspnet-core-logging/blob/master/Sources/TodoWebApp/Logging/LoggingMiddleware.cs#L62L89">LoggingMiddleware.Log method</a>.</p>
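<p>The buffer-and-copy-back technique used above can be demonstrated in isolation with plain <code>System.IO</code> streams. The sketch below (simplified, with hypothetical names, and independent of ASP.NET Core) buffers a payload in a seekable <code>MemoryStream</code>, reads it once for logging, rewinds it, then forwards it to the original destination stream:</p>

```csharp
using System;
using System.IO;
using System.Text;

public static class BufferingDemo
{
    // Buffers the payload in a seekable stream, reads it once (as a logging
    // middleware would), rewinds it, then copies it to the original stream.
    public static (string LoggedBody, long BytesForwarded) BufferAndForward(byte[] payload, Stream originalBody)
    {
        using var buffer = new MemoryStream();
        buffer.Write(payload, 0, payload.Length);

        // Rewind and read the buffered body for logging purposes
        buffer.Position = 0;
        using var reader = new StreamReader(buffer, Encoding.UTF8, false, 1024, leaveOpen: true);
        var loggedBody = reader.ReadToEnd();

        // Rewind again so the full body can still be forwarded
        buffer.Position = 0;
        buffer.CopyTo(originalBody);

        return (loggedBody, originalBody.Length);
    }

    public static void Main()
    {
        using var originalBody = new MemoryStream();
        var (logged, forwarded) = BufferAndForward(Encoding.UTF8.GetBytes("{\"status\":\"ok\"}"), originalBody);
        Console.WriteLine(logged);    // the body, available for logging
        Console.WriteLine(forwarded); // byte count forwarded to the original stream
    }
}
```

<p>The key point is the second rewind: without resetting the position after reading, copying to the original stream would forward nothing.</p>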
<h2 id="conclusion">Conclusion</h2>
<p>Logging an HTTP context in ASP.NET Core is not an easy task, but as seen above, it can be done.<br />
Care must be taken when logging large request and response bodies - I’m waiting for David Fowler’s <a href="https://github.com/davidfowl/AspNetCoreDiagnosticScenarios/blob/master/AspNetCoreGuidance.md#avoid-reading-the-entire-request-body-or-response-body-into-memory">recommendation</a> for avoiding reading the entire request body or response body into memory - unfortunately, at the time of writing this paragraph, it’s still not done!<br />
This article has only scratched the surface of the logging iceberg, but hopefully I will come back with more information to help tackle this very important topic.</p>How did I become a software developer2018-08-19T22:05:33+00:002018-08-19T22:05:33+00:00https://crossprogramming.com/2018/08/19/how-did-i-become-a-software-developer<ul>
<li><a href="#intro">Introduction</a></li>
<li><a href="#first-contact">First contact with a computer</a></li>
<li><a href="#my-first-line-of-code">Writing my first line of code</a></li>
<li><a href="#high-school">High school</a></li>
<li><a href="#computer-science-1">First attempt at Computer Science</a></li>
<li><a href="#computer-science-2">Second attempt at Computer Science</a></li>
<li><a href="#living-the-dream">Living the dream</a></li>
<li><a href="#the-shift">The shift</a></li>
<li><a href="#it-just-happened">It just happened</a></li>
<li><a href="#first-job-as-developer">My first job as a software developer</a></li>
<li><a href="#conclusion">Conclusion</a></li>
</ul>
<hr />
<!-- markdownlint-disable MD033 -->
<h2 id="intro">Introduction</h2>
<p>After giving a talk at <a href="https://github.com/satrapu/iquest-keyboards-and-mice-brasov-2018">iQuest Keyboards &amp; Mice</a> in Brasov, watching the <a href="https://en.wikipedia.org/wiki/2018_FIFA_World_Cup">World Cup</a> and returning from vacation, it is now time to get back to blogging, but before returning to technical posts, I’ve decided to write a personal one.<br />
The year 2018 marks a very important moment in my life: 25 years since I wrote my first line of code, so faced with such an anniversary, I took the time to detail how I became a software developer.</p>
<h2 id="first-contact">First contact with a computer</h2>
<p>In the summer of 1992, when I was 12 years old, my mother told me I could come and play video games on the computer located in her office. If I remember correctly, it was a 286 computer running MS-DOS and, since this was my first time in front of a computer, it was pure magic for me to see someone type something on the keyboard and launch a game. So, yeah, my first contact with a computer was playing video games: the first game I played was <a href="https://en.wikipedia.org/wiki/Grand_Prix_Circuit_(video_game)">Grand Prix Circuit</a>, while the second, played on the same computer, was <a href="https://en.wikipedia.org/wiki/Prehistorik">Prehistorik</a>.<br />
Even though I was hooked on computers from that very moment, the first one I could afford to buy would not come until late 2001.<br />
Playing video games is a thing I’m still doing these days, helping me relax and distracting my attention from things like work and … blogging ;)</p>
<h2 id="my-first-line-of-code">Writing my first line of code</h2>
<p>During the same year, while entering my 7th grade, my classmates and I were split among different study groups, each one being allocated 2 hours per week for studying a specific class like Romanian, Computer Science, Physics &amp; Chemistry and others, in addition to the daily classes. Of course I wanted to go to the Computer Science class, but the entry criterion was having at least mark 8 at Mathematics (in the Romanian educational system, mark 1 is the lowest, while mark 10 is the highest) and I was around mark 6, so I picked Physics &amp; Chemistry instead, as my best friend was attending that class too.<br />
After several months of studying elements and forces, <em>it just happened</em> that I found out one of my classmates, who also hadn’t qualified for the Computer Science class for the same reason as me, had managed to get transferred to that class. Based on that precedent, I also asked for a transfer and thus I got to the point where I was standing in front of a computer and writing my first program using <a href="https://en.wikipedia.org/wiki/BASIC">Basic</a>.<br />
After finishing the programs requested by our teacher, we were allowed to play video games in the remaining study time and we were loading them into the computer using a cassette player, very much like shown <a href="https://www.youtube.com/watch?v=AicqKYvRmuk">here</a> - we’re talking about an <a href="http://www.homecomputer.de/pages/easteurope_ro.html">HC 90 computer</a> with a monochrome monitor and the screeching sound from this video does bring back good old memories!<br />
Solving geometry problems using Basic (as my Math and Computer Science teachers were one and the same person) was thus my first step into coding, and in the autumn of 1993, when I was 13 years old, I realized I wanted to become a software developer. Choosing a career is not easy, so I feel very fortunate that I knew what my future career would be at such a young age.</p>
<h2 id="high-school">High school</h2>
<p>After graduating secondary school and knowing what my future career would be, in the summer of 1994, I enrolled in the admission exam for a place at <a href="http://www.moisilbrasov.ro/">Grigore Moisil National College</a> from Brasov, formerly known as the Computer Science High School; the exam consisted of 2 written tests: one for Romanian and another one for Mathematics. Due to my modest skills in Math, I failed to get a grade good enough to be admitted. At this point, it looked like I would never see my dream of becoming a software developer come true, but, after several days, <em>it just happened</em> that this institution allowed 25 more students to be admitted, so, if I remember correctly, I was the last or the second-to-last person to be admitted!<br />
Even though I struggled with Math during the entire high school period, I was getting a formal education in Computer Science. It all started with studying algorithms using flowcharts and pseudo-code, then I was introduced to several programming languages, like <a href="https://en.wikipedia.org/wiki/Pascal_(programming_language)">Pascal</a> and <a href="https://en.wikipedia.org/wiki/C_(programming_language)">C</a>. Later I studied <a href="https://en.wikipedia.org/wiki/Graph_theory">graph theory</a>, operating systems, like <a href="https://en.wikipedia.org/wiki/MS-DOS">MS-DOS</a>, and database management systems, like <a href="https://en.wikipedia.org/wiki/FoxPro">FoxPro</a>.<br />
During the high school years (1994 - 1998) I discovered I had a real passion for programming, and it really helped having a very gifted Computer Science teacher like <strong>Mrs. Delia Gârbacea</strong>, who knew how to teach, challenge and motivate her students. Sadly, she was my Computer Science teacher for only 2 years, during my 9th and 10th grades, but even so, I consider her influence to be of paramount importance to my career, as she shaped my problem-analysis and problem-solving skills, while at the same time nurturing my passion for programming and teaching me not to be afraid of working hard towards a solution.<br />
While I was a 9th grade student, my mother helped a guy and asked him to return the favor by teaching me and my elder brother some Computer Science skills. What he did was open the MS-DOS manual from the command line, read us the entry for a particular command and then let us run it on his computer. It was informal Computer Science education and it led my brother and me to buy our first programming book, an MS-DOS 6 manual, and then several second-hand English computer-related magazines from an antique store. Learning several of these MS-DOS commands came in really handy, as one year later, while attending an Operating Systems high school class, I was examined on concocting various MS-DOS commands used for managing files and folders. It was the first time, but definitely not the last, that I studied Computer Science outside my day-to-day school and then, much later, my job.</p>
<h2 id="computer-science-1">First attempt at Computer Science</h2>
<p>By my senior high school year, it was crystal clear to me that after graduation I would try to be admitted to a Computer Science Faculty. The year 1998 was also the first one when <a href="https://www.unitbv.ro/en/">Transilvania University</a> from Brasov allowed a person to enroll in as many specializations as desired, as long as the exam of each one was on a different day, so I enrolled at both the <a href="https://www.unitbv.ro/en/faculties/faculty-of-mathematics-and-computer-science.html">Mathematics and Computer Science</a> and <a href="https://www.unitbv.ro/en/faculties/faculty-of-technological-engineering-and-industrial-management.html">Technological Engineering</a> Faculties.
Once again, due to my modest Math skills, I failed to be admitted to the Computer Science specialization; on the other hand, my exam grades were good enough for both Mathematics-Physics and Aerospace Engineering. Between becoming a Math or Physics teacher and building planes, I chose the latter. Aerospace Engineering is a very tough specialization, full of Math and Physics based classes, but I nevertheless managed to reach the 3rd year of study. Even though this Faculty was about engineering, <em>it just happened</em> that I was still in touch with Computer Science, as my first 2 years of study contained several classes related to this field.</p>
<h2 id="computer-science-2">Second attempt at Computer Science</h2>
<p>In April 2000 my father died and my brother and I were left with an apartment.<br />
<em>It just happened</em> that during the Christmas vacation of my 3rd year of study I had this very, very wild idea: how about I quit Aerospace Engineering, sell the apartment, split the money with my brother and thus have the means of paying for a place at Computer Science? Transilvania University had introduced paid studies in addition to the free ones since 1998 or so, but you still had to pass the exam like everyone else and get a good enough grade, as even these paid places were limited. I knew I was not going to get a very good grade at the Math exam, but now I would have the money to apply for a paid place. Without letting anybody know, not even my mother, I quit Aerospace Engineering just before my 3rd year winter session, when I was almost half-way to becoming an engineer, and in the summer of 2001 I enrolled in the exam for a place at the Computer Science Faculty for the second time … and once again, I did not get a good enough grade. Several days after the exam grades were displayed, <em>it just happened</em> that some of the admittees chose to study elsewhere, thus freeing enough places for me to finally be admitted - to a paid place, but hey, I was about to live my dream!</p>
<h2 id="living-the-dream">Living the dream</h2>
<p>So I finally got my way into the Computer Science Faculty and started studying algorithms and their complexities, Pascal (again?), Java, Delphi, C++, C# and <del>sadly lots and lots of</del> Math.<br />
I remember that the Java exam from the summer of 2002 was difficult and I barely managed to get a grade of 5. Due to my passion for programming, I could not bear getting such a low grade in a Computer Science class (though I was definitely <em>not</em> so picky when dealing with a low grade at any of my Math related exams), so I came back in autumn, before the start of my 2nd year of study, for a re-examination. I got a 9, which qualified me for a scholarship, but since I was on a paid study place, I was denied it.</p>
<h2 id="the-shift">The shift</h2>
<p>In December 2001, one of my classmates invited several people, including myself, to a Christmas carol concert which took place at Home Army in Brasov; the following year, I was invited to a similar concert, but this time the location was different: <a href="http://bbsperanta.ro/">Hope Baptist Church</a>. Thus, on the Christmas Day of 2002, I entered a Protestant church for the first time in my life.<br />
Before the concert started, <em>it just happened</em> that the principal pastor made an announcement about Sunday school. This piqued my curiosity, so on one of the first two Sundays of 2003 I went to this church, looking to better understand who they are and what they teach. At the end of the Sunday school, I pulled the sleeve of the youth pastor, as he was about to exit the church main hall, to get his attention and told him I would like to know more about God. He invited me to his office and told me to reflect upon this wish and come back to him after a couple of weeks, as he was about to be out of town for some time. We met again after two weeks and agreed to meet every Thursday at 17:00 for an hour to talk about God.<br />
During our first meeting he opened the Bible and showed me a verse found in the New Testament: <em>“That if thou shalt confess with thy mouth the Lord Jesus, and shalt believe in thine heart that God hath raised him from the dead, thou shalt be saved.”</em> (<a href="https://www.kingjamesbibleonline.org/Romans-10-9/">Romans 10:9</a>). I remember asking him whether being saved (forgiven of all sins) is that simple and he clearly responded: yes. We met several times more and during these meetings the youth pastor showed me different Bible verses so that I may understand that in God’s eyes I’m a sinner, deserving an eternity in Hell for all my transgressions, the only way of escaping such a fate being Jesus Christ.<br />
During our 6th meeting, he directly asked me why I don’t do what Romans 10:9 says. In that moment I truly realized my sinful nature and its eternal repercussions. Our one-hour meeting was at its end, so I took my leave to the church main hall to reflect upon these things. I was the only person there and I remember taking the Bible, opening it to Romans 10:9 and doing <strong>exactly</strong> what the verse said: I confessed Jesus as Lord with my mouth and believed in my heart that God had raised Him from the dead. Thus, on February 27th, 2003, I received Jesus Christ as Lord and Saviour, being born again as a Christian, not because of my good deeds, but because of the faith I had put in God. Four weeks later, on March 30th, 2003, I was baptised too at the same Hope Baptist Church.</p>
<h2 id="it-just-happened">It just happened</h2>
<p>By now, I believe you are aware of all the above <em>“it just happened”</em> occurrences. After I received Jesus Christ as Lord and Saviour, I also became aware of them and finally understood why I was so passionate about Computer Science and why it took so many trials for me to finally attend and then graduate this specialization: God Himself put this passion into my heart so that I may be attracted to Computer Science class during secondary school, to be further attracted to Computer Science high school and faculty, so that I may meet this classmate, so that she may invite me to Hope Baptist Church, so that I may meet this youth pastor, so that he may tell me about the crucifixion, death, burial and resurrection of Jesus Christ, so that I may believe in Him, so that I may be saved, so that I may have the opportunity of sharing with others what God did to me and is ready to do to anybody: <strong>grant forgiveness of sins and eternal life</strong>.<br />
I was denied entry to Computer Science for years so that I may realize that me getting there was due to God alone and had nothing to do with my own abilities.<br />
I could never have become part of the IT industry without the help of God, so let me give you another example: when my mother retired in 2003, I had to spend my study money to provide for my family until she got her first pension, something which only happened several months later. So I was faced with the possibility of not being able to finish my studies due to lack of finances, but God had other plans for me. He <em>wanted</em> me to get a Computer Science diploma, so <em>it just happened</em> that my father’s sister, when hearing about my situation, decided to help me and paid for a semester, enough for me to buy more time and get the money to pay for the second one. Once again God showed me Computer Science was <em>His</em> choice for me, as I was not able to finish it on my own.<br />
When I entered my 4th year of study, the Faculty decided to increase the annual study fee, so I was back in a tight spot, but again, God had other plans for me: due to the good marks from the past study years (remember the 9 I got for Java?) and due to several free places becoming available, I was allowed to transfer from my paid place to a free one and, on top of that, I also got a scholarship, as in: <em>do not pay to go to school, but get paid for coming to school!</em> <strong>God is truly good!</strong><br />
So, you see, all these events <em>cannot</em> be mere coincidences, as they are way too many and way too deeply connected to each other not to act as links of the same chain used for pulling me out of the abyss of sin: God’s plan for my life.</p>
<h2 id="first-job-as-developer">My first job as a software developer</h2>
<p>My first software developer job was working on an ERP written in Delphi 5 and using MySQL as its storage. This happened in August 2004 and back then I was happy to get 80 EUR for working half-time; I was about to enter my 4th and last year as a Computer Science student and wanted to finish university the following one, so working 4 hours a day was the maximum I could afford at that time.<br />
I still remember the satisfaction I had when I found a way of cramming 4 pay slip fields onto the same A4 page and being able to print it using Delphi! I quit that job after several months for a .NET developer role and the rest is <a href="https://www.linkedin.com/in/bmarian/">history</a>.</p>
<h2 id="conclusion">Conclusion</h2>
<p>As I write this post, looking back at all the persons and events which have led me to this very moment, I know for sure that God Himself has chosen software development as the means for me to know Him personally as Lord and Saviour. He cares a lot about me, and that’s why He didn’t stop at granting me forgiveness of sins and eternal life: while studying Computer Science I met my future wife, as we were students attending several common classes; she is the love of my life and the mother of a most beautiful, smart and bright daughter. Furthermore, software development showed itself to be the best career choice I could’ve hoped for, letting me create things from scratch, fix broken ones, visit faraway places and meet people from far away.</p>
<p>Thank you, Jesus, for software development!</p>Controlling service startup order in Docker Compose2018-05-13T13:02:29+00:002018-05-13T13:02:29+00:00https://crossprogramming.com/2018/05/13/controlling-service-startup-order-in-docker-compose<ul>
<li><a href="#context">Context</a></li>
<li><a href="#service_healthy">Solution #1: Use depends_on, condition and service_healthy</a></li>
<li><a href="#port-checking">Solution #2: Port checking with a twist</a></li>
<li><a href="#docker_engine_api">Solution #3: Invoke Docker Engine API</a></li>
<li><a href="#conclusion">Conclusion</a></li>
<li><a href="#bonus">Bonus</a>
<ul>
<li><a href="#maven_assembly_plugin">Maven Assembly plugin</a></li>
<li><a href="#debug_dockerized_java_app">Debug a dockerized Java application</a></li>
</ul>
</li>
<li><a href="#resources">Resources</a></li>
</ul>
<hr />
<!-- markdownlint-disable MD033 -->
<!-- markdownlint-disable MD036 -->
<h2 id="context">Context</h2>
<p>In case you’re using Docker Compose for running several containers on a machine, sooner or later you’ll end up in a situation where you’ll need to ensure service A runs <em>before</em> service B. The classic example is an application which needs to access a database; if both of these compose services are started via the <em>docker-compose up</em> command, there is a chance this will fail, since the application service might start before the database service and thus not find a database able to handle its SQL statements.
The guys behind Docker Compose have thought about this issue and provided the <strong><a href="https://docs.docker.com/compose/compose-file/#depends_on">depends_on</a></strong> directive for expressing dependency between services.</p>
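<p>As a minimal sketch (reusing the image names from this post’s compose files), depends_on alone only controls the <em>start</em> order, not readiness:</p>

```yaml
# Minimal sketch: Docker Compose starts "db" before "app",
# but does NOT wait for the MySQL server inside "db" to be
# ready to handle incoming connections.
version: "2.1"
services:
  db:
    image: mysql:5.7.20
  app:
    image: satrapu/jdbc-with-docker-console-runner
    depends_on:
      - db
```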
<p>On the other hand, just because the database service was started before the application service, it doesn’t mean that the database is ready to handle incoming connections (the <em>ready</em> state). Any relational database system needs to start its own services before being able to handle incoming connections (for instance, check a simplified view over <a href="https://sqltimes.wordpress.com/2013/02/10/sql-server-start-up-steps/">SQL Server startup steps</a>) and the startup might take a while, so we need a better mechanism for detecting the <em>ready</em> state of a particular compose service, in addition to declaring its dependencies.</p>
<p>In this post I will present several approaches inspired by the official <a href="https://docs.docker.com/compose/startup-order/">recommendations</a> and other sources.
Each approach will use its own compose file and each of these compose files contains at least 2 services: a Java 8 console application and a MySQL v5.7 database; the former will connect to the latter using <a href="https://docs.oracle.com/javase/tutorial/jdbc/basics/connecting.html">plain-old JDBC</a>, read some <a href="https://en.wikipedia.org/wiki/Information_schema">metadata</a> and then print it to the console.<br />
All compose files will use the same Java application <a href="https://github.com/satrapu/jdbc-with-docker/blob/master/Dockerfile-jdbc-with-docker-console-runner">Docker image</a>.</p>
<p>There is also a bonus section at the <a href="#bonus">end</a> of this post, so please check it out too!</p>
<p><strong>IMPORTANT THINGS</strong></p>
<ul>
<li>My environment
<ul>
<li>Windows 10 x64 Pro</li>
<li>Docker v18.03.1-ce-win65 (17513)</li>
<li>Docker Compose v1.21.1, build 7641a569</li>
</ul>
</li>
<li>The source code used by this post can be found on <a href="https://github.com/satrapu/jdbc-with-docker">GitHub</a></li>
<li>All commands below must be executed from a Powershell console run as admin</li>
<li>Also, since I’m lazy, I have embedded Linux shell commands inside the Docker Compose files, which is most definitely not a best practice; but since the point of this post is service startup order and not Docker Compose file best practices, please bear with me</li>
<li>I’m using “mvn”, “<a href="https://docs.docker.com/compose/reference/down/">docker-compose down</a>” and “<a href="https://docs.docker.com/compose/reference/build/">docker-compose build</a>” commands before starting any compose service via “<a href="https://docs.docker.com/compose/reference/up/">docker-compose up</a>” to ensure:
<ul>
<li>I will run the latest build of the Java application using the <a href="http://www.adam-bien.com/roller/abien/entry/configuring_default_goal_in_maven">default</a> Maven goal; in my case this is: <em>clean compile assembly:single</em></li>
<li>Any running compose service will be stopped</li>
<li>Any Docker image declared in the compose file will be rebuilt</li>
</ul>
</li>
<li>The aforementioned compose files make use of variables declared in a <a href="https://docs.docker.com/compose/env-file/">.env</a> file, with the following content:</li>
</ul>
<div class="language-ini highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="py">mysql_root_password</span><span class="p">=</span><span class="s"><ENTER_A_PASSWORD_HERE></span>
<span class="py">mysql_database_name</span><span class="p">=</span><span class="s">jdbcwithdocker</span>
<span class="py">mysql_database_user</span><span class="p">=</span><span class="s">satrapu</span>
<span class="py">mysql_database_password</span><span class="p">=</span><span class="s"><ENTER_A_DIFFERENT_PASSWORD_HERE></span>
<span class="py">java_jvm_flags</span><span class="p">=</span><span class="s">-Xmx512m</span>
<span class="py">java_debug_port</span><span class="p">=</span><span class="s">9876</span>
<span class="c"># Use "suspend=y" to ensure the JVM will pause the application,
# waiting for a debugger to be attached
</span><span class="py">java_debug_settings</span><span class="p">=</span><span class="s">-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=9876</span>
<span class="c"># The amount of time between two consecutive health state checks
# (used by docker-compose-using-healthcheck.yml)
</span><span class="py">healthcheck_interval</span><span class="p">=</span><span class="s">2s</span>
<span class="c"># The maximum amount of time each healthcheck state try must end in
# (used inside docker-compose-using-healthcheck.yml)
</span><span class="py">healthcheck_timeout</span><span class="p">=</span><span class="s">5s</span>
<span class="c"># The maximum amount of retries before giving up and considering
# the Docker container in an unhealthy state
# (used by docker-compose-using-port-checking.yml and docker-compose-using-api.yml)
</span><span class="py">healthcheck_retries</span><span class="p">=</span><span class="s">20</span>
<span class="c"># The amount of time between two consecutive queries against the database
# (used by docker-compose-using-port-checking.yml)
</span><span class="py">check_db_connectivity_interval</span><span class="p">=</span><span class="s">2s</span>
<span class="c"># The maximum amount of retries before giving up and considering
# the database is not able to process incoming connections
# (used by docker-compose-using-port-checking.yml)
</span><span class="py">check_db_connectivity_retries</span><span class="p">=</span><span class="s">20</span>
<span class="c"># The Docker API version to use when querying for container metadata
# (used by docker-compose-using-api.yml)
</span><span class="py">docker_api_version</span><span class="p">=</span><span class="s">1.37</span>
</code></pre></div></div>
<p>Since the .env file contains sensitive things like database passwords, it should not be put under source control.</p>
<h2 id="service_healthy">Solution #1: Use depends_on, condition and service_healthy</h2>
<p>This solution uses this Docker compose file: <strong><a href="https://github.com/satrapu/jdbc-with-docker/blob/master/docker-compose-using-healthcheck.yml">docker-compose-using-healthcheck.yml</a></strong>.<br />
Run it using the following commands:</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">mvn</span><span class="w"> </span><span class="se">`
</span><span class="p">;</span><span class="n">docker-compose</span><span class="w"> </span><span class="nt">--file</span><span class="w"> </span><span class="nx">docker-compose-using-healthcheck.yml</span><span class="w"> </span><span class="nx">down</span><span class="w"> </span><span class="nt">--rmi</span><span class="w"> </span><span class="nx">local</span><span class="w"> </span><span class="se">`
</span><span class="p">;</span><span class="n">docker-compose</span><span class="w"> </span><span class="nt">--file</span><span class="w"> </span><span class="nx">docker-compose-using-healthcheck.yml</span><span class="w"> </span><span class="nx">build</span><span class="w"> </span><span class="se">`
</span><span class="p">;</span><span class="n">docker-compose</span><span class="w"> </span><span class="nt">--file</span><span class="w"> </span><span class="nx">docker-compose-using-healthcheck.yml</span><span class="w"> </span><span class="nx">up</span><span class="w">
</span></code></pre></div></div>
<p>Starting with version <a href="https://docs.docker.com/release-notes/docker-engine/#1120-2016-07-28">1.12</a>, Docker has added the <a href="https://docs.docker.com/engine/reference/builder/#healthcheck">HEALTHCHECK</a> Dockerfile instruction used for verifying whether a container is still working; Docker Compose file has added support for using the health check when expressing a service dependency since version 2.1, as documented inside the <a href="https://docs.docker.com/compose/compose-file/compose-versioning/#compatibility-matrix">compatibility matrix</a>.</p>
<p>My database service will define its <a href="https://docs.docker.com/compose/compose-file/compose-file-v2/#healthcheck">health check</a> as a MySQL client command which will periodically check whether the underlying MySQL database is ready to handle incoming connections via the <a href="https://dev.mysql.com/doc/refman/5.7/en/use.html">USE</a> SQL statement:</p>
<div class="language-yml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nn">...</span>
<span class="na">db</span><span class="pi">:</span>
<span class="na">image</span><span class="pi">:</span> <span class="s">mysql:5.7.20</span>
<span class="na">healthcheck</span><span class="pi">:</span>
<span class="na">test</span><span class="pi">:</span> <span class="pi">></span>
<span class="s">mysql \</span>
<span class="s">--host='localhost' \</span>
<span class="s">--user='${mysql_database_user}' \</span>
<span class="s">--password='${mysql_database_password}' \</span>
<span class="s">--execute='USE ${mysql_database_name}' \</span>
<span class="na">interval</span><span class="pi">:</span> <span class="s">${healthcheck_interval}</span>
<span class="na">timeout</span><span class="pi">:</span> <span class="s">${healthcheck_timeout}</span>
<span class="na">retries</span><span class="pi">:</span> <span class="s">${healthcheck_retries}</span>
<span class="nn">...</span>
</code></pre></div></div>
<p>Keep in mind the USE statement is not the only way of performing such a check. For instance, one could periodically run a SQL script which would test whether the database is accessible <em>and</em> whether the database user has been granted all the expected permissions (e.g. can perform INSERT against a particular table, etc.).</p>
<p>My application service will be <a href="https://docs.docker.com/compose/compose-file/compose-file-v2/#depends_on">started</a> as soon as the database service has reached the “healthy” state:</p>
<div class="language-yml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nn">...</span>
<span class="na">app</span><span class="pi">:</span>
<span class="na">image</span><span class="pi">:</span> <span class="s">satrapu/jdbc-with-docker-console-runner</span>
<span class="s">...</span>
<span class="na">depends_on</span><span class="pi">:</span>
<span class="na">db</span><span class="pi">:</span>
<span class="na">condition</span><span class="pi">:</span> <span class="s">service_healthy</span>
<span class="nn">...</span>
</code></pre></div></div>
<p>As you can see, stating the dependency between the db and app services is pretty easy, same as defining a health check. Even better, these features are built into Docker Compose.</p>
<p>And now the <strong>bad</strong> news: since Docker Compose file format is used by Docker Swarm too, the development team has decided to mark this feature as obsolete starting with compose file v3, as documented <a href="https://docs.docker.com/compose/compose-file/#depends_on">here</a>; see more reasoning behind this decision <a href="https://github.com/docker/compose/issues/4305#issuecomment-276527457">here</a>.<br />
The combination of depends_on, condition and service_healthy is usable only with older compose file versions (v2.1 up to and including v2.4).<br />
Keep in mind Docker Compose might remove support for these versions in a future release, but as long as you’re OK with using compose file versions before v3, this solution is very simple to understand and use.</p>
<h2 id="port-checking">Solution #2: Port checking with a twist</h2>
<p>This solution uses this Docker compose file: <strong><a href="https://github.com/satrapu/jdbc-with-docker/blob/master/docker-compose-using-port-checking.yml">docker-compose-using-port-checking.yml</a></strong>.<br />
Run it using the following commands:</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">mvn</span><span class="w"> </span><span class="se">`
</span><span class="p">;</span><span class="n">docker-compose</span><span class="w"> </span><span class="nt">--file</span><span class="w"> </span><span class="nx">docker-compose-using-port-checking.yml</span><span class="w"> </span><span class="nx">down</span><span class="w"> </span><span class="nt">--rmi</span><span class="w"> </span><span class="nx">local</span><span class="w"> </span><span class="se">`
</span><span class="p">;</span><span class="n">docker-compose</span><span class="w"> </span><span class="nt">--file</span><span class="w"> </span><span class="nx">docker-compose-using-port-checking.yml</span><span class="w"> </span><span class="nx">build</span><span class="w"> </span><span class="se">`
</span><span class="p">;</span><span class="n">docker-compose</span><span class="w"> </span><span class="nt">--file</span><span class="w"> </span><span class="nx">docker-compose-using-port-checking.yml</span><span class="w"> </span><span class="nx">up</span><span class="w"> </span><span class="nt">--exit-code-from</span><span class="w"> </span><span class="nx">check_db_connectivity</span><span class="w"> </span><span class="nx">check_db_connectivity</span><span class="w"> </span><span class="se">`
</span><span class="p">;</span><span class="kr">if</span><span class="w"> </span><span class="p">(</span><span class="nv">$LASTEXITCODE</span><span class="w"> </span><span class="o">-eq</span><span class="w"> </span><span class="mi">0</span><span class="p">)</span><span class="w"> </span><span class="p">{</span><span class="w"> </span><span class="n">docker-compose</span><span class="w"> </span><span class="nt">--file</span><span class="w"> </span><span class="nx">docker-compose-using-port-checking.yml</span><span class="w"> </span><span class="nx">up</span><span class="w"> </span><span class="nx">app</span><span class="w"> </span><span class="p">}</span><span class="w"> </span><span class="err">`</span><span class="w">
</span><span class="kr">else</span><span class="w"> </span><span class="p">{</span><span class="w"> </span><span class="n">echo</span><span class="w"> </span><span class="s2">"ERROR: Failed to start service due to one of its dependencies!"</span><span class="w"> </span><span class="p">}</span><span class="w">
</span></code></pre></div></div>
<p>This solution was inspired by <a href="https://8thlight.com/blog/dariusz-pasciak/2016/10/17/docker-compose-wait-for-dependencies.html">one</a> of Dariusz Pasciak’s articles, but I’m not just checking whether MySQL port 3306 is open (<em>port checking</em>), as Dariusz does: I’m running the aforementioned USE SQL statement using a MySQL client found inside the <strong>check_db_connectivity</strong> compose service, to ensure the underlying database can handle incoming connections (<em>the twist</em>). Additionally, the exit code of the check_db_connectivity service will be evaluated due to the <em>--exit-code-from check_db_connectivity</em> compose option; if it is different than 0 (0 marks that the db service is in the desired ready state), an error message will be printed and the app service will not start.</p>
<ul>
<li>
<p>Docker Compose will try starting check_db_connectivity service, but it will see that it has a dependency on db service:</p>
<div class="language-yml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nn">...</span>
<span class="na">db</span><span class="pi">:</span>
<span class="na">image</span><span class="pi">:</span> <span class="s">mysql:5.7.20</span>
<span class="nn">...</span>
<span class="na">check_db_connectivity</span><span class="pi">:</span>
<span class="na">image</span><span class="pi">:</span> <span class="s">activatedgeek/mysql-client:0.1</span>
<span class="na">depends_on</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s">db</span>
<span class="nn">...</span>
</code></pre></div> </div>
</li>
<li>Docker Compose will start db service</li>
<li>Docker Compose will then start check_db_connectivity service, which will initiate a loop checking that the MySQL database can handle incoming connections</li>
<li>
<p>Docker Compose will wait for check_db_connectivity service to finish its loop, as the loop is part of the service <a href="https://docs.docker.com/compose/compose-file/#entrypoint">entry point</a>:</p>
<div class="language-yml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">check_db_connectivity</span><span class="pi">:</span>
<span class="na">image</span><span class="pi">:</span> <span class="s">activatedgeek/mysql-client:0.1</span>
<span class="na">entrypoint</span><span class="pi">:</span> <span class="pi">></span>
<span class="s">/bin/sh -c "</span>
<span class="s">sleepingTime='${check_db_connectivity_interval}'</span>
<span class="s">totalAttempts=${check_db_connectivity_retries}</span>
<span class="s">currentAttempt=1</span>
<span class="s">echo \"Start checking whether MySQL database \"${mysql_database_name}\" is up & running\" \</span>
<span class="s">\"(able to process incoming connections) each $$sleepingTime for a total amount of $$totalAttempts times\"</span>
<span class="s">while [ $$currentAttempt -le $$totalAttempts ]; do</span>
<span class="s">sleep $$sleepingTime</span>
<span class="s">mysql \</span>
<span class="s">--host='db' \</span>
<span class="s">--port='3306' \</span>
<span class="s">--user='${mysql_database_user}' \</span>
<span class="s">--password='${mysql_database_password}' \</span>
<span class="s">--execute='USE ${mysql_database_name}'</span>
<span class="s">if [ $$? -eq 0 ]; then</span>
<span class="s">echo \"OK: [$$currentAttempt/$$totalAttempts] MySQL database \"${mysql_database_name}\" is up & running.\"</span>
<span class="s">return 0</span>
<span class="s">else</span>
<span class="s">echo \"WARN: [$$currentAttempt/$$totalAttempts] MySQL database \"${mysql_database_name}\" is still NOT up & running ...\"</span>
<span class="s">currentAttempt=`expr $$currentAttempt + 1`</span>
<span class="s">fi</span>
<span class="s">done;</span>
<span class="s">echo 'ERROR: Could not connect to MySQL database \"${mysql_database_name}\" in due time.'</span>
<span class="s">return 1"</span>
</code></pre></div> </div>
</li>
<li>
<p>Docker Compose will then start app service; by the time this service is running, the MySQL database is able to handle incoming connections</p>
<div class="language-yml highlighter-rouge"><div class="highlight"><pre class="highlight"><code> <span class="na">app</span><span class="pi">:</span>
<span class="na">image</span><span class="pi">:</span> <span class="s">satrapu/jdbc-with-docker-console-runner</span>
<span class="na">depends_on</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s">db</span>
</code></pre></div> </div>
</li>
</ul>
<p>This solution is similar to the <a href="#service_healthy">previous one</a> in the sense that the application service waits until the database service enters a specific state, but without relying on an obsolete Docker Compose feature.</p>
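<p>The waiting pattern used by check_db_connectivity can be distilled into a small, reusable POSIX shell helper. Below is a sketch; the <em>wait_for</em> function name is my own invention, and the probe command shown is just a placeholder standing in for the real mysql invocation from the compose file:</p>

```shell
#!/bin/sh
# wait_for: retry a probe command until it succeeds or the attempts run out.
# $1 = seconds to sleep between attempts, $2 = max attempts, rest = probe command.
wait_for() {
  sleepingTime="$1"
  totalAttempts="$2"
  shift 2
  currentAttempt=1
  while [ "$currentAttempt" -le "$totalAttempts" ]; do
    if "$@" >/dev/null 2>&1; then
      echo "OK: [$currentAttempt/$totalAttempts] dependency is up & running."
      return 0
    fi
    echo "WARN: [$currentAttempt/$totalAttempts] dependency is still NOT up & running ..."
    currentAttempt=$((currentAttempt + 1))
    sleep "$sleepingTime"
  done
  echo "ERROR: dependency did not become ready in due time."
  return 1
}

# Placeholder probe: 'true' always succeeds, so the first attempt reports OK.
# In the compose file, the probe would be the mysql --execute='USE ...' command.
wait_for 0 3 true
```

<p>Because the helper returns 0 only when the probe eventually succeeds, it can be dropped into an entry point and combined with <em>--exit-code-from</em> exactly as shown above.</p>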
<h2 id="docker_engine_api">Solution #3: Invoke Docker Engine API</h2>
<p>This solution uses this Docker compose file: <strong><a href="https://github.com/satrapu/jdbc-with-docker/blob/master/docker-compose-using-api.yml">docker-compose-using-api.yml</a></strong>.<br />
Run it using the following commands:</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$</span><span class="nn">Env</span><span class="p">:</span><span class="nv">COMPOSE_CONVERT_WINDOWS_PATHS</span><span class="o">=</span><span class="mi">1</span><span class="w"> </span><span class="err">`</span><span class="w">
</span><span class="p">;</span><span class="n">mvn</span><span class="w"> </span><span class="se">`
</span><span class="p">;</span><span class="n">docker-compose</span><span class="w"> </span><span class="nt">--file</span><span class="w"> </span><span class="nx">docker-compose-using-api.yml</span><span class="w"> </span><span class="nx">down</span><span class="w"> </span><span class="nt">--rmi</span><span class="w"> </span><span class="nx">local</span><span class="w"> </span><span class="se">`
</span><span class="p">;</span><span class="n">docker-compose</span><span class="w"> </span><span class="nt">--file</span><span class="w"> </span><span class="nx">docker-compose-using-api.yml</span><span class="w"> </span><span class="nx">build</span><span class="w"> </span><span class="se">`
</span><span class="p">;</span><span class="n">docker-compose</span><span class="w"> </span><span class="nt">--file</span><span class="w"> </span><span class="nx">docker-compose-using-api.yml</span><span class="w"> </span><span class="nx">up</span><span class="w">
</span></code></pre></div></div>
<p><strong>IMPORTANT</strong><br />
Running the above commands without including <strong>COMPOSE_CONVERT_WINDOWS_PATHS</strong> environment variable will fail:</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="o">...</span><span class="w">
</span><span class="n">Creating</span><span class="w"> </span><span class="nx">jdbc-with-docker_app_1</span><span class="w"> </span><span class="o">...</span><span class="w"> </span><span class="nx">error</span><span class="w">
</span><span class="n">ERROR:</span><span class="w"> </span><span class="nx">for</span><span class="w"> </span><span class="nx">jdbc-with-docker_app_1</span><span class="w"> </span><span class="nx">Cannot</span><span class="w"> </span><span class="nx">create</span><span class="w"> </span><span class="nx">container</span><span class="w"> </span><span class="nx">for</span><span class="w"> </span><span class="nx">service</span><span class="w"> </span><span class="nx">app:</span><span class="w"> </span><span class="nx">b</span><span class="s1">'Mount denied:\nThe source path "\\\\var\\\\run\\\\docker.sock:/var/run/docker.sock"\nis not a valid Windows path'</span><span class="w">
</span><span class="o">...</span><span class="w">
</span></code></pre></div></div>
<p>This issue and its fix are documented <a href="https://github.com/docker/for-win/issues/1829#issuecomment-376328022">here</a>.</p>
<p>I really like the idea of expressing dependencies between compose services via health checks.<br />
Since the <em>condition</em> form of <em>depends_on</em> will be gone sooner or later, I thought about implementing something conceptually similar, and one way is using the <a href="https://docs.docker.com/develop/sdk/">Docker Engine API</a>.</p>
<p>My approach is to periodically query the health state of the database service from within the application service entry point by making an HTTP request to the Docker API endpoint and parse the response using <a href="https://stedolan.github.io/jq/">jq</a>, a command-line JSON processor; the Java application will start as soon as the database service has reached the “healthy” state.</p>
<p>First, I will get the JSON document containing information about all running containers via a simple <a href="https://curl.haxx.se/docs/manpage.html">curl</a> command. The key is to use the <a href="https://curl.haxx.se/docs/manpage.html#--unix-socket">unix-socket</a> curl option, since the Docker daemon listens on this kind of socket.<br />
Additionally, I need to expose <a href="https://docs.docker.com/engine/reference/commandline/dockerd/#examples">docker.sock</a> as a volume to the container running the curl command, to allow it to communicate with the local Docker daemon.</p>
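<p>In compose file terms, sharing the Docker daemon socket boils down to a bind-mounted volume on the service invoking curl; a minimal sketch (the rest of the service definition is abbreviated):</p>

```yaml
app:
  image: satrapu/jdbc-with-docker-console-runner
  volumes:
    # Expose the local Docker daemon socket so the entry point
    # can call the Docker Engine API via curl --unix-socket.
    - /var/run/docker.sock:/var/run/docker.sock
```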
<p><strong>IMPORTANT</strong><br />
Sharing your local Docker daemon socket should be done with care, as it <em>can</em> lead to security issues, as very clearly presented <a href="https://www.ctl.io/developers/blog/post/tutorial-understanding-the-security-risks-of-running-docker-containers">here</a>, so carefully consider all things <em>before</em> using this approach!</p>
<p>Now that the security disclaimer is out of the way, below you may find an example of what the stand-alone command used for listing all Docker containers running on the local host would look like. Please note that in this example I’m running curl from within a Docker container based on <a href="https://hub.docker.com/r/byrnedo/alpine-curl/">byrnedo/alpine-curl</a>, while in the compose file the actual command is executed from a container based on the <a href="https://hub.docker.com/_/openjdk/">openjdk:8-jre-alpine</a> Docker image:</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># Ensure db service is running before querying its metadata</span><span class="w">
</span><span class="n">docker-compose</span><span class="w"> </span><span class="nt">--file</span><span class="w"> </span><span class="nx">docker-compose-using-api.yml</span><span class="w"> </span><span class="nx">up</span><span class="w"> </span><span class="nt">-d</span><span class="w"> </span><span class="nx">db</span><span class="w"> </span><span class="se">`
</span><span class="p">;</span><span class="n">docker</span><span class="w"> </span><span class="nx">container</span><span class="w"> </span><span class="nx">run</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">--rm</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">-v</span><span class="w"> </span><span class="nx">/var/run/docker.sock:/var/run/docker.sock</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nx">byrnedo/alpine-curl</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">--silent</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">--unix-socket</span><span class="w"> </span><span class="nx">/var/run/docker.sock</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nx">http://v1.37/containers/json</span><span class="w">
</span></code></pre></div></div>
<p>The output would look similar to this:</p>
<div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">[</span><span class="w">
</span><span class="p">{</span><span class="w">
</span><span class="nl">"Id"</span><span class="p">:</span><span class="s2">"5d9108769de3641692a5d636aa361866f09e6403309e6262520447dae9115344"</span><span class="p">,</span><span class="w">
</span><span class="nl">"Names"</span><span class="p">:[</span><span class="w">
</span><span class="s2">"/jdbc-with-docker_db_1"</span><span class="w">
</span><span class="p">],</span><span class="w">
</span><span class="nl">"Image"</span><span class="p">:</span><span class="s2">"mysql:5.7.20"</span><span class="p">,</span><span class="w">
</span><span class="nl">"ImageID"</span><span class="p">:</span><span class="s2">"sha256:7d83a47ab2d2d0f803aa230fdac1c4e53d251bfafe9b7265a3777bcc95163755"</span><span class="p">,</span><span class="w">
</span><span class="nl">"Command"</span><span class="p">:</span><span class="s2">"docker-entrypoint.sh mysqld"</span><span class="p">,</span><span class="w">
</span><span class="nl">"Created"</span><span class="p">:</span><span class="mi">1525887950</span><span class="p">,</span><span class="w">
</span><span class="nl">"Ports"</span><span class="p">:[</span><span class="w">
</span><span class="p">{</span><span class="w">
</span><span class="nl">"IP"</span><span class="p">:</span><span class="s2">"0.0.0.0"</span><span class="p">,</span><span class="w">
</span><span class="nl">"PrivatePort"</span><span class="p">:</span><span class="mi">3306</span><span class="p">,</span><span class="w">
</span><span class="nl">"PublicPort"</span><span class="p">:</span><span class="mi">32771</span><span class="p">,</span><span class="w">
</span><span class="nl">"Type"</span><span class="p">:</span><span class="s2">"tcp"</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="p">],</span><span class="w">
</span><span class="nl">"Labels"</span><span class="p">:{</span><span class="w">
</span><span class="nl">"com.docker.compose.config-hash"</span><span class="p">:</span><span class="s2">"cea84824338bc0ea6a7da437084f00a8bfc9647b91dd8de5e41694269498dec6"</span><span class="p">,</span><span class="w">
</span><span class="nl">"com.docker.compose.container-number"</span><span class="p">:</span><span class="s2">"1"</span><span class="p">,</span><span class="w">
</span><span class="nl">"com.docker.compose.oneoff"</span><span class="p">:</span><span class="s2">"False"</span><span class="p">,</span><span class="w">
</span><span class="nl">"com.docker.compose.project"</span><span class="p">:</span><span class="s2">"jdbc-with-docker"</span><span class="p">,</span><span class="w">
</span><span class="nl">"com.docker.compose.service"</span><span class="p">:</span><span class="s2">"db"</span><span class="p">,</span><span class="w">
</span><span class="nl">"com.docker.compose.version"</span><span class="p">:</span><span class="s2">"1.21.1"</span><span class="w">
</span><span class="p">},</span><span class="w">
</span><span class="nl">"State"</span><span class="p">:</span><span class="s2">"running"</span><span class="p">,</span><span class="w">
</span><span class="nl">"Status"</span><span class="p">:</span><span class="s2">"Up 6 seconds (healthy)"</span><span class="p">,</span><span class="w">
</span><span class="nl">"HostConfig"</span><span class="p">:{</span><span class="w">
</span><span class="nl">"NetworkMode"</span><span class="p">:</span><span class="s2">"jdbc-with-docker_default"</span><span class="w">
</span><span class="p">},</span><span class="w">
</span><span class="nl">"NetworkSettings"</span><span class="p">:{</span><span class="w">
</span><span class="nl">"Networks"</span><span class="p">:{</span><span class="w">
</span><span class="nl">"jdbc-with-docker_default"</span><span class="p">:{</span><span class="w">
</span><span class="nl">"IPAMConfig"</span><span class="p">:</span><span class="kc">null</span><span class="p">,</span><span class="w">
</span><span class="nl">"Links"</span><span class="p">:</span><span class="kc">null</span><span class="p">,</span><span class="w">
</span><span class="nl">"Aliases"</span><span class="p">:</span><span class="kc">null</span><span class="p">,</span><span class="w">
</span><span class="nl">"NetworkID"</span><span class="p">:</span><span class="s2">"fd1c60a463a8b39dd3cb9b34c8e5792c069e18cd5076f6321f5554c10ec1765d"</span><span class="p">,</span><span class="w">
</span><span class="nl">"EndpointID"</span><span class="p">:</span><span class="s2">"b80cfc9c45e0816cd9af9507f76e3a0f9f1e203d2d2b0e081b8affc1293e8cf4"</span><span class="p">,</span><span class="w">
</span><span class="nl">"Gateway"</span><span class="p">:</span><span class="s2">"172.18.0.1"</span><span class="p">,</span><span class="w">
</span><span class="nl">"IPAddress"</span><span class="p">:</span><span class="s2">"172.18.0.2"</span><span class="p">,</span><span class="w">
</span><span class="nl">"IPPrefixLen"</span><span class="p">:</span><span class="mi">16</span><span class="p">,</span><span class="w">
</span><span class="nl">"IPv6Gateway"</span><span class="p">:</span><span class="s2">""</span><span class="p">,</span><span class="w">
</span><span class="nl">"GlobalIPv6Address"</span><span class="p">:</span><span class="s2">""</span><span class="p">,</span><span class="w">
</span><span class="nl">"GlobalIPv6PrefixLen"</span><span class="p">:</span><span class="mi">0</span><span class="p">,</span><span class="w">
</span><span class="nl">"MacAddress"</span><span class="p">:</span><span class="s2">"02:42:ac:12:00:02"</span><span class="p">,</span><span class="w">
</span><span class="nl">"DriverOpts"</span><span class="p">:</span><span class="kc">null</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="p">},</span><span class="w">
</span><span class="nl">"Mounts"</span><span class="p">:[</span><span class="w">
</span><span class="p">{</span><span class="w">
</span><span class="nl">"Type"</span><span class="p">:</span><span class="s2">"volume"</span><span class="p">,</span><span class="w">
</span><span class="nl">"Name"</span><span class="p">:</span><span class="s2">"jdbc-with-docker_jdbc-with-docker-mysql-data"</span><span class="p">,</span><span class="w">
</span><span class="nl">"Source"</span><span class="p">:</span><span class="s2">"/var/lib/docker/volumes/jdbc-with-docker_jdbc-with-docker-mysql-data/_data"</span><span class="p">,</span><span class="w">
</span><span class="nl">"Destination"</span><span class="p">:</span><span class="s2">"/var/lib/mysql"</span><span class="p">,</span><span class="w">
</span><span class="nl">"Driver"</span><span class="p">:</span><span class="s2">"local"</span><span class="p">,</span><span class="w">
</span><span class="nl">"Mode"</span><span class="p">:</span><span class="s2">"rw"</span><span class="p">,</span><span class="w">
</span><span class="nl">"RW"</span><span class="p">:</span><span class="kc">true</span><span class="p">,</span><span class="w">
</span><span class="nl">"Propagation"</span><span class="p">:</span><span class="s2">""</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="p">]</span><span class="w">
</span><span class="p">},</span><span class="w">
</span><span class="err">...</span><span class="w">
</span><span class="p">]</span><span class="w">
</span></code></pre></div></div>
<p>Secondly, I will extract the health state of the database service using <a href="https://stedolan.github.io/jq/manual/#Builtinoperatorsandfunctions">various</a> jq operators and functions:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>jq <span class="s1">'.[] | select(.Names[] | contains("_db_")) | select(.State == "running") | .Status | contains("healthy")'</span>
<span class="c"># The output should be "true" in case the db service has reached the healthy state</span>
</code></pre></div></div>
<ul>
<li><strong>.[]</strong> : this will iterate over all records of the given JSON array</li>
<li><strong>select(.Names[] | contains(“_db_”))</strong> : this will select the records whose “Names” array property has an element containing the “_db_” string - the name of a Docker container created by Docker Compose contains the service name; in our case it is “db”</li>
<li><strong>select(.State == “running”)</strong> : this will keep only the running Docker containers</li>
<li><strong>.Status | contains(“healthy”)</strong> : this pipes the value of the “Status” property into the <em>contains</em> function, which outputs “true” once the container has reached the healthy state</li>
</ul>
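The same selection logic can be emulated outside jq; the Python sketch below is purely an illustration (not part of the original setup) that re-implements the filter against a trimmed-down sample of the Docker Engine API response:

```python
import json

# Trimmed-down sample of the /containers/json response shown above
containers = json.loads("""
[
  {"Names": ["/jdbc-with-docker_db_1"], "State": "running",
   "Status": "Up 6 seconds (healthy)"},
  {"Names": ["/jdbc-with-docker_app_1"], "State": "running",
   "Status": "Up 5 seconds"}
]
""")

def db_is_healthy(containers):
    """Mirror of the jq filter:
    .[] | select(.Names[] | contains("_db_"))
        | select(.State == "running")
        | .Status | contains("healthy")
    """
    return any(
        "healthy" in c["Status"]
        for c in containers
        if c["State"] == "running"
        and any("_db_" in name for name in c["Names"])
    )

print(db_is_healthy(containers))  # → True
```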
<p>To arrive at the final jq command found inside the Docker Compose file, I experimented using the <a href="https://jqplay.org/s/svMcFCRZ31">jq Playground</a>.<br />
Please note this is not the only way of extracting the health status out of the Docker JSON - feel free to come up with better jq commands.</p>
<h2 id="conclusion">Conclusion</h2>
<p>Controlling service startup order in Docker Compose is a problem we cannot ignore, and I hope the approaches presented in this post will help anybody understand where to start from.<br />
I’m fully aware these are not the <em>only</em> options - for instance, <a href="https://www.joyent.com/containerpilot">ContainerPilot</a>, which implements the <a href="http://autopilotpattern.io/">autopilot pattern</a>, looks very interesting. Another option is moving the delayed startup logic inside the dependent service (e.g. have my Java console application use a connection pool with a longer timeout when fetching connections to the MySQL database), but this requires glue code for checking each dependency (one approach for MySQL, another one for a cache provider like Memcached, etc.).<br />
The good news is that there are many options; you just need to identify which one is most suitable for your use case.</p>
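For illustration, the "delayed startup inside the dependent service" idea boils down to a generic retry loop like the Python sketch below; the readiness check is a placeholder for, e.g., opening a JDBC connection:

```python
import time

def wait_until_ready(is_ready, attempts=20, delay_seconds=2):
    """Poll a readiness check until it succeeds or the attempts run out.

    is_ready: zero-argument callable returning True once the dependency
    (e.g. the MySQL database) accepts connections.
    """
    for attempt in range(1, attempts + 1):
        if is_ready():
            return attempt  # number of attempts needed
        time.sleep(delay_seconds)
    raise TimeoutError(f"dependency not ready after {attempts} attempts")

# Simulated dependency that becomes available on the third check
checks = iter([False, False, True])
print(wait_until_ready(lambda: next(checks), delay_seconds=0))  # → 3
```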
<h2 id="bonus">Bonus</h2>
<p>While working on the Java console application, I encountered several challenges and thought I should mention them here, along with their solutions, as they may help others too.</p>
<h3 id="maven_assembly_plugin">Maven Assembly plugin</h3>
<p>Adding a dependency in a Maven pom.xml file is <del>trivial</del><a href="https://maven.apache.org/guides/introduction/introduction-to-dependency-mechanism.html#Importing_Dependencies">well documented</a>, but then you need to ensure that the dependency JAR file(s) will be correctly packaged with your console application.<br />
One way of packing all files into one JAR is to use the <a href="http://maven.apache.org/plugins/maven-assembly-plugin/">Maven Assembly plugin</a> and its <a href="http://maven.apache.org/plugins/maven-assembly-plugin/single-mojo.html">assembly:single</a> goal, as I <a href="https://github.com/satrapu/jdbc-with-docker/blob/master/pom.xml#L31">did</a>.<br />
Running this goal will create a <em>jdbc-with-docker-jar-with-dependencies.jar</em> file under the <em>./target</em> folder instead of the usual <em>jdbc-with-docker.jar</em>, which is why I’m <a href="https://github.com/satrapu/jdbc-with-docker/blob/master/Dockerfile-jdbc-with-docker-console-runner#L5">renaming</a> the JAR file inside the Dockerfile to a shorter name.</p>
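For reference, a minimal sketch of such an assembly-plugin configuration; the mainClass value below is a placeholder, not the actual entry-point class from the repository:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-assembly-plugin</artifactId>
  <configuration>
    <descriptorRefs>
      <!-- Built-in descriptor that packs all dependencies into one JAR -->
      <descriptorRef>jar-with-dependencies</descriptorRef>
    </descriptorRefs>
    <archive>
      <manifest>
        <!-- Placeholder: use your application's actual main class -->
        <mainClass>com.example.Main</mainClass>
      </manifest>
    </archive>
  </configuration>
</plugin>
```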
<h3 id="debug_dockerized_java_app">Debug a dockerized Java application</h3>
<p>Debugging a Java process means launching the process with several debugging-related <a href="https://docs.oracle.com/javase/8/docs/technotes/guides/jpda/conninv.html#Invocation">parameters</a>.<br />
Two of these parameters are crucial for debugging:</p>
<ul>
<li><em>address</em>, representing the port where the JVM listens for a debugger; the same port must be configured on IDE side when starting the debug session</li>
<li><em>suspend</em>, which specifies whether the JVM should block and wait until a debugger is attached</li>
</ul>
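For reference, both parameters are typically passed via the standard JDWP agent option when launching the JVM; a sketch, with a placeholder JAR name:

```shell
# Listen for a debugger on port 9876 (address=9876) and block until
# one attaches (suspend=y); server=y makes the JVM act as the listener.
java -agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=9876 \
     -jar app.jar
```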
<p>Since I’m using Visual Studio Code to develop this particular Java application, I need to create a debug configuration and set the port specified inside the .env file via the <em>java_debug_port</em> key (e.g. java_debug_port=9876).<br />
On the other hand, since the application will run inside a container, this port needs to be published to the Docker host the IDE is running on.</p>
<p>Launch the application and see the JVM waiting for a debugger:</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="err">λ</span><span class="w"> </span><span class="nv">$</span><span class="nn">Env</span><span class="p">:</span><span class="nv">COMPOSE_CONVERT_WINDOWS_PATHS</span><span class="o">=</span><span class="mi">1</span><span class="w"> </span><span class="err">`</span><span class="w">
</span><span class="err">>></span><span class="w"> </span><span class="p">;</span><span class="n">mvn</span><span class="w"> </span><span class="se">`
</span><span class="err">>></span><span class="w"> </span><span class="p">;</span><span class="n">docker-compose</span><span class="w"> </span><span class="nt">--file</span><span class="w"> </span><span class="nx">docker-compose-using-api.yml</span><span class="w"> </span><span class="nx">down</span><span class="w"> </span><span class="nt">--rmi</span><span class="w"> </span><span class="nx">local</span><span class="w"> </span><span class="se">`
</span><span class="err">>></span><span class="w"> </span><span class="p">;</span><span class="n">docker-compose</span><span class="w"> </span><span class="nt">--file</span><span class="w"> </span><span class="nx">docker-compose-using-api.yml</span><span class="w"> </span><span class="nx">build</span><span class="w"> </span><span class="se">`
</span><span class="err">>></span><span class="w"> </span><span class="p">;</span><span class="n">docker-compose</span><span class="w"> </span><span class="nt">--file</span><span class="w"> </span><span class="nx">docker-compose-using-api.yml</span><span class="w"> </span><span class="nx">up</span><span class="w">
</span><span class="c"># ...</span><span class="w">
</span><span class="c"># Creating jdbc-with-docker_db_1 ... done</span><span class="w">
</span><span class="c"># Creating jdbc-with-docker_app_1 ... done</span><span class="w">
</span><span class="c"># Attaching to jdbc-with-docker_db_1, jdbc-with-docker_app_1</span><span class="w">
</span><span class="c"># ...</span><span class="w">
</span><span class="c"># db_1 | 2018-05-12T20:46:19.560436Z 0 [Note] Beginning of list of non-natively partitioned tables</span><span class="w">
</span><span class="c"># db_1 | 2018-05-12T20:46:19.574074Z 0 [Note] End of list of non-natively partitioned tables</span><span class="w">
</span><span class="c"># app_1 | Start checking whether MySQL database jdbcwithdocker is up & running (able to process incoming connections) each 2s for a total amount of 20 times</span><span class="w">
</span><span class="c"># app_1 | OK: [1/20] MySQL database jdbcwithdocker is up & running.</span><span class="w">
</span><span class="c"># app_1 | Listening for transport dt_socket at address: 9876</span><span class="w">
</span></code></pre></div></div>
<p>Docker Compose can report the published host port via the following command:</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="w"> </span><span class="n">docker-compose</span><span class="w"> </span><span class="nt">--file</span><span class="w"> </span><span class="nx">docker-compose-using-api.yml</span><span class="w"> </span><span class="nx">port</span><span class="w"> </span><span class="nt">--protocol</span><span class="o">=</span><span class="n">tcp</span><span class="w"> </span><span class="nx">app</span><span class="w"> </span><span class="nx">9876</span><span class="w">
</span><span class="c"># 0.0.0.0:32809</span><span class="w">
</span></code></pre></div></div>
<p>Visual Studio Code needs to have its debug configuration use port <em>32809</em>:</p>
<div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span><span class="w">
</span><span class="err">//</span><span class="w"> </span><span class="err">Use</span><span class="w"> </span><span class="err">IntelliSense</span><span class="w"> </span><span class="err">to</span><span class="w"> </span><span class="err">learn</span><span class="w"> </span><span class="err">about</span><span class="w"> </span><span class="err">possible</span><span class="w"> </span><span class="err">attributes.</span><span class="w">
</span><span class="err">//</span><span class="w"> </span><span class="err">Hover</span><span class="w"> </span><span class="err">to</span><span class="w"> </span><span class="err">view</span><span class="w"> </span><span class="err">descriptions</span><span class="w"> </span><span class="err">of</span><span class="w"> </span><span class="err">existing</span><span class="w"> </span><span class="err">attributes.</span><span class="w">
</span><span class="err">//</span><span class="w"> </span><span class="err">For</span><span class="w"> </span><span class="err">more</span><span class="w"> </span><span class="err">information</span><span class="p">,</span><span class="w"> </span><span class="err">visit:</span><span class="w"> </span><span class="err">https://go.microsoft.com/fwlink/?linkid=</span><span class="mi">830387</span><span class="w">
</span><span class="nl">"version"</span><span class="p">:</span><span class="w"> </span><span class="s2">"0.2.0"</span><span class="p">,</span><span class="w">
</span><span class="nl">"configurations"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
</span><span class="p">{</span><span class="w">
</span><span class="nl">"type"</span><span class="p">:</span><span class="w"> </span><span class="s2">"java"</span><span class="p">,</span><span class="w">
</span><span class="nl">"name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Debug (Attach)"</span><span class="p">,</span><span class="w">
</span><span class="nl">"request"</span><span class="p">:</span><span class="w"> </span><span class="s2">"attach"</span><span class="p">,</span><span class="w">
</span><span class="nl">"hostName"</span><span class="p">:</span><span class="w"> </span><span class="s2">"localhost"</span><span class="p">,</span><span class="w">
</span><span class="nl">"port"</span><span class="p">:</span><span class="w"> </span><span class="mi">32809</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="p">]</span><span class="w">
</span><span class="p">}</span><span class="w">
</span></code></pre></div></div>
<p>Then launch the debug configuration and see the following output generated by the Java application:</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="o">...</span><span class="w">
</span><span class="n">app_1</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">JDBC_URL</span><span class="o">=</span><span class="s2">"jdbc:mysql://address=(protocol=tcp)(host=db)(port=3306)/jdbcwithdocker?useSSL=false"</span><span class="w">
</span><span class="n">app_1</span><span class="w"> </span><span class="o">|</span><span class="w">
</span><span class="n">app_1</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">JDBC_USER</span><span class="o">=</span><span class="s2">"satrapu"</span><span class="w">
</span><span class="n">app_1</span><span class="w"> </span><span class="o">|</span><span class="w">
</span><span class="n">app_1</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">JDBC_PASSWORD</span><span class="o">=</span><span class="s2">"********"</span><span class="w">
</span><span class="n">app_1</span><span class="w"> </span><span class="o">|</span><span class="w">
</span><span class="n">app_1</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="o">--------------------------------------------------------------------------------------------------------------</span><span class="w">
</span><span class="n">app_1</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">TABLE_SCHEMA</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">TABLE_NAME</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">TABLE_TYPE</span><span class="w"> </span><span class="o">|</span><span class="w">
</span><span class="n">app_1</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="o">--------------------------------------------------------------------------------------------------------------</span><span class="w">
</span><span class="n">app_1</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">information_schema</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">CHARACTER_SETS</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">SYSTEM</span><span class="w"> </span><span class="nx">VIEW</span><span class="w"> </span><span class="o">|</span><span class="w">
</span><span class="n">app_1</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">information_schema</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">COLLATIONS</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">SYSTEM</span><span class="w"> </span><span class="nx">VIEW</span><span class="w"> </span><span class="o">|</span><span class="w">
</span><span class="n">app_1</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">information_schema</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">COLLATION_CHARACTER_SET_APPLICABILITY</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">SYSTEM</span><span class="w"> </span><span class="nx">VIEW</span><span class="w"> </span><span class="o">|</span><span class="w">
</span><span class="o">...</span><span class="w">
</span><span class="n">app_1</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">information_schema</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">VIEWS</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">SYSTEM</span><span class="w"> </span><span class="nx">VIEW</span><span class="w"> </span><span class="o">|</span><span class="w">
</span><span class="n">app_1</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="o">--------------------------------------------------------------------------------------------------------------</span><span class="w">
</span><span class="n">app_1</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">Application</span><span class="w"> </span><span class="nx">was</span><span class="w"> </span><span class="nx">successfully</span><span class="w"> </span><span class="nx">able</span><span class="w"> </span><span class="nx">to</span><span class="w"> </span><span class="nx">fetch</span><span class="w"> </span><span class="nx">data</span><span class="w"> </span><span class="nx">out</span><span class="w"> </span><span class="nx">of</span><span class="w"> </span><span class="nx">the</span><span class="w"> </span><span class="nx">underlying</span><span class="w"> </span><span class="nx">database</span><span class="o">!</span><span class="w">
</span><span class="n">jdbc-with-docker_app_1</span><span class="w"> </span><span class="nx">exited</span><span class="w"> </span><span class="nx">with</span><span class="w"> </span><span class="nx">code</span><span class="w"> </span><span class="nx">0</span><span class="w">
</span></code></pre></div></div>
<h2 id="resources">Resources</h2>
<ul>
<li><a href="https://github.com/docker/compose/">Docker Compose</a>
<ul>
<li><a href="https://docs.docker.com/compose/reference/">Docker Compose command-line reference</a></li>
<li><a href="https://docs.docker.com/compose/compose-file/">Docker Compose file reference</a></li>
</ul>
</li>
<li><a href="https://docs.docker.com/engine/api/v1.37/">Docker Engine API v1.37</a></li>
<li><a href="https://stedolan.github.io/jq/">jq</a>
<ul>
<li><a href="https://stedolan.github.io/jq/manual/">jq Manual</a></li>
<li><a href="https://jqplay.org/">jq Playground</a></li>
</ul>
</li>
<li><a href="https://docs.oracle.com/javase/tutorial/jdbc/TOC.html">JDBC - The Java Tutorials</a></li>
<li><a href="https://code.visualstudio.com/docs/languages/java#_debugging">Debugging Java in VS Code</a></li>
</ul>Context Solution #1: Use depends_on, condition and service_healthy Solution #2: Port checking with a twist Solution #3: Invoke Docker Engine API Conclusion Bonus Maven Assembly plugin Debug a dockerized Java application ResourcesRunning Ansible Vault on Windows2018-03-29T09:52:10+00:002018-03-29T09:52:10+00:00https://crossprogramming.com/2018/03/29/running-ansible-vault-on-windows<ul>
<li><a href="#context">Context</a></li>
<li><a href="#prerequisites">Prerequisites</a></li>
<li><a href="#docker-machine">Setup Ansible managed node using Docker Machine</a></li>
<li><a href="#git-clone">Clone Ansible Vault example</a></li>
<li><a href="#issues">Encountered issues</a>
<ul>
<li><a href="#executable-bit">Issue #1: Executable bit</a></li>
<li><a href="#line-endings">Issue #2: Line endings</a></li>
</ul>
</li>
<li><a href="#ansible-vault">Ansible Vault commands</a></li>
<li><a href="#run-ansible">Run Ansible via Docker container</a></li>
<li><a href="#resources">Resources</a></li>
</ul>
<hr />
<!-- markdownlint-disable MD033 -->
<h2 id="context">Context</h2>
<p>After having successfully run Ansible on Windows using Docker, as documented in my previous <a href="http://crossprogramming.com/2018/02/14/running-ansible-on-windows.html">post</a>, I thought about documenting how to use <a href="https://docs.ansible.com/ansible/latest/vault.html">Ansible Vault</a> on Windows.<br />
This tool has been included in Ansible since version 1.5 and its purpose is to ensure that sensitive data used by Ansible playbooks, such as credentials, private keys and certificates, is stored encrypted.<br />
This post presents my approach for running Ansible Vault on Windows using Docker, along with the issues I have encountered and their fixes.</p>
<p>As a real-life example of when to use Ansible Vault, I have chosen the task of running a Docker container inside a virtual machine:</p>
<ul>
<li>Create the VM
<ul>
<li>I’ll use Docker Machine to create a VM using the <a href="https://docs.docker.com/machine/drivers/hyper-v/">Hyper-V driver</a>; this approach has the added benefit of creating a VM which already has Docker installed</li>
<li>Besides having access to a Docker host with <del>minimum</del> medium effort, I also ended up tinkering with a Linux distro other than the ones I’m usually exposed to (Ubuntu and CentOS)</li>
</ul>
</li>
<li>Setup the VM to be managed by Ansible
<ul>
<li>Provide SSH access - already done, since Docker Machine will handle it while creating the VM</li>
<li>Provide a working Python version - as you’ll see below, this step is not difficult at all</li>
</ul>
</li>
<li>Clone a git repository from <a href="https://github.com/satrapu/ansible-vault-on-windows">GitHub</a> containing the Ansible playbook used for running the Docker container based on the <a href="https://hub.docker.com/_/hello-world/">hello-world</a> image</li>
<li>Add the Docker Hub credentials used for pulling the image to the appropriate Ansible variable YAML file</li>
<li>Run Ansible Vault from a Docker container to encrypt these credentials</li>
<li>Run the Ansible playbook, which pulls the Docker image, runs the container, then removes them both</li>
</ul>
<p>I will use the <a href="https://hub.docker.com/r/satrapu/ansible-alpine-apk/">satrapu/ansible-alpine-apk</a> Docker image for running both Ansible and Ansible Vault on Windows.<br />
All the Docker and Docker Machine related commands below must be executed inside a PowerShell console run as administrator (use Git Bash as a fallback for some commands - e.g. “docker-machine ssh”).</p>
<h2 id="prerequisites">Prerequisites</h2>
<p>All versions below are the latest at the time of writing this particular section (March 26th, 2018).</p>
<ul>
<li>Windows 10 Professional Edition (v1709)</li>
<li>Hyper-V</li>
<li>Docker for Windows - I’ve recently upgraded to v18.03.0-ce, but older versions should be good enough</li>
<li>Docker Machine - v0.13.0 or older, since v0.14.0 (coming with Docker for Windows v18.03.0-ce) is unable to create VMs using the hyperv driver - see more details <a href="https://github.com/docker/machine/issues/4424">here</a>
<ul>
<li>Right after I’ve upgraded Docker for Windows from 17.12.1-ce to v18.03.0-ce, I was no longer able to create VMs using Docker Machine and hyperv driver; this issue did not occur when using v0.13.0!</li>
<li>Download Docker Machine v0.13.0 from <a href="https://github.com/docker/machine/releases/download/v0.13.0/docker-machine-Windows-x86_64.exe">GitHub</a>, rename it to docker-machine.exe and then move it inside %DOCKER_HOME%\resources\bin to overwrite the existing docker-machine.exe (v0.14.0)</li>
</ul>
</li>
<li><a href="https://code.visualstudio.com/">Visual Studio Code</a> - v1.21.1
<ul>
<li>Any other editor capable of switching line endings between CRLF and LF should be fine too - see below for the actual motivation behind this prerequisite ;)</li>
</ul>
</li>
<li><a href="https://git-scm.com/download/win">Git</a> - v2.16.2
<ul>
<li>The version is not that important, but installing Git Bash along with Git is!</li>
</ul>
</li>
</ul>
<h2 id="docker-machine">Setup Ansible managed node using Docker Machine</h2>
<ul>
<li>Create a virtual network switch named <strong>ansible</strong>, as described <a href="https://docs.docker.com/machine/drivers/hyper-v/#2-set-up-a-new-external-network-switch-optional">here</a></li>
<li>Create a Hyper-V virtual machine named <strong>ansible-vault</strong> having 2 CPUs, 2048 MB RAM, 10 GB disk and attached to the previously created external virtual switch
<ul>
<li>The boot2docker ISO URL is explicitly set in order to pin the Docker version (v18.03.0-ce) for repeatability purposes</li>
<li>Prepare to wait for a rather long period of time (10 minutes or more) for the VM to be created</li>
<li>Ignore the SSH reported error</li>
</ul>
</li>
</ul>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">docker-machine</span><span class="w"> </span><span class="nx">create</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">--driver</span><span class="w"> </span><span class="nx">hyperv</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">--hyperv-cpu-count</span><span class="w"> </span><span class="nx">2</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">--hyperv-memory</span><span class="w"> </span><span class="nx">2048</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">--hyperv-disk-size</span><span class="w"> </span><span class="nx">10240</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">--hyperv-virtual-switch</span><span class="w"> </span><span class="s2">"ansible"</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">--hyperv-boot2docker-url</span><span class="w"> </span><span class="nx">https://github.com/boot2docker/boot2docker/releases/download/v18.03.0-ce/boot2docker.iso</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nx">ansible-vault</span><span class="w">
</span><span class="c"># Running pre-create checks...</span><span class="w">
</span><span class="c"># (ansible-vault) Boot2Docker URL was explicitly set to "https://github.com/boot2docker/boot2docker/releases/download/v18.03.0-ce/boot2docker.iso" at create time, so Docker Machine cannot upgrade this machine to the latest version.</span><span class="w">
</span><span class="c"># Creating machine...</span><span class="w">
</span><span class="c"># (ansible-vault) Boot2Docker URL was explicitly set to "https://github.com/boot2docker/boot2docker/releases/download/v18.03.0-ce/boot2docker.iso" at create time, so Docker Machine cannot upgrade this machine to the latest version.</span><span class="w">
</span><span class="c"># (ansible-vault) Downloading C:\Users\admin\.docker\machine\cache\boot2docker.iso from https://github.com/boot2docker/boot2docker/releases/download/v18.03.0-ce/boot2docker.iso...</span><span class="w">
</span><span class="c"># (ansible-vault) 0%....10%....20%....30%....40%....50%....60%....70%....80%....90%....100%</span><span class="w">
</span><span class="c"># (ansible-vault) Creating SSH key...</span><span class="w">
</span><span class="c"># (ansible-vault) Creating VM...</span><span class="w">
</span><span class="c"># (ansible-vault) Using switch "ansible"</span><span class="w">
</span><span class="c"># (ansible-vault) Creating VHD</span><span class="w">
</span><span class="c"># (ansible-vault) Starting VM...</span><span class="w">
</span><span class="c"># (ansible-vault) Waiting for host to start...</span><span class="w">
</span><span class="c"># Waiting for machine to be running, this may take a few minutes...</span><span class="w">
</span><span class="c"># Detecting operating system of created instance...</span><span class="w">
</span><span class="c"># Waiting for SSH to be available...</span><span class="w">
</span><span class="c"># Error creating machine: Error detecting OS: Too many retries waiting for SSH to be available. Last error: Maximum number of retries (60) exceeded</span><span class="w">
</span></code></pre></div></div>
<ul>
<li>Check that the VM is running (look for “Running” in the STATE column):</li>
</ul>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="w"> </span><span class="n">docker-machine</span><span class="w"> </span><span class="nx">ls</span><span class="w">
</span><span class="c"># NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS</span><span class="w">
</span><span class="c"># ansible-vault - hyperv Running tcp://192.168.1.168:2376 Unknown Unable to query docker version: Get https://192.168.1.168:2376/v1.15/version: x509: certificate signed by unknown authority</span><span class="w">
</span></code></pre></div></div>
<ul>
<li>Get the IPv4 address of the VM, since you’ll need it inside the Ansible inventory file:</li>
</ul>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">docker-machine</span><span class="w"> </span><span class="nx">ip</span><span class="w"> </span><span class="nx">ansible-vault</span><span class="w">
</span><span class="c"># 192.168.1.168</span><span class="w">
</span></code></pre></div></div>
<ul>
<li>Connect to the VM using SSH (see more <a href="https://github.com/boot2docker/boot2docker#ssh-into-vm">here</a>)
<ul>
<li>In case you’re unable to enter the VM via SSH from a Powershell terminal, try using Git Bash run as admin - welcome to Windows!</li>
</ul>
</li>
</ul>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">docker-machine</span><span class="w"> </span><span class="nx">ssh</span><span class="w"> </span><span class="nx">ansible-vault</span><span class="w">
</span><span class="c"># ## .</span><span class="w">
</span><span class="c"># ## ## ## ==</span><span class="w">
</span><span class="c"># ## ## ## ## ## ===</span><span class="w">
</span><span class="c"># /"""""""""""""""""\___/ ===</span><span class="w">
</span><span class="c"># ~~~ {~~ ~~~~ ~~~ ~~~~ ~~~ ~ / ===- ~~~</span><span class="w">
</span><span class="c"># \______ o __/</span><span class="w">
</span><span class="c"># \ \ __/</span><span class="w">
</span><span class="c"># \____\_______/</span><span class="w">
</span><span class="c"># _ _ ____ _ _</span><span class="w">
</span><span class="c"># | |__ ___ ___ | |_|___ \ __| | ___ ___| | _____ _ __</span><span class="w">
</span><span class="c"># | '_ \ / _ \ / _ \| __| __) / _` |/ _ \ / __| |/ / _ \ '__|</span><span class="w">
</span><span class="c"># | |_) | (_) | (_) | |_ / __/ (_| | (_) | (__| < __/ |</span><span class="w">
</span><span class="c"># |_.__/ \___/ \___/ \__|_____\__,_|\___/ \___|_|\_\___|_|</span><span class="w">
</span><span class="c"># Boot2Docker version 18.03.0-ce, build HEAD : 404ee40 - Thu Mar 22 17:12:23 UTC 2018</span><span class="w">
</span><span class="c"># Docker version 18.03.0-ce, build 0520e24</span><span class="w">
</span></code></pre></div></div>
<ul>
<li>Install Python and the Python setup tools on the VM, as they are needed by Ansible - check <a href="https://stackoverflow.com/a/28750034">this</a> StackOverflow answer for instructions.<br />
Keep in mind that all changes made to this VM will be lost after a restart, as documented <a href="https://github.com/boot2docker/boot2docker#persist-data">here</a>!</li>
</ul>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>tce-load <span class="nt">-wi</span> python python-setuptools
<span class="c"># python.tcz.dep OK</span>
<span class="c"># tk.tcz.dep OK</span>
<span class="c"># readline.tcz.dep OK</span>
<span class="c"># Downloading: libffi.tcz</span>
<span class="c"># Connecting to repo.tinycorelinux.net (89.22.99.37:80)</span>
<span class="c"># python-setuptools.tcz.dep OK</span>
<span class="c"># libffi.tcz 100% |*************************************************************************************************| 16384 0:00:00 ETA</span>
<span class="c"># libffi.tcz: OK</span>
<span class="c"># Downloading: expat2.tcz</span>
<span class="c"># Connecting to repo.tinycorelinux.net (89.22.99.37:80)</span>
<span class="c"># expat2.tcz 100% |*************************************************************************************************| 73728 0:00:00 ETA</span>
<span class="c"># expat2.tcz: OK</span>
<span class="c"># Downloading: ncurses.tcz</span>
<span class="c"># Connecting to repo.tinycorelinux.net (89.22.99.37:80)</span>
<span class="c"># ncurses.tcz 100% |*************************************************************************************************| 196k 0:00:00 ETA</span>
<span class="c"># ncurses.tcz: OK</span>
<span class="c"># Downloading: readline.tcz</span>
<span class="c"># Connecting to repo.tinycorelinux.net (89.22.99.37:80)</span>
<span class="c"># readline.tcz 100% |*************************************************************************************************| 144k 0:00:00 ETA</span>
<span class="c"># readline.tcz: OK</span>
<span class="c"># Downloading: gdbm.tcz</span>
<span class="c"># Connecting to repo.tinycorelinux.net (89.22.99.37:80)</span>
<span class="c"># gdbm.tcz 100% |*************************************************************************************************| 73728 0:00:00 ETA</span>
<span class="c"># gdbm.tcz: OK</span>
<span class="c"># Downloading: tcl.tcz</span>
<span class="c"># Connecting to repo.tinycorelinux.net (89.22.99.37:80)</span>
<span class="c"># tcl.tcz 100% |*************************************************************************************************| 1128k 0:00:00 ETA</span>
<span class="c"># tcl.tcz: OK</span>
<span class="c"># Downloading: tk.tcz</span>
<span class="c"># Connecting to repo.tinycorelinux.net (89.22.99.37:80)</span>
<span class="c"># tk.tcz 100% |*********************************************************************************************************************************************| 916k 0:00:00 ETA</span>
<span class="c"># tk.tcz: OK</span>
<span class="c"># Downloading: openssl.tcz</span>
<span class="c"># Connecting to repo.tinycorelinux.net (89.22.99.37:80)</span>
<span class="c"># openssl.tcz 100% |*********************************************************************************************************************************************| 1500k 0:00:00 ETA</span>
<span class="c"># openssl.tcz: OK</span>
<span class="c"># Downloading: bzip2-lib.tcz</span>
<span class="c"># Connecting to repo.tinycorelinux.net (89.22.99.37:80)</span>
<span class="c"># bzip2-lib.tcz 100% |*********************************************************************************************************************************************| 28672 0:00:00 ETA</span>
<span class="c"># bzip2-lib.tcz: OK</span>
<span class="c"># Downloading: sqlite3.tcz</span>
<span class="c"># Connecting to repo.tinycorelinux.net (89.22.99.37:80)</span>
<span class="c"># sqlite3.tcz 100% |*********************************************************************************************************************************************| 388k 0:00:00 ETA</span>
<span class="c"># sqlite3.tcz: OK</span>
<span class="c"># Downloading: python.tcz</span>
<span class="c"># Connecting to repo.tinycorelinux.net (89.22.99.37:80)</span>
<span class="c"># python.tcz 100% |*********************************************************************************************************************************************| 11820k 0:00:00 ETA</span>
<span class="c"># python.tcz: OK</span>
<span class="c"># Downloading: python-setuptools.tcz</span>
<span class="c"># Connecting to repo.tinycorelinux.net (89.22.99.37:80)</span>
<span class="c"># python-setuptools.tc 100% |*********************************************************************************************************************************************| 236k 0:00:00 ETA</span>
<span class="c"># python-setuptools.tcz: OK</span>
</code></pre></div></div>
<p>If you skip this step, running the Ansible playbook will fail with something like this:</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># PLAY [docker_hosts] ************************************************************</span><span class="w">
</span><span class="c"># TASK [Gathering Facts] *********************************************************</span><span class="w">
</span><span class="c"># fatal: [ansible_vault_example]: FAILED! => {"changed": false, "failed": true, "module_stderr": "", "module_stdout": "/bin/sh: /usr/local/bin/python: not found\r\n", "msg": "MODULE FAILURE", "rc": 0}</span><span class="w">
</span><span class="c"># to retry, use: --limit @/opt/ansible-playbooks/hello-world.retry</span><span class="w">
</span><span class="c"># PLAY RECAP *********************************************************************</span><span class="w">
</span><span class="c"># ansible_vault_example : ok=0 changed=0 unreachable=0 failed=1</span><span class="w">
</span></code></pre></div></div>
<ul>
<li>Display the Python version:</li>
</ul>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>python <span class="nt">--version</span>
<span class="c"># Python 2.7.14</span>
</code></pre></div></div>
<ul>
<li>Exit the VM:</li>
</ul>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">exit</span>
</code></pre></div></div>
<h2 id="git-clone">Clone Ansible Vault example</h2>
<ul>
<li>Clone the following git repository hosted on GitHub somewhere on your Windows machine (e.g. E:\Satrapu\Programming\Ansible\ansible-vault-on-windows):</li>
</ul>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">cd</span><span class="w"> </span><span class="nx">E:/Satrapu/Programming/Ansible</span><span class="w">
</span><span class="n">git</span><span class="w"> </span><span class="nx">clone</span><span class="w"> </span><span class="nx">https://github.com/satrapu/ansible-vault-on-windows.git</span><span class="w">
</span></code></pre></div></div>
<p>This git repo is based on the classic Ansible folder structure, as documented <a href="http://docs.ansible.com/ansible/latest/playbooks_best_practices.html#directory-layout">here</a>.</p>
<ul>
<li>Change the Ansible inventory file named <strong>local</strong>
<ul>
<li>Set the value of the <strong>ansible_host</strong> property to the IP address of the ansible-vault VM (e.g. ansible_host=192.168.1.168)</li>
<li>Please note that the <strong>ansible_ssh_private_key_file</strong> property has been set to “/opt/docker-machine/ansible-vault/id_rsa” - this id_rsa file is the private key generated by Docker Machine while creating the ansible-vault VM, and it will be made available inside the Ansible Docker container via a Docker volume. This property should not be changed without fully understanding what else needs to be changed (see below)</li>
</ul>
</li>
<li>Create a file named <strong>vault_password</strong> under the <strong>../ansible-vault-password</strong> folder (outside the Git repo!) and add a password to it (one line, no line ending)
<ul>
<li>Since this file contains a password, it must not be put under source control; that’s why it is created outside the Git repo</li>
<li>To make it available inside the Ansible Docker container, we’ll mount the containing folder as a Docker volume under the “/opt/ansible-vault-password” path
<ul>
<li>Example: “-v E:/Satrapu/Programming/Ansible/ansible-vault-password:/opt/ansible-vault-password”</li>
</ul>
</li>
<li>I have used <a href="https://strongpasswordgenerator.com/">https://strongpasswordgenerator.com</a> to generate such a password
<ul>
<li>Click the “Show Options” panel under the big green “Generate password” button to fine-tune your password</li>
</ul>
</li>
</ul>
</li>
<li>Replace the <strong>TBD</strong> placeholders from the <strong>/ansible-vault-on-windows/group_vars/docker_hosts/vault.yml</strong> file:</li>
</ul>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">vault_docker_registry_url</span><span class="pi">:</span> <span class="s">TBD</span>
<span class="na">vault_docker_registry_auth_username</span><span class="pi">:</span> <span class="s">TBD</span>
<span class="na">vault_docker_registry_auth_password</span><span class="pi">:</span> <span class="s">TBD</span>
<span class="na">vault_docker_registry_auth_email</span><span class="pi">:</span> <span class="s">TBD</span>
</code></pre></div></div>
<p>with the appropriate values, like this:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">vault_docker_registry_url</span><span class="pi">:</span> <span class="s">https://index.docker.io/v1/</span>
<span class="na">vault_docker_registry_auth_username</span><span class="pi">:</span> <span class="s">some_user_name</span>
<span class="na">vault_docker_registry_auth_password</span><span class="pi">:</span> <span class="s">P@zZwWwooRdddd</span>
<span class="na">vault_docker_registry_auth_email</span><span class="pi">:</span> <span class="s">some_user_name@server.ro</span>
</code></pre></div></div>
<p>This file should be put under source control only after it has been encrypted.<br />
The Docker Hub registry URL can be found via this command:</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">docker</span><span class="w"> </span><span class="nx">info</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">findstr</span><span class="w"> </span><span class="nx">Registry</span><span class="w">
</span><span class="c"># Registry: https://index.docker.io/v1/</span><span class="w">
</span></code></pre></div></div>
<p>If you forget to correctly update the <strong>vault.yml</strong> file, running the Ansible playbook will fail with something like this:</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># PLAY [docker_hosts] ************************************************************</span><span class="w">
</span><span class="c"># TASK [Gathering Facts] *********************************************************</span><span class="w">
</span><span class="c"># ok: [ansible_vault_example]</span><span class="w">
</span><span class="c"># TASK [run_hello_world_container : Install pip] *********************************</span><span class="w">
</span><span class="c"># changed: [ansible_vault_example]</span><span class="w">
</span><span class="c"># TASK [run_hello_world_container : Install docker-py] ***************************</span><span class="w">
</span><span class="c"># changed: [ansible_vault_example]</span><span class="w">
</span><span class="c"># TASK [run_hello_world_container : Login into Docker registry TBD] **************</span><span class="w">
</span><span class="c"># fatal: [ansible_vault_example]: FAILED! => {"changed": false, "failed": true, "msg": "Parameter error: the email address appears to be incorrect. Expecting it to match /[^@]+@[^@]+\\.[^@]+/"}</span><span class="w">
</span><span class="c"># to retry, use: --limit @/opt/ansible-playbooks/hello-world.retry</span><span class="w">
</span><span class="c"># PLAY RECAP *********************************************************************</span><span class="w">
</span><span class="c"># ansible_vault_example : ok=3 changed=2 unreachable=0 failed=1</span><span class="w">
</span></code></pre></div></div>
<ul>
<li>You’ll see a <strong>vars.yml</strong> file under the same folder, <strong>/ansible-vault-on-windows/group_vars/docker_hosts</strong>:</li>
</ul>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">docker_registry_url</span><span class="pi">:</span> <span class="s2">"</span><span class="s">{{</span><span class="nv"> </span><span class="s">vault_docker_registry_url</span><span class="nv"> </span><span class="s">}}"</span>
<span class="na">docker_registry_auth_username</span><span class="pi">:</span> <span class="s2">"</span><span class="s">{{</span><span class="nv"> </span><span class="s">vault_docker_registry_auth_username</span><span class="nv"> </span><span class="s">}}"</span>
<span class="na">docker_registry_auth_password</span><span class="pi">:</span> <span class="s2">"</span><span class="s">{{</span><span class="nv"> </span><span class="s">vault_docker_registry_auth_password</span><span class="nv"> </span><span class="s">}}"</span>
<span class="na">docker_registry_auth_email</span><span class="pi">:</span> <span class="s2">"</span><span class="s">{{</span><span class="nv"> </span><span class="s">vault_docker_registry_auth_email</span><span class="nv"> </span><span class="s">}}"</span>
</code></pre></div></div>
<p>Ansible will use the password residing inside the one-line file passed via the <strong>--vault-password-file</strong> argument (e.g. --vault-password-file=/opt/ansible-vault-password/vault_password) to automatically decrypt the vault.yml file, and will then populate the above variables with the decrypted sensitive data, e.g. the user name and password used for pulling images from Docker Hub.</p>
<ul>
<li>After applying the aforementioned changes, the local git repo should look like this:</li>
</ul>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># Change drive letters and paths according to your local setup</span><span class="w">
</span><span class="n">E:</span><span class="p">;</span><span class="w"> </span><span class="n">cd</span><span class="w"> </span><span class="nx">E:/Satrapu/Programming/Ansible/ansible-vault-on-windows</span><span class="p">;</span><span class="w"> </span><span class="n">tree</span><span class="w"> </span><span class="nx">/F</span><span class="w">
</span><span class="c"># E:\SATRAPU\PROGRAMMING\ANSIBLE\ANSIBLE-VAULT-ON-WINDOWS</span><span class="w">
</span><span class="c"># │ .gitattributes</span><span class="w">
</span><span class="c"># │ .gitignore</span><span class="w">
</span><span class="c"># │ ansible.cfg</span><span class="w">
</span><span class="c"># │ hello-world.yml</span><span class="w">
</span><span class="c"># │ LICENSE</span><span class="w">
</span><span class="c"># │ local</span><span class="w">
</span><span class="c"># │ README.md</span><span class="w">
</span><span class="c"># │ vault_password_provider.py</span><span class="w">
</span><span class="c"># │</span><span class="w">
</span><span class="c"># ├───group_vars</span><span class="w">
</span><span class="c"># │ └───docker_hosts</span><span class="w">
</span><span class="c"># │ vars.yml</span><span class="w">
</span><span class="c"># │ vault.yml</span><span class="w">
</span><span class="c"># │</span><span class="w">
</span><span class="c"># └───roles</span><span class="w">
</span><span class="c"># └───run_hello_world_container</span><span class="w">
</span><span class="c"># ├───defaults</span><span class="w">
</span><span class="c"># │ main.yml</span><span class="w">
</span><span class="c"># │</span><span class="w">
</span><span class="c"># └───tasks</span><span class="w">
</span><span class="c"># main.yml</span><span class="w">
</span></code></pre></div></div>
<h2 id="issues">Encountered issues</h2>
<h3 id="executable-bit">Issue #1: Executable bit</h3>
<p>Running Ansible Vault from a Docker container fails at first: mounting a Windows folder into a Linux container causes all of its files to be mounted with full Linux permissions (read, write and execute):</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">docker</span><span class="w"> </span><span class="nx">container</span><span class="w"> </span><span class="nx">run</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">--rm</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">-v</span><span class="w"> </span><span class="nx">E:/Satrapu/Programming/Ansible/ansible-vault-on-windows:/opt/ansible-playbooks</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">-v</span><span class="w"> </span><span class="nx">E:/Satrapu/Programming/Ansible/ansible-vault-password:/opt/ansible-vault-password</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nx">satrapu/ansible-alpine-apk:2.4.1.0-r0</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nx">ansible-vault</span><span class="w"> </span><span class="nx">encrypt</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">--vault-password-file</span><span class="o">=</span><span class="n">/opt/ansible-vault-password/vault_password</span><span class="w"> </span><span class="se">`
</span><span class="w">    </span><span class="o">.</span><span class="nx">/group_vars/docker_hosts/vault.yml</span><span class="w">
</span><span class="c"># [WARNING]: Error in vault password file loading (default): Problem running</span><span class="w">
</span><span class="c"># vault password script /opt/ansible-vault-password/vault_password ([Errno 8]</span><span class="w">
</span><span class="c"># Exec format error). If this is not a script, remove the executable bit from the</span><span class="w">
</span><span class="c"># file.</span><span class="w">
</span><span class="c"># ERROR! Problem running vault password script /opt/ansible-vault-password/vault_password ([Errno 8] Exec format error). If this is not a script, remove the executable bit from the file.</span><span class="w">
</span></code></pre></div></div>
<p>Here are the permissions found inside the Docker container:</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">docker</span><span class="w"> </span><span class="nx">container</span><span class="w"> </span><span class="nx">run</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">--rm</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">-v</span><span class="w"> </span><span class="nx">E:/Satrapu/Programming/Ansible/ansible-vault-password:/opt/ansible-vault-password</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nx">satrapu/ansible-alpine-apk:2.4.1.0-r0</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nx">ls</span><span class="w"> </span><span class="nt">-al</span><span class="w"> </span><span class="nx">/opt/ansible-vault-password</span><span class="w">
</span><span class="c"># total 5</span><span class="w">
</span><span class="c"># drwxr-xr-x 2 root root 0 Mar 29 19:56 .</span><span class="w">
</span><span class="c"># drwxr-xr-x 1 root root 4096 Mar 29 20:03 ..</span><span class="w">
</span><span class="c"># -rwxr-xr-x 1 root root 100 Mar 24 19:20 vault_password</span><span class="w">
</span></code></pre></div></div>
<p>The above executable-bit error message is pretty clear; unfortunately, at the moment there is no easy way of mounting files without the execute bit, as stated <a href="https://docs.docker.com/docker-for-windows/troubleshoot/#permissions-errors-on-data-directories-for-shared-volumes">here</a>.</p>
<p>On the other hand, Ansible knows how to process an executable file containing a Vault password if it is a Python script, as documented <a href="http://docs.ansible.com/ansible/latest/playbooks_vault.html#running-a-playbook-with-vault">here</a>. The idea is therefore to load the password via a Python script, passed as the value of the --vault-password-file argument - see an example <a href="https://github.com/hashicorp/packer/issues/555#issuecomment-145749614">here</a>.<br />
This bypasses the pesky Windows-Docker-folder-mounting issue, but it led me to the 2nd issue :)</p>
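<p>The repo’s <strong>vault_password_provider.py</strong> is not reproduced here, but a minimal sketch of such a script could look like the one below. It assumes the password file is mounted at /opt/ansible-vault-password/vault_password; Ansible runs the executable file passed via --vault-password-file and reads the vault password from its stdout:</p>

```python
#!/usr/bin/env python
# Minimal sketch of a vault password provider script. Ansible executes any
# *executable* file passed via --vault-password-file and uses whatever the
# script writes to stdout as the vault password.
import sys

# Path where the Docker volume holding the password file is mounted;
# adjust it to match your own setup.
PASSWORD_FILE = "/opt/ansible-vault-password/vault_password"


def read_vault_password(path):
    # The password file contains a single line; strip any trailing
    # whitespace or line endings, just in case.
    with open(path) as password_file:
        return password_file.read().strip()


if __name__ == "__main__":
    try:
        sys.stdout.write(read_vault_password(PASSWORD_FILE))
    except IOError as error:
        # Report a useful message instead of handing Ansible an empty password.
        sys.stderr.write("Cannot read vault password: %s\n" % error)
```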
<h3 id="line-endings">Issue #2: Line endings</h3>
<p>Ansible Vault being able to run a Python script which returns the password is great news, but keep in mind we’re still editing files on Windows, which uses CRLF line endings - and these, of course, will not work on Linux:</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">docker</span><span class="w"> </span><span class="nx">container</span><span class="w"> </span><span class="nx">run</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">--rm</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">-v</span><span class="w"> </span><span class="nx">E:/Satrapu/Programming/Ansible/ansible-vault-on-windows:/opt/ansible-playbooks</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">-v</span><span class="w"> </span><span class="nx">E:/Satrapu/Programming/Ansible/ansible-vault-password:/opt/ansible-vault-password</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nx">satrapu/ansible-alpine-apk:2.4.1.0-r0</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nx">ansible-vault</span><span class="w"> </span><span class="nx">encrypt</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">--vault-password-file</span><span class="o">=.</span><span class="n">/vault_password_provider.py</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="o">.</span><span class="nx">/group_vars/docker_hosts/vault.yml</span><span class="w">
</span><span class="c"># [WARNING]: Error in vault password file loading (default): Problem running</span><span class="w">
</span><span class="c"># vault password script /opt/ansible-playbooks/vault_password_provider.py ([Errno</span><span class="w">
</span><span class="c"># 2] No such file or directory). If this is not a script, remove the executable</span><span class="w">
</span><span class="c"># bit from the file.</span><span class="w">
</span><span class="c"># ERROR! Problem running vault password script /opt/ansible-playbooks/vault_password_provider.py ([Errno 2] No such file or directory). If this is not a script, remove the executable bit from the file.</span><span class="w">
</span></code></pre></div></div>
<p>The fix is to save vault_password_provider.py with “LF” line endings instead of “CRLF” - see how to configure this in <a href="https://stackoverflow.com/a/39532890">Visual Studio Code</a>.</p>
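<p>To double-check the line endings without opening an editor, a throwaway helper like the one below (not part of the repo) can look for CR bytes directly:</p>

```python
# Detect CRLF line endings in a file; a small throwaway helper, not part of
# the repo.
def uses_crlf(path):
    # Read raw bytes so the platform does not translate line endings.
    with open(path, "rb") as source_file:
        return b"\r\n" in source_file.read()
```

<p>Run it against vault_password_provider.py; it should return False once the file has been saved with “LF” endings.</p>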
<h2 id="ansible-vault">Ansible Vault commands</h2>
<p>Having fixed the above 2 issues, the following Ansible Vault commands will work like a charm:</p>
<ul>
<li>Encrypt vault.yml:</li>
</ul>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">docker</span><span class="w"> </span><span class="nx">container</span><span class="w"> </span><span class="nx">run</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">--rm</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">-v</span><span class="w"> </span><span class="nx">E:/Satrapu/Programming/Ansible/ansible-vault-on-windows:/opt/ansible-playbooks</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">-v</span><span class="w"> </span><span class="nx">E:/Satrapu/Programming/Ansible/ansible-vault-password:/opt/ansible-vault-password</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nx">satrapu/ansible-alpine-apk:2.4.1.0-r0</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nx">ansible-vault</span><span class="w"> </span><span class="nx">encrypt</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">--vault-password-file</span><span class="o">=.</span><span class="n">/vault_password_provider.py</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="o">.</span><span class="nx">/group_vars/docker_hosts/vault.yml</span><span class="w">
</span></code></pre></div></div>
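<p>If the command succeeds, vault.yml is overwritten in place with an encrypted payload: the first line identifies the vault format and cipher, followed by lines of hex-encoded ciphertext (the digits below are illustrative, not real output):</p>

```text
$ANSIBLE_VAULT;1.1;AES256
36323961316463383464316336333137333833396335373963393764386636323939616135366664
37633536393537333232366166393338656536363832623230613136343635383164353436623361
```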
<ul>
<li>Decrypt vault.yml:</li>
</ul>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">docker</span><span class="w"> </span><span class="nx">container</span><span class="w"> </span><span class="nx">run</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">--rm</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">-v</span><span class="w"> </span><span class="nx">E:/Satrapu/Programming/Ansible/ansible-vault-on-windows:/opt/ansible-playbooks</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">-v</span><span class="w"> </span><span class="nx">E:/Satrapu/Programming/Ansible/ansible-vault-password:/opt/ansible-vault-password</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nx">satrapu/ansible-alpine-apk:2.4.1.0-r0</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nx">ansible-vault</span><span class="w"> </span><span class="nx">decrypt</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">--vault-password-file</span><span class="o">=.</span><span class="n">/vault_password_provider.py</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="o">.</span><span class="nx">/group_vars/docker_hosts/vault.yml</span><span class="w">
</span></code></pre></div></div>
<ul>
<li>View the decrypted vault.yml:</li>
</ul>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">docker</span><span class="w"> </span><span class="nx">container</span><span class="w"> </span><span class="nx">run</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">--rm</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">-v</span><span class="w"> </span><span class="nx">E:/Satrapu/Programming/Ansible/ansible-vault-on-windows:/opt/ansible-playbooks</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">-v</span><span class="w"> </span><span class="nx">E:/Satrapu/Programming/Ansible/ansible-vault-password:/opt/ansible-vault-password</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nx">satrapu/ansible-alpine-apk:2.4.1.0-r0</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nx">ansible-vault</span><span class="w"> </span><span class="nx">view</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">--vault-password-file</span><span class="o">=.</span><span class="n">/vault_password_provider.py</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="o">.</span><span class="nx">/group_vars/docker_hosts/vault.yml</span><span class="w">
</span><span class="c"># vault_docker_registry_url: https://index.docker.io/v1/</span><span class="w">
</span><span class="c"># vault_docker_registry_auth_username: xxxxxxx</span><span class="w">
</span><span class="c"># vault_docker_registry_auth_password: xxxxxxx</span><span class="w">
</span><span class="c"># vault_docker_registry_auth_email: xxxxxxx</span><span class="w">
</span></code></pre></div></div>
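<p>The <strong>vault_password_provider.py</strong> script passed to <strong>--vault-password-file</strong> above only needs to print the vault password to stdout - when the vault password file is executable, ansible-vault runs it and uses its output as the password. A minimal sketch of such a provider (the file name inside the mounted folder is my assumption):</p>

```python
#!/usr/bin/env python
# Hypothetical sketch of vault_password_provider.py: when the file passed to
# --vault-password-file is executable, ansible-vault runs it and uses its
# stdout as the vault password.
import os

# Matches the volume mounted above; the file name inside the mounted folder
# is an assumption.
PASSWORD_FILE = os.environ.get(
    "VAULT_PASSWORD_FILE", "/opt/ansible-vault-password/password.txt")


def read_vault_password(path=PASSWORD_FILE):
    """Return the password, minus any trailing newline the file may contain."""
    with open(path) as password_file:
        return password_file.read().strip()


if __name__ == "__main__" and os.path.exists(PASSWORD_FILE):
    # The guard lets this sketch be imported on machines lacking the mount.
    print(read_vault_password())
```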
<h2 id="run-ansible">Run Ansible via Docker container</h2>
<ul>
<li>Run Ansible playbook:</li>
</ul>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># Replace <YOUR_ADMIN_USERS> placeholder with the Windows user name used for creating ansible-vault VM.</span><span class="w">
</span><span class="c"># Tip: Increase the verbosity of the ansible-playbook output by adding "-vvv" option at the end of the below line</span><span class="w">
</span><span class="n">docker</span><span class="w"> </span><span class="nx">container</span><span class="w"> </span><span class="nx">run</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">--rm</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">-v</span><span class="w"> </span><span class="nx">E:/Satrapu/Programming/Ansible/ansible-vault-on-windows:/opt/ansible-playbooks</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">-v</span><span class="w"> </span><span class="nx">E:/Satrapu/Programming/Ansible/ansible-vault-password:/opt/ansible-vault-password</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">-v</span><span class="w"> </span><span class="nx">C:/Users/</span><span class="err"><</span><span class="nx">YOUR_ADMIN_USERS</span><span class="err">></span><span class="nx">/.docker/machine/machines/ansible-vault:/opt/docker-machine/ansible-vault</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nx">satrapu/ansible-alpine-apk:2.4.1.0-r0</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nx">ansible-playbook</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">--inventory-file</span><span class="o">=</span><span class="n">local</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nt">--vault-password-file</span><span class="o">=.</span><span class="n">/vault_password_provider.py</span><span class="w"> </span><span class="se">`
</span><span class="w"> </span><span class="nx">hello-world.yml</span><span class="w">
</span><span class="c"># PLAY [docker_hosts] ************************************************************</span><span class="w">
</span><span class="c"># TASK [Gathering Facts] *********************************************************</span><span class="w">
</span><span class="c"># ok: [ansible_vault_example]</span><span class="w">
</span><span class="c"># TASK [run_hello_world_container : Install pip] *********************************</span><span class="w">
</span><span class="c"># ok: [ansible_vault_example]</span><span class="w">
</span><span class="c"># TASK [run_hello_world_container : Install docker-py] ***************************</span><span class="w">
</span><span class="c"># ok: [ansible_vault_example]</span><span class="w">
</span><span class="c"># TASK [run_hello_world_container : Login into Docker registry https://index.docker.io/v1/] ***</span><span class="w">
</span><span class="c"># changed: [ansible_vault_example]</span><span class="w">
</span><span class="c"># TASK [run_hello_world_container : Pull Docker image hello-world:linux] *********</span><span class="w">
</span><span class="c"># changed: [ansible_vault_example]</span><span class="w">
</span><span class="c"># TASK [run_hello_world_container : Logout from Docker registry https://index.docker.io/v1/] ***</span><span class="w">
</span><span class="c"># ok: [ansible_vault_example]</span><span class="w">
</span><span class="c"># TASK [run_hello_world_container : Run Docker container hello-world-from-satrapu] ***</span><span class="w">
</span><span class="c"># changed: [ansible_vault_example]</span><span class="w">
</span><span class="c"># TASK [run_hello_world_container : Remove Docker container hello-world-from-satrapu] ***</span><span class="w">
</span><span class="c"># changed: [ansible_vault_example]</span><span class="w">
</span><span class="c"># TASK [run_hello_world_container : Remove Docker image hello-world:linux] *******</span><span class="w">
</span><span class="c"># changed: [ansible_vault_example]</span><span class="w">
</span><span class="c"># PLAY RECAP *********************************************************************</span><span class="w">
</span><span class="c"># ansible_vault_example : ok=9 changed=5 unreachable=0 failed=0</span><span class="w">
</span></code></pre></div></div>
<h2 id="resources">Resources</h2>
<ul>
<li><a href="https://docs.docker.com/machine/">Docker Machine</a></li>
<li><a href="https://docs.docker.com/machine/reference/">Docker Machine command-line reference</a></li>
<li><a href="http://boot2docker.io/">boot2docker</a></li>
<li><a href="http://www.tinycorelinux.net/">Tiny Core Linux</a></li>
<li>Ansible modules
<ul>
<li><a href="http://docs.ansible.com/ansible/latest/modules/easy_install_module.html">easy_install</a></li>
<li><a href="http://docs.ansible.com/ansible/latest/modules/pip_module.html">pip</a></li>
<li><a href="http://docs.ansible.com/ansible/latest/modules/docker_login_module.html">docker_login</a></li>
<li><a href="http://docs.ansible.com/ansible/latest/modules/docker_image_module.html">docker_image</a></li>
<li><a href="http://docs.ansible.com/ansible/latest/modules/docker_container_module.html">docker_container</a></li>
</ul>
</li>
</ul>
Running Ansible on Windows
2018-02-14T22:08:33+00:00
https://crossprogramming.com/2018/02/14/running-ansible-on-windows
<ul>
<li><a href="#context">Context</a></li>
<li><a href="#dockerize">Why dockerize Ansible?</a></li>
<li><a href="#common">Common things</a></li>
<li><a href="#apk">Approach #1: APK</a></li>
<li><a href="#pip">Approach #2: PIP</a></li>
<li><a href="#sources">Approach #3: Sources</a></li>
<li><a href="#example">Example</a></li>
<li><a href="#conclusion">Conclusion</a></li>
<li><a href="#resources">Resources</a></li>
</ul>
<hr />
<!-- markdownlint-disable MD033 -->
<h2 id="context">Context</h2>
<p><a href="https://www.ansible.com/">Ansible</a> is an automation tool written in Python which, simply put, works like this: Ansible is installed on one machine (the control node) and executes a series of Python scripts, generated from <a href="http://www.yaml.org/">YAML</a> files (playbooks, roles and tasks), on a bunch of other machines (the managed nodes).<br />
The tool was so successful that in October 2015 Red Hat <a href="https://www.redhat.com/en/about/press-releases/red-hat-acquire-it-automation-and-devops-leader-ansible">acquired</a> Ansible, Inc., the commercial entity behind it. In case you’re wondering why this acquisition happened, read <a href="https://www.redhat.com/en/blog/why-red-hat-acquired-ansible">this article</a>; it also highlights the main features of the tool and explains why we should use it.</p>
<p>I have used Ansible for the past year and I really liked it due to its (apparent) lean learning curve and its ability to automate (almost) anything. I was fortunate enough to use it from a MacBook Pro and thus I was able to easily install it via <a href="http://brewformulas.org/Ansible">Homebrew</a>, the macOS package manager.<br />
The common scenarios where I used this tool were: <strong>provisioning</strong> Linux (CentOS 7.x) environments (e.g. installing Oracle JDK, running MySQL or Oracle databases inside Docker, setting up firewall rules and much more) and performing automated <strong>deployments</strong> for various SAP Hybris applications.</p>
<h2 id="dockerize">Why dockerize Ansible?</h2>
<p>To this day, Ansible still does not officially support <a href="http://docs.ansible.com/ansible/latest/intro_installation.html#control-machine-requirements">a Windows control machine</a>.
A soul brave enough might try installing Python and then Ansible on a Windows machine, be it virtual or not, but one should stick to the officially supported ways: running Ansible from a Linux or macOS machine. Or use Docker.</p>
<p>Running Ansible from a Docker container has several benefits:</p>
<ul>
<li>Use a great automation tool on Windows</li>
<li>Easy to share the tool across the team, no matter the underlying OS</li>
<li>Pin your Ansible version to ensure the stability and repeatability of your automated processes (no more “Works on my machine … only” syndrome!)</li>
<li>Easy to test latest & greatest release without impacting your current dev environment</li>
<li>Keeps your dev environment clean</li>
</ul>
<p>Creating an Ansible Docker image is not that hard and you can find <a href="https://hub.docker.com/search/?q=ansible">plenty</a> of such images.
I decided to write my own in order to learn how to author Docker images, in addition to merely using them.</p>
<p>Since Ansible works like a charm on a Linux control node, I’ve decided to use <a href="https://hub.docker.com/_/alpine/">Alpine Linux Docker image</a> as the base for my own as it’s very small: v3.6 is less than 4MB, while v3.7 is a little over 4MB.<br />
Alpine Linux has its own package manager, <a href="https://wiki.alpinelinux.org/wiki/Alpine_Linux_package_management">apk</a>, so I could’ve installed one of the available packages and be done with it, but this approach has one flaw: even though Ansible has released many <a href="http://releases.ansible.com/ansible/">versions</a>, Alpine didn’t package them all, so you just have to live with whatever Ansible versions a specific Alpine release comes with - e.g. <a href="https://pkgs.alpinelinux.org/packages?name=ansible&branch=v3.6">v3.6</a>, <a href="https://pkgs.alpinelinux.org/packages?name=ansible&branch=v3.7">v3.7</a>.</p>
<p>There are different ways to overcome this limitation: install Ansible using <a href="https://pip.pypa.io/en/stable/">pip</a>, the Python package manager, install it from its sources hosted on <a href="https://github.com/ansible/ansible">GitHub</a>, use a different Linux distribution, etc.<br />
Pip offers <a href="https://pypi.python.org/pypi/ansible">lots</a> of Ansible versions to choose from, while installing Ansible from its sources gives you the ability to dockerize any of its git commits. I recommend the former when you need to get stuff done and the latter when you want to experiment/test a specific commit.</p>
<p>Below you may find 3 Docker images I have created for running Ansible on Windows, each with its pros and cons.</p>
<h2 id="common">Common things</h2>
<p>I wanted a simple way of running Ansible playbooks from my Windows machine, so I’m using a Docker volume to share them with the Docker container; inside the container, the playbooks can be found under <strong>/opt/ansible-playbooks</strong> - this path is customizable via a build argument, <strong>ANSIBLE_PLAYBOOKS_HOME</strong>.<br />
The whole development experience is like this: open your favorite (Ansible aware) text editor (see several options <a href="#resources">below</a>), write your playbooks, then play them from a CLI of your choice (e.g. PowerShell, <a href="http://www.techoism.com/how-to-install-git-bash-on-windows/">Git Bash</a> or <a href="http://cmder.net/">Cmder</a>) - basically, the same as running Ansible from a Linux or macOS control node.</p>
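<p>One gotcha worth mentioning: when running such commands from Git Bash instead of PowerShell, Windows paths used with <strong>-v</strong> usually need to be written in MSYS form. A tiny sketch of that conversion (the helper and the need for it are assumptions about your particular Git Bash setup, not something the images require):</p>

```python
# Hypothetical helper: convert a Windows path (P:\Satrapu\...) into the
# /p/Satrapu/... form that Git Bash (MSYS) passes through unmangled when
# used in a docker -v option.
def to_msys_path(windows_path):
    drive, rest = windows_path.split(":", 1)
    return "/" + drive.lower() + rest.replace("\\", "/")


print(to_msys_path(r"P:\Satrapu\Programming\Ansible\hello-world"))
# → /p/Satrapu/Programming/Ansible/hello-world
```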
<p>Below you may find the Docker commands you should use for any of these 3 images.<br />
Please replace the <strong><SUFFIX></strong> placeholder with the appropriate value: <strong>apk</strong>, <strong>pip</strong> or <strong>git</strong>.</p>
<h3>Building the Docker image</h3>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker image build <span class="nt">--file</span> Dockerfile-<SUFFIX> <span class="nt">--tag</span> satrapu/ansible-alpine-<SUFFIX>:latest <span class="nb">.</span>
</code></pre></div></div>
<h3>Pushing the Docker image to DockerHub</h3>
<p>Please note you need to be logged-in before pushing a Docker image to a registry, be it DockerHub or a private one!</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker image push satrapu/ansible-alpine-<SUFFIX>:latest
</code></pre></div></div>
<h3>Running the Docker container</h3>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker container run <span class="nt">-v</span> <DOCKER_HOST_ANSIBLE_PLAYBOOKS_HOME>:/opt/ansible-playbooks satrapu/ansible-alpine-<SUFFIX>:latest ansible-playbook <RELATIVE_PATH_TO_YOUR_ANSIBLE_PLAYBOOK>
</code></pre></div></div>
<h2 id="apk">Approach #1: APK</h2>
<p>The Dockerfile used for installing Ansible via apk can be found <a href="https://github.com/satrapu/docker-ansible/blob/master/Dockerfile-apk">here</a>.
Additionally, this image uses an argument, <strong>ANSIBLE_VERSION</strong>, which specifies the particular Ansible release version to install at build time.</p>
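<p>Since the image build is driven by the <strong>ANSIBLE_VERSION</strong> build argument, pinning a version comes down to passing <strong>--build-arg</strong>. A sketch that composes such a build command (the helper is hypothetical; the version shown is simply the one used elsewhere in this post):</p>

```python
# Compose the docker build command for one of the images, pinning Ansible via
# the ANSIBLE_VERSION build argument; the helper itself is hypothetical.
def compose_build_command(suffix, ansible_version):
    return ("docker image build"
            " --file Dockerfile-{suffix}"
            " --build-arg ANSIBLE_VERSION={version}"
            " --tag satrapu/ansible-alpine-{suffix}:{version} .").format(
                suffix=suffix, version=ansible_version)


# 2.4.1.0-r0 is the apk package version used elsewhere in this post.
print(compose_build_command("apk", "2.4.1.0-r0"))
```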
<ul>
<li>Pros
<ul>
<li>Small image size ~ 97MB</li>
<li>Simple Dockerfile</li>
<li>Easy to upgrade to future Ansible or Alpine versions</li>
</ul>
</li>
<li>Cons
<ul>
<li>Coarse grained control over the Ansible release version to use</li>
</ul>
</li>
</ul>
<h3>Specific Docker commands</h3>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker image build <span class="nt">--file</span> Dockerfile-apk <span class="nt">--tag</span> satrapu/ansible-alpine-apk:latest <span class="nb">.</span>
docker image push satrapu/ansible-alpine-apk:latest
docker container run <span class="nt">-v</span> <DOCKER_HOST_ANSIBLE_PLAYBOOKS_HOME>:/opt/ansible-playbooks satrapu/ansible-alpine-apk:latest ansible-playbook <ANSIBLE_PLAYBOOK>
</code></pre></div></div>
<h2 id="pip">Approach #2: PIP</h2>
<p>The Dockerfile used for installing Ansible via pip can be found <a href="https://github.com/satrapu/docker-ansible/blob/master/Dockerfile-pip">here</a>.
Additionally, this image uses an argument, <strong>ANSIBLE_VERSION</strong>, which specifies the particular Ansible release version to install at build time.</p>
<ul>
<li>Pros
<ul>
<li>Easy to upgrade to future Ansible or Alpine versions</li>
<li>Finer grained control over the Ansible release version to use</li>
<li>Medium image size ~ 268MB</li>
</ul>
</li>
<li>Cons
<ul>
<li>The Dockerfile is a little bit more complex than apk based approach</li>
<li>Any upgrade to future Ansible or Alpine versions may require additional effort in order to identify the right prerequisites (e.g. specific versions of Python packages)</li>
<li>Increased Docker image build time</li>
</ul>
</li>
</ul>
<h3>Specific Docker commands</h3>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker image build <span class="nt">--file</span> Dockerfile-pip <span class="nt">--tag</span> satrapu/ansible-alpine-pip:latest <span class="nb">.</span>
docker image push satrapu/ansible-alpine-pip:latest
docker container run <span class="nt">-v</span> <DOCKER_HOST_ANSIBLE_PLAYBOOKS_HOME>:/opt/ansible-playbooks satrapu/ansible-alpine-pip:latest ansible-playbook <ANSIBLE_PLAYBOOK>
</code></pre></div></div>
<h2 id="sources">Approach #3: Sources</h2>
<p>The Dockerfile used for installing Ansible from its sources can be found <a href="https://github.com/satrapu/docker-ansible/blob/master/Dockerfile-git">here</a>.
Additionally, this image uses an argument, <strong>ANSIBLE_GIT_CHECKOUT_ARGS</strong>, which is passed directly to the <strong>git checkout</strong> command; thus it can represent a git branch name, tag or commit hash (short or long), including specific flags, as documented <a href="https://git-scm.com/docs/git-checkout">here</a>.</p>
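<p>Because <strong>ANSIBLE_GIT_CHECKOUT_ARGS</strong> goes straight to <strong>git checkout</strong>, a branch, a tag or a commit hash are all valid values. A hypothetical sketch composing such build commands (the helper and the sample refs are illustrative only):</p>

```python
# ANSIBLE_GIT_CHECKOUT_ARGS is forwarded verbatim to `git checkout`, so a
# branch, a tag or a commit hash are all valid values; the helper and the
# sample refs below are illustrative only.
def compose_git_build_command(checkout_args):
    return ("docker image build --file Dockerfile-git"
            " --build-arg ANSIBLE_GIT_CHECKOUT_ARGS='{0}'"
            " --tag satrapu/ansible-alpine-git:latest .").format(checkout_args)


for ref in ("v2.4.1.0-1", "devel", "0f1c8a2"):  # tag, branch, short hash
    print(compose_git_build_command(ref))
```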
<ul>
<li>Pros
<ul>
<li>Finest grained control over the Ansible release version to use</li>
</ul>
</li>
<li>Cons
<ul>
<li>Rather complex Dockerfile</li>
<li>Any future change in the process of installing Ansible from sources might negatively affect building this image</li>
<li>Any upgrade to future Ansible or Alpine versions may require additional effort in order to identify the right prerequisites (e.g. Python packages)</li>
<li>Largest image size ~ 487MB</li>
<li>Rather long Docker image build time</li>
</ul>
</li>
</ul>
<h3>Specific Docker commands</h3>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker image build <span class="nt">--file</span> Dockerfile-git <span class="nt">--tag</span> satrapu/ansible-alpine-git:latest <span class="nb">.</span>
docker image push satrapu/ansible-alpine-git:latest
docker container run <span class="nt">-v</span> <DOCKER_HOST_ANSIBLE_PLAYBOOKS_HOME>:/opt/ansible-playbooks satrapu/ansible-alpine-git:latest ansible-playbook <ANSIBLE_PLAYBOOK>
</code></pre></div></div>
<h2 id="example">Example</h2>
<p>The below commands have been executed on a machine running Windows 10 Pro x64, release 1709 and Docker version 17.12.0-ce, build c97c6d6.<br />
Given the following folder <strong>hello-world</strong> on this Windows machine:</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">P:\Satrapu\Programming\Ansible</span><span class="w">
</span><span class="err">└───</span><span class="nx">hello-world</span><span class="w">
</span>        <span class="n">hello-world.yml</span><span class="w">
</span></code></pre></div></div>
<p>Containing the <strong>hello-world.yml</strong> file:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nn">---</span>
<span class="c1"># This playbook prints a simple debug message</span>
<span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">Echo</span>
  <span class="na">hosts</span><span class="pi">:</span> <span class="s">127.0.0.1</span>
  <span class="na">connection</span><span class="pi">:</span> <span class="s">local</span>
  <span class="na">tasks</span><span class="pi">:</span>
    <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">Print debug message</span>
      <span class="na">debug</span><span class="pi">:</span>
        <span class="na">msg</span><span class="pi">:</span> <span class="s">Hello, world!</span>
</code></pre></div></div>
<p>I’m playing the hello-world.yml playbook via the following command:</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">docker</span><span class="w"> </span><span class="nx">container</span><span class="w"> </span><span class="nx">run</span><span class="w"> </span><span class="nt">-v</span><span class="w"> </span><span class="nx">P:\Satrapu\Programming\Ansible\hello-world:/opt/ansible-playbooks</span><span class="w"> </span><span class="nx">satrapu/ansible-alpine-apk</span><span class="w"> </span><span class="nx">ansible-playbook</span><span class="w"> </span><span class="nx">hello-world.yml</span><span class="w">
</span><span class="n">PLAY</span><span class="w"> </span><span class="p">[</span><span class="n">Echo</span><span class="p">]</span><span class="w"> </span><span class="o">********************************************************************</span><span class="w">
</span><span class="n">TASK</span><span class="w"> </span><span class="p">[</span><span class="n">Gathering</span><span class="w"> </span><span class="n">Facts</span><span class="p">]</span><span class="w"> </span><span class="o">*********************************************************</span><span class="w">
</span><span class="n">ok:</span><span class="w"> </span><span class="p">[</span><span class="n">localhost</span><span class="p">]</span><span class="w">
</span><span class="n">TASK</span><span class="w"> </span><span class="p">[</span><span class="n">Print</span><span class="w"> </span><span class="n">debug</span><span class="w"> </span><span class="n">message</span><span class="p">]</span><span class="w"> </span><span class="o">*****************************************************</span><span class="w">
</span><span class="n">ok:</span><span class="w"> </span><span class="p">[</span><span class="n">localhost</span><span class="p">]</span><span class="w"> </span><span class="o">=</span><span class="err">></span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="s2">"msg"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Hello, world!"</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="n">PLAY</span><span class="w"> </span><span class="nx">RECAP</span><span class="w"> </span><span class="o">*********************************************************************</span><span class="w">
</span><span class="n">localhost</span><span class="w"> </span><span class="p">:</span><span class="w"> </span><span class="nx">ok</span><span class="o">=</span><span class="mi">2</span><span class="w"> </span><span class="n">changed</span><span class="o">=</span><span class="mi">0</span><span class="w"> </span><span class="n">unreachable</span><span class="o">=</span><span class="mi">0</span><span class="w"> </span><span class="n">failed</span><span class="o">=</span><span class="mi">0</span><span class="w">
</span></code></pre></div></div>
<p>I can run other Ansible commands:</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># Print Ansible version</span><span class="w">
</span><span class="n">docker</span><span class="w"> </span><span class="nx">container</span><span class="w"> </span><span class="nx">run</span><span class="w"> </span><span class="nx">satrapu/ansible-alpine-apk</span><span class="w"> </span><span class="nx">ansible</span><span class="w"> </span><span class="nt">--version</span><span class="w">
</span><span class="n">ansible</span><span class="w"> </span><span class="nx">2.4.1.0</span><span class="w">
</span><span class="n">config</span><span class="w"> </span><span class="nx">file</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">/etc/ansible/ansible.cfg</span><span class="w">
</span><span class="nx">configured</span><span class="w"> </span><span class="nx">module</span><span class="w"> </span><span class="nx">search</span><span class="w"> </span><span class="nx">path</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="p">[</span><span class="n">u</span><span class="s1">'/root/.ansible/plugins/modules'</span><span class="p">,</span><span class="w"> </span><span class="nx">u</span><span class="s1">'/usr/share/ansible/plugins/modules'</span><span class="p">]</span><span class="w">
</span><span class="n">ansible</span><span class="w"> </span><span class="n">python</span><span class="w"> </span><span class="n">module</span><span class="w"> </span><span class="n">location</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">/usr/lib/python2.7/site-packages/ansible</span><span class="w">
</span><span class="nx">executable</span><span class="w"> </span><span class="nx">location</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">/usr/bin/ansible</span><span class="w">
</span><span class="nx">python</span><span class="w"> </span><span class="nx">version</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="mf">2.7</span><span class="o">.</span><span class="nf">14</span><span class="w"> </span><span class="p">(</span><span class="n">default</span><span class="p">,</span><span class="w"> </span><span class="n">Dec</span><span class="w"> </span><span class="mi">14</span><span class="w"> </span><span class="mi">2017</span><span class="p">,</span><span class="w"> </span><span class="mi">15</span><span class="p">:</span><span class="mi">51</span><span class="p">:</span><span class="mi">29</span><span class="p">)</span><span class="w"> </span><span class="p">[</span><span class="n">GCC</span><span class="w"> </span><span class="mf">6.4</span><span class="o">.</span><span class="nf">0</span><span class="p">]</span><span class="w">
</span></code></pre></div></div>
<h2 id="conclusion">Conclusion</h2>
<p>Ansible can be run on many operating systems with similar developer experience, as long as they are <a href="https://docs.docker.com/install/#supported-platforms">supported by Docker</a>.<br />
Just because a tool is not officially supported on Windows, or is *nix-only, doesn’t mean you have to forget about it - dockerize it and start using it!</p>
<h2 id="resources">Resources</h2>
<ul>
<li>Ansible official resources
<ul>
<li><a href="http://docs.ansible.com">Documentation</a></li>
<li><a href="http://docs.ansible.com/ansible/latest/dev_guide">Developer guide</a></li>
<li><a href="http://docs.ansible.com/ansible/latest/playbooks_best_practices.html#directory-layout">Directory layout</a></li>
<li><a href="https://github.com/lorin/ansible-quickref">Variable quick reference</a></li>
<li><a href="https://raw.githubusercontent.com/ansible/ansible/devel/examples/ansible.cfg">Configuration file template</a></li>
<li><a href="http://docs.ansible.com/ansible/latest/YAMLSyntax.html">YAML syntax</a></li>
<li><a href="http://docs.ansible.com/ansible/latest/playbooks_best_practices.html">Best practices</a></li>
<li><a href="https://www.ansible.com/blog/ansible-best-practices-essentials">Ansible Best Practices: The Essentials</a></li>
<li><a href="https://www.ansible.com/products/tower">Ansible Tower</a></li>
</ul>
</li>
<li>Other resources
<ul>
<li><a href="https://blog.codecentric.de/en/2017/06/debug-ansible-playbooks-like-pro/">Debug Ansible Playbooks Like A Pro</a></li>
</ul>
</li>
<li>Udemy free courses
<ul>
<li><a href="https://www.udemy.com/learn-ansible/learn/v4/overview">Ansible for the Absolute Beginner - Hands-On</a></li>
<li><a href="https://www.udemy.com/ansible-essentials-simplicity-in-automation/learn/v4/overview">Ansible Essentials: Simplicity in Automation</a></li>
</ul>
</li>
<li>IDEs
<ul>
<li><a href="https://atom.io/">Atom</a>
<ul>
<li>Extensions
<ul>
<li><a href="https://atom.io/packages/language-ansible">language-ansible</a></li>
<li><a href="https://atom.io/packages/autocomplete-ansible">autocomplete-ansible</a></li>
<li><a href="https://atom.io/packages/linter-ansible-linting">linter-ansible-linting</a></li>
<li><a href="https://atom.io/packages/linter-ansible-syntax">linter-ansible-syntax</a></li>
<li><a href="https://atom.io/packages/ansible-vault">ansible-vault</a></li>
<li><a href="https://atom.io/packages/language-ini">language-ini</a> (used for highlighting Ansible inventory files)</li>
<li><a href="https://atom.io/packages/atom-jinja2">atom-jinja2</a></li>
</ul>
</li>
</ul>
</li>
<li><a href="https://code.visualstudio.com/">Visual Studio Code</a>
<ul>
<li>Extensions
<ul>
<li><a href="https://marketplace.visualstudio.com/items?itemName=haaaad.ansible">language-Ansible</a></li>
<li><a href="https://marketplace.visualstudio.com/items?itemName=timonwong.ansible-autocomplete">ansible-autocomplete</a></li>
<li><a href="https://marketplace.visualstudio.com/items?itemName=dhoeric.ansible-vault">ansible-vault</a></li>
<li><a href="https://marketplace.visualstudio.com/items?itemName=wholroyd.jinja">Jinja</a></li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
Display images on GitHub wiki
2018-01-06T17:04:44+00:00
https://crossprogramming.com/2018/01/06/display-images-on-github-wiki
<ul>
<li><a href="#context">Context</a></li>
<li><a href="#image-relative-url">Approach #1: GitHub repo image relative URL</a></li>
<li><a href="#image-absolute-url">Approach #2: GitHub “raw” image absolute URL</a></li>
<li><a href="#conclusion">Conclusion</a></li>
</ul>
<hr />
<!-- markdownlint-disable MD033 -->
<h2 id="context">Context</h2>
<p>After giving a <a href="https://rancher.com/rancher/">Rancher</a> workshop during the 5th edition of the Java Tech Group Day, an <a href="http://www.iquestgroup.com/en/">iQuest</a> internal event which took place on September 7th 2017, I thought about open-sourcing it on GitHub as a series of wiki pages, to let people outside this company benefit from it too.<br />
After getting the OK from the Java Practice leadership to publish it on <a href="https://github.com/satrapu/rancher-workshop">GitHub</a>, I started converting the workshop written as a series of Confluence pages to one of the formats supported by <a href="https://help.github.com/articles/about-github-wikis/">GitHub wiki</a>: <a href="https://daringfireball.net/projects/markdown/">Markdown</a>.<br />
Below you may find how I ended-up displaying the workshop images on GitHub wiki pages.</p>
<!-- markdownlint-disable MD033 -->
<h2 id="image-relative-url">Approach #1: GitHub repo image relative URL</h2>
<p>Since the workshop was built as a series of step-by-step tutorials, it contained lots of high-resolution images; see one <a href="https://github.com/satrapu/rancher-workshop/blob/master/images/scenarios/basic/01/image2017-8-19_22-23-22.png">here</a>. My first approach was to store them in the GitHub repository where my wiki pages were located too. A wiki page would link to such an image, but the end result was a resized image which forced the user to zoom in with the browser to clearly see the image, then zoom back out to read the text - a pretty awful user-experience!<br />
Anyway, the image is displayed in a wiki page via this Markdown fragment:</p>
<div class="language-markdown highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">![](</span><span class="sx">https://github.com/satrapu/rancher-workshop/blob/master/images/scenarios/basic/01/image2017-8-19_22-23-22.png</span><span class="p">)</span>
</code></pre></div></div>
<p>The page revision using the above URL can be found <a href="https://github.com/satrapu/rancher-workshop/wiki/VirtualBox/105870481d0afe58360e57f2fa0f7f636cc94955">here</a> (scroll down to the “VirtualBox VM Installation Steps” section).</p>
<h2 id="image-absolute-url">Approach #2: GitHub "raw" image absolute URL</h2>
<p>As you can imagine, I wasn’t very happy with the first approach, as it resulted in a poor user experience - something I could not tolerate for <em>my</em> workshop!<br />
By accident, I stumbled upon other GitHub wikis showing high-res images, and when I took a closer look at one of those images - bingo! The image URL used a different domain than a normal GitHub repo would: <strong>raw.githubusercontent.com</strong> instead of <strong>github.com</strong>.<br />
Additionally, to further improve the user experience, I wanted the image to open at its original size when clicked, so I had to resort to plain HTML markup: wrapping an <em>img</em> tag inside an <em>a</em> one.<br />
After applying these changes, the aforementioned image fragment became:</p>
<div class="language-html highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nt"><a</span> <span class="na">href=</span><span class="s">"https://raw.githubusercontent.com/satrapu/rancher-workshop/master/images/scenarios/basic/01/image2017-8-19_22-23-22.png"</span> <span class="na">target=</span><span class="s">"_blank"</span><span class="nt">></span>
<span class="nt"><img</span> <span class="na">src=</span><span class="s">"https://raw.githubusercontent.com/satrapu/rancher-workshop/master/images/scenarios/basic/01/image2017-8-19_22-23-22.png"</span> <span class="nt">/></span>
<span class="nt"></a></span>
</code></pre></div></div>
<p>Please note that the <em>img</em> tag will render a down-sized image, while clicking the link will render the original high-res image.<br />
The page using the above URL can be found <a href="https://github.com/satrapu/rancher-workshop/wiki/VirtualBox">here</a> (scroll down to the “VirtualBox VM Installation Steps” section).<br />
The only thing I still haven’t been able to figure out is how to open the original image in a different browser tab. I’ve tried setting the “target=_blank” attribute on the <em>a</em> tag, but without any result. Anyway, the end result is a better user experience, so my goal has been reached.</p>
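<p>The blob-to-raw URL rewrite described above is purely mechanical, so it can be scripted when converting many pages at once. Here is a minimal sketch; the <em>to_raw_url</em> helper is hypothetical, not part of the workshop repo:</p>

```python
def to_raw_url(blob_url: str) -> str:
    """Convert a github.com 'blob' file URL into its
    raw.githubusercontent.com equivalent."""
    # Swap the domain, then drop the '/blob' path segment.
    return blob_url.replace(
        "https://github.com/", "https://raw.githubusercontent.com/", 1
    ).replace("/blob/", "/", 1)

print(to_raw_url(
    "https://github.com/satrapu/rancher-workshop/blob/master/"
    "images/scenarios/basic/01/image2017-8-19_22-23-22.png"
))
# -> https://raw.githubusercontent.com/satrapu/rancher-workshop/master/images/scenarios/basic/01/image2017-8-19_22-23-22.png
```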
<h2 id="conclusion">Conclusion</h2>
<p>GitHub wiki is a good way of sharing information, and it has good support for content presentation, high-res images included.</p>