Just a second! 🫷 If you are here, it means that you are a software developer.
So, you know that storage, networking, and domain management have a cost.
If you want to support this blog, please ensure that you have disabled the adblocker for this site. I configured Google AdSense to show as few ads as possible – I don’t want to bother you with lots of ads, but I still need some to pay for the resources for my site.
Thank you for your understanding. – Davide
Actually, this article is not about a tip for writing cleaner code: it aims at pointing out a code smell.
Of course, once you find this code smell in your code, you can act in order to eliminate it, and, as a consequence, you will end up with cleaner code.
The code smell is easy to identify: open your classes and have a look at the list of imports (in C#, the using directives at the top of the file).
A real example of too many imports
Here’s a real-life example (I censored the names, of course):
using MyCompany.CMS.Data;
using MyCompany.CMS.Modules;
using MyCompany.CMS.Rendering;
using MyCompany.Witch.Distribution;
using MyCompany.Witch.Distribution.Elements;
using MyCompany.Witch.Distribution.Entities;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;
using Newtonsoft.Json.Serialization;
using MyProject.Controllers.VideoPlayer.v1.DataSource;
using MyProject.Controllers.VideoPlayer.v1.Vod;
using MyProject.Core;
using MyProject.Helpers.Common;
using MyProject.Helpers.DataExplorer;
using MyProject.Helpers.Entities;
using MyProject.Helpers.Extensions;
using MyProject.Helpers.Metadata;
using MyProject.Helpers.Roofline;
using MyProject.ModelsEntities;
using MyProject.Models.ViewEntities.Tags;
using MyProject.Modules.EditorialDetail.Core;
using MyProject.Modules.VideoPlayer.Models;
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Linq;
namespace MyProject.Modules.Video
Sound familiar?
If we exclude the imports necessary to use some C# functionalities
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Linq;
We have lots of dependencies on external modules.
This means that if something changes in one of the classes that are part of those namespaces, we may end up with code that is difficult to update.
Also, guess what comes with all those imports? A constructor with too many parameters (and, in fact, this class has 11 dependencies injected in the constructor) and code that is too long and difficult to understand (and, in fact, this class has 500+ lines).
A solution? Refactor your project to avoid scattering those dependencies across your classes.
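One common refactoring is to hide a cluster of related dependencies behind a single, cohesive interface, so the consuming class needs one import and one constructor parameter instead of many. Here's a minimal sketch of the idea (all type names here are hypothetical, not taken from the project above):

```csharp
// Hypothetical facade wrapping the various MyProject.Helpers.* dependencies
public record VideoMetadata(int VideoId, string Title);

public interface IVideoMetadataService
{
    VideoMetadata GetMetadata(int videoId);
}

public class VideoMetadataService : IVideoMetadataService
{
    // The helper classes (entities, extensions, metadata, ...) are used
    // only inside this class: their namespaces disappear from every consumer.
    public VideoMetadata GetMetadata(int videoId)
        => new VideoMetadata(videoId, $"Video {videoId}");
}
```

The consuming controller now imports a single namespace and injects only IVideoMetadataService, shrinking both the using list and the constructor.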
Wrapping up
Having all those imports (in C#, the using keyword) is a good indicator that your code does too many things. You should focus on minimizing those imports without cheating (for example, by using global imports).
After 100 articles, I’ve found some neat ways to automate my blogging workflow. I will share my experience and the tools I use from the very beginning to the very end.
This is my 100th article 🥳 To celebrate it, I want to share with you the full process I use for writing and publishing articles.
In this article I will share all the automation and tools I use for writing, starting from the moment an idea for an article pops up in my mind to what happens weeks after an article has been published.
I hope to give you some ideas to speed up your publishing process. Of course, I’m open to suggestions to improve my own flow: perhaps (well, certainly), you use better tools and processes, so feel free to share them.
Introducing my blog architecture
To better understand what’s going on, I need a very brief overview of the architecture of my blog.
It is written in Gatsby, a framework based on ReactJS that, in short, allows you to transform Markdown files into blog posts (it does many other things, but they are not important for the purpose of this article).
So, my whole blog is stored in a private GitHub repository. Every time I push some changes to the master branch, a new deployment is triggered, and I can see my changes on my blog within a few minutes.
As I said, I use Gatsby. But the key point here is that my blog is stored in a GitHub repo: this means that everything you’ll read here is valid for any Git-based static site generator, such as Gatsby, Hugo, Next.js, and Jekyll.
Now that you know some general aspects, it’s time to deep dive into my writing process.
Before writing: organizing ideas with GitHub
My central source, as you might have already understood, is GitHub.
There, I write all my notes and keep track of the status of my articles.
Everything is quite well organized, and with the support of some automation, I can speed up my publishing process.
GitHub Projects to track the status of the articles
GitHub Projects is the part of GitHub that allows you to organize GitHub Issues and track their status.
I’ve created 2 GitHub Projects: one for the main articles (like this one), and one for my C# and Clean Code Tips.
In this way, I can use different columns and have more flexibility when handling the status of the tasks.
GitHub issues templates
As I said, to write my notes I use GitHub issues.
When I add a new Issue, the first thing is to define which type of article I want to write. And, since many weeks or months can pass between when I come up with the idea for an article and when I start writing it, I need to organize my ideas in a structured way.
To do that, I use GitHub templates. When I create a new Issue, I choose which kind of article I’m going to write.
Based on the layout, I can add different info. For instance, when I want to write a new “main” article, I see this form
which is prepopulated with some fields:
Title: with a placeholder ([Article] )
Content: with some sections (the titles, translated from Italian, mean Topics, Links, General notes)
Labels: I automatically assign the Article label to the issue (you’ll see later why I do that)
How can you create GitHub issue templates? All you need is a Markdown file under the .github/ISSUE_TEMPLATE folder with content similar to this one.
---
name: New article
about: New blog article
title: "[Article] - "
labels: Article
assignees: bellons91
---
## Argomenti
## Link
## Appunti vari
And you’re good to go!
GitHub action to assign issues to a project
Now I have GitHub Projects and different GitHub Issues Templates. How can I join the different parts? Well, with GitHub Actions!
With GitHub Actions, you can automate almost everything that happens in GitHub (and outside) using YAML files.
So, here’s mine:
For better readability, you can find the Gist here.
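In case the embedded Gist doesn't load, here's a sketch of what such an action can look like (the project URL, secret name, and action version are placeholders you'd adapt to your own setup):

```yaml
name: Assign issues to projects

on:
  issues:
    types: [opened, labeled]

jobs:
  assign-to-project:
    runs-on: ubuntu-latest
    steps:
      - name: Add Article issues to the Articles board
        if: contains(github.event.issue.labels.*.name, 'Article')
        uses: actions/add-to-project@v1.0.2
        with:
          project-url: https://github.com/users/<your-username>/projects/1
          github-token: ${{ secrets.ADD_TO_PROJECT_PAT }}
```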
This action looks for opened and labeled issues and pull requests and, based on the value of the label, assigns the element to the correct project.
In this way, after I choose a template, fill in the fields, and add additional labels (like C#, Docker, and so on), I can see my newly created issue directly in the Articles board. Neat 😎
Writing
Now it’s time for writing!
As I said, I’m using Gatsby, so all my articles are stored in a GitHub repository and written in Markdown.
For every article I write, I use a separate git branch: in this way, I’m free to update the content already online (in case of a typo) without publishing my drafts.
But, of course, I automated it! 😎
Powershell script to scaffold a new article
Every article lives in its /content/posts/{year}/{folder-name}/article.md file. And they all have a cover image in a file named cover.png.
Also, every MD file begins with a Frontmatter section, like this:
---
title: "How I automated my publishing flow with Gatsby, GitHub, PowerShell and Azure"
path: "/blog/automate-articles-creations-github-powershell-azure"
tags: ["MainArticle"]
featuredImage: "./cover.png"
excerpt: "a description for 072-how-i-create-articles"
created: 4219-11-20
updated: 4219-11-20
---
But, you know, I was tired of creating everything from scratch. So I wrote a Powershell Script to do everything for me.
where article-creator.ps1 is the name of the file that contains the script.
Now I can simply run npm run create-article to have a new empty article in a new branch, already updated with everything published in the Master branch.
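The npm script is just a thin wrapper around the PowerShell file. A sketch of what the package.json entry might look like (the exact flags and path are assumptions):

```json
{
  "scripts": {
    "create-article": "powershell -ExecutionPolicy Bypass -File ./article-creator.ps1"
  }
}
```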
Markdown preview on VS Code
I use Visual Studio Code to write my articles: I like it because it’s quite fast and with lots of functionalities to write in Markdown (you can pick your favorites in the Extensions store).
One of my favorites is the Preview on Side. To see the result of your Markdown on a side panel, press CTRL+SHIFT+P and select Open Preview to the Side.
Here’s what I can see right now while I’m writing:
Grammar check with Grammarly
Then, it’s time for a grammar check. I use Grammarly, which helps me fix lots of errors (well, lately, only a few: it means I’ve improved a lot! 😎).
I copy the Markdown in their online editor, fix the issues, and copy it back into my repo.
Fun fact: the online editor recognizes that you’re using Markdown and automatically checks only the actual text, ignoring all the symbols you use in Markdown (like brackets).
Unprofessional, but fun, cover images
One of the tasks I like the most is creating my cover images.
I don’t use stock images, I prefer using less professional but more original cover images.
Creating and scheduling PR on GitHub with Templates and Actions
Now that my article is complete, I can set it as ready for being scheduled.
To do that, I open a Pull Request to the Master Branch, and, again, add some kind of automation!
I have created a PR template in an MD file, which I use to create a draft of the PR content.
In this way, I can define which task (so, which article) is related to this PR, using the “Closes” formula (“Closes #111174” means that I’m closing the Issue with ID 111174).
Also, I can define when this PR will be merged on Master, using the /schedule tag.
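A hypothetical sketch of such a PR template (a Markdown file, for example .github/PULL_REQUEST_TEMPLATE.md; the date is a placeholder):

```markdown
## Related issue

Closes #

## Scheduling

/schedule 2030-01-01
```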
It works because I have integrated into my workflow a GitHub Action, merge-schedule, that reads the date from that field to understand when the PR must be merged.
So, every Tuesday at 8 AM, this action runs to check if there are any PRs that can be merged. If so, the PR will be merged into master, and the CI/CD pipeline builds the site and publishes the new content.
As usual, you can find the code of this action here
After the PR is merged, I also receive an email that notifies me of the action.
After publishing
Once a new article is online, I like to give it some visibility.
To do that, I heavily rely on Azure Logic Apps.
Azure Logic App for sharing on Twitter
My blog exposes an RSS feed. And, obviously, when a new article is created, a new item appears in the feed.
I use it to trigger an Azure Logic App to publish a message on Twitter:
The Logic App reads the newly published feed item and uses its metadata to create a message that will be shared on Twitter.
If you prefer, you can use a custom Azure Function! The choice is yours!
Cross-post reminder with Azure Logic Apps
Similarly, I use an Azure Logic App to send myself an email reminding me to cross-post my articles to other platforms.
I’ve added a delay so that my content lives longer, and I can repost it even after weeks or months.
Unluckily, I have to cross-post my articles manually, and this is quite time-consuming, especially when there are lots of images: in my MD files I use relative paths, so when porting my content to different platforms I have to find the absolute URL for my images.
And, my friends, this is everything that happens in the background of my blog!
What I’m still missing
I’ve put a lot of effort into my blog, and I’m incredibly proud of it!
But still, there are a few things I’d like to improve.
SEO Tools/analysis
I’ve never considered SEO. Or, better, Keywords.
I write for the sake of writing, and because I love it. And I don’t like to stuff my content with keywords just to rank better on search engines.
I take care of everything like alt texts, well-structured sections, and everything else. But I’m not able to follow the “rules” to find the best keywords.
Maybe I should use some SEO tools to find the best keywords for me. But I don’t want to bend to that way of creating content.
Also, I should spend more time thinking of the correct title and section titles.
Any idea?
Easy upgrade of Gatsby/Migrate to other headless CMSs
Lastly, I’d like to find another theme or platform and leave the one I’m currently using.
Not because I don’t like it. But because many dependencies are outdated, and the theme I’m using hasn’t been updated since 2019.
Wrapping up
That’s it: in this article, I’ve explained everything that I do when writing a blog post.
Feel free to take inspiration from my automation to improve your own workflow, and contact me if you have some nice improvements or ideas: I’m all ears!
In C#, ExpandoObjects are dynamically-populated objects without a predefined shape.
dynamic myObj = new ExpandoObject();
myObj.Name = "Davide";
myObj.Age = 30;
Name and Age are not part of the definition of ExpandoObject: they are two fields I added without declaring their type.
This is a dynamic object, so I can add new fields as I want. Say that I need to add my City: I can simply assign it, without creating any field on the ExpandoObject class.
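A sketch of that assignment (the city value here is just an example):

```csharp
using System.Dynamic;

dynamic myObj = new ExpandoObject();
myObj.Name = "Davide";
myObj.Age = 30;

// No City member exists anywhere: the field is created on the fly
myObj.City = "Turin";
```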
Now: how can I retrieve all the values? Probably the best way is by converting the ExpandoObject into a Dictionary.
Create a new Dictionary
Using an IDictionary makes it easy to access the keys of the object.
If you have an ExpandoObject that will not change, you can use it to create a new IDictionary:
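A sketch of that conversion:

```csharp
using System;
using System.Collections.Generic;
using System.Dynamic;

dynamic myObj = new ExpandoObject();
myObj.Name = "Davide";
myObj.Age = 30;

// ExpandoObject implements IDictionary<string, object>,
// so we can copy its current key/value pairs into a brand-new dictionary
IDictionary<string, object> dict =
    new Dictionary<string, object>((IDictionary<string, object>)myObj);

// The dictionary is a snapshot: later changes to the ExpandoObject are not reflected
myObj.City = "Turin";

Console.WriteLine(dict.Count); // 2
```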
Notice that we use the ExpandoObject to create a new IDictionary. This means that, after the Dictionary creation, if we add a new field to the ExpandoObject, that new field will not be present in the Dictionary.
Cast to IDictionary
If you want to use an IDictionary to get the ExpandoObject keys, and you need to stay in sync with the ExpandoObject status, you just have to cast that object to an IDictionary
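A sketch of the cast approach:

```csharp
using System;
using System.Collections.Generic;
using System.Dynamic;

dynamic myObj = new ExpandoObject();
myObj.Name = "Davide";
myObj.Age = 30;

// No copy involved: this is just another view over the same object
IDictionary<string, object> dict = (IDictionary<string, object>)myObj;

// Fields added later are visible through the dictionary too
myObj.City = "Turin";

Console.WriteLine(dict.Count); // 3
```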
In unit tests, sometimes you need to perform deep checks on the object passed to the mocked service. We will learn 3 ways to do that with Moq and C#
When writing unit tests, you can use Mocks to simulate the usage of class dependencies.
Even though some developers are harshly against the usage of mocks, they can be useful, especially when the mocked operation does not return any value, but still, you want to check that you’ve called a specific method with the correct values.
In this article, we will learn 3 ways to check the values passed to the mocks when using Moq in our C# Unit Tests.
To better explain those 3 ways, I created this method:
public void UpdateUser(User user, Preference preference)
{
var userDto = new UserDto
{
Id = user.id,
UserName = user.username,
LikesBeer = preference.likesBeer,
LikesCoke = preference.likesCoke,
LikesPizza = preference.likesPizza,
};
_userRepository.Update(userDto);
}
UpdateUser simply accepts two objects, user and preference, combines them into a single UserDto object, and then calls the Update method of _userRepository, which is an interface injected in the class constructor.
As you can see, we are not interested in the return value from _userRepository.Update. Rather, we are interested in checking that we are calling it with the right values.
We can do it in 3 ways.
Verify each property with It.Is
The simplest, most common way is by using It.Is<T> within the Verify method.
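Reconstructing it from the error message shown below, the test might look like this (a sketch: userUpdater is the system under test and userRepo the mocked IUserRepository, as in the later examples):

```csharp
[Test]
public void VerifyEachProperty()
{
    // Arrange
    var user = new User(1, "Davide");
    var preferences = new Preference(true, true, false);

    // Act
    userUpdater.UpdateUser(user, preferences);

    // Assert: every field is checked inline, in a single expression
    userRepo.Verify(_ => _.Update(It.Is<UserDto>(u =>
        u.Id == 1
        && u.UserName == "Davide"
        && u.LikesPizza == true
        && u.LikesBeer == true
        && u.LikesCoke == false)));
}
```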
This approach works well when you have to perform checks on only a few fields. But the more fields you add, the longer and messier that code becomes.
Also, a problem with this approach is that, if it fails, it becomes hard to understand the cause of the failure, because there is no indication of the specific field that did not match the expectations.
Here’s an example of an error message:
Expected invocation on the mock at least once, but was never performed: _ => _.Update(It.Is<UserDto>(u => (((u.Id == 1 && u.UserName == "Davidde") && u.LikesPizza == True) && u.LikesBeer == True) && u.LikesCoke == False))
Performed invocations:
Mock<IUserRepository:1> (_):
IUserRepository.Update(UserDto { UserName = Davide, Id = 1, LikesPizza = True, LikesCoke = False, LikesBeer = True })
Can you spot the error? And what if you were checking 15 fields instead of 5?
Verify with external function
Another approach is by externalizing the function.
[Test]
public void WithExternalFunction()
{
    // Arrange
    var user = new User(1, "Davide");
    var preferences = new Preference(true, true, false);

    UserDto expected = new UserDto
    {
        Id = 1,
        UserName = "Davide",
        LikesBeer = true,
        LikesCoke = false,
        LikesPizza = true,
    };

    // Act
    userUpdater.UpdateUser(user, preferences);

    // Assert
    userRepo.Verify(_ => _.Update(It.Is<UserDto>(u => AreEqual(u, expected))));
}

private bool AreEqual(UserDto u, UserDto expected)
{
    Assert.AreEqual(expected.UserName, u.UserName);
    Assert.AreEqual(expected.Id, u.Id);
    Assert.AreEqual(expected.LikesBeer, u.LikesBeer);
    Assert.AreEqual(expected.LikesCoke, u.LikesCoke);
    Assert.AreEqual(expected.LikesPizza, u.LikesPizza);
    return true;
}
Here, we are passing an external function to the It.Is<T> method.
This approach allows us to define more explicit and comprehensive checks.
The good parts of it are that you will gain more control over the assertions, and you will also have better error messages in case a test fails:
Expected string length 6 but was 7. Strings differ at index 5.
Expected: "Davide"
But was: "Davidde"
The bad part is that you will stuff your test class with lots of different methods, and the class can easily become hard to maintain. Unluckily, we cannot use local functions.
On the other hand, having external functions allows us to combine them when we need to do some tests that can be reused across test cases.
Intercepting the function parameters with Callback
Lastly, we can use a hidden gem of Moq: Callbacks.
With Callbacks, you can store in a local variable the reference to the item that was called by the method.
[Test]
public void CompareWithCallback()
{
    // Arrange
    var user = new User(1, "Davide");
    var preferences = new Preference(true, true, false);

    UserDto actual = null;
    userRepo.Setup(_ => _.Update(It.IsAny<UserDto>()))
        .Callback(new InvocationAction(i => actual = (UserDto)i.Arguments[0]));

    UserDto expected = new UserDto
    {
        Id = 1,
        UserName = "Davide",
        LikesBeer = true,
        LikesCoke = false,
        LikesPizza = true,
    };

    // Act
    userUpdater.UpdateUser(user, preferences);

    // Assert
    Assert.IsTrue(AreEqual(expected, actual));
}
In this way, you can use it locally and run assertions directly on that object without relying on the Verify method.
Or, if you use records, you can use the auto-equality checks to simplify the Verify method as I did in the previous example.
Wrapping up
In this article, we’ve explored 3 ways to perform checks on the objects passed to dependencies mocked with Moq.
Each way has its pros and cons, and it’s up to you to choose the approach that fits you the best.
I personally prefer the second and third approaches, as they allow me to perform better checks on the passed values.
You surely take care of your code to make it easy to read and understand, right? RIGHT??
Well done! 👏
But most developers tend to write good production code (the one actually executed by your system) and very poor test code.
Production code is meant to be run, while tests are also meant to document your code; therefore, there must be no doubts about the meaning of a test and the reason behind it.
This also means that all the names must be explicit enough to help readers understand how and why a test should pass.
This is a valid C# test:
[Test]
public void TestHtmlParser()
{
HtmlDocument doc = new HtmlDocument();
doc.LoadHtml("<p>Hello</p>");
var node = doc.DocumentNode.ChildNodes[0];
var parser = new HtmlParser();
Assert.AreEqual("Hello", parser.ParseContent(node));
}
What is the meaning of this test? We should be able to understand it just by reading the method name.
Also, notice that here we are creating the HtmlNode object; imagine if this node creation is present in every test method: you will see the same lines of code over and over again.
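A refactored version might look like this (a sketch; the test name and the CreateNode helper are hypothetical, built on the HtmlAgilityPack types used above):

```csharp
[Test]
public void ParseContent_ShouldReturnInnerText_GivenParagraphNode()
{
    // Arrange
    var node = CreateNode("<p>Hello</p>");
    var parser = new HtmlParser();

    // Act
    var result = parser.ParseContent(node);

    // Assert
    Assert.AreEqual("Hello", result);
}

// Node creation is extracted once, instead of being repeated in every test
private static HtmlNode CreateNode(string html)
{
    var doc = new HtmlDocument();
    doc.LoadHtml(html);
    return doc.DocumentNode.ChildNodes[0];
}
```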
A well-written test, instead, has these properties:

- you can understand its meaning by reading the test name
- the code is concise, and some creation parts are refactored out
- the 3 parts of the test are well separated: Arrange, Act, Assert (we’ve already talked about it here)
Wrapping up
Tests are still part of your project, even though they are not used directly by your customers.
Never skip tests, and never write them in a rush. After all, when you encounter a bug, the first thing you should do is write a test to reproduce the bug, and then validate the fix using that same test.
C# recently introduced Records, a new way of defining types. In this article, we will see 8 things you probably didn’t know about C# Records
Records are a new data type introduced in 2021 with C# 9 and .NET 5.
public record Person(string Name, int Id);
Records are the third way of defining data types in C#; the other two are class and struct.
Since they’re quite a new idea in .NET, we should spend some time experimenting with them and trying to understand their possibilities and functionalities.
In this article, we will see 8 properties of Records that you should know before using them, to get the best out of this new data type.
1- Records are immutable
By default, Records are immutable. This means that, once you’ve created one instance, you cannot modify any of its fields:
var me = new Person("Davide", 1);
me.Name = "AnotherMe"; // won't compile!
This operation is not legit.
Even the compiler complains:
Init-only property or indexer ‘Person.Name’ can only be assigned in an object initializer, or on ’this’ or ‘base’ in an instance constructor or an ‘init’ accessor.
2- Records implement equality
The other main property of Records is that they implement equality out-of-the-box.
[Test]
public void EquivalentInstances_AreEqual()
{
var me = new Person("Davide", 1);
var anotherMe = new Person("Davide", 1);
Assert.That(anotherMe, Is.EqualTo(me));
Assert.That(me, Is.Not.SameAs(anotherMe));
}
As you can see, I’ve created two instances of Person with the same fields. They are considered equal, but they are not the same instance.
3- Records can be cloned or updated using ‘with’
Ok, so if we need to update the field of a Record, what can we do?
We can use the with keyword:
[Test]
public void WithProperty_CreatesNewInstance()
{
var me = new Person("Davide", 1);
var anotherMe = me with { Id = 2 };
Assert.That(anotherMe, Is.Not.EqualTo(me));
Assert.That(me, Is.Not.SameAs(anotherMe));
}
Take a look at me with { Id = 2 }: that operation creates a clone of me and updates the Id field.
Of course, you can use with to create a new instance identical to the original one.
[Test]
public void With_CreatesNewInstance()
{
var me = new Person("Davide", 1);
var anotherMe = me with { };
Assert.That(anotherMe, Is.EqualTo(me));
Assert.That(me, Is.Not.SameAs(anotherMe));
}
4- Records can be structs and classes
Basically, Records act as Classes.
public record Person(string Name, int Id);
Sometimes that’s not what you want. Since C# 10 you can declare Records as Structs:
public record struct Point(int X, int Y);
Clearly, everything we’ve seen before is still valid.
[Test]
public void EquivalentStructsInstances_AreEqual()
{
    var a = new Point(2, 1);
    var b = new Point(2, 1);

    Assert.That(b, Is.EqualTo(a));
    //Assert.That(a, Is.Not.SameAs(b)); // does not compile!
}
Well, almost everything: you cannot use Is.SameAs() because, since structs are value types, two values can never be the same reference. You’ll get notified about it by the compiler, with an error that says:
The SameAs constraint always fails on value types as the actual and the expected value cannot be the same reference
5- Records are actually not immutable
We’ve seen that you cannot update existing Records. Well, that’s not totally correct.
That assertion is true in the case of “simple” Records like Person:
public record Person(string Name, int Id);
But things change when we use another way of defining Records:
We can explicitly declare the properties of the Record to make it look more like plain classes.
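A sketch of such a definition, matching the Pair type used in the next examples (note the ordinary set accessors instead of init):

```csharp
public record Pair
{
    public Pair(string key, string value)
    {
        Key = key;
        Value = value;
    }

    // Plain settable properties: the record keeps value-based equality,
    // but its fields can be reassigned after construction
    public string Key { get; set; }
    public string Value { get; set; }
}
```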
Using this approach, we still can use the auto-equality functionality of Records
[Test]
public void ComplexRecordsAreEquatable()
{
var a = new Pair("Capital", "Roma");
var b = new Pair("Capital", "Roma");
Assert.That(b, Is.EqualTo(a));
}
But we can update a single field without creating a brand new instance:
[Test]
public void ComplexRecordsAreNotImmutable()
{
var b = new Pair("Capital", "Roma");
b.Value = "Torino";
Assert.That(b.Value, Is.EqualTo("Torino"));
}
Also, immutability applies only to simple types, even with the basic Record definition.
The ComplexPair type is a Record that accepts in the definition a list of strings.
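A sketch of that definition, consistent with how ComplexPair is used below:

```csharp
using System.Collections.Generic;

// The Key and Value properties are init-only, but the Metadata list
// itself remains a mutable reference type
public record ComplexPair(string Key, string Value, List<string> Metadata);
```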
That list of strings is not immutable: you can add and remove items as you wish:
[Test]
public void ComplexRecordsAreNotImmutable2()
{
var b = new ComplexPair("Capital", "Roma", new List<string> { "City" });
b.Metadata.Add("Another Value");
Assert.That(b.Metadata.Count, Is.EqualTo(2));
}
In the example above, you can see that I added a new item to the Metadata list without creating a new object.
6- Records can have subtypes
A neat feature is that we can create a hierarchy of Records in a very simple manner.
Do you remember the Person definition?
public record Person(string Name, int Id);
Well, you can define a subtype just as you would do with plain classes:
public record Employee(string Name, int Id, string Role) : Person(Name, Id);
Of course, all the usual rules of inheritance and casting are still valid.
[Test]
public void Records_CanHaveSubtypes()
{
Person meEmp = new Employee("Davide", 1, "Chief");
Assert.That(meEmp, Is.AssignableTo<Employee>());
Assert.That(meEmp, Is.AssignableTo<Person>());
}
Finally, if you’re interested in trivia about C# stuff we use but we rarely explore, here’s an article I wrote a while ago about GUIDs in C# – you’ll find some neat stuff in there!
Logs are important. Properly structured logs can be the key to resolving some critical issues. With Serilog’s Scopes, you can enrich your logs with info about the context where they happened.
Even though it’s not one of the first things we usually set up when creating a new application, logging is a real game-changer in the long run.
When an error occurs, proper logging gives us more info about the context where it happened, so that we can easily identify the root cause.
In this article, we will use Scopes, one of the functionalities of Serilog, to create better logs for our .NET 6 application. In particular, we’re going to create a .NET 6 API application in the form of Minimal APIs.
We will also use Seq, just to show you the final result.
To summarize, Serilog is an open-source .NET library for logging. One of the best features of Serilog is that messages are in the form of a template (called Structured Logs), and you can enrich the logs with values calculated automatically, such as the method name or exception details.
To add Serilog to your application, you simply have to run dotnet add package Serilog.AspNetCore.
Since we’re using Minimal APIs, we don’t have the StartUp file anymore; instead, we will need to add it to the Program.cs file:
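A minimal sketch of that setup (sinks and enrichers trimmed down to the essentials):

```csharp
using Serilog;

var builder = WebApplication.CreateBuilder(args);

// Route all ILogger<T> messages through Serilog
builder.Host.UseSerilog((ctx, lc) => lc
    .Enrich.FromLogContext()
    .WriteTo.Console());

builder.Services.AddControllers();

var app = builder.Build();
app.MapControllers();
app.Run();
```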
As you can see, we’re injecting an ILogger<ItemsRepository>: specifying the related class automatically adds some more context to the logs that we will generate.
Installing Seq and adding it as a Sink
Seq is a logging platform that is a perfect fit for Serilog logs. If you don’t have it already installed, head to their download page and install it locally (you can even install it as a Docker container 🤩).
In the installation wizard, you can select the HTTP port that will expose its UI. Once everything is in place, you can open that page on your localhost and see a page like this:
On this page, we will see all the logs we write.
But wait! ⚠ We still have to add Seq as a sink for Serilog.
A sink is nothing but a destination for the logs. When using .NET APIs we can define our sinks both on the appsettings.json file and on the Program.cs file. We will use the second approach.
First of all, you will need to install a NuGet package to add Seq as a sink: dotnet add package Serilog.Sinks.Seq.
Then, you have to update the Serilog definition we’ve seen before by adding a .WriteTo.Seq instruction:
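The updated configuration might look like this (5341 is Seq's default ingestion port; adjust it to match your installation):

```csharp
builder.Host.UseSerilog((ctx, lc) => lc
    .Enrich.FromLogContext()
    .WriteTo.Console()
    .WriteTo.Seq("http://localhost:5341"));
```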
Notice that we’ve also specified the port that exposes our Seq instance.
Now, every time we log something, we will see our logs both on the Console and on Seq.
How to add scopes
The time has come: we can finally learn how to add Scopes using Serilog!
Setting up the example
For this example, I’ve created a simple controller, ItemsController, which exposes two endpoints: Get and Add. With these two endpoints, we are able to add and retrieve items stored in an in-memory collection.
This class has 2 main dependencies: IItemsRepository and IUsersItemsRepository. Each of these interfaces has its own concrete class, each with a private logger injected in the constructor:
public ItemsRepository(ILogger<ItemsRepository> logger)
{
_logger = logger;
}
and, similarly
public UsersItemRepository(ILogger<UsersItemRepository> logger)
{
_logger = logger;
}
How do those classes use their own _logger instances?
For example, the UsersItemRepository class exposes an AddItem method that adds a specific item to the list of items already possessed by a specific user.
public void AddItem(string username, Item item)
{
    if (!_usersItems.ContainsKey(username))
    {
        _usersItems.Add(username, new List<Item>());
        _logger.LogInformation("User was missing from the list. Just added");
    }
    _usersItems[username].Add(item);
    _logger.LogInformation("Added item to the user's catalogue");
}
We are logging some messages, such as “User was missing from the list. Just added”.
Something similar happens in the ItemsRepository class, whose GetItem method returns the requested item if it exists, and null otherwise.
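A minimal sketch of that method (the in-memory `_items` collection is an assumption):

```csharp
public Item? GetItem(int itemId)
{
    // _items is the repository's in-memory collection;
    // FirstOrDefault returns null when no item matches the ID
    return _items.FirstOrDefault(item => item.Id == itemId);
}
```

The controller's Add endpoint then ties the two repositories together: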
[HttpPost(Name = "AddItems")]
public IActionResult Add(string userName, int itemId)
{
var item = _itemsRepository.GetItem(itemId);
if (item == null)
{
_logger.LogWarning("Item does not exist");
return NotFound();
}
_usersItemsRepository.AddItem(userName, item);
return Ok(item);
}
Ok then, we’re ready to run the application and see the result.
When I call that endpoint by passing “davide” as userName and “1” as itemId, we can see these logs:
We can see the 3 log messages, but they are unrelated to one another. In fact, if we expand the logs to see the actual values we've logged, we can see that only the "Retrieving item 1" log has some information about the item ID we want to associate with the user.
Using BeginScope with Serilog
Finally, it’s time to define the Scope.
It’s as easy as adding a simple using statement; see how I added the scope to the Add method in the Controller:
[HttpPost(Name = "AddItems")]
public IActionResult Add(string userName, int itemId)
{
using (_logger.BeginScope("Adding item {ItemId} for user {UserName}", itemId, userName))
{
var item = _itemsRepository.GetItem(itemId);
if (item == null)
{
_logger.LogWarning("Item does not exist");
return NotFound();
}
_usersItemsRepository.AddItem(userName, item);
return Ok(item);
}
}
Here’s the key!
using (_logger.BeginScope("Adding item {ItemId} for user {UserName}", itemId, userName))
With this single instruction, we are actually performing 2 operations:
we are adding a Scope to each message – “Adding item 1 for user davide”
we are adding ItemId and UserName to each log entry that falls in this block, in every method in the method chain.
Let’s run the application again, and we will see this result:
So, now you can use these new properties to get some info about the context in which this log happened, and you can use the ItemId and UserName fields to search for other related logs.
You can also nest scopes, of course.
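For instance, in a hypothetical sketch like this, the log entry written in the inner block carries both properties:

```csharp
using (_logger.BeginScope("Processing user {UserName}", userName))
using (_logger.BeginScope("Processing item {ItemId}", itemId))
{
    // This entry carries both UserName and ItemId
    _logger.LogInformation("Working on it");
}
```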
Why scopes instead of Correlation ID?
You might be thinking
Why can’t I just use correlation IDs?
Well, the answer is pretty simple: Correlation IDs are meant to correlate different logs within a specific request and, often, across services. A Correlation ID typically represents a specific call to your API and acts as a Request ID.
For sure, that can be useful. But, sometimes, not enough.
Using scopes you can also “correlate” distinct HTTP requests that have something in common.
If I call the AddItem endpoint twice, I can filter both by UserName and by ItemId and see all the related logs across distinct HTTP calls.
Let’s see a real example: I have called the endpoint with different values
id=1, username=“davide”
id=1, username=“luigi”
id=2, username=“luigi”
Since the scope references both properties, we can filter by UserName and discover that Luigi has added both Item 1 and Item 2.
At the same time, we can filter by ItemId and discover that the item with id = 2 has been added only once.
Ok, then, in the end, Scopes or Correlation IDs? The answer is simple:
Then, you might want to deep dive into Serilog's BeginScope. Here's a neat article by Nicholas Blumhardt. Also, have a look at the comments: you'll find interesting points to consider.
Duplication is not only about lines of code, but also about data usage and meaning.
Reducing it will help us minimize the impact of every change.
Take this class as an example:
class BookShelf
{
    private Book[] myBooks = new Book[]
    {
        new Book(1, "C# in depth"),
        new Book(2, "I promessi paperi")
    };

    public int Count() => myBooks.Length;
    public bool IsEmpty() => myBooks.Length == 0;
    public bool HasElements() => myBooks.Length > 0;
}
Here, Count, IsEmpty, and HasElements all rely on the same detail to check the length of the collection: calling myBooks.Length.
What happens if you have to change the myBooks collection and replace the array of Books with a collection that does not expose the Length property? You would have to replace that logic everywhere!
So, a better approach is to “centralize” the way to count the items in the collection in this way:
class BookShelf
{
    private Book[] myBooks = new Book[]
    {
        new Book(1, "C# in depth"),
        new Book(2, "I promessi paperi")
    };

    public int Count() => myBooks.Length;
    public bool IsEmpty() => Count() == 0;
    public bool HasElements() => Count() > 0;
}
If you ever need to replace the myBooks data type, you will only have to update the Count method – everything else will stay the same.
Also, HasElements and IsEmpty are logical duplicates. If both aren't necessary, you should remove one. Remove the one most often used in its negative form: if you find lots of if (!HasElements()), you should consider replacing it with if (IsEmpty()) – always prefer the positive form!
Yes, I know, this is an extreme example: it’s too simple. But think of a more complex class or data flow in which you reuse the same logical flow, even if you’re not really using the exact same lines of code.
By duplicating the logic, you will need to write more tests that do the same thing. Also, if you find a flaw in your logic, you may fix it in some places and forget to fix it in other methods.
Centralizing it will allow you to build safer code that is easier to test and update.
A simple way to avoid “logical” duplication? Abstract classes!
Well, there are many others… that I expect you to tell me in the comments section!
There's one LINQ method that I always struggle to understand: SelectMany.
It's actually a pretty simple method, but somehow it doesn't stick in my head.
In simple words, SelectMany projects each item of a collection into another collection of items, and then flattens all the results into a single sequence.
Let’s see an example using the dear old for loop, and then we will replace it with SelectMany.
For this example, I’ve created a simple record type that represents an office. Each office has one or more phone numbers.
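Here's a sketch of both approaches (the Office record's shape and the sample data are assumptions):

```csharp
var offices = new List<Office>
{
    new("Turin", new List<string> { "011-1111", "011-2222" }),
    new("Rome", new List<string> { "06-3333" })
};

// With the dear old loop: two nested iterations to collect every number
var allPhones = new List<string>();
foreach (var office in offices)
{
    foreach (var phone in office.PhoneNumbers)
    {
        allPhones.Add(phone);
    }
}

// With SelectMany, the nested loop disappears:
var allPhonesLinq = offices.SelectMany(office => office.PhoneNumbers).ToList();
// both produce: "011-1111", "011-2222", "06-3333"

// Hypothetical record: each office has a name and a list of phone numbers
// (declared after the top-level statements)
public record Office(string Name, List<string> PhoneNumbers);
```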
LINQPad is one of my best friends: I use it daily, and it helps me A LOT when I need to run some throwaway code.
There are many other tools out there, but I think that LINQPad (well, the full version!) is one of the best tools on the market.
But still, many C# developers use only a few of its functionalities! In this article, I will show you the top 5 functionalities you should know.
Advanced Dump()
As many of you already know, to print stuff to the console you don't have to call Console.WriteLine(something): you can use something.Dump();
void Main()
{
var user = new User(1, "Davide", "DavideB");
user.Dump();
}
You can simplify it by avoiding calling the Dump operation in a separate step: Dump can print the content and return it at the same time:
var user = new User(1, "Davide", "DavideB").Dump();
For sure, this simple trick makes your code easier to read!
Ok, what if you have too many Dump calls and you don’t know which operation prints which log? Lucky for us, the Dump method accepts a string as a Title: that text will be displayed in the output panel.
var user = new User(1, "Davide", "DavideB").Dump("My User content");
You can now see the “My User content” header right above the log of the user:
Dump containers
We can do a step further and introduce Dump containers.
Dump Containers are some sort of sink for your logs (we’ve already talked about sinks, do you remember?). Once you’ve instantiated a DumpContainer object, you can perform some operations such as AppendContent to append some content at the end of the logs, ClearContent to clear the content (obviously!), and Dump to display the content of the Container in the Results panel.
DumpContainer dc = new DumpContainer();
dc.Content = "Hey!";
dc.AppendContent("There");
dc.Dump();
Note: you don’t need to place the Dump() instruction at the end of the script: you can put it at the beginning and you’ll see the content as soon as it gets added. Otherwise, you will build the internal list of content and display it only at the end.
So, this is perfectly valid:
DumpContainer dc = new DumpContainer();
dc.Dump();
dc.Content = "Hey!";
dc.AppendContent("There");
You can even explicitly set the content of the Container: setting it will replace everything else.
Here you can see what happens when we override the content:
Why should we even care? 🤔
My dear friend, it’s easy! Because we can create more Containers to log different things!
Take this example: we want to loop over a list of items and use one Container to display the item itself, and another Container to list what happens when we perform some operations on each item. Yeeees, I know, it’s hard to understand in this way: let me show you an example!
DumpContainer dc1 = new DumpContainer();
DumpContainer dc2 = new DumpContainer();
dc1.Dump();
dc2.Dump();
var users = new List<User> {
new User(1, "Davide", "DavideB"),
new User(2, "Dav", "Davi Alt"),
new User(3, "Bellone", "Bellone 3"),
};
foreach (var element in users)
{
dc1.AppendContent(element);
dc2.AppendContent(element.name.ToUpper());
}
Here we’re using two different containers, each of them lives its own life.
In this example I used AppendContent, but of course, you can replace the full content of a Container to analyze one item at a time.
I can hear you: there’s another question in your mind:
How can we differentiate those containers?
You can use the Style property of the DumpContainer class to style the output, using CSS-like properties:
DumpContainer dc2 = new DumpContainer();
dc2.Style = "color:red; font-weight: bold";
Now all the content stored in the dc2 container will be printed in red:
Great stuff 🤩
Read text from input
Incredibly useful, but often overlooked, is the ability to provide inputs to our scripts.
To do that, you can rely on the Util.ReadLine method already included in LINQPad:
string myContent = Util.ReadLine();
When running the application, you will see a black box at the bottom of the window that allows you to write (or paste) some text. That text will then be assigned to the myContent variable.
There’s a nice overload that allows you to specify a sort of title to the text box, to let you know which is the current step:
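For example (the prompt text here is just a placeholder):

```csharp
// The string passed to ReadLine is shown next to the input box in LINQPad
string myContent = Util.ReadLine("Insert the user name");
```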
Paste as escaped string
This is one of my favorite functionalities: many times I have to assign to a string variable some text that contains quotes, copied from somewhere else; I used to lose time escaping those values manually (well, using other tools, which are still slower than this one).
Assigning it manually to a string becomes a mess. Lucky for us, we can copy it, get back on LINQPad, right-click, choose “Paste as escaped string” (or, if you prefer, use Alt+Shift+V) and have it already escaped and ready to be used:
We've seen 5 amazing tricks to get the best out of LINQPad. In my opinion, every C# developer who uses this tool should know these tricks: they can really boost your productivity.
Did you already know all of them? Which are your favorites? Drop a message in the comments section or on Twitter 📧