When you need to generate a sequence of numbers in ascending order, you can just use a while loop with an enumerator, or you can use Enumerable.Range.
This method, which you can find in the System.Linq namespace, allows you to generate a sequence of numbers by passing two parameters: the start number and the count of numbers to generate.
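For example, this call generates the numbers from 5 to 12 (8 numbers, starting from 5):

IEnumerable<int> numbers = Enumerable.Range(start: 5, count: 8);
// 5, 6, 7, 8, 9, 10, 11, 12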
But it will not work if the count parameter is negative: in fact, it will throw an ArgumentOutOfRangeException:
Enumerable.Range(start: 1, count: -23)
// Throws ArgumentOutOfRangeException
// with message "Specified argument was out of the range of valid values" (Parameter 'count')
⚠ Beware of overflows: it’s not a circular array, so if start + count - 1 exceeds int.MaxValue while building the collection you will get another ArgumentOutOfRangeException.
Notice that this pattern is not very efficient: you first have to build a collection with N integers to then generate a collection of N strings. If you care about performance, go with a simple while loop – if you need a quick and dirty solution, this other approach works just fine.
Further readings
There are lots of ways to achieve a similar result: another interesting one is by using the yield return statement:
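A minimal sketch of that approach (the method name is just illustrative):

static IEnumerable<int> GenerateRange(int start, int count)
{
    // lazily produce one number at a time, without allocating the whole collection upfront
    for (int i = 0; i < count; i++)
        yield return start + i;
}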
In this C# tip, we learned how to generate collections of numbers using LINQ.
This is an incredibly useful LINQ method, but you have to remember that the second parameter does not indicate the last value of the collection, rather it’s the length of the collection itself.
I hope you enjoyed this article! Let’s keep in touch on Twitter or on LinkedIn, if you want! 🤜🤛
In C#, nameof can be quite useful. But it has some drawbacks, if used the wrong way.
As per Microsoft’s definition,
A nameof expression produces the name of a variable, type, or member as the string constant.
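For instance, take a minimal snippet like this (the variable name and value are inferred from the explanation that follows):

var items = "hello";
Console.WriteLine(nameof(items));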
that will print “items”, and not “hello”: this is because we are printing the name of the variable, items, and not its runtime value.
A real example I saw in my career
In some of the projects I’ve worked on during these years, I saw an odd approach that I highly recommend NOT to use: populate constants with the name of the constant itself:
const string User_Table = nameof(User_Table);
and then use the constant name to access stuff on external, independent systems, such as API endpoints or Databases:
const string User_Table = nameof(User_Table);

var users = db.GetAllFromTable(User_Table);
The reasons behind this, in my teammates’ opinion, are that:
It’s easier to write
It’s more performant: we’re using constants that are filled at compile time, not at runtime
You can just rename the constant if you need to access a new database table.
I do not agree with them: especially the third point is pretty problematic.
Why this approach should not be used
We are binding the data access to the name of a constant, and not to its value.
We could end up in big trouble: from one day to the next, the system might no longer be able to reach the User table, because the name it relies on no longer exists.
How is it possible? It’s a constant, it can’t change! No: it’s a constant whose value changes if the constant’s name changes.
It can change for several reasons:
A developer, by mistake, renames the constant. For example, from User_Table to Users_Table.
An automatic tool (like a Linter) with wrong configurations updates the constants’ names: from User_Table to USER_TABLE.
New team styleguides are followed blindly: if the new rule is that “constants must not contain underscores” and you apply it everywhere, you’ll end up in trouble.
To me, those are valid reasons not to use nameof to give a value to a constant.
How to overcome it
If this approach is present in your codebase and it’s too time-consuming to update it everywhere, not everything is lost.
You must absolutely do just one thing to prevent all the issues I listed above: add tests, and test on the actual value.
If you’re using Moq, for instance, you should test the database access we saw before as:
// initialize and run the method
[...]

// test for the Table name
_mockDb.Verify(db => db.GetAllFromTable("User_Table"));
Notice that here you must test against the actual name of the table: if you write something like

_mockDb.Verify(db => db.GetAllFromTable(DbAccessClass.User_Table));
// say that DbAccessClass is the name of the class that uses the data access shown above

the test will keep passing even if someone renames the constant (and, with it, its value), so it will not catch any of the issues listed above.
Using Conventional Commits you can define a set of rules useful for writing meaningful commit messages. Using NPM. Yes, in a dotNET application!
Setting team conventions is a crucial step to prepare the project to live long and prosper 🖖
A good way to bring some clarity is by enforcing rules on GIT commit messages: you can require devs to specify the reason behind each code change, so that you can understand the history and the reason for each commit. Also, if you have well-crafted commit messages, Pull Requests become easier to understand, leading to better code.
Conventional Commits help you set such rules, and help you level up your commit history. In this article, we will learn how to add Conventional Commits in a .NET application.
Conventional Commits
Conventional Commits are a set of rules that help you write commit messages using a format that has multiple purposes:
they help developers understand the history of a git branch;
they help PR reviewers focus on the Pull Request by understanding the changes proposed by the developer;
using automated tools, they help versioning the application – this is useful when using Semantic Versioning;
they allow you to create automated Changelog files.
So, what does an average Conventional Commit look like?
There’s not just one way to specify such formats.
For example, you can specify that you’ve added a new feature (feat) to your APIs and describe it shortly:
feat(api): send an email to the customer
Or you can explain that you’ve fixed a bug (using fix) and add a full description of the scope of the commit.
fix: prevent racing condition
Introduce a request id and a reference to latest request. Dismiss
incoming responses other than from latest request.
There are several types of commits that you can support, such as:
feat, used when you add a new feature to the application;
fix, when you fix a bug;
docs, used to add or improve documentation to the project;
refactor, used – well – after some refactoring;
test, when adding tests or fixing broken ones
All of this prevents developers from writing commit messages such as “something”, “fixed bug”, or “some stuff”.
So, now, it’s time to include Conventional Commits in our .NET applications.
What is our goal?
For the sake of this article, I’m going to add Conventional Commits in a .NET 7 API project. The same approach works for all the other types of .NET projects: as long as you have a Solution to work with, I’ve got you covered.
Well, actually, the following approach can be used by every project, not only those based on .NET: the reason I wrote this article is that many dotnet developers are not confident in using and configuring NPM packages, so my personal goal with this article is to give you the basics of such tools and configurations.
For the sake of this article, I’m going to explain how to add Conventional Commits with a custom format.
Say that you want to associate each commit to a Jira task. As you may know, Jira tasks have an ID composed of a project prefix and a numeric Id. So, for a project named FOO, you can have a task with Id FOO-123.
The goal of this article is, then, to force developers to create Commit messages such as
feat/FOO-123: commit short description
or, if you want to add a full description of the commit,
feat/FOO-123: commit short description
Here we can have the full description of the task.
And it can also be on multiple lines.
We are going to work at Solution level; you don’t even need an IDE: just Notepad and a Terminal are fine. Before continuing, open your solution folder and a Console pointing to the same folder.
Install NPM in your folder
Yes, even if the main application is built with .NET, we are gonna need some NPM packages to set up our Conventional Commits.
First things first: head to the Command Line and run
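npm init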
After specifying some configurations (Package name? Licence? Author?), you will have a brand new package.json file.
Now we can move on and add a GIT Hook.
Husky: integrate GIT Hooks to improve commit messages
To use conventional commits we have to “intercept” our GIT actions: we will need to run a specific tool right after having written a commit message; we have to validate it and, in case it does not follow the rules we’ve set, abort the operations.
We will use Husky 🔗: it’s a facility package that allows us to do stuff with our commit messages and, in general, integrate work with Git Hooks.
Head to the terminal, and install Husky by running
npm install husky --save-dev
This command will add a dependency to Husky, as you can see from the new item listed in the package.json file:
"devDependencies": {
"husky": "^8.0.3"}
Finally, to enable Git Hooks, we have to run
npm pkg set scripts.prepare="husky install"
and notice the new section in the package.json.
"scripts": {
"prepare": "husky install"},
Even with just these simple steps, we can see a first result: if you run git commit you will see a text editor open. Here you can write your commit message.
Save and close the file. The commit message has been applied, as you can see by running git log --oneline.
CommitLint: a package to validate Commit messages
We need to install and configure CommitLint, the NPM package that does the dirty job.
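The snippet below is a sketch of the usual setup (assuming the standard @commitlint packages, since the original commands are not shown here): install the CLI and the conventional config, then extend the latter in a commitlint.config.js file:

npm install --save-dev @commitlint/cli @commitlint/config-conventional

// commitlint.config.js
module.exports = {
  extends: ['@commitlint/config-conventional'],
};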
This snippet tells Commitlint to use the default conventions, such as feat(api): send an email.
To test the default rules without issuing any real commit, we have to install the previous packages globally, so that they can be accessed outside the scope of the git hooks:
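Assuming the same package names as above, the global installation would be:

npm install -g @commitlint/cli @commitlint/config-conventional

Then you can pipe a sample message into commitlint: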
echo 'foo: a message with wrong format' | commitlint
and see the error messages
At this point, we still don’t have CommitLint ready to validate our commit messages. In fact, if you try to commit your changes with an invalid message, you will see that the message passes the checks (because there are no checks!), and your changes get committed.
We need to do some more steps.
First of all, we have to create a folder named .husky that will be used by Husky to understand which commands are supported.
Notice: you have to keep the dot at the beginning of the folder name: it’s .husky, not husky.
Then we need to add a new file within that folder to tell Husky that it needs to run CommitLint.
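With Husky 8 (the version added above), that file is named commit-msg, lives inside the .husky folder, and its content typically looks like this – treat it as a sketch, your version may differ slightly:

#!/usr/bin/env sh
. "$(dirname -- "$0")/_/husky.sh"

npx --no -- commitlint --edit "$1"

This hook runs CommitLint against the message of the commit being created, and aborts the commit if the validation fails.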
Each CommitLint rule is configured as an array of up to three values. The first value is a number that expresses the severity of the rule:
0: the rule is disabled;
1: show a warning;
2: it’s an error.
The second value defines if the rule must be applied (using always), or if it must be reversed (using never).
The third value provides generic arguments for the related rule. For example, "header-max-length": [2, "always", 50], tells that the header must always have a length with <= 50 characters.
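Putting the pieces together, a hedged sketch of such a rules section (the values are illustrative, not the article’s exact configuration) could be:

// commitlint.config.js – each rule follows the [severity, applicability, value] structure
module.exports = {
  extends: ['@commitlint/config-conventional'],
  rules: {
    'header-max-length': [2, 'always', 50],                      // error if the header exceeds 50 characters
    'body-leading-blank': [1, 'always'],                         // warn if there is no blank line before the body
    'type-enum': [2, 'always', ['feat', 'fix', 'hot', 'chore']], // allowed commit types
  },
};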
But not before helping devs with a simple trick! As you remember, when you run git commit without specifying the message, an editor appears with some hints about the structure of the commit message.
You can set your own text with hints about the structure of the messages.
You just need to create a file named .gitmessage and put some text in it, such as:
# <type>/FOO-<jira-ticket-id>: <title>
# YOU CAN WRITE WHATEVER YOU WANT HERE
# allowed types: feat | fix | hot | chore
# Example:
#
# feat/FOO-01: first commit
#
# No more than 50 chars. #### 50 chars is here: #
# Remember blank line between title and body.
# Body: Explain *what* and *why* (not *how*)
# Wrap at 72 chars. ################################## which is here: #
#
Now, we have to tell Git to use that file as a template:
git config commit.template ./.gitmessage
and.. TA-DAH! Here’s your message template!
Putting it all together
Finally, we have everything in place: git hooks, commit template, and template hints.
If we run git commit, we will see an IDE open and the message we’ve defined before. Now, type A message with wrong format, save, close the editor, and you’ll see that the commit is aborted.
Now, if you run git commit again, you’ll see the editor once more; type feat/FOO-123: a valid message, and you’ll see it working.
Further readings
Conventional Commits is a project that lists a set of specifications for writing such good messages. You can read more here:
This new kind of commit message works well with Semantic Versioning, which can be useful to publish package versions with a meaningful version number, such as 2.0.1: 🔗 Semantic Versioning
And, to close the loop, Semantic Versioning can be easily integrated with CI pipelines. If you use .NET APIs and want to deploy your APIs to Azure using GitHub Actions, you can start from this article and add SemVer: 🔗 How to deploy .NET APIs on Azure using GitHub actions
Wrapping up
In this article, we’ve learned what Conventional Commits are, how to add them using Husky and NPM, and how to configure our folder to use such tools.
The steps we’ve seen before work for every type of application, even those not related to dotnet.
So, to recap everything, we have to:
Install NPM: npm init;
Install Husky: npm install husky --save-dev;
Enable Husky: npm pkg set scripts.prepare="husky install";
By using list patterns on an array or a list, you can check whether it contains the values you expect in specific positions.
With C# 11 we have an interesting new feature: list patterns.
You can, in fact, use the is operator to check if an array has the exact form that you expect.
Take this method as an example.
Introducing List Patterns
string YeahOrError(int[] s)
{
    if (s is [1, 2, 3]) return "YEAH";

    return "error!";
}
As you can imagine, the previous method returns YEAH if the input array is exactly [1, 2, 3]. You can, in fact, try it by running some tests:
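A few illustrative calls (not the article’s actual test suite) show the behaviour:

YeahOrError(new[] { 1, 2, 3 });    // YEAH
YeahOrError(new[] { 1, 2, 3, 4 }); // error! (wrong length)
YeahOrError(new[] { 3, 2, 1 });    // error! (wrong order)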
You can also assign one or more of such values to a variable, and discard all the others:
string SelfOrMessageWithVar(int[] s)
{
    if (s is [_, 2, int third]) return "YEAH_" + third;

    return "error!";
}
The previous condition, s is [_, 2, int third], returns true only if the array has 3 elements, and the second one is “2”. Then, it stores the third element in a new variable, int third, and uses it to build the returned string.
Lists have an inner capacity. Every time you add more items than the current Capacity, you add performance overhead. How to prevent it?
Some collections, like List<T>, have a predefined initial size.
Every time you add a new item to the collection, there are two scenarios:
the collection has free space, allocated but not yet populated, so adding an item is immediate;
the collection is already full: internally, .NET resizes the collection, so that the next time you add a new item, we fall back to option #1.
Clearly, the second approach has an impact on the overall performance. Can we prove it?
Here’s a benchmark that you can run using BenchmarkDotNet:
[Params(2, 100, 1000, 10000, 100_000)]
public int Size;

[Benchmark]
public void SizeDefined()
{
    int itemsCount = Size;

    List<int> set = new List<int>(itemsCount);
    foreach (var i in Enumerable.Range(0, itemsCount))
    {
        set.Add(i);
    }
}

[Benchmark]
public void SizeNotDefined()
{
    int itemsCount = Size;

    List<int> set = new List<int>();
    foreach (var i in Enumerable.Range(0, itemsCount))
    {
        set.Add(i);
    }
}
Those two methods are almost identical: the only difference is that in one method we specify the initial size of the list: new List<int>(itemsCount).
Have a look at the result of the benchmark run with .NET 7:
| Method         | Size   | Mean            | Error         | StdDev         | Median          | Gen0     | Gen1     | Gen2     | Allocated |
|----------------|--------|-----------------|---------------|----------------|-----------------|----------|----------|----------|-----------|
| SizeDefined    | 2      | 49.50 ns        | 1.039 ns      | 1.678 ns       | 49.14 ns        | 0.0248   | –        | –        | 104 B     |
| SizeNotDefined | 2      | 63.66 ns        | 3.016 ns      | 8.507 ns       | 61.99 ns        | 0.0268   | –        | –        | 112 B     |
| SizeDefined    | 100    | 798.44 ns       | 15.259 ns     | 32.847 ns      | 790.23 ns       | 0.1183   | –        | –        | 496 B     |
| SizeNotDefined | 100    | 1,057.29 ns     | 42.100 ns     | 121.469 ns     | 1,056.42 ns     | 0.2918   | –        | –        | 1224 B    |
| SizeDefined    | 1000   | 9,180.34 ns     | 496.521 ns    | 1,400.446 ns   | 8,965.82 ns     | 0.9766   | –        | –        | 4096 B    |
| SizeNotDefined | 1000   | 9,720.66 ns     | 406.184 ns    | 1,184.857 ns   | 9,401.37 ns     | 2.0142   | –        | –        | 8464 B    |
| SizeDefined    | 10000  | 104,645.87 ns   | 7,636.303 ns  | 22,395.954 ns  | 99,032.68 ns    | 9.5215   | 1.0986   | –        | 40096 B   |
| SizeNotDefined | 10000  | 95,192.82 ns    | 4,341.040 ns  | 12,524.893 ns  | 92,824.50 ns    | 31.2500  | –        | –        | 131440 B  |
| SizeDefined    | 100000 | 1,416,074.69 ns | 55,800.034 ns | 162,771.317 ns | 1,402,166.02 ns | 123.0469 | 123.0469 | 123.0469 | 400300 B  |
| SizeNotDefined | 100000 | 1,705,672.83 ns | 67,032.839 ns | 186,860.763 ns | 1,621,602.73 ns | 285.1563 | 285.1563 | 285.1563 | 1049485 B |
Notice that, in general, they execute in a similar amount of time; for instance when running the same method with 100000 items, we have the same magnitude of time execution: 1,416,074.69 ns vs 1,705,672.83 ns.
The huge difference is with the allocated space: 400,300 B vs 1,049,485 B. That’s more than 2.5 times better!
Ok, it works. Next question: How can we check a List capacity?
We’ve just learned that capacity impacts the performance of a List.
How can you try it live? Easy: have a look at the Capacity property!
List<int> myList = new List<int>();
foreach (var element in Enumerable.Range(0,50))
{
myList.Add(element);
Console.WriteLine($"Items count: {myList.Count} - List capacity: {myList.Capacity}");
}
If you run this method, you’ll see this output:
Items count: 1 - List capacity: 4
Items count: 2 - List capacity: 4
Items count: 3 - List capacity: 4
Items count: 4 - List capacity: 4
Items count: 5 - List capacity: 8
Items count: 6 - List capacity: 8
Items count: 7 - List capacity: 8
Items count: 8 - List capacity: 8
Items count: 9 - List capacity: 16
Items count: 10 - List capacity: 16
Items count: 11 - List capacity: 16
Items count: 12 - List capacity: 16
Items count: 13 - List capacity: 16
Items count: 14 - List capacity: 16
Items count: 15 - List capacity: 16
Items count: 16 - List capacity: 16
Items count: 17 - List capacity: 32
Items count: 18 - List capacity: 32
Items count: 19 - List capacity: 32
Items count: 20 - List capacity: 32
Items count: 21 - List capacity: 32
Items count: 22 - List capacity: 32
Items count: 23 - List capacity: 32
Items count: 24 - List capacity: 32
Items count: 25 - List capacity: 32
Items count: 26 - List capacity: 32
Items count: 27 - List capacity: 32
Items count: 28 - List capacity: 32
Items count: 29 - List capacity: 32
Items count: 30 - List capacity: 32
Items count: 31 - List capacity: 32
Items count: 32 - List capacity: 32
Items count: 33 - List capacity: 64
Items count: 34 - List capacity: 64
Items count: 35 - List capacity: 64
Items count: 36 - List capacity: 64
Items count: 37 - List capacity: 64
Items count: 38 - List capacity: 64
Items count: 39 - List capacity: 64
Items count: 40 - List capacity: 64
Items count: 41 - List capacity: 64
Items count: 42 - List capacity: 64
Items count: 43 - List capacity: 64
Items count: 44 - List capacity: 64
Items count: 45 - List capacity: 64
Items count: 46 - List capacity: 64
Items count: 47 - List capacity: 64
Items count: 48 - List capacity: 64
Items count: 49 - List capacity: 64
Items count: 50 - List capacity: 64
So, as you can see, List capacity is doubled every time the current capacity is not enough.
Further readings
To populate the lists in our Benchmarks we used Enumerable.Range. Do you know how it works? Have a look at this C# tip:
In this article, we’ve learned that just a minimal change can impact our application performance.
We simply used a different constructor, but the difference is astounding. Clearly, this trick works only if you already know the final length of the list (or, at least, an estimation). The more precise, the better!
I hope you enjoyed this article! Let’s keep in touch on Twitter or on LinkedIn, if you want! 🤜🤛
Health Checks are fundamental to keep track of the health of a system. How can we check if MongoDB is healthy?
In any complex system, you have to deal with external dependencies.
More often than not, if one of the external systems (a database, another API, or an authentication provider) is down, the whole system might be affected.
In this article, we’re going to learn what Health Checks are, how to create custom ones, and how to check whether a MongoDB instance can be reached or not.
What are Health Checks?
A Health Check is a special type of HTTP endpoint that allows you to understand the status of the system – well, it’s a check on the health of the whole system, including external dependencies.
You can use it to understand whether the application itself and all of its dependencies are healthy and responding in a reasonable amount of time.
Those endpoints are also useful for humans, but they are even more useful for tools that monitor the application and can automatically fix some issues when they occur – for example, by restarting the application if it’s in a degraded status.
How to add Health Checks in dotNET
Lucky for us, .NET already comes with Health Check capabilities, so we can just follow the existing standard without reinventing the wheel.
For the sake of this article, I created a simple .NET API application.
Head to the Program class – or, in general, wherever you configure the application – and add this line:
builder.Services.AddHealthChecks();
and then, after var app = builder.Build();, you must add the following line to have the health checks displayed under the /healthz path.
app.MapHealthChecks("/healthz");
To sum up, the minimal structure should be:
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers();
builder.Services.AddHealthChecks();
var app = builder.Build();
app.MapHealthChecks("/healthz");
app.MapControllers();
app.Run();
So that, if you run the application and navigate to /healthz, you’ll just see an almost empty page with two characteristics:
the status code is 200;
the only printed result is Healthy
Clearly, that’s not enough for us.
How to create a custom Health Check class in .NET
Every project has its own dependencies and requirements. We should be able to build custom Health Checks and add them to our endpoint.
It’s just a matter of creating a new class that implements IHealthCheck, an interface that lives under the Microsoft.Extensions.Diagnostics.HealthChecks namespace.
Then, you have to implement the method that tells us whether the system under test is healthy or degraded:
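The original implementation is not reproduced here, but a minimal sketch of such a class could look like this (the class name and the IsHealthy member on IExternalDependency are assumptions):

using Microsoft.Extensions.Diagnostics.HealthChecks;

public class CustomHealthCheck : IHealthCheck
{
    private readonly IExternalDependency _dependency;

    public CustomHealthCheck(IExternalDependency dependency)
    {
        _dependency = dependency;
    }

    public Task<HealthCheckResult> CheckHealthAsync(
        HealthCheckContext context,
        CancellationToken cancellationToken = default)
    {
        // map the state of the external dependency to a HealthCheckResult
        return _dependency.IsHealthy()
            ? Task.FromResult(HealthCheckResult.Healthy())
            : Task.FromResult(HealthCheckResult.Unhealthy("The external dependency is not available"));
    }
}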
Now, you can create a stub class that implements IExternalDependency to toy with the different result types. In fact, if we create and inject a stub class like this:
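A hypothetical stub that always reports a failure, registered together with the health check (the name “A custom name” is the one discussed right below), could be:

public class StubExternalDependency : IExternalDependency
{
    // always report a failure, so the /healthz endpoint returns Unhealthy
    public bool IsHealthy() => false;
}

// in Program.cs
builder.Services.AddSingleton<IExternalDependency, StubExternalDependency>();
builder.Services.AddHealthChecks()
    .AddCheck<CustomHealthCheck>("A custom name");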
and we run the application, we can see that the final result of the application is Unhealthy.
A question for you: why should we specify a name to health checks, such as “A custom name”? Drop a comment below 📩
Adding a custom Health Check Provider for MongoDB
Now we can create a custom Health Check for MongoDB.
Of course, we will need to use a library to access Mongo: so simply install via NuGet the package MongoDB.Driver – we’ve already used this library in a previous article.
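The article’s original class is not shown here, but a sketch based on the description below could look like this (the class name and the way the connection string is injected are assumptions):

using Microsoft.Extensions.Diagnostics.HealthChecks;
using MongoDB.Bson;
using MongoDB.Driver;

public class MongoCustomHealthCheck : IHealthCheck
{
    private readonly string _connectionString;

    public MongoCustomHealthCheck(string connectionString)
    {
        _connectionString = connectionString;
    }

    public async Task<HealthCheckResult> CheckHealthAsync(
        HealthCheckContext context,
        CancellationToken cancellationToken = default)
    {
        try
        {
            var url = new MongoUrl(_connectionString);
            var dbInstance = new MongoClient(url).GetDatabase(url.DatabaseName);

            // send the PING command, targeting a Secondary node to avoid touching the Primary
            await dbInstance.RunCommandAsync<BsonDocument>(
                new BsonDocument { { "ping", 1 } },
                ReadPreference.Secondary,
                cancellationToken);

            return HealthCheckResult.Healthy();
        }
        catch (Exception ex)
        {
            return HealthCheckResult.Unhealthy("MongoDB is not reachable", ex);
        }
    }
}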
Clearly, we create a reference to a specific DB instance: new MongoClient(url).GetDatabase(url.DatabaseName). Notice that we’re requiring access to the Secondary node, to avoid performing operations on the Primary node.
Then, we send the PING command: dbInstance.RunCommandAsync<BsonDocument>(new BsonDocument { { "ping", 1 } }).
Now what? The PING command either returns an object like this:
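If everything works, the result is roughly:

{ "ok" : 1 }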
or, if the command cannot be executed, it throws a System.TimeoutException.
MongoDB Health Checks with AspNetCore.Diagnostics.HealthChecks
If we don’t want to write such things on our own, we can rely on pre-existing libraries.
AspNetCore.Diagnostics.HealthChecks is a library you can find on GitHub that automatically handles several types of Health Checks for .NET applications.
Note that this library is NOT maintained or supported by Microsoft – but it’s featured in the official .NET documentation.
This library exposes several NuGet packages for tens of different dependencies you might want to consider in your Health Checks. For example, we have Azure.IoTHub, CosmosDb, Elasticsearch, Gremlin, SendGrid, and many more.
Obviously, we’re gonna use the one for MongoDB. It’s quite easy.
First, you have to install the AspNetCore.HealthChecks.MongoDb NuGet package.
Then, you have to just add a line of code to the initial setup:
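Depending on the package version, the registration looks roughly like this (the connection string is a placeholder):

builder.Services.AddHealthChecks()
    .AddMongoDb("<your-mongodb-connection-string>");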
Ok, if we can just add a line of code instead of creating a brand-new class, why should we bother creating the whole custom class?
There are some reasons to create a custom provider:
You want more control over the DB access: for example, you want to ping only Secondary nodes, as we did before;
You don’t just want to check if the DB is up, but also the performance of doing some specific operations, such as retrieving all the documents from a specified collection.
But, yes, in general, you can simply use the NuGet package we used in the previous section, and you’re good to go.
Further readings
As usual, the best way to learn more about a topic is by reading the official documentation:
Say that you have an array of N items and you need to access an element counting from the end of the collection.
Usually, we tend to use the Length property of the array, and then subtract the number corresponding to the position we want to reach:
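For example, here is a minimal sketch comparing the classic approach with the index-from-end operator ^ introduced in C# 8:

int[] values = { 10, 20, 30, 40, 50 };

int viaLength = values[values.Length - 3]; // 30 – classic approach
int viaIndex = values[^3];                 // 30 – same element, using the ^ operator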
Yes, that’s just syntactic sugar, but it can help make your code more readable. In fact, if you have a look at the IL code generated by both examples, they are perfectly identical. IL is quite difficult to read and understand, but you can acknowledge that both syntaxes are equivalent by looking at the decompiled C# code:
Performance is not affected by this operator, so it’s just a matter of readability.
Clearly, you still have to take care of array bounds – if you access values[^55] you’ll get an IndexOutOfRangeException.
Using ^ is a nice trick that many C# developers don’t know. There are some special characters that can help us but are often not used. Like the @ operator!
Interpolated strings are those built with the $ symbol, that you can use to create strings using existing variables or properties. Did you know that you can apply custom formattings to such values?
As you know, there are many ways to “create” strings in C#. You can use a StringBuilder, you can simply concatenate strings, or you can use interpolated strings.
Interpolated? WHAT? I’m pretty sure that you’ve already used interpolated strings, even if you did not know the “official” name:
int age = 31;
string bio = $"Hi, I'm {age} years old";
That’s it: an interpolated string is one where you can reference a variable or a property within the string definition, using the $ and the {} operators to generate such strings.
Did you know that you can even format how the interpolated value must be rendered when creating the string? It’s just a matter of specifying the format after the : sign:
Formatting dates
The easiest way to learn it is by formatting dates:
DateTime date = new DateTime(2021,05,23);
Console.WriteLine($"The printed date is {date:yyyy-MM-dd}"); //The printed date is 2021-05-23Console.WriteLine($"Another version is {date:yyyy-MMMM-dd}"); //Another version is 2021-May-23Console.WriteLine($"The default version is {date}"); //The default version is 23/05/2021 00:00:00
Here we have date:yyyy-MM-dd which basically means “format the date variable using the yyyy-MM-dd format”.
There are, obviously, different ways to format dates, as described on the official documentation. Some of the most useful are:
dd: day of the month, in number (from 01 to 31);
ddd: abbreviated day name (eg: Mon)
dddd: complete day name (eg: Monday)
hh: hour in a 12-hour clock (01-> 12)
HH: hour in a 24-hour clock (00->23)
MMMM: full month name (eg: May)
and so on.
Formatting numbers
Similar to dates, we can format numbers.
For example, we can format a double number as currency or as a percentage:
var cost = 12.41;
Console.WriteLine($"The cost is {cost:C}"); // The cost is £12.41var variation = -0.254;
Console.WriteLine($"There is a variation of {variation:P}"); //There is a variation of -25.40%
Again, there are lots of different ways to format numbers:
C: currency – it takes the current culture, so it may be Euro, Yen, or whatever currency, depending on the process’ culture;
E: exponential number, used for scientific operations
P: percentage: as we’ve seen before {1:P} represents 100%;
X: hexadecimal
Further readings
There are too many formats that you can use to convert a value to a string, and we cannot explore all of them here.
But still, you can have a look at several ways to format date and time in C#
Downloading a file from a remote resource seems an easy task: download the byte stream and copy it to a local file. Beware of edge cases!
Downloading files from an online source and saving them on the local machine seems an easy task.
And guess what? It is!
In this article, we will learn how to download an online file, perform some operations on it – such as checking its file extension – and store it in a local folder. We will also learn how to deal with edge cases: what if the file does not exist? Can we overwrite existing files?
How to download a file stream from an online resource using HttpClient
Ok, this is easy. If you have the file URL, it’s easy to just download it using HttpClient.
HttpClient httpClient = new HttpClient();
Stream fileStream = await httpClient.GetStreamAsync(fileUrl);
Using HttpClient can cause some trouble, especially when you have a huge computational load. As a matter of fact, using HttpClientFactory is preferred, as we’ve already explained in a previous article.
But, ok, it looks easy – way too easy! There are two more parts to consider.
How to handle errors while downloading a stream of data
You know, shit happens!
There are at least 2 cases that stop you from downloading a file: the file does not exist or the file requires authentication to be accessed.
In both cases, an HttpRequestException exception is thrown, with the following stack trace:
at System.Net.Http.HttpResponseMessage.EnsureSuccessStatusCode()
at System.Net.Http.HttpClient.GetStreamAsyncCore(HttpRequestMessage request, CancellationToken cancellationToken)
As you can see, we are implicitly calling EnsureSuccessStatusCode while getting the stream of data.
You can tell the consumer that we were not able to download the content in two ways: throw a custom exception or return Stream.Null. We will use Stream.Null for the sake of this article.
Note: always throw custom exceptions and add context to them: this way, you’ll add more useful info to consumers and logs, and you can hide implementation details.
So, let me refactor the part that downloads the file stream and put it in a standalone method:
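A sketch of that refactoring (the method name is just illustrative) could be:

private static async Task<Stream> GetFileStream(string fileUrl)
{
    HttpClient httpClient = new HttpClient();
    try
    {
        Stream fileStream = await httpClient.GetStreamAsync(fileUrl);
        return fileStream;
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
        // signal to the caller that the download failed
        return Stream.Null;
    }
}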
so that we can use Stream.Null to check for the existence of the stream.
How to store a file in your local machine
Now we have our stream of data. We need to store it somewhere.
We will need to copy our input stream to a FileStream object, placed within a using block.
using (FileStream outputFileStream = new FileStream(path, FileMode.Create))
{
await fileStream.CopyToAsync(outputFileStream);
}
Possible errors and considerations
When creating the FileStream instance, we have to pass to the constructor both the full path of the image (including the file name) and FileMode.Create, which tells the stream what type of operations should be supported.
FileMode is an enum coming from the System.IO namespace, and has different values. Each value fits best for some use cases.
Again, there are some edge cases that we have to consider:
the destination folder does not exist: you will get a DirectoryNotFoundException exception. You can easily fix it by calling Directory.CreateDirectory to generate all the hierarchy of folders defined in the path;
the destination file already exists: depending on the value of FileMode, you will see a different behavior. FileMode.Create overwrites the image, while FileMode.CreateNew throws an IOException in case the image already exists.
We all use switch statements in our code. Do you use them at their full potential?
We all use switch statements in our code: they are a helpful way to run different code paths based on a check on a variable.
In this short article, we’re gonna learn different ways to write switch blocks, and some nice tricks to create clean and easy-to-read filters on such statements.
For the sake of this example, we will use a dummy hierarchy of types: a base User record with three subtypes: Player, Gamer, and Dancer.
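The exact definitions are not listed here, but based on how the types are used in the examples below, the hierarchy could be sketched as:

public abstract record User(int Age);
public record Player(int Age) : User(Age);
public record Gamer(int Age, string Console) : User(Age);
public record Dancer(int Age) : User(Age);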
Let’s see different usages of switch statements and switch expressions.
Switch statements
Switch statements are those with the standard switch (something) block. They allow for different executions of paths, acting as a list of if – else if blocks.
They can be used to return a value, but it’s not mandatory: you can simply use switch statements to execute code that does not return any value.
Switch statements with checks on the type
The simplest example we can have is the plain check on the type.
User user = new Gamer(30, "Nintendo Switch");
string message = "";
switch (user)
{
case Gamer:
{
message = "I'm a gamer";
break;
}
case Player:
{
message = "I'm a player";
break;
}
default:
{
message = "My type is not handled!";
break;
}
}
Console.WriteLine(message); // I'm a gamer
Here we execute a different path based on the value the user variable has at runtime.
We can also have an automatic casting to the actual type, and then use the runtime data within the case block:
User user = new Gamer(30, "Nintendo Switch");
string message = "";
switch (user)
{
case Gamer g:
{
message = "I'm a gamer, and I have a " + g.Console;
break;
}
case Player:
{
message = "I'm a player";
break;
}
default:
{
message = "My type is not handled!";
break;
}
}
Console.WriteLine(message); //I'm a gamer, and I have a Nintendo Switch
As you can see, since user is a Gamer, within the related branch we cast user to Gamer in a variable named g, so that we can use its public properties and methods.
Filtering using the WHEN keyword
We can add additional filters on the actual value of the variable by using the when clause:
User user = new Gamer(3, "Nintendo");
string message = "";
switch (user)
{
case Gamer g when g.Age < 10:
{
message = "I'm a gamer, but too young";
break;
}
case Gamer g:
{
message = "I'm a gamer, and I have a " + g.Console;
break;
}
case Player:
{
message = "I'm a player";
break;
}
default:
{
message = "My type is not handled!";
break;
}
}
Console.WriteLine(message); // I'm a gamer, but too young
Here we have the when g.Age < 10 filter applied to the Gamer g variable.
Clearly, if we set the age to 30, we will see I’m a gamer, and I have a Nintendo.
Switch Expression
Switch expressions act like Switch Statements, but they return a value that can be assigned to a variable or, in general, used immediately.
They look like a lightweight, inline version of Switch Statements, and have a slightly different syntax.
To reach the same result we saw before, we can write:
User user = new Gamer(30, "Nintendo Switch");
string message = user switch
{
    Gamer g => "I'm a gamer, and I have a " + g.Console,
    Player => "I'm a player",
    _ => "My type is not handled!"
};
Console.WriteLine(message);
By looking at the syntax, we can notice a few things:
instead of having switch(variable_name){}, we now have variable_name switch {};
we use the arrow notation => to define the cases;
we don’t have the default keyword, but we use the discard value _.
When keyword vs Property Pattern in Switch Expressions
Similarly, we can use the when keyword to define better filters on the cases.
string message = user switch
{
    Gamer gg when gg.Age < 10 => "I'm a gamer, but too young",
    Gamer g => "I'm a gamer, and I have a " + g.Console,
    Player => "I'm a player",
    _ => "My type is not handled!"
};
You can finally use a slightly different syntax to achieve the same result. Instead of using when gg.Age < 10 you can write Gamer { Age: < 10 }. This is called a Property Pattern.
string message = user switch
{
    Gamer { Age: < 10 } => "I'm a gamer, but too young",
    Gamer g => "I'm a gamer, and I have a " + g.Console,
    Player => "I'm a player",
    _ => "My type is not handled!"
};
Further readings
We actually just scratched the surface of all the functionalities provided by the C# language.
First of all, you can learn more about how to use Relational Patterns in a switch expression.