In a microservices architecture, an API Gateway hides your real endpoints. We will create one using Azure API Management
Just a second! 🫷 If you are here, it means that you are a software developer.
So, you know that storage, networking, and domain management have a cost.
If you want to support this blog, please ensure that you have disabled the adblocker for this site. I configured Google AdSense to show as few ADS as possible – I don’t want to bother you with lots of ads, but I still need to add some to pay for the resources for my site.
Thank you for your understanding. – Davide
If you’re building an application that exposes several services, you might not want to expose them on different hosts. Consumers will have a hard time configuring their application with all the different hostnames, and you will be forced to maintain the same URLs even if you need to move to other platforms or, for instance, you want to transform a REST endpoint into an Azure Function.
In this case, you should mask the real endpoints beneath a facade: maybe… an API Gateway? 🙂
In this article, we will learn how to configure Azure API Management (from now on: APIM) service to create an API Gateway and “hide” our real services.
Demo: publish .NET API services and locate the OpenAPI definition
For the sake of this article, we will work with 2 API services: BooksService and VideosService.
They are both .NET 6 APIs, deployed on Azure using GitHub Actions (using the steps I described in a previous article).
Both services expose their Swagger pages and a bunch of endpoints that we are going to hide behind Azure APIM.
How to create Azure API Management (APIM) Service from Azure Portal
Now, we want to hide their real endpoints. The clients will then only know about the existence of the API Gateway, and not of the two separate API services:
It’s time to create our APIM resource.👷♂️
Head to the Azure Portal, and create a new API Management instance. I suggest reading the short overview of the functionalities provided by Azure API Management services as listed in the screenshot below.
The wizard will ask you for some info, such as the resource name, the region, and an email used to send communications (honestly speaking, I still haven’t figured out why they’re asking for your email).
Fill in all the fields, pick your preferred pricing tier (mine is Developer: it doesn’t have an SLA and is quite cheap), and then proceed with the service creation.
After several minutes (it took 50 minutes – fifty!💢 – to scaffold my instance), you will have your instance ready to be used.
We are now ready to add our APIs and expose them to our clients.
How to add APIs to Azure API Management using Swagger definition (OpenAPI)
As we’ve seen in a previous article, Swagger creates a JSON file that describes the operations available in your APIs, as well as the object structures accepted as input and returned as output.
Let me use as an example the Books API: once that API project is deployed on the cloud (it’s not mandatory to use Azure: it will work the same using other cloud vendors), you will see the Swagger UI and the related JSON definition.
We have 3 endpoints, /, /echo, and /books; those endpoints are described in the swagger.json file linked in the Swagger page; put that link aside: we will use it soon.
Finally, we can add our Books APIs to our Azure Management API Service! Head to the resource on Azure, locate the APIs menu item on the left panel, and create a new API definition using OpenAPI (which is the standard used by Swagger to create its UI).
You will see a form that allows you to create new resources from OpenAPI specifications.
Paste here the link to the swagger.json file you located before, populate the required fields and, if you want, add a prefix to identify these endpoints: I chose MyBooks.
You will then see your APIs appear in the panel shown below. It is composed of different parts:
The list of services exposed. In the screenshot below, BooksAPI, Echo API, and VideosAPI;
The list of endpoints exposed for each service: here, BooksAPI exposes endpoints at /, /echo, and /books;
A list of policies that are applied to the inbound requests before hitting the real endpoint;
The real endpoint used when calling the facade exposed by APIM;
A list of policies applied to the outbound requests after the origin has processed the requests.
For now, we will ignore both Inbound and Outbound processing, as they will be the topic of a future article.
Consuming APIs exposed on the API Gateway
We’re ready to go! Head back to the Azure API Management service dashboard and locate the URL of the API Gateway under Custom domains > Gateway URL.
This will be the root URL that our clients will use.
We can then access Books API and Videos API both on the Origin and the Gateway (we’re doing it just for demonstrating that things are working; clients will only use the APIs exposed by the API Gateway).
The Videos API maintains the exact same structure, mapping the endpoints as they are defined in Origin.
On the contrary, to access the Books APIs we have to access the /mybooks path (because we defined it a few steps ago when we imported the BooksAPI from OpenAPI definition: it’s the API Url Suffix field), as shown below:
Further readings
As usual, a bunch of interesting readings 📚
In this article, we’ve only scratched the surface of Azure API Management. There’s a lot more – and you can read about it on the Microsoft Docs website:
To integrate Azure APIM, we used two simple .NET 6 Web APIs deployed on Azure. If you want to know how to set up GitHub Actions to build and deploy .NET APIs, I recently published an article on that topic.
This can be just the beginning of a long journey; APIM allows you to highly customize your API Gateway by defining API access by user role, creating API documentation using custom templates and themes, and a lot of different stuff.
There may be times when you need to process a specific task on a timely basis, such as polling an endpoint to look for updates or refreshing a Refresh Token.
If you need infinite processing, you can pick two roads: the obvious one or the better one.
For instance, you can use an infinite loop and put a Sleep command to delay the execution of the next task:
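The “obvious” road can be sketched like this (bounded to three iterations here so the demo terminates; in the real scenario the loop would run forever):

```csharp
using System;
using System.Threading;

class NaivePolling
{
    static void Main()
    {
        // the "obvious" road: loop, sleeping between executions
        // (bounded to 3 iterations here so the demo terminates)
        for (int run = 0; run < 3; run++)
        {
            Console.WriteLine($"Polling, run #{run}");
            Thread.Sleep(100); // delay the next execution by 100ms
        }
    }
}
```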
The constructor accepts in input an interval (a double value that represents the milliseconds for the interval), whose default value is 100.
This class implements IDisposable: if you’re using it as a dependency of another component that must be Disposed, don’t forget to call Dispose on that Timer.
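A minimal sketch using System.Timers.Timer (the 200ms interval and the handler body are just illustrative values, not from the original post):

```csharp
using System;
using System.Threading;

class TimerDemo
{
    static void Main()
    {
        int ticks = 0;

        // interval is expressed in milliseconds; the parameterless constructor defaults to 100
        var timer = new System.Timers.Timer(interval: 200);
        timer.Elapsed += (sender, e) => Interlocked.Increment(ref ticks);
        timer.AutoReset = true; // raise Elapsed repeatedly, not just once
        timer.Start();

        Thread.Sleep(700); // let it tick a few times

        timer.Stop();
        timer.Dispose(); // Timer implements IDisposable: don't forget this

        Console.WriteLine($"Elapsed fired {ticks} times");
    }
}
```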
Note: use this only for synchronous tasks: there are other kinds of Timers that you can use for asynchronous operations, such as PeriodicTimer, which also can be stopped by canceling a CancellationToken.
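For asynchronous scenarios, a PeriodicTimer sketch might look like this (the interval and the cancellation timeout are arbitrary values chosen for the demo):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class PeriodicDemo
{
    static async Task Main()
    {
        // stop the loop after ~450ms by canceling the token
        using var cts = new CancellationTokenSource(TimeSpan.FromMilliseconds(450));
        using var timer = new PeriodicTimer(TimeSpan.FromMilliseconds(100));

        int ticks = 0;
        try
        {
            // WaitForNextTickAsync returns false when the timer is disposed,
            // and throws OperationCanceledException when the token is canceled
            while (await timer.WaitForNextTickAsync(cts.Token))
            {
                ticks++;
            }
        }
        catch (OperationCanceledException)
        {
            Console.WriteLine($"Stopped after {ticks} ticks");
        }
    }
}
```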
A PriorityQueue represents a collection of items that have a value and a priority. Now this data structure is built into .NET!
Starting from .NET 6 and C# 10, we finally have built-in support for PriorityQueues 🥳
A PriorityQueue is a collection of items that have a value and a priority; as you can imagine, they act as a queue: the main operations are “add an item to the queue”, called Enqueue, and “remove an item from the queue”, named Dequeue. The main difference from a simple Queue is that, on dequeue, the item with the lowest priority value is removed first.
In this article, we’re gonna use a PriorityQueue and wrap it into a custom class to solve one of its design issues (which I hope will be addressed in a future release of .NET).
Welcoming Priority Queues in .NET
Defining a priority queue is straightforward: you just have to declare it specifying the type of items and the type of priority.
So, if you need a collection of Child items, and you want to use int as a priority type, you can define it as
PriorityQueue<Child, int> pq = new PriorityQueue<Child, int>();
You can retrieve the item at the top of the queue by calling Peek(), which lets you look at the first item without removing it from the queue:
Child child3 = BuildChild3();
Child child2 = BuildChild2();
Child child1 = BuildChild1();
queue.Enqueue(child3, 3);
queue.Enqueue(child1, 1);
queue.Enqueue(child2, 2);
// queue.Count == 3
Child first = queue.Peek();
// first will be child1, because its priority is 1
// queue.Count is still 3, because we did not remove the item on top
or Dequeue if you want to retrieve it while removing it from the queue:
Child child3 = BuildChild3();
Child child2 = BuildChild2();
Child child1 = BuildChild1();
queue.Enqueue(child3, 3);
queue.Enqueue(child1, 1);
queue.Enqueue(child2, 2);
// queue.Count == 3
Child first = queue.Dequeue();
// first will be child1, because its priority is 1
// queue.Count == 2, because we removed the item with the lowest priority value
This is the essence of a Priority Queue: insert items, give them a priority, then remove them starting from the one with the lowest priority value.
Creating a Wrapper to automatically handle priority in Priority Queues
There’s a problem with this definition: you have to manually specify the priority of each item.
I don’t like it that much: I’d like to automatically assign each item a priority. So we have to wrap it in another class.
Since we’re near Christmas, and this article is part of the C# Advent 2022, let’s use an XMAS-themed example: a Christmas list used by Santa to handle gifts for children.
Now we can create a Priority Queue of type <Child, int>:
PriorityQueue<Child, int> pq = new PriorityQueue<Child, int>();
And wrap it all within a ChristmasList class:
public class ChristmasList
{
    private readonly PriorityQueue<Child, int> queue;

    public ChristmasList()
    {
        queue = new PriorityQueue<Child, int>();
    }

    public void Add(Child child)
    {
        int priority = // ??
        queue.Enqueue(child, priority);
    }

    public Child Get()
    {
        return queue.Dequeue();
    }
}
A question for you: what happens when we call the Get method on an empty queue? What should we do instead? Drop a message below! 📩
We need to define a way to assign each child a priority.
Define priority as private behavior
The easiest way is to calculate the priority within the Add method: define a function that accepts a Child and returns an int, and then pass that int value to the Enqueue method.
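A possible implementation (a sketch: the Child record and the “younger children first” rule are hypothetical examples, not part of the original code):

```csharp
using System;
using System.Collections.Generic;

public record Child(string Name, int Age);

public class ChristmasList
{
    private readonly PriorityQueue<Child, int> queue = new();

    public void Add(Child child)
    {
        int priority = GetPriority(child);
        queue.Enqueue(child, priority);
    }

    public Child Get() => queue.Dequeue();

    // private, encapsulated priority rule: younger children come first
    private static int GetPriority(Child child) => child.Age;
}

public static class Demo
{
    public static void Main()
    {
        var list = new ChristmasList();
        list.Add(new Child("Alice", 9));
        list.Add(new Child("Bob", 5));
        Console.WriteLine(list.Get().Name); // Bob: lower value means higher precedence on Dequeue
    }
}
```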
This approach is useful because you’re encapsulating the behavior in the ChristmasList class, but it has the downside of not being extensible: you cannot use different priority algorithms in different places of your application. On the other hand, GetPriority is a private operation within the ChristmasList class, so it can be fine for our example.
Pass priority calculation from outside
We can then pass a Func<Child, int> in the ChristmasList constructor, centralizing the priority definition and giving the caller the responsibility to define it:
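A sketch of that version (again, the Child record and the sample priority function are illustrative):

```csharp
using System;
using System.Collections.Generic;

public record Child(string Name, int Age);

public class ChristmasList
{
    private readonly PriorityQueue<Child, int> queue = new();
    private readonly Func<Child, int> _priorityCalculation;

    // the caller decides how priority is computed
    public ChristmasList(Func<Child, int> priorityCalculation)
    {
        _priorityCalculation = priorityCalculation;
    }

    public void Add(Child child) => queue.Enqueue(child, _priorityCalculation(child));

    public Child Get() => queue.Dequeue();
}

public static class Demo
{
    public static void Main()
    {
        // here the caller chooses "younger children first"
        var list = new ChristmasList(c => c.Age);
        list.Add(new Child("Zoe", 9));
        list.Add(new Child("Bob", 5));
        Console.WriteLine(list.Get().Name); // Bob
    }
}
```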
This implementation presents the opposite problems and solutions we saw in the previous example.
What I’d like to see in the future
This is a personal thought: it’d be great if we had a slightly different definition of PriorityQueue to automate the priority definition.
One idea could be to add in the constructor a parameter that we can use to calculate the priority, just to avoid specifying it explicitly. So, I’d expect that the current definition of the constructor and of the Enqueue method change from this:
PriorityQueue<Child, int> pq = new PriorityQueue<Child, int>();
int priority = _priorityCalculation(child);
queue.Enqueue(child, priority);
to this:
PriorityQueue<Child, int> pq = new PriorityQueue<Child, int>(_priorityCalculation);
queue.Enqueue(child);
It’s not perfect, and it raises some new problems.
Another way could be to force the item type to implement an interface that exposes a way to retrieve its priority, such as
public interface IHavePriority<T>
{
    public T GetPriority();
}

public class Child : IHavePriority<int> { }
Again, this approach is not perfect but can be helpful.
Talking about its design, which approach would you suggest, and why?
Further readings
As usual, the best way to learn about something is by reading its official documentation:
PriorityQueue is a good-to-know functionality that is now out-of-the-box in dotNET. Do you like its design? Have you used another library to achieve the same result? In what do they differ?
Exposing Swagger UI is a good way to help developers consume your APIs. But don’t be boring: customize your UI with some fancy CSS
Brace yourself, Christmas is coming! 🎅
If you want to add a more festive look to your Swagger UI, it’s just a matter of creating a CSS file and injecting it.
You should create a custom CSS for your Swagger endpoints, especially if you are exposing them outside your company: if your company has a recognizable color palette, using it in your Swagger pages can make your brand stand out.
In this article, we will learn how to inject a CSS file in the Swagger UI generated using .NET Minimal APIs.
How to add Swagger in your .NET Minimal APIs
There are plenty of tutorials about how to add Swagger to your APIs. I wrote some too, where I explained how every configuration impacts what you see in the UI.
That article was targeting older .NET versions without Minimal APIs. Now everything’s easier.
When you create your API project, Visual Studio asks you if you want to add OpenAPI support (aka Swagger). By adding it, you will have everything in place to get started with Swagger.
The key parts are builder.Services.AddEndpointsApiExplorer(), builder.Services.AddSwaggerGen(), app.UseSwagger(), app.UseSwaggerUI() and WithOpenApi(). Do you know what those methods do? If so, drop a comment below! 📩
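For reference, the scaffolded Program.cs looks roughly like this (a sketch of the Visual Studio template; your generated file may differ slightly):

```csharp
var builder = WebApplication.CreateBuilder(args);

// generate the OpenAPI document and the endpoint metadata
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();

var app = builder.Build();

if (app.Environment.IsDevelopment())
{
    // expose /swagger/v1/swagger.json and the Swagger UI page
    app.UseSwagger();
    app.UseSwaggerUI();
}

app.MapGet("/", () => "Hello World!")
   .WithOpenApi(); // enrich this endpoint's OpenAPI metadata

app.Run();
```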
Now, if we run our application, we will see a UI similar to the one below.
That’s a basic UI. Quite boring, huh? Let’s add some style!
Create the CSS file for Swagger theming
All the static assets must be stored within the wwwroot folder. It does not exist by default, so you have to create it manually. Click on the API project, add a new folder, and name it “wwwroot”. Since it’s a special folder, by default Visual Studio will show it with a special icon (it’s a sort of blue world, similar to 🌐).
Now you can add all the folders and static resources needed.
I’ve created a single CSS file under /wwwroot/assets/css/xmas-style.css. Of course, name it as you wish – as long as it is within the wwwroot folder, it’s fine.
the element selectors are taken directly from the Swagger UI – you’ll need a bit of reverse-engineering skills: just open the Browser Console and find the elements you want to update;
if the element already has a value for the rule you want to apply, you have to add the !important CSS modifier; otherwise, your code won’t affect the UI;
you can add assets from other folders: I’ve added background-image: url("../images/snowflakes.webp"); to the body style. That image is, as you can imagine, under the wwwroot folder we created before.
Just as a recap, here’s my project structure:
Of course, it’s not enough: we have to tell Swagger to take that file into consideration.
How to inject a CSS file in Swagger UI
This part is quite simple: you have to update the UseSwaggerUI command within the Main method:
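Something like this (the stylesheet path assumes the file created in the previous section; UseStaticFiles is needed so wwwroot content is actually served):

```csharp
// serve static assets from the wwwroot folder
app.UseStaticFiles();

app.UseSwaggerUI(c =>
{
    // path relative to the site root, pointing to the file created earlier
    c.InjectStylesheet("/assets/css/xmas-style.css");
});
```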
CSS is not the only part you can customize, there’s way more. Here’s an article I wrote about Swagger integration in .NET Core 3 APIs, but it’s still relevant (I hope! 😁)
Theming is often not considered an important part of API development. That’s generally correct: why should I bother adding some fancy colors to APIs that are not expected to have a UI?
That’s true for private APIs; for public-facing APIs, though, theming is often useful to improve brand recognition.
You should also consider using theming when deploying APIs to different environments: maybe Blue for Development, Yellow for Staging, and Green for Production. That way your developers can easily understand which environment they’re exploring.
LINQ is a set of methods that help developers perform operations on sets of items. There are tons of methods – do you know which is the one for you?
LINQ is one of the most loved functionalities by C# developers. It allows you to perform calculations and projections over a collection of items, making your code easy to build and, even more, easy to understand.
As of C# 11, there are tens of methods and overloads you can choose from. Some of them seem similar, but there are some differences that might not be obvious to C# beginners.
In this article, we’re gonna learn the differences between couples of methods, so that you can choose the best one that fits your needs.
First vs FirstOrDefault
Both First and FirstOrDefault allow you to get the first item of a collection that matches some requisites passed as a parameter, usually with a Lambda expression:
int[] numbers = new int[] { -2, 1, 6, 12 };
var mod3OrDefault = numbers.FirstOrDefault(n => n % 3 == 0);
var mod3 = numbers.First(n => n % 3 == 0);
Using FirstOrDefault you get the first item that matches the condition. If no items are found you’ll get the default value for that type. The default value depends on the data type:
| Data type | Default value |
| --------- | ------------- |
| int       | 0             |
| string    | null          |
| bool      | false         |
| object    | null          |
To know the default value for a specific type, you can use the default operator: for example, default(string) returns null.
So, coming back to FirstOrDefault, we have these two possible outcomes:
On the other hand, First throws an InvalidOperationException with the message “Sequence contains no matching element” if no items in the collection match the filter criterion:
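Both behaviors in one snippet (reusing the array from above):

```csharp
using System;
using System.Linq;

int[] numbers = new int[] { -2, 1, 6, 12 };

int mod3OrDefault = numbers.FirstOrDefault(n => n % 3 == 0); // 6: the first match
int mod7OrDefault = numbers.FirstOrDefault(n => n % 7 == 0); // 0: no match, so default(int)

int mod3 = numbers.First(n => n % 3 == 0); // 6: the first match
try
{
    int mod7 = numbers.First(n => n % 7 == 0); // no match
}
catch (InvalidOperationException ex)
{
    Console.WriteLine(ex.Message); // "Sequence contains no matching element"
}
```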
First vs Single

While First returns the first item that satisfies the condition, even if two or more items satisfy it, Single ensures that no more than one item matches that condition.

If two or more items pass the filter, an InvalidOperationException is thrown with the message “Sequence contains more than one matching element”.
int[] numbers = new int[] { -2, 1, 6, 12 };

numbers.First(n => n % 3 == 0);  // 6
numbers.Single(n => n % 3 == 0); // throws exception, because both 6 and 12 are accepted values
Both methods have their corresponding -OrDefault counterpart: SingleOrDefault returns the default value if no items are valid.
int[] numbers = new int[] { -2, 1, 6, 12 };

numbers.SingleOrDefault(n => n % 4 == 0); // 12
numbers.SingleOrDefault(n => n % 7 == 0); // 0, because no items are multiples of 7
numbers.SingleOrDefault(n => n % 3 == 0); // throws exception, because both 6 and 12 match
Any vs Count
Both Any and Count give you indications about the presence or absence of items for which the specified predicate returns True.
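A quick comparison (the predicate is arbitrary):

```csharp
using System;
using System.Linq;

int[] numbers = new int[] { -2, 1, 6, 12 };

bool anyEven = numbers.Any(n => n % 2 == 0);    // true: at least one even number exists
int evenCount = numbers.Count(n => n % 2 == 0); // 3: -2, 6, and 12

bool anyHuge = numbers.Any(n => n > 100);    // false
int hugeCount = numbers.Count(n => n > 100); // 0

Console.WriteLine($"{anyEven}, {evenCount}, {anyHuge}, {hugeCount}");
```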
In this article, we learned the differences between couples of LINQ methods.
Each of them has a purpose, and you should use the right one for each case.
❓ A question for you: talking about performance, which is more efficient: First or Single? And what about Count() == 0 vs Any()? Drop a message below if you know the answer! 📩
I hope you enjoyed this article! Let’s keep in touch on Twitter or on LinkedIn, if you want! 🤜🤛
Why buy a whole tool when you can build your own? Learn how the Type system works in .NET, and create your own minimal type analyser.
Analysing your code is helpful to get an idea of the overall quality. At the same time, having an automatic tool that identifies specific characteristics or performs some analysis for you can be useful.

Sure, there are many fantastic tools available, but sometimes a small utility class that you can build as needed and run without setting up a complex infrastructure is enough.

In this article, we are going to see how to navigate assemblies, classes, methods, and parameters to perform some custom analysis.
For this article, my code is structured into 3 Assemblies:
CommonClasses, a Class Library that contains some utility classes;
NetCoreScripts, a Class Library that contains the code we are going to execute;
ScriptsRunner, a Console Application that runs the scripts defined in the NetCoreScripts library.
The dependencies between the modules are shown below: ScriptsRunner depends on NetCoreScripts, and NetCoreScripts depends on CommonClasses.
In this article, we are going to write the examples in the NetCoreScripts class library, in a class named AssemblyAnalysis.
How to load an Assembly in C#, with different methods
The starting point to analyse an Assembly is, well, to have an Assembly.
So, in the Scripts Class Library (the middle one), I wrote:
var assembly = DefineAssembly();
In the DefineAssembly method we can choose the Assembly we are going to analyse.
In short, you can access the Assembly info of whichever class you know – if you can reference it directly, of course!
Load the current, the calling, and the executing Assembly
The Assembly class provides you with some methods that may look similar, but give you totally different info depending on how your code is structured.
Remember the ScriptsRunner –> NetCoreScripts –> CommonClasses sequence? To better explain how things work, let’s run the following examples in a method in the CommonClasses class library (the last one in the dependency chain).
var executing = System.Reflection.Assembly.GetExecutingAssembly();
var calling = System.Reflection.Assembly.GetCallingAssembly();
var entry = System.Reflection.Assembly.GetEntryAssembly();
Assembly.GetExecutingAssembly returns the Assembly that contains the actual code instructions (so, in short, the Assembly that actually contains the code). In this case, it’s the CommonClasses Assembly.
Assembly.GetCallingAssembly returns the caller Assembly, so the one that references the Executing Assembly. In this case, given that the CommonClasses library is referenced only by the NetCoreScripts library, well, we are getting info about the NetCoreScripts class library.
Assembly.GetEntryAssembly returns the info of the Assembly that is executing the whole application – so, the entry point. In our case, it’s the ScriptsRunner Console Application.
Deciding which one to choose is crucial, especially when you are going to distribute your libraries, for example, as NuGet packages. For sure, you’ll know the Executing Assembly. Most probably, depending on how the project is structured, you’ll also know the Calling Assembly. But almost certainly you won’t know the Entry Assembly.
| Method name          | Meaning                | In this example… |
| -------------------- | ---------------------- | ---------------- |
| GetExecutingAssembly | The current Assembly   | CommonClasses    |
| GetCallingAssembly   | The caller Assembly    | NetCoreScripts   |
| GetEntryAssembly     | The top-level executor | ScriptsRunner    |
How to retrieve classes of a given .NET Assembly
Now you have an Assembly to analyse. It’s time to load the classes belonging to your Assembly.
You can start with assembly.GetTypes(): this method returns all the types (in the form of a Type array) belonging to the Assembly.
For each Type you can access several properties, such as IsClass, IsPublic, IsAbstract, IsGenericType, IsEnum and so on. The full list of properties of a Type is available in the Microsoft documentation.
You may want to analyse public classes: therefore, you can do something like:
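For example (this mirrors the GetAllPublicTypes helper shown later in the article):

```csharp
using System;
using System.Linq;
using System.Reflection;

var assembly = Assembly.GetExecutingAssembly();

// keep only the public classes defined in the Assembly
var publicClasses = assembly.GetTypes()
    .Where(t => t.IsClass && t.IsPublic)
    .ToList();

foreach (Type type in publicClasses)
{
    Console.WriteLine(type.Name);
}
```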
If we have a look at its parameters, we will find the following values:
Bonus tip: Auto-properties act as Methods
Let’s focus a bit more on the properties of a class.
Consider this class:
public class User
{
    public string Name { get; set; }
}
There are no methods; only one public property.
But hey! It turns out that properties, under the hood, are treated as methods. In fact, you can find two methods, named get_Name and set_Name, that act as an access point to the Name property.
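You can verify it with a few lines of reflection:

```csharp
using System;
using System.Reflection;

public class User
{
    public string Name { get; set; }
}

public static class Demo
{
    public static void Main()
    {
        // DeclaredOnly excludes the methods inherited from object
        foreach (MethodInfo method in typeof(User).GetMethods(
            BindingFlags.Instance | BindingFlags.Public | BindingFlags.DeclaredOnly))
        {
            Console.WriteLine(method.Name); // prints get_Name and set_Name
        }
    }
}
```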
Further readings
Do you remember that exceptions are, in the end, Types?
And that, in the catch block, you can filter for exceptions of a specific type or with a specific condition?
From here, you can use all this info to build whatever you want. Personally, I used it to analyse my current project, checking how many methods accept more than N parameters as input, and which classes have the highest number of public methods.
In short, an example of a simple code analyser can be this one:
public void Execute()
{
    var assembly = DefineAssembly();
    var paramsInfo = AnalyzeAssembly(assembly);
    AnalyzeParameters(paramsInfo);
}

private static Assembly DefineAssembly()
    => Assembly.GetExecutingAssembly();

public static List<ParamsMethodInfo> AnalyzeAssembly(Assembly assembly)
{
    List<ParamsMethodInfo> all = new List<ParamsMethodInfo>();

    var types = GetAllPublicTypes(assembly);

    foreach (var type in types)
    {
        var publicMethods = GetPublicMethods(type);

        foreach (var method in publicMethods)
        {
            var parameters = method.GetParameters();
            if (parameters.Length > 0)
            {
                var f = parameters.First();
            }

            all.Add(new ParamsMethodInfo(
                assembly.GetName().Name,
                type.Name,
                method
            ));
        }
    }
    return all;
}

private static MethodInfo[] GetPublicMethods(Type type) =>
    type.GetMethods(BindingFlags.Instance | BindingFlags.Static | BindingFlags.Public | BindingFlags.DeclaredOnly);

private static List<Type> GetAllPublicTypes(Assembly assembly) => assembly.GetTypes()
    .Where(t => t.IsClass && t.IsPublic)
    .ToList();

public class ParamsMethodInfo(string AssemblyName, string ClassName, MethodInfo Method)
{
    public string MethodName => Method.Name;
    public ParameterInfo[] Parameters => Method.GetParameters();
}
And then, in the AnalyzeParameters, you can add your own logic.
As you can see, you don’t need to adopt complex tools to perform operations like this: just knowing that you can access the static details of each class and method can be enough (of course, it depends on the use!).
I hope you enjoyed this article! Let’s keep in touch on LinkedIn, Twitter or BlueSky! 🤜🤛
When you need to generate a sequence of numbers in ascending order, you can just use a while loop with an enumerator, or you can use Enumerable.Range.
This method, which you can find in the System.Linq namespace, allows you to generate a sequence of numbers by passing two parameters: the start number and the total numbers to add.
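For example:

```csharp
using System;
using System.Linq;

// 5 numbers, starting from 10: 10, 11, 12, 13, 14
var numbers = Enumerable.Range(start: 10, count: 5);

Console.WriteLine(string.Join(", ", numbers)); // 10, 11, 12, 13, 14
```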
But it will not work if the count parameter is negative: in fact, it will throw an ArgumentOutOfRangeException:
Enumerable.Range(start: 1, count: -23);
// Throws ArgumentOutOfRangeException
// with message "Specified argument was out of the range of valid values" (Parameter 'count')
⚠ Beware of overflows: it’s not a circular range, so if start + count - 1 exceeds int.MaxValue, you will get another ArgumentOutOfRangeException.
Notice that this pattern is not very efficient: you first have to build a collection with N integers to then generate a collection of N strings. If you care about performance, go with a simple while loop – if you need a quick and dirty solution, this other approach works just fine.
Further readings
There are lots of ways to achieve a similar result: another interesting one is by using the yield return statement:
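A sketch of the same idea built with yield return:

```csharp
using System;
using System.Collections.Generic;

static IEnumerable<int> GetNumbers(int start, int count)
{
    // lazily yields one number at a time, like Enumerable.Range does
    for (int i = 0; i < count; i++)
    {
        yield return start + i;
    }
}

Console.WriteLine(string.Join(", ", GetNumbers(10, 5))); // 10, 11, 12, 13, 14
```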
In this C# tip, we learned how to generate collections of numbers using LINQ.
This is an incredibly useful LINQ method, but you have to remember that the second parameter does not indicate the last value of the collection, rather it’s the length of the collection itself.
I hope you enjoyed this article! Let’s keep in touch on Twitter or on LinkedIn, if you want! 🤜🤛
In C#, nameof can be quite useful. But it has some drawbacks, if used the wrong way.
As per Microsoft’s definition,
A nameof expression produces the name of a variable, type, or member as the string constant.
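For instance (reconstructing the snippet the next paragraph refers to; the variable name comes from that paragraph):

```csharp
using System;

var items = "hello";
Console.WriteLine(nameof(items));
```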
that will print “items”, and not “hello”: this is because we are printing the name of the variable, items, and not its runtime value.
A real example I saw in my career
In some of the projects I’ve worked on during these years, I saw an odd approach that I highly recommend NOT to use: populate constants with the name of the constant itself:
const string User_Table = nameof(User_Table);
and then use the constant name to access stuff on external, independent systems, such as API endpoints or Databases:
const string User_Table = nameof(User_Table);

var users = db.GetAllFromTable(User_Table);
The reasons behind this, in my teammates’ opinion, are that:
It’s easier to write
It’s more performant: we’re using constants that are filled at compile time, not at runtime
You can just rename the constant if you need to access a new database table.
I do not agree with them: especially the third point is pretty problematic.
Why this approach should not be used
We are binding the data access to the name of a constant, and not to its value.
We could end up in big trouble: from one day to the next, the system might no longer be able to reach the User table, because the value it relies on no longer matches the table name.

How is that possible? It’s a constant, it can’t change! No: it’s a constant whose value changes if the constant name changes.
It can change for several reasons:
A developer, by mistake, renames the constant. For example, from User_Table to Users_Table.
An automatic tool (like a Linter) with wrong configurations updates the constants’ names: from User_Table to USER_TABLE.
New team styleguides are followed blindly: if the new rule is that “constants must not contain underscores” and you apply it everywhere, you’ll end up in trouble.
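To make the first failure mode concrete, here is a hedged sketch of the rename scenario (the table name User_Table comes from the snippet above):

```csharp
// before the rename: the constant's value is its own name
const string User_Table = nameof(User_Table);      // value: "User_Table"

// after a careless rename, the value silently changes too:
// const string Users_Table = nameof(Users_Table); // value: "Users_Table"
// every lookup of the real "User_Table" database table now fails at runtime
```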
To me, those are valid reasons not to use nameof to give a value to a constant.
How to overcome it
If this approach is present in your codebase and it’s too time-consuming to update it everywhere, not everything is lost.
You must absolutely do just one thing to prevent all the issues I listed above: add tests, and test on the actual value.
If you’re using Moq, for instance, you should test the database access we saw before as:
// initialize and run the method
[...]
// test for the Table name
_mockDb.Verify(db => db.GetAllFromTable("User_Table"));
Notice that here you must test against the actual name of the table. If you write something like
_mockDb.Verify(db => db.GetAllFromTable(DbAccessClass.User_Table));
// say that DbAccessClass is the name of the class that uses the data access shown above
you are comparing the constant against itself: the test will keep passing even after the constant gets renamed, so it protects you from none of the problems listed above.
Using Conventional Commits you can define a set of rules useful for writing meaningful commit messages. Using NPM. Yes, in a dotNET application!
Setting teams conventions is a crucial step to have the project prepared to live long and prosper 🖖
A good way to set some clarity is by enforcing rules on Git commit messages: you can require devs to specify the reason behind code changes, so that you can understand the history and the reason for each commit. Also, if you have well-crafted commit messages, Pull Requests become easier to understand, leading to better code.
Conventional Commits help you set such rules, and help you level up your commit history. In this article, we will learn how to add Conventional Commits in a .NET application.
Conventional Commits
Conventional Commits are a set of rules that help you write commit messages using a format that has multiple purposes:
they help developers understand the history of a git branch;
they help PR reviewers focus on the Pull Request by understanding the changes proposed by the developer;
using automated tools, they help versioning the application – this is useful when using Semantic Versioning;
they allow you to create automated Changelog files.
So, what does an average Conventional Commit look like?
There’s not just one way to specify such formats.
For example, you can specify that you’ve added a new feature (feat) to your APIs and describe it shortly:
feat(api): send an email to the customer
Or you can explain that you’ve fixed a bug (using fix) and add a full description of the scope of the commit.
fix: prevent racing condition
Introduce a request id and a reference to latest request. Dismiss
incoming responses other than from latest request.
There are several types of commits that you can support, such as:
feat, used when you add a new feature to the application;
fix, when you fix a bug;
docs, used to add or improve documentation to the project;
refactor, used – well – after some refactoring;
test, when adding tests or fixing broken ones
All of this prevents developers from writing commit messages such as “something”, “fixed bug”, or “some stuff”.
So, now, it’s time to include Conventional Commits in our .NET applications.
What is our goal?
For the sake of this article, I’m going to add Conventional Commits in a .NET 7 API project. The same approach works for all the other types of .NET projects: as long as you have a Solution to work with, I’ve got you covered.
Well, actually, the following approach can be used by every project, not only those based on .NET: the reason I wrote this article is that many dotnet developers are not confident in using and configuring NPM packages, so my personal goal with this article is to give you the basics of such tools and configurations.
For the sake of this article, I’m going to explain how to add Conventional Commits with a custom format.
Say that you want to associate each commit to a Jira task. As you may know, Jira tasks have an ID composed of a project prefix and a numeric Id. So, for a project named FOO, you can have a task with Id FOO-123.
The goal of this article is, then, to force developers to create Commit messages such as
feat/FOO-123: commit short description
or, if you want to add a full description of the commit,
feat/FOO-123: commit short description
Here we can have the full description of the task.
And it can also be on multiple lines.
We are going to work at Solution level; you don’t even need an IDE: just Notepad and a Terminal are fine. Before continuing, open your solution folder and a Console pointing to the same folder.
Install NPM in your folder
Yes, even if the main application is built with .NET, we are gonna need some NPM packages to set up our Conventional Commits.
First things first: head to the command line and run
npm init
After answering a few questions (Package name? Licence? Author?), you will have a brand new package.json file.
Now we can move on and add a GIT Hook.
Husky: integrate GIT Hooks to improve commit messages
To use conventional commits we have to “intercept” our GIT actions: we will need to run a specific tool right after having written a commit message; we have to validate it and, in case it does not follow the rules we’ve set, abort the operations.
We will use Husky 🔗: it’s a facility package that allows us to do stuff with our commit messages and, in general, integrate work with Git Hooks.
Head to the terminal, and install Husky by running
npm install husky --save-dev
This command will add a dependency to Husky, as you can see from the new item listed in the package.json file:
"devDependencies": {
"husky": "^8.0.3"}
Finally, to enable Git Hooks, we have to run
npm pkg set scripts.prepare="husky install"
and notice the new section in the package.json.
"scripts": {
"prepare": "husky install"},
Even with just these simple steps, we can see a first result: if you run git commit you will see a text editor open. Here you can write your commit message.
Save and close the file. The commit message has been applied, as you can see by running git log --oneline.
CommitLint: a package to validate Commit messages
We need to install and configure CommitLint, the NPM package that does the dirty job.
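The install and configuration snippet is missing from this extract. The usual setup (package names as per the CommitLint docs) is to install the CLI and the conventional ruleset with npm install --save-dev @commitlint/cli @commitlint/config-conventional, and then create a commitlint.config.js in the solution folder:

```shell
# create commitlint.config.js pointing CommitLint at the default conventions
echo "module.exports = { extends: ['@commitlint/config-conventional'] };" > commitlint.config.js
```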
This snippet tells Commitlint to use the default conventions, such as feat(api): send an email.
To test the default rules without issuing any real commit, we have to install the previous packages globally (npm install -g @commitlint/cli @commitlint/config-conventional), so that they can be accessed outside the scope of the Git hooks. We can then run
echo 'foo: a message with wrong format' | commitlint
and see the error messages
At this point, we still don’t have CommitLint ready to validate our commit messages. In fact, if you try to commit your changes with an invalid message, you will see that the message passes the checks (because there are no checks!), and your changes get committed.
We need to do some more steps.
First of all, we have to create a folder named .husky that will be used by Husky to understand which commands are supported.
Notice: you have to keep the dot at the beginning of the folder name: it’s .husky, not husky.
Then we need to add a new file within that folder to tell Husky that it needs to run CommitLint.
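The content of that file is not shown in this extract; with Husky v8 the hook is conventionally a .husky/commit-msg script like the following (a sketch — the husky.sh bootstrap line follows Husky v8's layout):

```shell
# create the commit-msg hook by hand; Git runs it after you write a commit message
mkdir -p .husky
cat > .husky/commit-msg <<'EOF'
#!/usr/bin/env sh
. "$(dirname -- "$0")/_/husky.sh"

# validate the message file that Git passes as the first argument
npx --no -- commitlint --edit "$1"
EOF
chmod +x .husky/commit-msg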
The first value is a number that expresses the severity of the rule:
0: the rule is disabled;
1: show a warning;
2: it’s an error.
The second value defines if the rule must be applied (using always), or if it must be reversed (using never).
The third value provides generic arguments for the related rule. For example, "header-max-length": [2, "always", 50] says that the header must always be at most 50 characters long.
Now we could try the rules out — but not before helping devs with a simple trick! As you remember, when you run git commit without specifying the message, an editor appears with some hints about the structure of the commit message.
You can set your own text with hints about the structure of the messages.
You just need to create a file named .gitmessage and put some text in it, such as:
# <type>/FOO-<jira-ticket-id>: <title>
# YOU CAN WRITE WHATEVER YOU WANT HERE
# allowed types: feat | fix | hot | chore
# Example:
#
# feat/FOO-01: first commit
#
# No more than 50 chars. #### 50 chars is here: #
# Remember blank line between title and body.
# Body: Explain *what* and *why* (not *how*)
# Wrap at 72 chars. ################################## which is here: #
#
Now, we have to tell Git to use that file as a template:
git config commit.template ./.gitmessage
and.. TA-DAH! Here’s your message template!
Putting all together
Finally, we have everything in place: git hooks, commit template, and template hints.
If we run git commit, the editor opens with the template we've defined before. Now, type A message with wrong format, save, close the editor, and you'll see that the commit is aborted.
If you run git commit again, the editor appears once more; type feat/FOO-123: a valid message, and you'll see it pass the checks.
Further readings
Conventional Commits is a project that lists a set of specifications for writing such good messages. You can read more here:
This new kind of commit message works well with Semantic Versioning, which can be useful to publish package versions with a meaningful version number, such as 2.0.1: 🔗 Semantic Versioning
And, to close the loop, Semantic Versioning can be easily integrated with CI pipelines. If you use .NET APIs and want to deploy your APIs to Azure using GitHub Actions, you can start from this article and add SemVer: 🔗 How to deploy .NET APIs on Azure using GitHub actions
Wrapping up
In this article, we’ve learned what are Conventional Commits, how to add them using Husky and NPM, and how to configure our folder to use such tools.
The steps we’ve seen before work for every type of application, even not related to dotnet.
So, to recap everything, we have to:
Install NPM: npm init;
Install Husky: npm install husky --save-dev;
Enable Husky: npm pkg set scripts.prepare="husky install";
Install and configure CommitLint;
Create the .husky folder and a commit-msg hook that runs CommitLint;
Create the .gitmessage template and enable it: git config commit.template ./.gitmessage.
By using list patterns on an array or a list you can check whether it contains the values you expect in specific positions.
With C# 11 we have an interesting new feature: list patterns.
You can, in fact, use the is operator to check if an array has the exact form that you expect.
Take this method as an example.
Introducing List Patterns
string YeahOrError(int[] s)
{
if (s is [1, 2, 3]) return "YEAH";
return "error!";
}
As you can imagine, the previous method returns YEAH if the input array is exactly [1, 2, 3]. You can, in fact, try it by running some tests:
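The original test snippet is not included in this extract; hedged examples of what such checks could look like (xUnit-style Assert assumed):

```csharp
Assert.Equal("YEAH", YeahOrError(new[] { 1, 2, 3 }));      // exact match
Assert.Equal("error!", YeahOrError(new[] { 1, 2 }));        // too short
Assert.Equal("error!", YeahOrError(new[] { 1, 2, 3, 4 }));  // too long
Assert.Equal("error!", YeahOrError(new[] { 3, 2, 1 }));     // wrong order
```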
You can also assign one or more of such values to a variable, and discard all the others:
string SelfOrMessageWithVar(int[] s)
{
if (s is [_, 2, int third]) return "YEAH_" + third;
return "error!";
}
The previous condition, s is [_, 2, int third], returns true only if the array has 3 elements, and the second one is “2”. Then, it stores the third element in a new variable, int third, and uses it to build the returned string.
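A few usage examples to make the capture behavior concrete (a sketch based on the method above):

```csharp
SelfOrMessageWithVar(new[] { 1, 2, 3 }); // "YEAH_3": first element discarded, second matches 2, third captured
SelfOrMessageWithVar(new[] { 9, 2, 5 }); // "YEAH_5"
SelfOrMessageWithVar(new[] { 1, 3, 5 }); // "error!": second element is not 2
SelfOrMessageWithVar(new[] { 2 });       // "error!": wrong length
```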