You already know it: using meaningful names for variables, methods, and classes allows you to write more readable and maintainable code.
It may happen that a good name for your business entity matches one of the reserved keywords in C#.
What to do, now?
There are tons of reserved keywords in C#. Some of them are:
int
interface
else
null
short
event
params
Some of these names may be a good fit for describing your domain objects or your variables.
Talking about variables, have a look at this example:
var eventList = GetFootballEvents();
foreach (var event in eventList)
{
    // do something
}
That snippet will not work, since event is a reserved keyword.
You can solve this issue in 3 ways.
You can use a synonym, such as action:
var eventList = GetFootballEvents();
foreach (var action in eventList)
{
    // do something
}
But, you know, it doesn’t fully match the original meaning.
You can use the my prefix, like this:
var eventList = GetFootballEvents();
foreach (var myEvent in eventList)
{
    // do something
}
But… does it make sense? Is it really your event?
The third way is by using the @ prefix:
var eventList = GetFootballEvents();
foreach (var @event in eventList)
{
    // do something
}
That way, the code is still readable (even though, I admit, that @ is a bit weird to see around the code).
Of course, the same works for every keyword, like @int, @class, @public, and so on.
Further readings
If you are interested in a list of reserved keywords in C#, have a look at this article:
Building APIs with .NET is easy. Deploying them on Azure is easy too, with GitHub Actions!
With Continuous Delivery (CD), you can deploy your code in a fast-paced and stable way.
To deploy applications, you’ll need workflows that run and automate the process. In that way, you don’t have to perform repetitive tasks and the whole process becomes less error-prone.
In this article, we will learn how to implement CD pipelines using GitHub Actions. In particular, we will focus on the case of a .NET API application that will be deployed on Azure.
Create a .NET API project
Since the focus of this article is on the deployment part, we won’t create complex APIs. Just a simple Hello World is enough.
To do that, we’re gonna use dotnet Minimal API – a way to create APIs without scaffolding lots of files and configurations.
Our API, the BooksAPI, has a single endpoint: /, the root, which simply returns “Hello World!”.
All our code is stored in the Program file:
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();
app.UseHttpsRedirection();
app.MapGet("/", () => "Hello World!");
app.Run();
Nothing fancy: run the application locally, and navigate to the root. You will see the Hello World message.
Lastly, put your code on GitHub: initialize a repository and publish it on GitHub – it can either be a public or a private repository.
Create an App Service on Azure
Now, to deploy an application, we need to define its destination. We’re going to deploy it on Azure, so you need an Azure account before moving on.
Open the Azure Portal, navigate to the App Service section, and create a new one.
Configure it as you wish, and then proceed until you have it up and running.
Once everything is done, you should have something like this:
Now the application is ready to be used: we now need to deploy our code here.
Generate the GitHub Action YAML file for deploying .NET APIs on Azure
It’s time to create our Continuous Delivery pipeline.
Luckily, GitHub already provides lots of templates for GitHub Actions. We will need one specific for our .NET APIs.
On GitHub, navigate to your repository, head to the Actions menu, and select New workflow.
You will see several predefined actions that allow you to do stuff with your repository. We are now interested in the one called “Deploy a .NET Core app to an Azure Web App”:
Click on “Configure” and you will see a template. Read the instructions carefully, as they will guide you to the correct configuration of the GitHub action.
In particular, you will have to update the environment variables specified in this section:
env:
  AZURE_WEBAPP_NAME: your-app-name # set this to the name of your Azure Web App
  AZURE_WEBAPP_PACKAGE_PATH: "."   # set this to the path to your web app project, defaults to the repository root
  DOTNET_VERSION: "5"              # set this to the .NET Core version to use
Clearly, AZURE_WEBAPP_NAME must match the name you’ve defined on Azure, while DOTNET_VERSION must match the version you’re using to create your dotnet APIs.
For my specific project, I’ve replaced that section with
env:
  AZURE_WEBAPP_NAME: BooksAPI<myName> # set this to the name of your Azure Web App
  AZURE_WEBAPP_PACKAGE_PATH: "."      # set this to the path to your web app project, defaults to the repository root
  DOTNET_VERSION: "6.0"               # set this to the .NET Core version to use
🟧 DOTNET_VERSION also requires the minor version of dotnet. Setting 6 will not work: you need to specify 6.0. 🟧
Now you can save your YAML file in your repository: it will be saved under ./.github/workflows.
So, as a reference, here’s the full YAML file I’m using to deploy my APIs:
name: Build and deploy ASP.Net Core app to an Azure Web App

env:
  AZURE_WEBAPP_NAME: BooksAPI<myName>
  AZURE_WEBAPP_PACKAGE_PATH: "."
  DOTNET_VERSION: "6.0"

on:
  push:
    branches: ["master"]
  workflow_dispatch:

permissions:
  contents: read

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3

      - name: Set up .NET Core
        uses: actions/setup-dotnet@v2
        with:
          dotnet-version: ${{ env.DOTNET_VERSION }}

      - name: Set up dependency caching for faster builds
        uses: actions/cache@v3
        with:
          path: ~/.nuget/packages
          key: ${{ runner.os }}-nuget-${{ hashFiles('**/packages.lock.json') }}
          restore-keys: |
            ${{ runner.os }}-nuget-

      - name: Build with dotnet
        run: dotnet build --configuration Release

      - name: dotnet publish
        run: dotnet publish -c Release -o ${{env.DOTNET_ROOT}}/myapp

      - name: Upload artifact for deployment job
        uses: actions/upload-artifact@v3
        with:
          name: .net-app
          path: ${{env.DOTNET_ROOT}}/myapp

  deploy:
    permissions:
      contents: none
    runs-on: ubuntu-latest
    needs: build
    environment:
      name: "Development"
      url: ${{ steps.deploy-to-webapp.outputs.webapp-url }}

    steps:
      - name: Download artifact from build job
        uses: actions/download-artifact@v3
        with:
          name: .net-app

      - name: Deploy to Azure Web App
        id: deploy-to-webapp
        uses: azure/webapps-deploy@v2
        with:
          app-name: ${{ env.AZURE_WEBAPP_NAME }}
          publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
          package: ${{ env.AZURE_WEBAPP_PACKAGE_PATH }}
As you can see, we have two distinct jobs: build and deploy.
In the build job, we check out our code, restore the NuGet dependencies, build the project, pack it, and store the final result as an artifact.
In the deploy job, we retrieve the newly created artifact and publish it on Azure.
Store the Publish profile as GitHub Secret
As you can see in the instructions of the workflow file, you have to
Create a secret in your repository named AZURE_WEBAPP_PUBLISH_PROFILE, paste the publish profile contents as the value of the secret.
That Create a secret in your repository named AZURE_WEBAPP_PUBLISH_PROFILE statement was not clear to me: I thought you had to create that key within your .NET project. Turns out you can create secrets related to repositories on GitHub (so, it’s language-agnostic).
A Publish profile is a file that contains information and settings used to deploy applications to Azure. It’s nothing but an XML file that lists the possible ways to deploy your application, such as FTP, Web Deploy, Zip Deploy, and so on.
We have to get our publish profile and save it into GitHub secrets.
To retrieve the Publish profile, head to the Azure App Service page and click Get publish profile to download the file.
Now, get back to GitHub and head to Settings > Security > Secrets > Actions.
Here you can create a new secret related to your repository.
Create a new one, name it AZURE_WEBAPP_PUBLISH_PROFILE, and paste the content of the Publish profile file you’ve just downloaded.
You will then see something like this:
Notice that the secret name must be AZURE_WEBAPP_PUBLISH_PROFILE. That constraint is set because we are accessing the Publish profile by key:
- name: Deploy to Azure Web App
  id: deploy-to-webapp
  uses: azure/webapps-deploy@v2
  with:
    app-name: ${{ env.AZURE_WEBAPP_NAME }}
    publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
    package: ${{ env.AZURE_WEBAPP_PACKAGE_PATH }}
In particular, notice the publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }} part.
Clearly, the two names must match: nothing stops you from changing the name of the secret in both the YAML file and the GitHub Secret page.
Final result
It’s time to see the final result.
Update the application code (I’ve slightly modified the Hello world message), and push your changes to GitHub.
Under the Actions tab, you will see your CD pipeline run.
Once it’s completed, you can head to your application root and see the final result.
Further readings
Automating repetitive tasks allows you to perform more actions with fewer errors. Generally speaking, the more stuff you can automate, the better.
My own blog heavily relies on automation: scaffolding content, tracking ideas, and publishing online…
If you want to peek at what I do, here are my little secrets:
In this article, we’ve only built and deployed our application. We can do more: run tests and keep track of code coverage. If you want to learn how you can do it using Azure DevOps, here we go:
I have to admit that I struggled a lot in setting up the CD pipeline. I was using the one proposed by default on Visual Studio – but it didn’t work.
Using the template found on GitHub worked almost instantly – I just had to figure out what they meant by repository secrets.
Now we have everything in place. Since the workflow is stored in a text file within my repository, if I have to create and deploy a new API project I can simply do that by copying that file and fixing the references.
Mixed levels of abstraction make the code harder to understand.
At first sight, the reader should be able to understand what the code does without worrying about the details of the operations.
Take this code snippet as an example:
public void PrintPriceWithDiscountForProduct(string productId)
{
    var product = sqlRepository.FindProduct(productId);
    var withDiscount = product.Price * 0.9;
    Console.WriteLine("The final price is " + withDiscount);
}
We are mixing multiple levels of operations. In the same method, we are
integrating with an external service
performing algebraic operations
concatenating strings
printing using .NET Console class
Some operations have a high level of abstraction (call an external service, I don’t care how) while others are very low-level (calculate the price discount using the formula ProductPrice*0.9).
Here the readers lose focus on the overall meaning of the method because they’re distracted by the actual implementation.
When I’m talking about abstraction, I mean how high-level an operation is: the more we stay away from bit-to-bit and mathematical operations, the more our code is abstract.
Cleaner code should let the reader understand what’s going on without the need of understanding the details: if they’re interested in the details, they can just read the internals of the methods.
We can improve the previous method by splitting it into smaller methods:
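As a reference, here’s a minimal sketch of what the refactored version might look like – not necessarily the article’s exact code, but built around the GetProduct, CalculateDiscountedPrice, and PrintPrice helpers mentioned below (a Product type is assumed):

public void PrintPriceWithDiscountForProduct(string productId)
{
    var product = GetProduct(productId);
    var discountedPrice = CalculateDiscountedPrice(product);
    PrintPrice(discountedPrice);
}

private Product GetProduct(string productId)
    => sqlRepository.FindProduct(productId);

private double CalculateDiscountedPrice(Product product)
    => product.Price * 0.9;

private void PrintPrice(double price)
    => Console.WriteLine("The final price is " + price);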
Here you can see the different levels of abstraction: the operations within PrintPriceWithDiscountForProduct are now all at a coherent level – they just tell you what steps the method performs – while each smaller method describes a single operation at a high level, without exposing its internal details.
Yes, now the code is much longer. But we have gained some interesting advantages:
readers can focus on the “what” before getting to the “how”;
we have more reusable code (we can reuse GetProduct, CalculateDiscountedPrice, and PrintPrice in other methods);
if an exception is thrown, we can easily understand where it happened, because we have more information on the stack trace.
In a microservices architecture, an API Gateway hides your real endpoints. We will create one using Azure API Management
If you’re building an application that exposes several services you might not want to expose them on different hosts. Consumers will have a hard time configuring their application with all the different hostnames, and you will be forced to maintain the same URLs even if you need to move to other platforms or, for instance, you want to transform a REST endpoint into an Azure Function.
In this case, you should mask the real endpoints beneath a facade: maybe… an API Gateway? 🙂
In this article, we will learn how to configure Azure API Management (from now on: APIM) service to create an API Gateway and “hide” our real services.
Demo: publish .NET API services and locate the OpenAPI definition
For the sake of this article, we will work with 2 API services: BooksService and VideosService.
They are both .NET 6 APIs, deployed on Azure using GitHub Actions (using the steps I described in a previous article).
Both services expose their Swagger pages and a bunch of endpoints that we’re gonna hide behind Azure APIM.
How to create Azure API Management (APIM) Service from Azure Portal
Now, we want to hide their real endpoints. The clients will then only know about the existence of the API Gateway, and not of the two separate API services:
It’s time to create our APIM resource. 👷‍♂️
Head to the Azure Portal, and create a new API Management instance. I suggest reading the short overview of the functionalities provided by Azure API Management services as listed in the screenshot below.
The wizard will ask you for some info, such as the resource name, the region, and an email used to send communications (honestly speaking, I still haven’t figured out why they’re asking for your email).
Fill in all the fields, pick your preferred pricing tier (mine is Developer: it doesn’t have an SLA and is quite cheap), and then proceed with the service creation.
After several minutes (it took 50 minutes – fifty!💢 – to scaffold my instance), you will have your instance ready to be used.
We are now ready to add our APIs and expose them to our clients.
How to add APIs to Azure API Management using Swagger definition (OpenAPI)
As we’ve seen in a previous article, Swagger creates a JSON file that describes the operations available in your APIs, as well as the object structures accepted as input and returned as output.
Let me use as an example the Books API: once that API project is deployed on the cloud (it’s not mandatory to use Azure: it will work the same using other cloud vendors), you will see the Swagger UI and the related JSON definition.
We have 3 endpoints, /, /echo, and /books; those endpoints are described in the swagger.json file linked in the Swagger page; put that link aside: we will use it soon.
Finally, we can add our Books APIs to our Azure API Management service! Head to the resource on Azure, locate the APIs menu item on the left panel, and create a new API definition using OpenAPI (which is the standard used by Swagger to create its UI).
You will see a form that allows you to create new resources from OpenAPI specifications.
Paste here the link to the swagger.json file you located before, populate the required fields and, if you want, add a prefix to identify these endpoints: I chose MyBooks.
You will then see your APIs appear in the panel shown below. It is composed of different parts:
The list of services exposed. In the screenshot below, BooksAPI, Echo API, and VideosAPI;
The list of endpoints exposed for each service: here, BooksAPI exposes endpoints at /, /echo, and /books;
A list of policies that are applied to the inbound requests before hitting the real endpoint;
The real endpoint used when calling the facade exposed by APIM;
A list of policies applied to the outbound requests after the origin has processed the requests.
For now, we will ignore both Inbound and Outbound processing, as they will be the topic of a future article.
Consuming APIs exposed on the API Gateway
We’re ready to go! Head back to the Azure API Management service dashboard and locate the URL of the API Gateway under Custom domains > Gateway URL.
This will be the root URL that our clients will use.
We can then access Books API and Videos API both on the Origin and the Gateway (we’re doing it just for demonstrating that things are working; clients will only use the APIs exposed by the API Gateway).
The Videos API maintains the exact same structure, mapping the endpoints as they are defined in Origin.
On the contrary, to access the Books APIs we have to access the /mybooks path (because we defined it a few steps ago when we imported the BooksAPI from OpenAPI definition: it’s the API Url Suffix field), as shown below:
Further readings
As usual, a bunch of interesting readings 📚
In this article, we’ve only scratched the surface of Azure API Management. There’s a lot more – and you can read about it on the Microsoft Docs website:
To integrate Azure APIM, we used two simple dotNET 6 Web APIs deployed on Azure. If you wanna know how to set up GitHub Actions to build and deploy dotNET APIs, I recently published an article on that topic.
This can be just the beginning of a long journey; APIM allows you to highly customize your API Gateway by defining API access by user role, creating API documentation using custom templates and themes, and a lot of different stuff.
There may be times when you need to process a specific task on a timely basis, such as polling an endpoint to look for updates or refreshing a Refresh Token.
If you need infinite processing, you can pick two roads: the obvious one or the better one.
For instance, you can use an infinite loop and put a Sleep command to delay the execution of the next task:
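The original snippet is not reproduced here, but a minimal sketch of that obvious approach – an endless loop with a blocking sleep, assuming a .NET 6+ console app with implicit usings and a hypothetical DoSomething task – could be:

while (true)
{
    DoSomething();      // the task to repeat (hypothetical method)
    Thread.Sleep(5000); // block the thread for 5 seconds before the next run
}

static void DoSomething() => Console.WriteLine("Doing the periodic work...");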
The Timer constructor accepts an interval as input (a double value that represents the interval in milliseconds), whose default value is 100.
This class implements IDisposable: if you’re using it as a dependency of another component that must be Disposed, don’t forget to call Dispose on that Timer.
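As a sketch (not the article’s exact snippet), using System.Timers.Timer with a 5-second interval might look like this:

using System.Timers;

using var timer = new System.Timers.Timer(5000); // interval in milliseconds; the parameterless constructor defaults to 100
timer.Elapsed += (sender, e) => Console.WriteLine($"Tick at {e.SignalTime}");
timer.AutoReset = true; // keep raising Elapsed, not just once
timer.Start();

Console.ReadLine(); // keep the app alive; the timer is disposed when the using scope ends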
Note: use this only for synchronous tasks: there are other kinds of Timers that you can use for asynchronous operations, such as PeriodicTimer, which also can be stopped by canceling a CancellationToken.
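For the asynchronous case, a minimal PeriodicTimer sketch (again, just an illustration, assuming .NET 6 or later) could be:

using var cts = new CancellationTokenSource();
using var timer = new PeriodicTimer(TimeSpan.FromSeconds(5));

try
{
    while (await timer.WaitForNextTickAsync(cts.Token))
    {
        Console.WriteLine("Doing the periodic work..."); // replace with your async operation
    }
}
catch (OperationCanceledException)
{
    // the CancellationToken was canceled: the loop stops here
}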
“C# 11 and .NET 7 – Modern Cross-Platform Development Fundamentals” is a HUGE book – ~750 pages – that guides readers from the very basics of C# and dotnet to advanced topics and approaches.
This book starts from the very beginning, explaining the history of C# and .NET, then moving to C# syntax and exploring OOP topics.
If you already have some experience with C#, you might be tempted to skip those chapters. Don’t skip them! Yes, they’re oriented to newbies, but you’ll find some gems that you might find interesting or that you might have ignored before.
Then, things get really interesting: some of my favourite topics were:
how to build and distribute packages;
how to publish Console Apps;
Entity Framework (which I used in the past, before ASP.NET Core, so it was an excellent way to see how things evolved);
Blazor
What I liked
the content is well explained;
you have access to the code examples to follow along (also, the author explains how to install and configure the stuff necessary to follow along, such as SQLite when talking about Entity Framework)
it also teaches you how to use Visual Studio
What I did not like
in the printed version, some images are not that readable
experienced developers might find some chapters boring (well, they’re not the target audience of the book, so it makes sense 🤷‍♂️)
A PriorityQueue represents a collection of items that have a value and a priority. Now this data structure is built-in in dotNET!
Starting from .NET 6 and C# 10, we finally have built-in support for PriorityQueues 🥳
A PriorityQueue is a collection of items that have a value and a priority; as you can imagine, they act as a queue: the main operations are “add an item to the queue”, called Enqueue, and “remove an item from the queue”, named Dequeue. The main difference from a simple Queue is that on dequeue, the item with lowest priority is removed.
In this article, we’re gonna use a PriorityQueue and wrap it into a custom class to solve one of its design issues (which I hope will be addressed in a future release of dotNET).
Welcoming Priority Queues in .NET
Defining a priority queue is straightforward: you just have to declare it specifying the type of items and the type of priority.
So, if you need a collection of Child items, and you want to use int as a priority type, you can define it as
PriorityQueue<Child, int> pq = new PriorityQueue<Child, int>();
You can retrieve the item at the top of the queue by calling Peek(), if you just want to look at the first item without removing it from the queue:
Child child3 = BuildChild3();
Child child2 = BuildChild2();
Child child1 = BuildChild1();
queue.Enqueue(child3, 3);
queue.Enqueue(child1, 1);
queue.Enqueue(child2, 2);
// queue.Count = 3

Child first = queue.Peek();
// first will be child1, because its priority is 1
// queue.Count = 3, because we did not remove the item on top
or Dequeue if you want to retrieve it while removing it from the queue:
Child child3 = BuildChild3();
Child child2 = BuildChild2();
Child child1 = BuildChild1();
queue.Enqueue(child3, 3);
queue.Enqueue(child1, 1);
queue.Enqueue(child2, 2);
// queue.Count = 3

Child first = queue.Dequeue();
// first will be child1, because its priority is 1
// queue.Count = 2, because we removed the item with the lower priority
This is the essence of a Priority Queue: insert items, give them a priority, then remove them starting from the one with lower priority.
Creating a Wrapper to automatically handle priority in Priority Queues
There’s a problem with this definition: you have to manually specify the priority of each item.
I don’t like it that much: I’d like to automatically assign each item a priority. So we have to wrap it in another class.
Since we’re near Christmas, and this article is part of the C# Advent 2022, let’s use an XMAS-themed example: a Christmas list used by Santa to handle gifts for children.
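The Child class itself is not shown in this excerpt; a minimal, purely hypothetical definition that is enough to follow the examples could be:

public class Child
{
    public string Name { get; set; }
    public int Age { get; set; }
}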
Now we can create a Priority Queue of type <Child, int>:
PriorityQueue<Child, int> pq = new PriorityQueue<Child, int>();
And wrap it all within a ChristmasList class:
public class ChristmasList
{
    private readonly PriorityQueue<Child, int> queue;

    public ChristmasList()
    {
        queue = new PriorityQueue<Child, int>();
    }

    public void Add(Child child)
    {
        int priority = // ??;
        queue.Enqueue(child, priority);
    }

    public Child Get()
    {
        return queue.Dequeue();
    }
}
A question for you: what happens when we call the Get method on an empty queue? What should we do instead? Drop a message below! 📩
We need to define a way to assign each child a priority.
Define priority as private behavior
The easiest way is to calculate the priority within the Add method: define a function that accepts a Child and returns an int, and then pass that int value to the Enqueue method.
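As a sketch of this first approach (the GetPriority rule here is just a hypothetical example based on the child’s age):

public void Add(Child child)
{
    int priority = GetPriority(child);
    queue.Enqueue(child, priority);
}

private int GetPriority(Child child)
    => child.Age; // hypothetical rule: a lower value means the child is dequeued first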
This approach is useful because you’re encapsulating the behavior in the ChristmasList class, but it has the downside of not being extensible: you cannot use different priority algorithms in different places of your application. On the other hand, GetPriority is a private operation within the ChristmasList class, so it can be fine for our example.
Pass priority calculation from outside
We can then pass a Func<Child, int> in the ChristmasList constructor, centralizing the priority definition and giving the caller the responsibility to define it:
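A sketch of what that might look like (keeping the _priorityCalculation field name used in the snippet further below):

public class ChristmasList
{
    private readonly PriorityQueue<Child, int> queue;
    private readonly Func<Child, int> _priorityCalculation;

    public ChristmasList(Func<Child, int> priorityCalculation)
    {
        queue = new PriorityQueue<Child, int>();
        _priorityCalculation = priorityCalculation;
    }

    public void Add(Child child)
    {
        int priority = _priorityCalculation(child);
        queue.Enqueue(child, priority);
    }

    public Child Get()
    {
        return queue.Dequeue();
    }
}

// the caller now decides how priorities are calculated:
var christmasList = new ChristmasList(child => child.Age);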
This implementation presents the opposite problems and solutions we saw in the previous example.
What I’d like to see in the future
This is a personal thought: it’d be great if we had a slightly different definition of PriorityQueue to automate the priority definition.
One idea could be to add in the constructor a parameter that we can use to calculate the priority, just to avoid specifying it explicitly. So, I’d expect that the current definition of the constructor and of the Enqueue method change from this:
PriorityQueue<Child, int> pq = new PriorityQueue<Child, int>();
int priority = _priorityCalculation(child);
queue.Enqueue(child, priority);
to this:
PriorityQueue<Child, int> pq = new PriorityQueue<Child, int>(_priorityCalculation);
queue.Enqueue(child);
It’s not perfect, and it raises some new problems.
Another way could be to force the item type to implement an interface that exposes a way to retrieve its priority, such as
public interface IHavePriority<T>
{
    public T GetPriority();
}

public class Child : IHavePriority<int> { }
Again, this approach is not perfect but can be helpful.
Talking about its design, which approach would you suggest, and why?
Further readings
As usual, the best way to learn about something is by reading its official documentation:
PriorityQueue is a good-to-know functionality that is now out-of-the-box in dotNET. Do you like its design? Have you used another library to achieve the same result? How do they differ?
Exposing Swagger UI is a good way to help developers consume your APIs. But don’t be boring: customize your UI with some fancy CSS
Brace yourself, Christmas is coming! 🎅
If you want to add a more festive look to your Swagger UI, it’s just a matter of creating a CSS file and injecting it.
You should create a custom CSS for your Swagger endpoints, especially if you are exposing them outside your company: if your company has a recognizable color palette, using it in your Swagger pages can make your brand stand out.
In this article, we will learn how to inject a CSS file in the Swagger UI generated using .NET Minimal APIs.
How to add Swagger in your .NET Minimal APIs
There are plenty of tutorials about how to add Swagger to your APIs. I wrote some too, where I explained how every configuration impacts what you see in the UI.
That article was targeting older dotNET versions without Minimal APIs. Now everything’s easier.
When you create your API project, Visual Studio asks you if you want to add OpenAPI support (aka Swagger). By adding it, you will have everything in place to get started with Swagger.
The key parts are builder.Services.AddEndpointsApiExplorer(), builder.Services.AddSwaggerGen(), app.UseSwagger(), app.UseSwaggerUI() and WithOpenApi(). Do you know what those methods do? If so, drop a comment below! 📩
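As a reference, here’s a sketch of how those pieces typically fit together in a Minimal API Program file – based on the default template, not necessarily identical to the article’s project:

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();

var app = builder.Build();

app.UseSwagger();
app.UseSwaggerUI();

app.MapGet("/", () => "Hello World!")
   .WithOpenApi(); // adds OpenAPI metadata for this endpoint

app.Run();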
Now, if we run our application, we will see a UI similar to the one below.
That’s a basic UI. Quite boring, uh? Let’s add some style!
Create the CSS file for Swagger theming
All the static assets must be stored within the wwwroot folder. It does not exist by default, so you have to create it manually. Click on the API project, add a new folder, and name it “wwwroot”. Since it’s a special folder, by default Visual Studio will show it with a special icon (it’s a sort of blue world, similar to 🌐).
Now you can add all the folders and static resources needed.
I’ve created a single CSS file under /wwwroot/assets/css/xmas-style.css. Of course, name it as you wish – as long as it is within the wwwroot folder, it’s fine.
the element selectors are taken directly from the Swagger UI – you’ll need a bit of reverse-engineering skills: just open the Browser Console and find the elements you want to update;
if the element already has the rule you want to override, you have to add the !important CSS flag; otherwise, your code won’t affect the UI;
you can add assets from other folders: I’ve added background-image: url("../images/snowflakes.webp"); to the body style. That image is, as you can imagine, under the wwwroot folder we created before.
Just as a recap, here’s my project structure:
Of course, it’s not enough: we have to tell Swagger to take that file into consideration.
How to inject a CSS file in Swagger UI
This part is quite simple: you have to update the UseSwaggerUI command within the Main method:
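The updated call isn’t reproduced in this excerpt; a sketch of the change, assuming the /assets/css/xmas-style.css path used above, could be:

app.UseStaticFiles(); // serve the static assets stored under wwwroot

app.UseSwaggerUI(options =>
{
    options.InjectStylesheet("/assets/css/xmas-style.css");
});

Remember the UseStaticFiles call: without it, the CSS file under wwwroot would never be served.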
CSS is not the only part you can customize, there’s way more. Here’s an article I wrote about Swagger integration in .NET Core 3 APIs, but it’s still relevant (I hope! 😁)
Theming is often not considered an important part of API development. That’s generally correct: why should I bother adding some fancy colors to APIs that are not expected to have a UI?
This makes sense if you’re working on private APIs. In fact, theming is often useful to improve brand recognition for public-facing APIs.
You should also consider using theming when deploying APIs to different environments: maybe Blue for Development, Yellow for Staging, and Green for Production. That way your developers can easily understand which environment they’re exploring.
LINQ is a set of methods that help developers perform operations on sets of items. There are tons of methods – do you know which is the one for you?
LINQ is one of the most loved functionalities by C# developers. It allows you to perform calculations and projections over a collection of items, making your code easy to build and, even more, easy to understand.
As of C# 11, there are tens of methods and overloads you can choose from. Some of them seem similar, but there are some differences that might not be obvious to C# beginners.
In this article, we’re gonna learn the differences between couples of methods, so that you can choose the best one that fits your needs.
First vs FirstOrDefault
Both First and FirstOrDefault allow you to get the first item of a collection that matches some criteria passed as a parameter, usually with a lambda expression:
int[] numbers = new int[] { -2, 1, 6, 12 };
var mod3OrDefault = numbers.FirstOrDefault(n => n % 3 == 0);
var mod3 = numbers.First(n => n % 3 == 0);
Using FirstOrDefault you get the first item that matches the condition. If no items are found you’ll get the default value for that type. The default value depends on the data type:
Data type | Default value
int       | 0
string    | null
bool      | false
object    | null
To know the default value for a specific type, just evaluate default(T) – for example, default(string).
So, coming back to FirstOrDefault, we have these two possible outcomes:
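For example, reusing the numbers array above, the two outcomes look like this (a sketch, not the article’s original snippet):

int[] numbers = new int[] { -2, 1, 6, 12 };

var firstMultipleOf3 = numbers.FirstOrDefault(n => n % 3 == 0); // 6, the first matching item
var firstMultipleOf7 = numbers.FirstOrDefault(n => n % 7 == 0); // 0, the default value for int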
On the other hand, First throws an InvalidOperationException with the message “Sequence contains no matching element” if no items in the collection match the filter criterion:
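A matching sketch for First, with the same array:

int[] numbers = new int[] { -2, 1, 6, 12 };

numbers.First(n => n % 3 == 0); // 6
numbers.First(n => n % 7 == 0); // throws InvalidOperationException: no item is a multiple of 7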
While First returns the first item that satisfies the condition, even if two or more items satisfy it, Single ensures that no more than one item matches that condition.
If two or more items pass the filter, an InvalidOperationException is thrown with the message “Sequence contains more than one matching element”.
int[] numbers = new int[] { -2, 1, 6, 12 };

numbers.First(n => n % 3 == 0);  // 6
numbers.Single(n => n % 3 == 0); // throws exception because both 6 and 12 are accepted values
Both methods have their corresponding -OrDefault counterpart: SingleOrDefault returns the default value if no items are valid.
int[] numbers = new int[] { -2, 1, 6, 12 };

numbers.SingleOrDefault(n => n % 4 == 0); // 12
numbers.SingleOrDefault(n => n % 7 == 0); // 0, because no items are %7
numbers.SingleOrDefault(n => n % 3 == 0); // throws exception
Any vs Count
Both Any and Count give you indications about the presence or absence of items for which the specified predicate returns True.
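A minimal sketch of the two approaches, assuming the same numbers array used so far:

int[] numbers = new int[] { -2, 1, 6, 12 };

bool hasEvenNumbersWithAny = numbers.Any(n => n % 2 == 0);         // true: stops at the first match
bool hasEvenNumbersWithCount = numbers.Count(n => n % 2 == 0) > 0; // true: but Count enumerates all the matching items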
In this article, we learned the differences between couples of LINQ methods.
Each of them has a purpose, and you should use the right one for each case.
❓ A question for you: talking about performance, which is more efficient: First or Single? And what about Count() == 0 vs Any()? Drop a message below if you know the answer! 📩
I hope you enjoyed this article! Let’s keep in touch on Twitter or on LinkedIn, if you want! 🤜🤛
Why buy a whole tool when you can build your own? Learn how the Type system works in .NET, and create your own minimal type analyser.
Analysing your code is helpful to get an idea of the overall quality. At the same time, having an automatic tool that identifies specific characteristics or performs some analysis for you can be useful.
Sure, there are many fantastic tools available, but sometimes a utility class that you can build as needed and run without setting up a complex infrastructure is enough.
In this article, we are going to see how to navigate assemblies, classes, methods and parameters to perform some custom analysis.
For this article, my code is structured into 3 Assemblies:
CommonClasses, a Class Library that contains some utility classes;
NetCoreScripts, a Class Library that contains the code we are going to execute;
ScriptsRunner, a Console Application that runs the scripts defined in the NetCoreScripts library.
The dependencies between the modules are shown below: ScriptsRunner depends on NetCoreScripts, and NetCoreScripts depends on CommonClasses.
In this article, we are going to write the examples in the NetCoreScripts class library, in a class named AssemblyAnalysis.
How to load an Assembly in C#, with different methods
The starting point to analyse an Assembly is, well, to have an Assembly.
So, in the Scripts Class Library (the middle one), I wrote:
var assembly = DefineAssembly();
In the DefineAssembly method we can choose the Assembly we are going to analyse.
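The body of DefineAssembly isn’t shown in this excerpt; a sketch of some common options (MyClass here is a placeholder for any type you can reference) could be:

private static Assembly DefineAssembly()
{
    // the Assembly that contains this very code
    return Assembly.GetExecutingAssembly();

    // or: the Assembly that contains a type you can reference directly
    // return typeof(MyClass).Assembly;

    // or: an Assembly loaded by its name
    // return Assembly.Load("CommonClasses");
}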
In short, you can access the Assembly info of whichever class you know – if you can reference it directly, of course!
Load the current, the calling, and the executing Assembly
The Assembly class provides you with some methods that may look similar, but give you totally different info depending on how your code is structured.
Remember the ScriptsRunner –> NetCoreScripts –> CommonClasses sequence? To better explain how things work, let’s run the following examples in a method in the CommonClasses class library (the last one in the dependency chain).
var executing = System.Reflection.Assembly.GetExecutingAssembly();
var calling = System.Reflection.Assembly.GetCallingAssembly();
var entry = System.Reflection.Assembly.GetEntryAssembly();
Assembly.GetExecutingAssembly returns the Assembly that contains the actual code instructions (so, in short, the Assembly that actually contains the code). In this case, it’s the CommonClasses Assembly.
Assembly.GetCallingAssembly returns the caller Assembly, so the one that references the Executing Assembly. In this case, given that the CommonClasses library is referenced only by the NetCoreScripts library, well, we are getting info about the NetCoreScripts class library.
Assembly.GetEntryAssembly returns the info of the Assembly that is executing the whole application – so, the entry point. In our case, it’s the ScriptsRunner Console Application.
Deciding which one to choose is crucial, especially when you are going to distribute your libraries, for example, as NuGet packages. For sure, you’ll know the Executing Assembly. Most probably, depending on how the project is structured, you’ll also know the Calling Assembly. But almost certainly you won’t know the Entry Assembly.
Method name          | Meaning                | In this example...
GetExecutingAssembly | The current Assembly   | CommonClasses
GetCallingAssembly   | The caller Assembly    | NetCoreScripts
GetEntryAssembly     | The top-level executor | ScriptsRunner
How to retrieve classes of a given .NET Assembly
Now you have an Assembly to analyse. It’s time to load the classes belonging to your Assembly.
You can start with assembly.GetTypes(): this method returns all the types (in the form of a Type array) belonging to the Assembly.
For each Type you can access several properties, such as IsClass, IsPublic, IsAbstract, IsGenericType, IsEnum and so on. The full list of properties of a Type is available here.
You may want to analyse public classes: therefore, you can do something like:
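The original snippet isn’t included here, but a sketch consistent with the GetAllPublicTypes helper shown later in this article is:

var publicClasses = assembly.GetTypes()
    .Where(t => t.IsClass && t.IsPublic)
    .ToList();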
If we have a look at its parameters, we will find the following values:
Bonus tip: Auto-properties act as Methods
Let’s focus a bit more on the properties of a class.
Consider this class:
public class User
{
    public string Name { get; set; }
}
There are no methods; only one public property.
But hey! It turns out that properties, under the hood, are treated as methods. In fact, you can find two methods, named get_Name and set_Name, that act as an access point to the Name property.
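A quick sketch to see this with reflection (assuming the User class above):

var methods = typeof(User).GetMethods(
    BindingFlags.Instance | BindingFlags.Public | BindingFlags.DeclaredOnly);

foreach (var method in methods)
{
    Console.WriteLine(method.Name); // prints get_Name and set_Name
}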
Further readings
Do you remember that exceptions are, in the end, Types?
And that, in the catch block, you can filter for exceptions of a specific type or with a specific condition?
From here, you can use all this info to build whatever you want. Personally, I used it to analyse my current project, checking how many methods accept more than N parameters as input, and which classes have the highest number of public methods.
In short, an example of a simple code analyser can be this one:
public void Execute()
{
    var assembly = DefineAssembly();
    var paramsInfo = AnalyzeAssembly(assembly);
    AnalyzeParameters(paramsInfo);
}

private static Assembly DefineAssembly()
    => Assembly.GetExecutingAssembly();

public static List<ParamsMethodInfo> AnalyzeAssembly(Assembly assembly)
{
    List<ParamsMethodInfo> all = new List<ParamsMethodInfo>();
    var types = GetAllPublicTypes(assembly);

    foreach (var type in types)
    {
        var publicMethods = GetPublicMethods(type);

        foreach (var method in publicMethods)
        {
            var parameters = method.GetParameters();
            if (parameters.Length > 0)
            {
                var f = parameters.First();
            }

            all.Add(new ParamsMethodInfo(
                assembly.GetName().Name,
                type.Name,
                method
                ));
        }
    }
    return all;
}

private static MethodInfo[] GetPublicMethods(Type type) =>
    type.GetMethods(BindingFlags.Instance | BindingFlags.Static | BindingFlags.Public | BindingFlags.DeclaredOnly);

private static List<Type> GetAllPublicTypes(Assembly assembly) => assembly.GetTypes()
    .Where(t => t.IsClass && t.IsPublic)
    .ToList();

public class ParamsMethodInfo(string AssemblyName, string ClassName, MethodInfo Method)
{
    public string MethodName => Method.Name;
    public ParameterInfo[] Parameters => Method.GetParameters();
}
And then, in the AnalyzeParameters, you can add your own logic.
As you can see, you don’t need to adopt complex tools to perform operations like this: just knowing that you can access the static details of each class and method can be enough (of course, it depends on the use!).
I hope you enjoyed this article! Let’s keep in touch on LinkedIn, Twitter or BlueSky! 🤜🤛