Tag: .NET

  • PriorityQueues on .NET 7 and C# 11 | Code4IT



    A PriorityQueue represents a collection of items that have a value and a priority. Now this data structure is built into .NET!


    Starting from .NET 6 and C# 10, we finally have built-in support for PriorityQueues 🥳

    A PriorityQueue is a collection of items that have a value and a priority; as you can imagine, it acts like a queue: the main operations are “add an item to the queue”, called Enqueue, and “remove an item from the queue”, named Dequeue. The main difference from a simple Queue is that, on dequeue, the item with the lowest priority value is removed first.

    In this article, we’re gonna use a PriorityQueue and wrap it into a custom class to solve one of its design issues (which I hope will be addressed in a future release of .NET).

    Welcoming Priority Queues in .NET

    Defining a priority queue is straightforward: you just have to declare it specifying the type of items and the type of priority.

    So, if you need a collection of Child items, and you want to use int as a priority type, you can define it as

    PriorityQueue<Child, int> queue = new PriorityQueue<Child, int>();
    

    Now you can add items using the Enqueue method:

    Child child = new Child(); // build the child however you prefer
    int priority = 3;
    queue.Enqueue(child, priority);
    

    If you just want to look at the first item without removing it from the queue, you can call Peek() to retrieve the one on top:

    Child child3 = BuildChild3();
    Child child2 = BuildChild2();
    Child child1 = BuildChild1();
    
    queue.Enqueue(child3, 3);
    queue.Enqueue(child1, 1);
    queue.Enqueue(child2, 2);
    
    //queue.Count = 3
    
    Child first = queue.Peek();
    //first will be child1, because its priority is 1
    //queue.Count = 3, because we did not remove the item on top
    

    or Dequeue if you want to retrieve it while removing it from the queue:

    Child child3 = BuildChild3();
    Child child2 = BuildChild2();
    Child child1 = BuildChild1();
    
    queue.Enqueue(child3, 3);
    queue.Enqueue(child1, 1);
    queue.Enqueue(child2, 2);
    
    //queue.Count = 3
    
    Child first = queue.Dequeue();
    //first will be child1, because its priority is 1
    //queue.Count = 2, because we removed the item with the lowest priority value
    

    This is the essence of a Priority Queue: insert items, give them a priority, then remove them starting from the one with the lowest priority value.

    Creating a Wrapper to automatically handle priority in Priority Queues

    There’s a problem with this definition: you have to manually specify the priority of each item.

    I don’t like it that much: I’d like each item to be assigned a priority automatically. So we have to wrap the queue in another class.

    Since we’re near Christmas, and this article is part of the C# Advent 2022, let’s use an XMAS-themed example: a Christmas list used by Santa to handle gifts for children.

    Let’s assume that the Child class has this shape:

    public class Child
    {
        public bool HasSiblings { get; set; }
        public int Age { get; set; }
        public List<Deed> Deeds { get; set; }
    }
    
    public abstract class Deed
    {
        public string What { get; set; }
    }
    
    public class GoodDeed : Deed
    { }
    
    public class BadDeed : Deed
    { }
    

    Now we can create a Priority Queue of type <Child, int>:

    PriorityQueue<Child, int> pq = new PriorityQueue<Child, int>();
    

    And wrap it all within a ChristmasList class:

    public class ChristmasList
    {
        private readonly PriorityQueue<Child, int> queue;
    
        public ChristmasList()
        {
            queue = new PriorityQueue<Child, int>();
        }
    
        public void Add(Child child)
        {
        int priority = 0; // ?? we still have to decide how to calculate this
            queue.Enqueue(child, priority);
        }
    
    public Child Get()
        {
            return queue.Dequeue();
        }
    }
    

    A question for you: what happens when we call the Get method on an empty queue? What should we do instead? Drop a message below! 📩
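
    For the record: calling Dequeue (or Peek) on an empty PriorityQueue throws an InvalidOperationException. One defensive option is the built-in TryDequeue method, sketched below; whether returning null is the right policy is up to you.

    public Child? Get()
    {
        // TryDequeue returns false instead of throwing when the queue is empty
        return queue.TryDequeue(out var child, out _) ? child : null;
    }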

    We need to define a way to assign each child a priority.

    Define priority as private behavior

    The easiest way is to calculate the priority within the Add method: define a function that accepts a Child and returns an int, and then pass that int value to the Enqueue method.

    public void Add(Child child)
    {
        int priority = GetPriority(child);
        queue.Enqueue(child, priority);
    }
    

    This approach is useful because it encapsulates the behavior in the ChristmasList class, but it has a downside: it’s not extensible, and you cannot use different priority algorithms in different parts of your application. On the other hand, GetPriority is a private operation within the ChristmasList class, so it can be fine for our example.
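
    For illustration, here is one possible GetPriority implementation. It’s a hypothetical heuristic (not prescribed by the article) that weighs the properties we defined on Child, using System.Linq; remember that a lower value means the child is served earlier:

    private int GetPriority(Child child)
    {
        // e.g. younger children are served first
        int priority = child.Age;

        // reward good deeds: each one moves the child up the list
        int goodDeeds = child.Deeds?.OfType<GoodDeed>().Count() ?? 0;

        return priority - goodDeeds;
    }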

    Pass priority calculation from outside

    We can then pass a Func<Child, int> in the ChristmasList constructor, centralizing the priority definition and giving the caller the responsibility to define it:

    public class ChristmasList
    {
        private readonly PriorityQueue<Child, int> queue;
        private readonly Func<Child, int> _priorityCalculation;
    
        public ChristmasList(Func<Child, int> priorityCalculation)
        {
            queue = new PriorityQueue<Child, int>();
            _priorityCalculation = priorityCalculation;
        }
    
    
        public void Add(Child child)
        {
            int priority = _priorityCalculation(child);
            queue.Enqueue(child, priority);
        }
    
    public Child Get()
        {
            return queue.Dequeue();
        }
    }
    

    This implementation has the opposite trade-offs: the priority algorithm is now extensible and caller-defined, but the priority logic is no longer encapsulated within the ChristmasList class.
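
    Here’s a quick usage sketch; the lambda is just an example of a caller-defined algorithm:

    var list = new ChristmasList(child => child.HasSiblings ? child.Age - 1 : child.Age);

    list.Add(new Child { Age = 7, HasSiblings = true });
    list.Add(new Child { Age = 5, HasSiblings = false });

    Child next = list.Get(); // the child with the lowest computed priority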

    What I’d like to see in the future

    This is a personal thought: it’d be great to have a slightly different definition of PriorityQueue that automates the priority calculation.

    One idea could be to add a constructor parameter that calculates the priority, just to avoid specifying it explicitly. So, I’d expect the current definitions of the constructor and of the Enqueue method to change from this:

    PriorityQueue<Child, int> queue = new PriorityQueue<Child, int>();
    
    int priority = _priorityCalculation(child);
    queue.Enqueue(child, priority);
    

    to this:

    PriorityQueue<Child, int> queue = new PriorityQueue<Child, int>(_priorityCalculation);
    
    queue.Enqueue(child);
    

    It’s not perfect, and it raises some new problems.

    Another way could be to force the item type to implement an interface that exposes a way to retrieve its priority, such as

    public interface IHavePriority<T>
    {
        T GetPriority();
    }
    
    public class Child : IHavePriority<int>
    {
        // the class itself decides how to compute its own priority
        public int GetPriority() => 0; // placeholder implementation
    }
    

    Again, this approach is not perfect but can be helpful.
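
    To make the idea concrete, here’s a minimal sketch of a wrapper that exploits such an interface. It’s hypothetical: no such type exists in .NET today.

    public class AutoPriorityQueue<TItem> where TItem : IHavePriority<int>
    {
        private readonly PriorityQueue<TItem, int> _queue = new();

        // the item carries its own priority, so the caller does not pass one
        public void Enqueue(TItem item) => _queue.Enqueue(item, item.GetPriority());

        public TItem Dequeue() => _queue.Dequeue();
    }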

    Talking about its design, which approach would you suggest, and why?

    Further readings

    As usual, the best way to learn about something is by reading its official documentation:

    🔗 PriorityQueue documentation | Microsoft Learn

    This article is part of the 2022 C# Advent (that’s why I chose a Christmas-ish topic for this article):

    🔗 C# Advent Calendar 2022

    This article first appeared on Code4IT 🐧

    Conclusion

    PriorityQueue is a good-to-know functionality that is now available out-of-the-box in .NET. Do you like its design? Have you used another library to achieve the same result? How do they differ?

    Let me know in the comments section! 📩

    For now, happy coding!

    🐧




  • How to customize Swagger UI with custom CSS in .NET 7 | Code4IT



    Exposing Swagger UI is a good way to help developers consume your APIs. But don’t be boring: customize your UI with some fancy CSS


    Brace yourself, Christmas is coming! 🎅

    If you want to add a more festive look to your Swagger UI, it’s just a matter of creating a CSS file and injecting it.

    You should create a custom CSS for your Swagger endpoints, especially if you are exposing them outside your company: if your company has a recognizable color palette, using it in your Swagger pages can make your brand stand out.

    In this article, we will learn how to inject a CSS file in the Swagger UI generated using .NET Minimal APIs.

    How to add Swagger in your .NET Minimal APIs

    There are plenty of tutorials about how to add Swagger to your APIs. I wrote some too, where I explained how every configuration impacts what you see in the UI.

    That article targeted older .NET versions without Minimal APIs. Now everything’s easier.

    When you create your API project, Visual Studio asks you if you want to add OpenAPI support (aka Swagger). By adding it, you will have everything in place to get started with Swagger.

    Your Minimal APIs will look like this:

    public static void Main(string[] args)
    {
        var builder = WebApplication.CreateBuilder(args);
    
        builder.Services.AddEndpointsApiExplorer();
        builder.Services.AddSwaggerGen();
    
        var app = builder.Build();
    
        app.UseSwagger();
        app.UseSwaggerUI();
    
    
        app.MapGet("/weatherforecast", (HttpContext httpContext) =>
        {
            // return something
        })
        .WithName("GetWeatherForecast")
        .WithOpenApi();
    
        app.Run();
    }
    

    The key parts are builder.Services.AddEndpointsApiExplorer(), builder.Services.AddSwaggerGen(), app.UseSwagger(), app.UseSwaggerUI() and WithOpenApi(). Do you know what those methods do? If so, drop a comment below! 📩

    Now, if we run our application, we will see a UI similar to the one below.

    Basic Swagger UI

    That’s a basic UI. Quite boring, huh? Let’s add some style!

    Create the CSS file for Swagger theming

    All the static assets must be stored within the wwwroot folder. It does not exist by default, so you have to create it manually. Click on the API project, add a new folder, and name it “wwwroot”. Since it’s a special folder, by default Visual Studio will show it with a special icon (it’s a sort of blue world, similar to 🌐).

    Now you can add all the folders and static resources needed.

    I’ve created a single CSS file under /wwwroot/assets/css/xmas-style.css. Of course, name it as you wish – as long as it is within the wwwroot folder, it’s fine.

    My CSS file is quite minimal:

    body {
      background-image: url("../images/snowflakes.webp");
    }
    
    div.topbar {
      background-color: #34a65f !important;
    }
    
    h2,
    h3 {
      color: #f5624d !important;
    }
    
    .opblock-summary-get > button > span.opblock-summary-method {
      background-color: #235e6f !important;
    }
    
    .opblock-summary-post > button > span.opblock-summary-method {
      background-color: #0f8a5f !important;
    }
    
    .opblock-summary-delete > button > span.opblock-summary-method {
      background-color: #cc231e !important;
    }
    

    There are 3 main things to notice:

    1. the element selectors are taken directly from the Swagger UI – you’ll need a bit of reverse-engineering skills: just open the Browser Console and find the elements you want to update;
    2. if the element already has the rule you want to change, you have to add the !important CSS flag; otherwise, your code won’t affect the UI;
    3. you can add assets from other folders: I’ve added background-image: url("../images/snowflakes.webp"); to the body style. That image is, as you can imagine, under the wwwroot folder we created before.

    Just as a recap, here’s my project structure:

    Static assets files

    Of course, it’s not enough: we have to tell Swagger to take that file into consideration.

    How to inject a CSS file in Swagger UI

    This part is quite simple: you have to update the UseSwaggerUI command within the Main method:

    app.UseSwaggerUI(c =>
    +   c.InjectStylesheet("/assets/css/xmas-style.css")
    );
    

    Notice how that path begins: no wwwroot, no ~, no leading dot. It starts with /assets.

    One last step: we have to tell dotNET to consider static files when building and running the application.

    You just have to add UseStaticFiles() after builder.Build():

    var app = builder.Build();
    + app.UseStaticFiles();
    
    app.UseSwagger();
    app.UseSwaggerUI(c =>
        c.InjectStylesheet("/assets/css/xmas-style.css")
    );
    

    Now we can run our APIs and admire our wonderful Xmas-style UI 🎅

    XMAS-style Swagger UI

    Further readings

    This article is part of 2022 .NET Advent, created by Dustin Moris 🐤:

    🔗 dotNET Advent Calendar 2022

    CSS is not the only part you can customize: there’s way more. Here’s an article I wrote about Swagger integration in .NET Core 3 APIs; it’s still relevant (I hope! 😁)

    🔗 Understanding Swagger integration in .NET Core | Code4IT

    This article first appeared on Code4IT

    Wrapping up

    Theming is often not considered an important part of API development. That’s generally correct: why should I bother adding some fancy colors to APIs that are not expected to have a UI?

    That reasoning holds for private APIs. For public-facing APIs, though, theming is often useful to improve brand recognition.

    You should also consider theming when deploying APIs to different environments: maybe Blue for Development, Yellow for Staging, and Green for Production. That way your developers can easily tell which environment they’re exploring.

    Happy coding!
    🐧






  • Build your own Static Code Analysis tool in .NET by knowing how Assembly, Type, MethodInfo, ParameterInfo work. | Code4IT



    Why buy a whole tool when you can build your own? Learn how the Type system works in .NET, and create your own minimal type analyser.


    Analysing your code is helpful for getting an idea of its overall quality. At the same time, an automatic tool that identifies specific characteristics or performs some analysis for you can be quite useful.

    Sure, there are many fantastic tools available, but sometimes a utility class that you can build as needed and run without setting up a complex infrastructure is enough.

    In this article, we are going to see how to navigate assemblies, classes, methods and parameters to perform some custom analysis.

    For this article, my code is structured into 3 Assemblies:

    • CommonClasses, a Class Library that contains some utility classes;
    • NetCoreScripts, a Class Library that contains the code we are going to execute;
    • ScriptsRunner, a Console Application that runs the scripts defined in the NetCoreScripts library.

    The dependencies between the modules are shown below: ScriptsRunner depends on NetCoreScripts, and NetCoreScripts depends on CommonClasses.

    Class library dependencies

    In this article, we are going to write the examples in the NetCoreScripts class library, in a class named AssemblyAnalysis.

    How to load an Assembly in C#, with different methods

    The starting point to analyse an Assembly is, well, to have an Assembly.

    So, in the Scripts Class Library (the middle one), I wrote:

    var assembly = DefineAssembly();
    

    In the DefineAssembly method we can choose the Assembly we are going to analyse.

    Load the Assembly containing a specific class

    The easiest way is to do something like this:

    private static Assembly DefineAssembly()
        => typeof(AssemblyAnalysis).Assembly;
    

    Where AssemblyAnalysis is the class that contains our scripts.

    Similarly, we can get the Assembly info for a class belonging to another Assembly, like this:

    private static Assembly DefineAssembly()
        => typeof(CommonClasses.BaseExecutable).Assembly;
    

    In short, you can access the Assembly info of whichever class you know – if you can reference it directly, of course!

    Load the current, the calling, and the executing Assembly

    The Assembly class provides you with some methods that may look similar, but give you totally different info depending on how your code is structured.

    Remember the ScriptsRunner –> NetCoreScripts –> CommonClasses sequence? To better explain how things work, let’s run the following examples in a method in the CommonClasses class library (the last one in the dependency chain).

    var executing = System.Reflection.Assembly.GetExecutingAssembly();
    var calling = System.Reflection.Assembly.GetCallingAssembly();
    var entry = System.Reflection.Assembly.GetEntryAssembly();
    

    Assembly.GetExecutingAssembly returns the Assembly that contains the code currently being executed. In this case, it’s the CommonClasses Assembly.

    Assembly.GetCallingAssembly returns the caller Assembly, so the one that references the Executing Assembly. In this case, given that the CommonClasses library is referenced only by the NetCoreScripts library, well, we are getting info about the NetCoreScripts class library.

    Assembly.GetEntryAssembly returns the info of the Assembly that is executing the whole application – so, the entry point. In our case, it’s the ScriptsRunner Console Application.

    Deciding which one to choose is crucial, especially when you are going to distribute your libraries, for example, as NuGet packages. For sure, you’ll know the Executing Assembly. Most probably, depending on how the project is structured, you’ll also know the Calling Assembly. But almost certainly you won’t know the Entry Assembly.

    | Method name          | Meaning                | In this example… |
    | -------------------- | ---------------------- | ---------------- |
    | GetExecutingAssembly | The current Assembly   | CommonClasses    |
    | GetCallingAssembly   | The caller Assembly    | NetCoreScripts   |
    | GetEntryAssembly     | The top-level executor | ScriptsRunner    |
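
    A quick way to see the difference at runtime is to print the three names from a method inside CommonClasses (a sketch, with using System.Reflection; note that GetEntryAssembly can return null in some hosting scenarios, hence the null-conditional):

    Console.WriteLine(Assembly.GetExecutingAssembly().GetName().Name); // CommonClasses
    Console.WriteLine(Assembly.GetCallingAssembly().GetName().Name);   // NetCoreScripts
    Console.WriteLine(Assembly.GetEntryAssembly()?.GetName().Name);    // ScriptsRunner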

    How to retrieve classes of a given .NET Assembly

    Now you have an Assembly to analyse. It’s time to load the classes belonging to your Assembly.

    You can start with assembly.GetTypes(): this method returns all the types (in the form of a Type array) belonging to the Assembly.

    For each Type you can access several properties, such as IsClass, IsPublic, IsAbstract, IsGenericType, IsEnum and so on. The full list of properties of a Type is available here 🔗.

    You may want to analyse only public classes; in that case, you can do something like:

    private static List<Type> GetAllPublicTypes(Assembly assembly) => assembly
                .GetTypes()
                .Where(t => t.IsClass && t.IsPublic)
                .ToList();
    

    How to list the Methods belonging to a C# Type

    Given a Type, you can extract the info about all the available methods.

    The Type class exposes several methods that can help you find useful information, such as GetConstructors.

    In our case, we are only interested in public methods, declared in that class (and not inherited from a base class):

    private static MethodInfo[] GetPublicMethods(Type type) =>
        type.GetMethods(BindingFlags.Instance | BindingFlags.Static | BindingFlags.Public | BindingFlags.DeclaredOnly);
    

    The BindingFlags enum is a 🔗Flagged Enum: it’s an enum with special values that allow you to perform an OR operation on the values.

    | Value            | Description                                             | Example                                                      |
    | ---------------- | ------------------------------------------------------- | ------------------------------------------------------------ |
    | Public           | Includes public members.                                 | public void Print()                                           |
    | NonPublic        | Includes non-public members (private, protected, etc.).  | private void Calculate()                                      |
    | Instance         | Includes instance (non-static) members.                  | public void Save()                                            |
    | Static           | Includes static members.                                 | public static void Log(string msg)                            |
    | FlattenHierarchy | Includes static members up the inheritance chain.        | public static void Helper() (defined in the base class)       |
    | DeclaredOnly     | Only members declared in the given type, not inherited.  | public void MyTypeSpecific() (not defined in the base class)  |

    How to get the parameters of a MethodInfo object

    The final step is to retrieve the list of parameters from a MethodInfo object.

    This step is pretty easy: just call the GetParameters() method:

    public ParameterInfo[] GetParameters(MethodInfo method) => method.GetParameters();
    

    A ParameterInfo object contains several pieces of information, such as the name, the type and the default value of the parameter.

    Let’s consider this silly method:

    public static void RandomCity(string[] cities, string fallback = "Rome")
    { }
    

    If we have a look at its parameters, we will find the following values:

    Properties of a ParameterInfo object
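
    To see those values without a debugger, you can print them; here’s a sketch, where Demo is a hypothetical class containing the RandomCity method above:

    MethodInfo method = typeof(Demo).GetMethod(nameof(Demo.RandomCity))!;

    foreach (ParameterInfo parameter in method.GetParameters())
    {
        Console.Write($"{parameter.Name} ({parameter.ParameterType.Name})");

        // DefaultValue is only meaningful when HasDefaultValue is true
        if (parameter.HasDefaultValue)
            Console.Write($" = {parameter.DefaultValue}");

        Console.WriteLine();
    }

    // Output:
    // cities (String[])
    // fallback (String) = Rome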

    Bonus tip: Auto-properties act as Methods

    Let’s focus a bit more on the properties of a class.

    Consider this class:

    public class User
    {
      public string Name { get; set; }
    }
    

    There are no methods; only one public property.

    But hey! It turns out that properties, under the hood, are treated as methods. In fact, you can find two methods, named get_Name and set_Name, that act as access points to the Name property.

    Automatic Getter and Setter of the Name property in C#
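
    You can verify it with a couple of lines of Reflection, reusing the BindingFlags we saw above:

    foreach (MethodInfo method in typeof(User).GetMethods(
        BindingFlags.Instance | BindingFlags.Public | BindingFlags.DeclaredOnly))
    {
        Console.WriteLine(method.Name);
    }

    // Output:
    // get_Name
    // set_Name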

    Further readings

    Do you remember that exceptions are, in the end, Types?

    And that, in the catch block, you can filter for exceptions of a specific type or with a specific condition?

    If not, check this article out!

    🔗 Exception handling with WHEN clause | Code4IT

    This article first appeared on Code4IT 🐧

    Wrapping up (plus the full example)

    From here, you can use all this info to build whatever you want. Personally, I used it to analyse my current project, checking how many methods accept more than N parameters as input, and which classes have the highest number of public methods.

    In short, an example of a simple code analyser can be this one:

    public void Execute()
    {
        var assembly = DefineAssembly();
        var paramsInfo = AnalyzeAssembly(assembly);
    
        AnalyzeParameters(paramsInfo);
    }
    
    private static Assembly DefineAssembly()
        => Assembly.GetExecutingAssembly();
    
    public static List<ParamsMethodInfo> AnalyzeAssembly(Assembly assembly)
    {
        List<ParamsMethodInfo> all = new List<ParamsMethodInfo>();
        var types = GetAllPublicTypes(assembly);
    
        foreach (var type in types)
        {
            var publicMethods = GetPublicMethods(type);
    
            foreach (var method in publicMethods)
            {
            var parameters = method.GetParameters();
            // you could inspect or filter the parameters here, if needed
    
                all.Add(new ParamsMethodInfo(
                    assembly.GetName().Name,
                    type.Name,
                    method
                    ));
            }
        }
        return all;
    }
    
    private static MethodInfo[] GetPublicMethods(Type type) =>
        type.GetMethods(BindingFlags.Instance | BindingFlags.Static | BindingFlags.Public | BindingFlags.DeclaredOnly);
    
    private static List<Type> GetAllPublicTypes(Assembly assembly) => assembly.GetTypes()
                .Where(t => t.IsClass && t.IsPublic)
                .ToList();
    
    public class ParamsMethodInfo(string AssemblyName, string ClassName, MethodInfo Method)
    {
        public string MethodName => Method.Name;
        public ParameterInfo[] Parameters => Method.GetParameters();
    }
    

    And then, in AnalyzeParameters, you can add your own logic.

    As you can see, you don’t need to adopt complex tools to perform operations like this: just knowing that you can access the static details of each class and method can be enough (of course, it depends on the use case!).

    I hope you enjoyed this article! Let’s keep in touch on LinkedIn, Twitter or BlueSky! 🤜🤛

    Happy coding!

    🐧






  • How to customize Conventional Commits in a .NET application using GitHooks | Code4IT



    Using Conventional Commits you can define a set of rules useful for writing meaningful commit messages. Using NPM. Yes, in a .NET application!


    Setting team conventions is a crucial step in preparing the project to live long and prosper 🖖

    A good way to add some clarity is by enforcing rules on Git commit messages: you can require devs to specify the reason behind code changes so that you can understand the history and the purpose of each commit. Also, if you have well-crafted commit messages, Pull Requests become easier to understand, leading to better code.

    Conventional Commits help you set such rules and level up your commit history. In this article, we will learn how to add Conventional Commits in a .NET application.

    Conventional Commits

    Conventional Commits are a set of rules that help you write commit messages using a format that has multiple purposes:

    • they help developers understand the history of a git branch;
    • they help PR reviewers focus on the Pull Request by understanding the changes proposed by the developer;
    • using automated tools, they help versioning the application – this is useful when using Semantic Versioning;
    • they allow you to create automated Changelog files.

    So, what does an average Conventional Commit look like?

    There’s not just one way to specify such formats.

    For example, you can specify that you’ve added a new feature (feat) to your APIs and describe it shortly:

    feat(api): send an email to the customer
    

    Or you can explain that you’ve fixed a bug (using fix) and add a full description of the scope of the commit.

    fix: prevent racing condition
    
    Introduce a request id and a reference to latest request. Dismiss
    incoming responses other than from latest request.
    

    There are several types of commits that you can support, such as:

    • feat, used when you add a new feature to the application;
    • fix, when you fix a bug;
    • docs, used to add or improve documentation to the project;
    • refactor, used – well – after some refactoring;
    • test, when adding tests or fixing broken ones

    All of this prevents developers from writing commit messages such as “something”, “fixed bug”, or “some stuff”.

    Some Changes

    So, now, it’s time to include Conventional Commits in our .NET applications.

    What is our goal?

    For the sake of this article, I’m going to add Conventional Commits in a .NET 7 API project. The same approach works for all the other types of .NET projects: as long as you have a Solution to work with, I’ve got you covered.

    Well, actually, the following approach can be used by every project, not only those based on .NET: the reason I wrote this article is that many dotnet developers are not confident in using and configuring NPM packages, so my personal goal with this article is to give you the basics of such tools and configurations.

    For the sake of this article, I’m going to explain how to add Conventional Commits with a custom format.

    Say that you want to associate each commit to a Jira task. As you may know, Jira tasks have an ID composed of a project prefix and a numeric Id. So, for a project named FOO, you can have a task with Id FOO-123.

    The goal of this article is, then, to force developers to create Commit messages such as

    feat/FOO-123: commit short description
    

    or, if you want to add a full description of the commit,

    feat/FOO-123: commit short description
    
    Here we can have the full description of the task.
    And it can also be on multiple lines.
    

    We are going to work at Solution level; you don’t even need an IDE: just Notepad and a Terminal are fine. Before continuing, open your solution folder and a Console pointing to the same folder.

    Install NPM in your folder

    Yes, even if the main application is built with .NET, we are gonna need some NPM packages to set up our Conventional Commits.

    First things first: head to the Command Line and run

    npm init
    

    After specifying some configurations (Package name? Licence? Author?), you will have a brand new package.json file.

    Now we can move on and add a GIT Hook.

    Husky: integrate GIT Hooks to improve commit messages

    To use Conventional Commits we have to “intercept” our Git actions: we will need to run a specific tool right after a commit message is written, validate it, and, in case it does not follow the rules we’ve set, abort the operation.

    We will use Husky 🔗: it’s a facility package that allows us to work with our commit messages and, in general, integrate with Git Hooks.

    Head to the terminal, and install Husky by running

    npm install husky --save-dev
    

    This command will add a dependency to Husky, as you can see from the new item listed in the package.json file:

    "devDependencies": {
        "husky": "^8.0.3"
    }
    

    Finally, to enable Git Hooks, we have to run

    npm pkg set scripts.prepare="husky install"
    

    and notice the new section in the package.json.

    "scripts": {
        "prepare": "husky install"
    },
    

    Even with just these simple steps, we can see a first result: if you run git commit you will see a text editor open. Here you can write your commit message.

    Git commit message editor

    Save and close the file. The commit message has been applied, as you can see by running git log --oneline.

    CommitLint: a package to validate Commit messages

    We need to install and configure CommitLint, the NPM package that does the dirty job.

    On the same terminal as before, run

    npm install --save-dev @commitlint/config-conventional @commitlint/cli
    

    to install both commitlint/config-conventional, which adds the generic functionality, and commitlint/cli, which allows us to run the scripts via CLI.

    You will see both packages listed in your package.json file:

    "devDependencies": {
        "@commitlint/cli": "^17.4.2",
        "@commitlint/config-conventional": "^17.4.2",
        "husky": "^8.0.3"
    }
    

    Next step: scaffold the file that handles the configurations on how we want our Commit Messages to be structured.

    On the root, create a brand new file, commitlint.config.js, and paste this snippet:

    module.exports = {
      extends: ["@commitlint/config-conventional"],
    }
    

    This snippet tells Commitlint to use the default conventions, such as feat(api): send an email.

    To test the default rules without issuing any real commit, we have to install the previous packages globally, so that they can be accessed outside the scope of the git hooks:

    npm install -g @commitlint/cli @commitlint/config-conventional
    

    and, in a console, we can run

    echo 'foo: a message with wrong format' | commitlint
    

    and see the error messages

    Testing commitlint with errors

    At this point, we still don’t have CommitLint ready to validate our commit messages. In fact, if you try to commit your changes with an invalid message, you will see that the message passes the checks (because there are no checks!), and your changes get committed.

    We need to do some more steps.

    First of all, we have to create a folder named .husky that will be used by Husky to understand which commands are supported.

    Notice: you have to keep the dot at the beginning of the folder name: it’s .husky, not husky.

    Then we need to add a new file within that folder to tell Husky that it needs to run CommitLint.

    npx husky add .husky/commit-msg  'npx --no -- commitlint --edit ${1}'
    

    We’re almost ready: everything is set, but we need to activate the functionality. So you just have to run

    npx husky install
    

    to see it working:

    CommitLint correctly validates the commit message

    Commitlint.config.js: defining explicit rules on Git Messages

    Now, remember that we want to enforce certain rules on the commit message.

    We don’t want them to be like

    feat(api): send an email to the customer when a product is shipped
    

    but rather like

    feat/FOO-123: commit short description
    
    Here we can have the full description of the task.
    And it can also be on multiple lines.
    

    This means that we have to configure the commitlint.config.js file to override default values.

    Let’s have a look at a valid Commitlint file:

    module.exports = {
      extends: ["./node_modules/@commitlint/config-conventional"],
      parserPreset: {
        parserOpts: {
          headerPattern: /^(\w*)\/FOO-(\w*): (.*)$/,
          headerCorrespondence: ["type", "scope", "subject"],
        },
      },
      rules: {
        "type-enum": [2, "always", ["feat", "fix", "hot", "chore"]],
        "header-min-length": [2, "always", 10],
        "header-max-length": [2, "always", 50],
        "body-max-line-length": [2, "always", 72],
        "subject-case": [
          2,
          "never",
          ["sentence-case", "start-case", "pascal-case", "upper-case"],
        ],
      },
    }
    

    Time to deep dive into those sections:

    The ParserOpts section: define how CommitLint should parse text

    The first part tells the parser how to parse the header message:

    parserOpts: {
        headerPattern: /^(\w*)\/FOO-(\w*): (.*)$/,
        headerCorrespondence: ["type", "scope", "subject"],
    },
    

    It’s a regular expression, where every capture group maps, in order, to the corresponding entry in the headerCorrespondence array: the first group is the type, the second is the scope, and the third is the subject.

    So, in the message hello/FOO-123: my tiny message, we will have type=hello, scope=123, subject=my tiny message.

    Rules: define specific rules for each message section

    The rules section defines the rules to be applied to each part of the message structure.

    rules:
    {
        "type-enum": [2, "always", ["feat", "fix", "hot", "chore"]],
        "header-min-length": [2, "always", 10],
        "header-max-length": [2, "always", 50],
        "body-max-line-length": [2, "always", 72],
        "subject-case": [
            2,
            "never",
            ["sentence-case", "start-case", "pascal-case", "upper-case"],
        ],
    },
    

    The first value is a number that expresses the severity of the rule:

    • 0: the rule is disabled;
    • 1: show a warning;
    • 2: it’s an error.

    The second value defines if the rule must be applied (using always), or if it must be reversed (using never).

    The third value provides generic arguments for the related rule. For example, "header-max-length": [2, "always", 50] tells that the header must always be at most 50 characters long.

    You can read more about each and every configuration on the official documentation 🔗.

    Setting the commit structure using .gitmessage

    Now that everything is set, we can test it.

    But not before helping devs with a simple trick! As you remember, when you run git commit without specifying the message, an editor appears with some hints about the structure of the commit message.

    Default commit editor

    You can set your own text with hints about the structure of the messages.

    You just need to create a file named .gitmessage and put some text in it, such as:

    # <type>/FOO-<jira-ticket-id>: <title>
    # YOU CAN WRITE WHATEVER YOU WANT HERE
    # allowed types: feat | fix | hot | chore
    # Example:
    #
    # feat/FOO-01: first commit
    #
    # No more than 50 chars. #### 50 chars is here:  #
    
    # Remember blank line between title and body.
    
    # Body: Explain *what* and *why* (not *how*)
    # Wrap at 72 chars. ################################## which is here:  #
    #
    

    Now, we have to tell Git to use that file as a template:

    git config commit.template ./.gitmessage
    

    and.. TA-DAH! Here’s your message template!

    Customized message template

    Putting all together

    Finally, we have everything in place: git hooks, commit template, and template hints.

    If we run git commit, we will see the editor open with the template we’ve defined before. Now, type A message with wrong format, save, close the editor, and you’ll see that the commit is aborted.

    Commit message with wrong format gets rejected

    Run git commit again: the editor shows up once more. Type feat/FOO-123: a valid message, and you’ll see it work.

    Further readings

    Conventional Commits is a project that lists a set of specifications for writing such good messages. You can read more here:

    🔗 Conventional Commits

    As we saw before, there are a lot of configurations that you can set for your commits. You can see the full list here:

    🔗 CommitLint rules

    This article first appeared on Code4IT 🐧

    This new kind of commit message works well with Semantic Versioning, which can be useful to publish package versions with a meaningful version number, such as 2.0.1:
    🔗 Semantic Versioning

    And, to close the loop, Semantic Versioning can be easily integrated with CI pipelines. If you use .NET APIs and want to deploy your APIs to Azure using GitHub Actions, you can start from this article and add SemVer:
    🔗 How to deploy .NET APIs on Azure using GitHub actions

    Wrapping up

    In this article, we’ve learned what Conventional Commits are, how to add them using Husky and NPM, and how to configure our project folder to use such tools.

    The steps we’ve seen work for every type of application, even those not related to .NET.

    So, to recap everything, we have to:

    1. Install NPM: npm init;
    2. Install Husky: npm install husky --save-dev;
    3. Enable Husky: npm pkg set scripts.prepare="husky install";
    4. Install CommitLint: npm install --save-dev @commitlint/config-conventional @commitlint/cli;
    5. Create the commitlint.config.js file: module.exports = { extends: ["@commitlint/config-conventional"] };
    6. Create the Husky folder: mkdir .husky;
    7. Link Husky and CommitLint: npx husky add .husky/commit-msg 'npx --no -- commitlint --edit ${1}';
    8. Activate the whole functionality: npx husky install;

    Then, you can customize the commitlint.config.js file and, if you want, create a better .gitmessage file.

    I hope you enjoyed this article! Let’s keep in touch on Twitter or on LinkedIn, if you want! 🤜🤛

    Happy coding!

    🐧






  • Understanding IOptions, IOptionsMonitor, and IOptionsSnapshot in .NET 7 | Code4IT



    There are several ways to handle configurations in a .NET Application. In this article, we’re going to learn how to use IOptions<T>, IOptionsSnapshot<T>, and IOptionsMonitor<T>


    When dealing with configurations in a .NET application, we can choose different strategies. For example, you can simply inject an IConfiguration instance in your constructor, and retrieve every value by name.

    Or you can get the best out of Strongly Typed Configurations – you won’t have to care about casting those values manually because everything is already done for you by .NET.

    In this article, we are going to learn about IOptions, IOptionsSnapshot, and IOptionsMonitor. They look similar, but there are some key differences that you need to understand to pick the right one.

    For the sake of this article, I’ve created a dummy .NET API that exposes only one endpoint.

    In my appsettings.json file, I added a node:

    {
      "MyConfig": {
        "Name": "Davide"
      }
    }
    

    that will be mapped to a POCO class:

    public class GeneralConfig
    {
        public string Name { get; set; }
    }
    

    To add it to the API project, we can add this line to the Program.cs file:

    builder.Services.Configure<GeneralConfig>(builder.Configuration.GetSection("MyConfig"));
    

    As you can see, it takes the content with the name “MyConfig” and maps it to an object of type GeneralConfig.

    To test such types, I’ve created a set of dummy API Controllers. Each Controller exposes one single method, Get(), that reads the value from the configuration and returns it in an object.

    We are going to inject IOptions, IOptionsSnapshot, and IOptionsMonitor into the constructor of the related Controllers so that we can try the different approaches.

    IOptions: Simple, Singleton, doesn’t support config reloads

    IOptions<T> is the most simple way to inject such configurations. We can inject it into our constructors and access the actual value using the Value property:

    private readonly GeneralConfig _config;
    
    public TestingController(IOptions<GeneralConfig> config)
    {
        _config = config.Value;
    }
    

    Now we have direct access to the GeneralConfig object, with the values populated as we defined in the appsettings.json file.

    There are a few things to consider when using IOptions<T>:

    • This service is injected as a Singleton instance: the whole application uses the same instance, and it is valid throughout the whole application lifetime.
    • All the configurations are read at startup time. Even if you update the appsettings file while the application is running, you won’t see the values updated.

    Some people prefer to store the IOptions<T> instance as a private field, and access the value when needed using the Value property, like this:

    private readonly IOptions<GeneralConfig> _config;
    private int updates = 0;
    
    public TestingController(IOptions<GeneralConfig> config)
    {
        _config = config;
    }
    
    [HttpGet]
    public ActionResult<Result> Get()
    {
        var name = _config.Value.Name;
        return new Result
        {
            MyName = name,
            Updates = updates
        };
    }
    

    It works, but it’s useless: since IOptions<T> is a Singleton, we don’t need to access the Value property every time. It won’t change over time, and accessing it every single time is a useless operation. We can use the former approach: it’s easier to write and (just a bit) more performant.

    One more thing.
    When writing Unit Tests, we can inject an IOptions<T> in the system under test using a static method, Options.Create<T>, and pass it an instance of the required type.

    [SetUp]
    public void Setup()
    {
        var config = new GeneralConfig { Name = "Test" };
    
        var options = Options.Create(config);
    
        _sut = new TestingController(options);
    }
    

    Demo: the configuration does not change at runtime

    Below you can find a GIF that shows that the configurations do not change when the application is running.

    With IOptions the configuration does not change at runtime

    As you can see, I performed the following steps:

    1. start the application
    2. call the /TestingIOptions endpoint: it returns the name Davide
    3. now I update the content of the appsettings.json file, setting the name to Davide Bellone.
    4. when I call again the same endpoint, I don’t see the updated value.

    IOptionsSnapshot: Scoped, less-performant, supports config reload

    Similar to IOptions<T> we have IOptionsSnapshot<T>. They work similarly, but there is a huge difference: IOptionsSnapshot<T> is recomputed at every request.

    public TestingController(IOptionsSnapshot<GeneralConfig> config)
    {
        _config = config.Value;
    }
    

    With IOptionsSnapshot<T> you always have the most up-to-date values: this service is injected with a Scoped lifetime, meaning that the values are read from the configuration at every HTTP request. This also means that you can update the settings while the application is running and see the updated results.

    Since .NET rebuilds the configurations at every HTTP call, there is a slight performance overhead. So, if not necessary, always use IOptions<T>.

    There is no way to test an IOptionsSnapshot<T> as we did with IOptions<T>, so you have to use stubs or mocks (maybe with Moq or NSubstitute 🔗).
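
    For example, a hand-rolled stub can be as small as this (a sketch, assuming your tests only need the Value property):

    public class OptionsSnapshotStub<T> : IOptionsSnapshot<T> where T : class
    {
        public OptionsSnapshotStub(T value) => Value = value;

        public T Value { get; }

        // named options are not used here, so every name maps to the same value
        public T Get(string? name) => Value;
    }

    // hypothetical usage in a test:
    // _sut = new TestingController(new OptionsSnapshotStub<GeneralConfig>(new GeneralConfig { Name = "Test" }));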

    Demo: the configuration changes while the application is running

    Look at the GIF below: here I run the application and call the /TestingIOptionsSnapshot endpoint.

    With IOptionsSnapshot the configuration changes while the application is running

    I performed the following steps:

    1. run the application
    2. call the /TestingIOptionsSnapshot endpoint. The returned value is the same on the appsettings.json file: Davide Bellone.
    3. I then update the value on the configuration file
    4. when calling again /TestingIOptionsSnapshot, I can see that the returned value reflects the new value in the appsettings file.

    IOptionsMonitor: Complex, Singleton, supports config reload

    Finally, the last one of the trio: IOptionsMonitor<T>.

    Using IOptionsMonitor<T> you can have the most updated value on the appsettings.json file.

    We also have a callback event that is triggered every time the configuration file is updated.

    It’s injected as a Singleton service, so the same instance is shared across the whole application lifetime.

    There are two main differences with IOptions<T>:

    1. the name of the property that stores the config value is CurrentValue instead of Value;
    2. there is a callback that is called every time you update the settings file: OnChange(Action<TOptions, string?> listener). You can use it to perform operations that must be triggered every time the configuration changes.

    Note: OnChange returns an object that implements IDisposable, which you need to dispose of. Otherwise, as Chris Elbert noticed (ps: follow him on Twitter!), the instance of the class that uses IOptionsMonitor<T> will never be disposed.
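
    Here’s a sketch of how you could store and release that subscription (ConfigWatcher is a hypothetical service, not from the article):

    public class ConfigWatcher : IDisposable
    {
        private readonly IDisposable? _subscription;

        public ConfigWatcher(IOptionsMonitor<GeneralConfig> monitor)
        {
            // keep the handle returned by OnChange...
            _subscription = monitor.OnChange((config, name) =>
                Console.WriteLine($"Config changed: {config.Name}"));
        }

        // ...and release it when the owner is disposed
        public void Dispose() => _subscription?.Dispose();
    }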

    Again, there is no way to test an IOptionsMonitor<T> as we did with IOptions<T>. So you should rely on stubs and mocks (again, maybe with Moq or NSubstitute 🔗).

    Demo: the configuration changes, and the callback is called

    In the GIF below I demonstrate the usage of IOptionsMonitor.

    I created an API controller that listens to changes in the configuration, updates a static counter, and returns the final result from the API:

    public class TestingIOptionsMonitorController : ControllerBase
    {
        private static int updates = 0;
        private readonly GeneralConfig _config;
    
        public TestingIOptionsMonitorController(IOptionsMonitor<GeneralConfig> config)
        {
            _config = config.CurrentValue;
            config.OnChange((_, _) => updates++);
        }
    
        [HttpGet]
        public ActionResult<Result> Get() => new Result
        {
            MyName = _config.Name,
            Updates = updates
        };
    }
    

    By running it and modifying the config content while the application is up and running, you can see the full usage of IOptionsMonitor<T>:

    With IOptionsMonitor the configuration changes, the callback is called

    As you can see, I performed these steps:

    1. run the application
    2. call the /TestingIOptionsMonitor endpoint. The MyName field is read from config, and Updates is 0;
    3. I then update and save the config file. In the background, the OnChange callback is fired, and the Updates value is updated;

    Oddly, the callback is called more times than expected. I updated the file only twice, but the counter is set to 6. That’s weird behavior. If you know why it happens, drop a message below 📩

    IOptions vs IOptionsSnapshot vs IOptionsMonitor in .NET

    We’ve seen a short introduction to IOptions, IOptionsSnapshot, and IOptionsMonitor.

    There are some differences, of course. Here’s a table with a recap of what we learned from this article.

    | Type                | DI Lifetime | Best way to inject in unit tests | Allows live reload | Has callback function |
    | ------------------- | ----------- | -------------------------------- | ------------------ | --------------------- |
    | IOptions<T>         | Singleton   | Options.Create<T>                |                    |                       |
    | IOptionsSnapshot<T> | Scoped      | Stub / Mock                      | 🟢                 |                       |
    | IOptionsMonitor<T>  | Singleton   | Stub / Mock                      | 🟢                 | 🟢                    |

    There’s actually more: for example, with IOptionsSnapshot and IOptionsMonitor you can use named options, so that you can inject multiple instances of the same type that refer to different nodes in the JSON file.

    But that will be the topic for a future article, so stay tuned 😎

    Further readings

    There is a lot more about how to inject configurations.

    For sure, one of the best resources is the official documentation:

    🔗 Options pattern in ASP.NET Core | Microsoft docs

    I insisted on explaining that IOptions and IOptionsMonitor are Singleton, while IOptionsSnapshot is Scoped.
    If you don’t know what they mean, here’s a short but thorough explanation:

    🔗 Dependency Injection lifetimes in .NET | Code4IT

    In particular, I want you to focus on the Bonus tip, where I explain the problems of having Transient or Scoped services injected into a Singleton service:

    🔗Bonus tip: Transient dependency inside a Singleton | Code4IT

    This article first appeared on Code4IT 🐧

    In this article, I stored my configurations in an appsettings.json file. There are more ways to set configuration values – for example, Environment Variables and launchSettings.json.

    🔗 3 (and more) ways to set configuration values in .NET | Code4IT

    Wrapping up

    In this article, we learned how to use IOptions<T>, IOptionsSnapshot<T>, and IOptionsMonitor<T> in a .NET 7 application.

    There are many other ways to handle configurations: for instance, you can simply inject the whole object as a singleton, or use IConfiguration to get single values.

    When would you choose an approach instead of another? Drop a comment below! 📩

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧






  • How to add Dependency Injection, Configurations, and Logging in a .NET 7 Console Application | Code4IT



    By default, you cannot use Dependency Injection, custom logging, and configurations from settings in a Console Application. Unless you create a custom Host!


    Sometimes, you just want to create a console application to run a complex script. Just because it is a “simple” console application doesn’t mean you should skip best practices, such as using Dependency Injection.

    Also, you might want to test the code: Dependency Injection allows you to test the behavior of a class without having a strict dependency on the referenced concrete classes: you can use stubs and mocks, instead.

    In this article, we’re going to learn how to add Dependency Injection in a .NET 7 console application. The same approach can be used for other versions of .NET. We will also add logging, using Serilog, and configurations coming from an appsettings.json file.

    We’re going to start small, with the basic parts, and gradually move on to more complex scenarios. We’re gonna create a simple, silly console application: we will inject a bunch of services, and print a message on the console.

    We have a root class:

    public class NumberWorker
    {
        private readonly INumberService _service;
    
        public NumberWorker(INumberService service) => _service = service;
    
        public void PrintNumber()
        {
            var number = _service.GetPositiveNumber();
            Console.WriteLine($"My wonderful number is {number}");
        }
    }
    

    that injects an INumberService, implemented by NumberService:

    public interface INumberService
    {
        int GetPositiveNumber();
    }
    
    public class NumberService : INumberService
    {
        private readonly INumberRepository _repo;
    
        public NumberService(INumberRepository repo) => _repo = repo;
    
        public int GetPositiveNumber()
        {
            int number = _repo.GetNumber();
            return Math.Abs(number);
        }
    }
    

    which, in turn, uses an INumberRepository implemented by NumberRepository:

    public interface INumberRepository
    {
        int GetNumber();
    }
    
    public class NumberRepository : INumberRepository
    {
        public int GetNumber()
        {
            return -42;
        }
    }
    

    The console application will create a new instance of NumberWorker and call the PrintNumber method.

    Now, we have to build the dependency tree and inject such services.

    How to create an IHost to use a host for a Console Application

    The first step is to install some NuGet packages that will allow us to add a custom IHost container, so that we can add Dependency Injection and all the customization we usually add in projects that have a Startup (or a Program) class, such as .NET APIs.

    We need to install two NuGet packages, Microsoft.Extensions.Hosting.Abstractions and Microsoft.Extensions.Hosting, which will be used to create the new IHost that builds the dependency tree.

    By navigating your csproj file, you should be able to see something like this:

    <ItemGroup>
        <PackageReference Include="Microsoft.Extensions.Hosting" Version="7.0.1" />
        <PackageReference Include="Microsoft.Extensions.Hosting.Abstractions" Version="7.0.0" />
    </ItemGroup>
    

    Now we are ready to go! First, add the following using statements:

    using Microsoft.Extensions.DependencyInjection;
    using Microsoft.Extensions.Hosting;
    

    and then, within the Program class, add this method:

    private static IHost CreateHost() =>
      Host.CreateDefaultBuilder()
          .ConfigureServices((context, services) =>
          {
              services.AddSingleton<INumberRepository, NumberRepository>();
              services.AddSingleton<INumberService, NumberService>();
          })
          .Build();
    

    Host.CreateDefaultBuilder() creates the default IHostBuilder – similar to the IWebHostBuilder, but without any reference to web components.

    Then we add all the dependencies, using services.AddSingleton<T, K>. Notice that it’s not necessary to add services.AddSingleton<NumberWorker>: when we create the concrete instance, the whole dependency tree gets resolved, so there is no need to register the root class itself.

    Finally, once we have everything in place, we call Build() to create a new instance of IHost.

    Now, we just have to run it!

    In the Main method, create the IHost instance by calling CreateHost(). Then, by using the ActivatorUtilities class (coming from the Microsoft.Extensions.DependencyInjection namespace), create a new instance of NumberWorker, so that you can call PrintNumber().

    private static void Main(string[] args)
    {
      IHost host = CreateHost();
      NumberWorker worker = ActivatorUtilities.CreateInstance<NumberWorker>(host.Services);
      worker.PrintNumber();
    }
    

    Now you are ready to run the application, and see the message on the console:

    Basic result on Console
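
    By the way, if you prefer not to use ActivatorUtilities, an alternative (shown here as a minimal sketch) is to register the root class in the container and resolve it directly:

    // inside ConfigureServices, register the root class as well
    services.AddSingleton<NumberWorker>();
    
    // then, in Main, resolve it from the service provider
    NumberWorker worker = host.Services.GetRequiredService<NumberWorker>();
    worker.PrintNumber();
    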

    Read configurations from appsettings.json for a Console Library

    We want to make our system configurable and place our configurations in an appsettings.json file.

    As we saw in a recent article 🔗, we can use IOptions<T> to inject configurations in the constructor. For the sake of this article, I’m gonna use a POCO class, NumberConfig, that is mapped to a configuration section and injected into the classes.

    public class NumberConfig
    {
        public int DefaultNumber { get; set; }
    }
    

    Now we need to manually create an appsettings.json file within the project folder, and add a new section that will hold the values of the configuration:

    {
      "Number": {
        "DefaultNumber": -899
      }
    }
    

    and now we can add the configuration binding in our CreateHost() method, within the ConfigureServices section:

    services.Configure<NumberConfig>(context.Configuration.GetSection("Number"));
    

    Finally, we can update the NumberRepository to accept the configurations in input and use them to return the value:

    public class NumberRepository : INumberRepository
    {
        private readonly NumberConfig _config;
    
        public NumberRepository(IOptions<NumberConfig> options) => _config = options.Value;
    
        public int GetNumber() => _config.DefaultNumber;
    }
    

    Run the project to admire the result, and… BOOM! It will not work! You will see the message “My wonderful number is 0”, even though the value we set in the config file is -899.

    This happens because the appsettings.json file is not included in the build output. Right-click on that file, select the Properties menu, and set “Copy to Output Directory” to “Copy always”:

    Copy always the appsettings file to the Output Directory
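
    If you prefer editing the project file directly, the equivalent csproj entry should look similar to this (a sketch, assuming appsettings.json sits in the project root):

    <ItemGroup>
        <None Update="appsettings.json">
            <CopyToOutputDirectory>Always</CopyToOutputDirectory>
        </None>
    </ItemGroup>
    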

    Now, build and run the project, and you’ll see the correct message: “My wonderful number is 899”.

    Clearly, the same values can also be accessed via IConfiguration.
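
    For example, here’s a minimal sketch that reads the same value straight from IConfiguration (it requires a using Microsoft.Extensions.Configuration; statement):

    IConfiguration config = host.Services.GetRequiredService<IConfiguration>();
    
    // read the value using the "Section:Key" path syntax
    int defaultNumber = config.GetValue<int>("Number:DefaultNumber");
    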

    Add Serilog logging to log on Console and File

    Finally, we can add Serilog logs to our console applications – as well as define Sinks.

    To add Serilog, you first have to install these NuGet packages:

    • Serilog.Extensions.Hosting and Serilog.Formatting.Compact to add the basics of Serilog;
    • Serilog.Settings.Configuration to read logging configurations from settings (if needed);
    • Serilog.Sinks.Console and Serilog.Sinks.File to add the Console and the File System as Sinks.
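
    After installing them, your csproj should contain entries similar to these (the version numbers below are just illustrative):

    <ItemGroup>
        <PackageReference Include="Serilog.Extensions.Hosting" Version="7.0.0" />
        <PackageReference Include="Serilog.Formatting.Compact" Version="1.1.0" />
        <PackageReference Include="Serilog.Settings.Configuration" Version="7.0.0" />
        <PackageReference Include="Serilog.Sinks.Console" Version="4.1.0" />
        <PackageReference Include="Serilog.Sinks.File" Version="5.0.0" />
    </ItemGroup>
    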

    Let’s get back to the CreateHost() method, and add a new section right after ConfigureServices:

    .UseSerilog((context, services, configuration) => configuration
        .ReadFrom.Configuration(context.Configuration)
        .ReadFrom.Services(services)
        .Enrich.FromLogContext()
        .WriteTo.Console()
        .WriteTo.File($"report-{DateTimeOffset.UtcNow.ToString("yyyy-MM-dd-HH-mm-ss")}.txt", restrictedToMinimumLevel: LogEventLevel.Warning)
        )
    

    Here we’re telling Serilog to read the configuration from the application settings, enrich log events with contextual information, and write both to the Console and to a File (the latter only if the log level is greater than or equal to Warning).
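
    For instance, here’s a minimal sketch of our NumberWorker updated to log through an injected ILogger<T> (it requires a using Microsoft.Extensions.Logging; statement; thanks to UseSerilog, these log entries flow to the configured sinks):

    public class NumberWorker
    {
        private readonly INumberService _service;
        private readonly ILogger<NumberWorker> _logger;
    
        public NumberWorker(INumberService service, ILogger<NumberWorker> logger)
            => (_service, _logger) = (service, logger);
    
        public void PrintNumber()
        {
            var number = _service.GetPositiveNumber();
    
            // written to the Console sink; it reaches the File sink
            // only when the level is Warning or higher
            _logger.LogInformation("Retrieved number {Number}", number);
    
            Console.WriteLine($"My wonderful number is {number}");
        }
    }
    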

    Then, add an ILogger here and there, and admire the final result:

    Serilog Logging is visible on the Console

    Final result

    To wrap up, here’s the final implementation of the Program class and the CreateHost method:

    private static void Main(string[] args)
    {
        IHost host = CreateHost();
        NumberWorker worker = ActivatorUtilities.CreateInstance<NumberWorker>(host.Services);
        worker.PrintNumber();
    }
    
    private static IHost CreateHost() =>
      Host
      .CreateDefaultBuilder()
      .ConfigureServices((context, services) =>
      {
          services.Configure<NumberConfig>(context.Configuration.GetSection("Number"));
    
          services.AddSingleton<INumberRepository, NumberRepository>();
          services.AddSingleton<INumberService, NumberService>();
      })
      .UseSerilog((context, services, configuration) => configuration
          .ReadFrom.Configuration(context.Configuration)
          .ReadFrom.Services(services)
          .Enrich.FromLogContext()
          .WriteTo.Console()
          .WriteTo.File($"report-{DateTimeOffset.UtcNow.ToString("yyyy-MM-dd-HH-mm-ss")}.txt", restrictedToMinimumLevel: LogEventLevel.Warning)
          )
      .Build();
    

    Further readings

    As always, a few resources to learn more about the topics discussed in this article.

    First and foremost, have a look at this article with a full explanation of Generic Hosts in a .NET Core application:

    🔗 .NET Generic Host in ASP.NET Core | Microsoft docs

    Then, if you recall, we’ve already learned how to print Serilog logs to the Console:

    🔗 How to log to Console with .NET Core and Serilog | Code4IT

    This article first appeared on Code4IT 🐧

    Lastly, we accessed configurations using IOptions<NumberConfig>. Did you know that there are other ways to access config?

    🔗 Understanding IOptions, IOptionsMonitor, and IOptionsSnapshot in .NET 7 | Code4IT

    as well as defining configurations for your project?

    🔗 3 (and more) ways to set configuration values in .NET | Code4IT

    Wrapping up

    In this article, we’ve learned how we can customize a .NET Console application to use dependency injection, external configurations, and Serilog logging.

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧






  • [ENG] .NET 5, 6, and 7 for busy developers | Dotnet Sheff




  • Advanced Integration Tests for .NET 7 API with WebApplicationFactory and NUnit | Code4IT

    Advanced Integration Tests for .NET 7 API with WebApplicationFactory and NUnit | Code4IT


    Integration Tests are incredibly useful: a few Integration Tests are often more useful than lots of Unit Tests. Let’s learn some advanced capabilities of WebApplicationFactory.


    In a previous article, we learned a quick way to create Integration Tests for ASP.NET API by using WebApplicationFactory. That was a nice introductory article. But now we will delve into more complex topics and examples.

    In my opinion, a few Integration Tests and just the necessary number of Unit tests are better than hundreds of Unit Tests and no Integration Tests at all. In general, the Testing Diamond should be preferred over the Testing Pyramid (well, in most cases).

    In this article, we are going to create advanced Integration Tests by defining custom application settings, customizing dependencies to be used only during tests, defining custom logging, and performing complex operations in our tests.

    For the sake of this article, I created a sample API application that exposes one single endpoint whose purpose is to retrieve some info about the URL passed in the query string. For example,

    GET /SocialPostLink?uri=https%3A%2F%2Ftwitter.com%2FBelloneDavide%2Fstatus%2F1682305491785973760
    

    will return

    {
      "instanceName": "Real",
      "info": {
        "socialNetworkName": "Twitter",
        "sourceUrl": "https://twitter.com/BelloneDavide/status/1682305491785973760",
        "username": "BelloneDavide",
        "id": "1682305491785973760"
      }
    }
    

    For completeness, instanceName is a value coming from the appsettings.json file, while info is an object that holds some info about the social post URL passed as input.

    Internally, the code is using the Chain of Responsibility pattern: there is a handler that “knows” if it can handle a specific URL; if so, it just elaborates the input; otherwise, it calls the next handler.

    There is also a Factory that builds the chain, and finally, a Service that instantiates the Factory and then resolves the dependencies.
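
    Just to give you an idea of the shape of such a chain (the names below are hypothetical, not taken from the actual production code), a handler might look like this:

    public abstract class SocialLinkHandler
    {
        private SocialLinkHandler? _next;
    
        public SocialLinkHandler SetNext(SocialLinkHandler next)
        {
            _next = next;
            return next;
        }
    
        // handle the URL if this handler recognizes it; otherwise, forward it to the next one
        public LinkInfo? GetLinkInfo(Uri postUri)
            => CanHandle(postUri) ? Handle(postUri) : _next?.GetLinkInfo(postUri);
    
        protected abstract bool CanHandle(Uri postUri);
        protected abstract LinkInfo Handle(Uri postUri);
    }
    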

    As you can see, this solution can become complex. We could run lots of Unit Tests to validate that the Chain of Responsibility works as expected. We could even write a Unit Test suite for the Factory.

    Class Diagram

    But, at the end of the day, we don’t really care about the internal structure of the project: as long as it works as expected, we could even use a huge switch block (clearly, with all the consequences of this choice). So, let’s write some Integration Tests.

    How to create a custom WebApplicationFactory in .NET

    When creating Integration Tests for .NET APIs you have to instantiate a new instance of WebApplicationFactory, a class coming from the Microsoft.AspNetCore.Mvc.Testing NuGet Package.

    Since we are going to define it once and reuse it across all the tests, let’s create a new class that extends WebApplicationFactory, and add some custom behavior to it.

    public class IntegrationTestWebApplicationFactory : WebApplicationFactory<Program>
    {
    
    }
    

    Let’s focus on the Program class: as you can see, the WebApplicationFactory class requires an entry point. Generally speaking, it’s the Program class of our application.

    If you hover on WebApplicationFactory<Program> and hit CTRL+. on Visual Studio, the autocomplete proposes two alternatives: one is the Program class defined in your APIs, while the other one is the Program class defined in Microsoft.VisualStudio.TestPlatform.TestHost. Choose the one for your API application! The WebApplicationFactory class will then instantiate your API following the instructions defined in your Program class, thus resolving all the dependencies and configurations as if you were running your application locally.

    What can you do if you don’t have a Program class? If you use top-level statements, the Program class is implicit, so you cannot reference it directly. The workaround is to create a new partial class named Program and leave it empty: this way, you have a class name that can be used to reference the API definition:

    public partial class Program { }
    

    Here you can override some definitions of the WebHost to be created by calling ConfigureWebHost:

    public class IntegrationTestWebApplicationFactory : WebApplicationFactory<Program>
    {
        protected override void ConfigureWebHost(IWebHostBuilder builder)
        {
            builder.ConfigureAppConfiguration((host, configurationBuilder) => { });
        }
    }
    

    How to use WebApplicationFactory in your NUnit tests

    It’s time to start working on some real Integration Tests!

    As we said before, we have only one HTTP endpoint, defined like this:

    
    private readonly ISocialLinkParser _parser;
    private readonly ILogger<SocialPostLinkController> _logger;
    private readonly IConfiguration _config;
    
    public SocialPostLinkController(ISocialLinkParser parser, ILogger<SocialPostLinkController> logger, IConfiguration config)
    {
        _parser = parser;
        _logger = logger;
        _config = config;
    }
    
    [HttpGet]
    public IActionResult Get([FromQuery] string uri)
    {
        _logger.LogInformation("Received uri {Uri}", uri);
        if (Uri.TryCreate(uri, new UriCreationOptions {  }, out Uri _uri))
        {
            var linkInfo = _parser.GetLinkInfo(_uri);
            _logger.LogInformation("Uri {Uri} is of type {Type}", uri, linkInfo.SocialNetworkName);
    
            var instance = new Instance
            {
                InstanceName = _config.GetValue<string>("InstanceName"),
                Info = linkInfo
            };
            return Ok(instance);
        }
        else
        {
            _logger.LogWarning("Uri {Uri} is not a valid Uri", uri);
            return BadRequest();
        }
    }
    

    We have 2 flows to validate:

    • If the input URI is valid, the HTTP Status code should be 200;
    • If the input URI is invalid, the HTTP Status code should be 400;

    We could simply write Unit Tests for this purpose, but let me write Integration Tests instead.

    First of all, we have to create a test class and instantiate a new IntegrationTestWebApplicationFactory. Then, every time a test runs, we will create a new HttpClient that automatically includes all the services and configurations defined in the API application.

    public class ApiIntegrationTests : IDisposable
    {
        private IntegrationTestWebApplicationFactory _factory;
        private HttpClient _client;
    
        [OneTimeSetUp]
        public void OneTimeSetup() => _factory = new IntegrationTestWebApplicationFactory();
    
        [SetUp]
        public void Setup() => _client = _factory.CreateClient();
    
        public void Dispose() => _factory?.Dispose();
    }
    

    As you can see, the test class implements IDisposable so that we can call Dispose() on the IntegrationTestWebApplicationFactory instance.

    From now on, we can use the _client instance to work with the in-memory instance of the API.

    One of the best parts of it is that, since it’s an in-memory instance, we can even debug our API application. When you create a test and put a breakpoint in the production code, you can hit it and see the actual values as if you were running the application in a browser.

    Now that we have the instance of HttpClient, we can create two tests to ensure that the two cases we defined before are valid. If the input string is a valid URI, return 200:

    [Test]
    public async Task Should_ReturnHttp200_When_UrlIsValid()
    {
        string inputUrl = "https://twitter.com/BelloneDavide/status/1682305491785973760";
    
        var result = await _client.GetAsync($"SocialPostLink?uri={inputUrl}");
    
        Assert.That(result.StatusCode, Is.EqualTo(HttpStatusCode.OK));
    }
    

    Otherwise, return Bad Request:

    [Test]
    public async Task Should_ReturnBadRequest_When_UrlIsNotValid()
    {
        string inputUrl = "invalid-url";
    
        var result = await _client.GetAsync($"/SocialPostLink?uri={inputUrl}");
    
        Assert.That(result.StatusCode, Is.EqualTo(HttpStatusCode.BadRequest));
    }
    

    How to create test-specific configurations using InMemoryCollection

    WebApplicationFactory is highly configurable thanks to the ConfigureWebHost method. For instance, you can customize the settings injected into your services.

    Usually, you want to rely on the exact same configurations defined in your appsettings.json file to ensure that the system behaves correctly with the “real” configurations.

    For example, in the appsettings.json file I defined the key “InstanceName” with the value “Real”; that value is used to create the returned Instance object. We can validate that the value is actually read from that source thanks to this test:

    [Test]
    public async Task Should_ReadInstanceNameFromSettings()
    {
        string inputUrl = "https://twitter.com/BelloneDavide/status/1682305491785973760";
    
        var result = await _client.GetFromJsonAsync<Instance>($"/SocialPostLink?uri={inputUrl}");
    
        Assert.That(result.InstanceName, Is.EqualTo("Real"));
    }
    

    But some other times you might want to override a specific configuration key.

    The ConfigureAppConfiguration method allows you to customize how you manage Configurations by adding or removing sources.

    If you want to add some configurations specific to the WebApplicationFactory, you can use AddInMemoryCollection, a method that allows you to add configurations in a key-value format:

    protected override void ConfigureWebHost(IWebHostBuilder builder)
    {
        builder.ConfigureAppConfiguration((host, configurationBuilder) =>
        {
            configurationBuilder.AddInMemoryCollection(
                new List<KeyValuePair<string, string?>>
                {
                    new KeyValuePair<string, string?>("InstanceName", "FromTests")
                });
        });
    }
    

    Even if you had the InstanceName configured in your appsettings.json file, the value is now overridden and set to FromTests.

    You can validate this change by simply replacing the expected value in the previous test:

    [Test]
    public async Task Should_ReadInstanceNameFromSettings()
    {
        string inputUrl = "https://twitter.com/BelloneDavide/status/1682305491785973760";
    
        var result = await _client.GetFromJsonAsync<Instance>($"/SocialPostLink?uri={inputUrl}");
    
        Assert.That(result.InstanceName, Is.EqualTo("FromTests"));
    }
    

    If you also want to discard all the other existing configuration sources, you can call configurationBuilder.Sources.Clear() before AddInMemoryCollection and remove all the other existing configurations.

    How to set up custom dependencies for your tests

    Maybe you don’t want to resolve all the existing dependencies, but just a subset of them. For example, you might not want to call external APIs with a limited number of free API calls to avoid paying for the test-related calls. You can then rely on Stub classes that simulate the dependency by giving you full control of the behavior.

    We want to replace an existing class with a Stub: let’s create a stub class that will be used instead of SocialLinkParser:

    public class StubSocialLinkParser : ISocialLinkParser
    {
        public LinkInfo GetLinkInfo(Uri postUri) => new LinkInfo
        {
            SocialNetworkName = "test from stub",
            Id = "test id",
            SourceUrl = postUri,
            Username = "test username"
        };
    }
    

    We can then customize Dependency Injection to use StubSocialLinkParser in place of SocialLinkParser by specifying the dependency within the ConfigureTestServices method:

    builder.ConfigureTestServices(services =>
    {
        services.AddScoped<ISocialLinkParser, StubSocialLinkParser>();
    });
    

    Note that the services registered within ConfigureTestServices are applied after the application’s own registrations, so they override the production ones: that’s why our stub wins over the real SocialLinkParser. Finally, we can create a test to validate this change:

    [Test]
    public async Task Should_UseStubName()
    {
        string inputUrl = "https://twitter.com/BelloneDavide/status/1682305491785973760";
    
        var result = await _client.GetFromJsonAsync<Instance>($"/SocialPostLink?uri={inputUrl}");
    
        Assert.That(result.Info.SocialNetworkName, Is.EqualTo("test from stub"));
    }
    

    How to create Integration Tests on specific resolved dependencies

    Now we are going to test that the SocialLinkParser does its job, regardless of the internal implementation. Right now we have used the Chain of Responsibility pattern, and we rely on the ISocialLinksFactory interface to create the correct sequence of handlers. But we don’t know in the future how we will define the code: maybe we will replace it all with a huge if-else sequence – the most important part is that the code works, regardless of the internal implementation.

    We can proceed in two ways: writing tests on the interface or writing tests on the concrete class.

    For the sake of this article, we are going to run tests on the SocialLinkParser class. Not the interface, but the concrete class. The first step is to add the class to the DI engine in the Program class:

    builder.Services.AddScoped<SocialLinkParser>();
    

    Now we can create a test to validate that it is working:

    [Test]
    public async Task Should_ResolveDependency()
    {
        using (var _scope = _factory.Services.CreateScope())
        {
            var service = _scope.ServiceProvider.GetRequiredService<SocialLinkParser>();
            Assert.That(service, Is.Not.Null);
            Assert.That(service, Is.AssignableTo<SocialLinkParser>());
        }
    }
    

    As you can see, we are creating an IServiceScope by calling _factory.Services.CreateScope(). Since we have to discard this scope after the test run, we have to place it within a using block. Then, we can create a new instance of SocialLinkParser by calling _scope.ServiceProvider.GetRequiredService<SocialLinkParser>() and create all the tests we want on the concrete implementation of the class.

    The benefit of this approach is that you have all the internal dependencies already resolved, without relying on mocks. You can then ensure that everything, from that point on, works as you expect.

    Here I created the scope within a using block. There is another approach that I prefer: create the scope instance in the SetUp method, and call Dispose() on it in the TearDown phase:

    protected IServiceScope _scope;
    protected SocialLinkParser _sut;
    private IntegrationTestWebApplicationFactory _factory;
    
    [OneTimeSetUp]
    public void OneTimeSetup() => _factory = new IntegrationTestWebApplicationFactory();
    
    [SetUp]
    public void Setup()
    {
        _scope = _factory.Services.CreateScope();
        _sut = _scope.ServiceProvider.GetRequiredService<SocialLinkParser>();
    }
    
    [TearDown]
    public void TearDown()
    {
        _sut = null;
        _scope.Dispose();
    }
    
    public void Dispose() => _factory?.Dispose();
    

    You can see an example of this implementation in the SocialLinkParserTests class.

    Where are my logs?

    Sometimes you just want to see the logs generated by your application to help you debug an issue (yes, you can simply debug the application!). But, unless properly configured, the application logs will not be available to you.

    But you can easily add logs to the console by adding the Console sink within the ConfigureTestServices method:

    builder.ConfigureTestServices(services =>
    {
        services.AddLogging(builder => builder.AddConsole().AddDebug());
    });
    

    Now you will be able to see all the logs you generated in the Output panel of Visual Studio by selecting the Tests source:

    Logs appear in the Output panel of VisualStudio

    Beware that you are still reading the logging configurations from the appsettings file! If your project is configured to log directly to a sink (such as DataDog or SEQ), your tests will send their logs to those sinks too. Therefore, you should get rid of all the other logging sources by calling ClearProviders():

    services.AddLogging(builder => builder.ClearProviders().AddConsole().AddDebug());
    

    Full example

    In this article, we’ve configured many parts of our WebApplicationFactory. Here’s the final result:

    public class IntegrationTestWebApplicationFactory : WebApplicationFactory<Program>
    {
        protected override void ConfigureWebHost(IWebHostBuilder builder)
        {
            builder.ConfigureAppConfiguration((host, configurationBuilder) =>
            {
                // Remove other settings sources, if necessary
                configurationBuilder.Sources.Clear();
    
                //Create custom key-value pairs to be used as settings
                configurationBuilder.AddInMemoryCollection(
                    new List<KeyValuePair<string, string?>>
                    {
                        new KeyValuePair<string, string?>("InstanceName", "FromTests")
                    });
            });
    
            builder.ConfigureTestServices(services =>
            {
                //Add stub classes
                services.AddScoped<ISocialLinkParser, StubSocialLinkParser>();
    
                //Configure logging
                services.AddLogging(builder => builder.ClearProviders().AddConsole().AddDebug());
            });
        }
    }
    

    You can find the source code used for this article on my GitHub; feel free to download it and toy with it!

    Further readings

    This is an in-depth article about Integration Tests in .NET. I already wrote an article about it with a simpler approach that you might enjoy:

    🔗 How to run Integration Tests for .NET API | Code4IT

    This article first appeared on Code4IT 🐧

    As I often say, a few Integration Tests are often more useful than a ton of Unit Tests. Focusing on Integration Tests rather than on Unit Tests has the benefit of ensuring that the system behaves correctly regardless of the internal implementation.

    In this article, I used the Chain of Responsibility pattern, so Unit Tests would be tightly coupled to the Handlers. If we decided to move to another pattern, we would have to delete all the existing tests and rewrite everything from scratch.

    Therefore, in my opinion, the Testing Diamond is often more efficient than the Testing Pyramid, as I explained here:

    🔗 Testing Pyramid vs Testing Diamond (and how they affect Code Coverage) | Code4IT

    Wrapping up

    This was a huge article, I know.

    Again, feel free to download and run the example code I shared on my GitHub.

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧






  • Pre-commit hooks with Husky.NET – build, format, and test your .NET application before a Git commit | Code4IT

    Pre-commit hooks with Husky.NET – build, format, and test your .NET application before a Git commit | Code4IT


    A Git commit represents the status of a system. Learn how to validate that your code builds, is well-formatted, and all the tests pass by adding a Git hook!


    If you need to run operations before completing a Git commit, you can rely on Git Hooks.

    Git hooks are scripts that run automatically whenever a particular event occurs in a Git repository. They let you customize Git’s internal behaviour and trigger customizable actions at key points in the development life cycle.

    Extending Git hooks allows you to plug in custom functionalities to the regular Git flow, such as Git message validation, code formatting, etc.

    I’ve already described how to use Husky with NPM, but here I’m gonna use Husky.NET, the version of Husky created for .NET-based applications.

    Git hooks: a way to extend Git operations

    As we said, Git hooks are actions that run during specific phases of Git operations.

    Git hooks fall into 4 categories:

    • client-side hooks related to the committing workflow: they execute when you run git commit on your local repository;
    • client-side hooks related to the email workflow: they are executed when running git am, which is a command that allows you to integrate mails and Git repositories (I’ve never used it. If you are interested in this functionality, here’s the official documentation);
    • client-side hooks related to other operations: these hooks run on your local repository when performing operations like git rebase;
    • server-side hooks: they run after a commit is received on the remote repository, and they can reject a git push operation.

    Let’s focus on the client-side hooks that run when you commit changes using git commit.

    • pre-commit: the first hook invoked by git commit (if you don’t use the -m flag, it is invoked before asking you to insert a commit message); it can be used to inspect the snapshot that is about to be committed;
    • prepare-commit-msg: invoked by git commit; it can be used to edit the default commit message when it is generated by an automated tool;
    • commit-msg: invoked by git commit; it can be used to validate or modify the commit message after it is entered by the user;
    • post-commit: invoked after the git commit execution has run correctly; it is generally used to fire notifications.

    How to install Husky.NET and its dependencies in a .NET Application

    Husky.NET must be installed in the root folder of the solution.

    You first have to create a tool-manifest file in the root folder by running:

    dotnet new tool-manifest
    

    This command creates a file named dotnet-tools.json under the .config folder: here you can see the list of external tools used by dotnet.

    After running the command, you will see that the dotnet-tools.json file contains this element:

    {
      "version": 1,
      "isRoot": true,
      "tools": {}
    }
    

    Now you can add Husky as a dotnet tool by running:

    dotnet tool install Husky
    

    After running the command, the file will contain something like this:

    {
      "version": 1,
      "isRoot": true,
      "tools": {
        "husky": {
          "version": "0.6.2",
          "commands": ["husky"]
        }
      }
    }
    

    Now that we have added it to our dependencies, we can add Husky to an existing .NET application by running:

    dotnet husky install
    

    If you open the root folder, you should be able to see these 3 folders:

    • .git, which contains the info about the Git repository;
    • .config that contains the description of the tools, such as dotnet-tools;
    • .husky that contains the files we are going to use to define our Git hooks.

    Finally, you can add a new hook by running, for example,

    dotnet husky add pre-commit -c "echo 'Hello world!'"
    git add .husky/pre-commit
    

    This command creates a new file, pre-commit (without file extension), under the .husky folder. By default, it appears like this:

    #!/bin/sh
    . "$(dirname "$0")/_/husky.sh"
    
    ## husky task runner examples -------------------
    ## Note : for local installation use 'dotnet' prefix. e.g. 'dotnet husky'
    
    ## run all tasks
    #husky run
    
    ### run all tasks with group: 'group-name'
    #husky run --group group-name
    
    ## run task with name: 'task-name'
    #husky run --name task-name
    
    ## pass hook arguments to task
    #husky run --args "$1" "$2"
    
    ## or put your custom commands -------------------
    #echo 'Husky.Net is awesome!'
    
    echo 'Hello world!'
    

    The default content is pretty useless; it’s time to customize that hook.

    Notice that the previous commands have also generated a task-runner.json file; we will use it later.

    Your first pre-commit hook

    To customize the script, open the file located at .husky/pre-commit.

    Here, you can add whatever you want.

    In the example below, I run commands that compile the code, format the text (using dotnet format with the rules defined in the .editorconfig file), and then run all the tests.

    #!/bin/sh
    . "$(dirname "$0")/_/husky.sh"
    
    echo 'Building code'
    dotnet build
    
    echo 'Formatting code'
    dotnet format
    
    echo 'Running tests'
    dotnet test
    

    Then, add it to Git, and you are ready to go. 🚀 But wait…

    3 ways to manage dotnet format with Husky.NET

    There is a problem with the approach in the example above.

    Let’s simulate a usage flow:

    1. you modify a C# class;
    2. you run git commit -m "message";
    3. the pre-commit hook runs dotnet build;
    4. the pre-commit hook runs dotnet format;
    5. the pre-commit hook runs dotnet test;
    6. after the hooks, the commit is created.

    What is the final result?

    Since dotnet format modifies the source files, and given that the snapshot has already been created before executing the hook, all the modified files will not be part of the final commit!

    Also, dotnet format executes linting on every file in the solution, not only those that are part of the current snapshot. The operation might then take a lot of time, depending on the size of the repository, and most of the time, it will not update any file (because you’ve already formatted everything in a previous run).

    We have to work out a way to fix this issue. I’ll suggest three approaches.

    Include all the changes using Git add

    The first approach is quite simple: run git add . after dotnet format.

    So, the flow becomes:

    1. you modify a C# class;
    2. you run git commit -m "message";
    3. the pre-commit hook runs dotnet build;
    4. the pre-commit hook runs dotnet format;
    5. the pre-commit hook runs git add .;
    6. the pre-commit hook runs dotnet test;
    7. Git creates the commit.

    This is the most straightforward approach, but it has some downsides:

    • dotnet format is executed on every file in the solution. The more your project grows, the slower your commits become;
    • git add . adds to the current snapshot all the files modified, even those you did not add to this commit on purpose (maybe because you have updated many files and want to create two distinct commits).

    So, it works, but we can do better.

    Execute a dry run of dotnet-format

    You can add the --verify-no-changes flag to the dotnet format command: this flag returns an error if at least one file needs to be updated because of a formatting rule.

    Let’s see how the flow changes if one file needs to be formatted.

    1. you modify a C# class;
    2. you run git commit -m "message";
    3. the pre-commit hook runs dotnet build;
    4. the pre-commit hook runs dotnet format --verify-no-changes;
    5. the pre-commit hook returns an error and aborts the operation;
    6. you run dotnet format on the whole solution to fix all the formatting issues;
    7. you run git add .;
    8. you run git commit -m "message";
    9. the pre-commit hook runs dotnet build;
    10. the pre-commit hook runs dotnet format --verify-no-changes. Now, there is nothing to format, and we can proceed;
    11. the pre-commit hook runs dotnet test;
    12. Git creates the commit.

    Notice that, this way, if there is something to format, the whole commit is aborted. You will then have to run dotnet format on the entire solution, fix the errors, add the changes to the snapshot, and restart the flow.

    It’s a longer process, but it allows you to have complete control over the formatted files.

    Also, you won’t risk including in the snapshot the files you want to keep staged in order to add them to a subsequent commit.
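
    With this approach, the pre-commit file might look like this (a sketch of the flow we just described):

    #!/bin/sh
    . "$(dirname "$0")/_/husky.sh"
    
    echo 'Building code'
    dotnet build
    
    echo 'Checking format'
    dotnet format --verify-no-changes
    
    echo 'Running tests'
    dotnet test
    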

    Run dotnet-format only on the staged files using Husky.NET Task Runner

    The third approach is the most complex but with the best result.

    If you recall, during the initialization, Husky added two files in the .husky folder: pre-commit and task-runner.json.

    The key to this solution is the task-runner.json file. This file allows you to create custom scripts with a name, a group, the command to be executed, and its related parameters.

    By default, you will see this content:

    {
      "tasks": [
        {
          "name": "welcome-message-example",
          "command": "bash",
          "args": ["-c", "echo Husky.Net is awesome!"],
          "windows": {
            "command": "cmd",
            "args": ["/c", "echo Husky.Net is awesome!"]
          }
        }
      ]
    }
    

    To make sure that dotnet format runs only on the staged files, you must create a new task like this:

    {
      "name": "dotnet-format-staged-files",
      "group": "pre-commit-operations",
      "command": "dotnet",
      "args": ["format", "--include", "${staged}"],
      "include": ["**/*.cs"]
    }
    

    Here, we have specified a name, dotnet-format-staged-files, the command to run, dotnet, with some parameters listed in the args array. Notice that we can filter the list of files to be formatted by using the ${staged} parameter, which is populated by Husky.NET.

    We have also added this task to a group named pre-commit-operations that we can use to reference a list of tasks to be executed together.

    If you want to run a specific task, you can use dotnet husky run --name taskname. In our example, the command would be dotnet husky run --name dotnet-format-staged-files.

    If you want to run a set of tasks belonging to the same group, you can run dotnet husky run --group groupname. In our example, the command would be dotnet husky run --group pre-commit-operations.

    The last step is to call these tasks from within our pre-commit file. So, replace the old dotnet format command with one of the above commands.

    Final result and optimizations of the pre-commit hook

    Now that everything is in place, we can improve the script to make it faster.

    Let’s see which parts we can optimize.

    The first step is the build phase. For sure, we have to run dotnet build to see if the project builds correctly. You can consider adding the --no-restore flag to skip the restore step before building.

    Then we have the format phase: we can avoid formatting every file using one of the steps defined before. I’ll replace the plain dotnet format with the execution of the script defined in the Task Runner (it’s the third approach we saw).

    Then, we have the test phase. We can add both the --no-restore and the --no-build flag to the command since we have already built everything before. But wait! The format phase updated the content of our files, so we still have to build the whole solution. Unless we swap the build and the format phases.

    So, here we have the final pre-commit file:

    #!/bin/sh
    . "$(dirname "$0")/_/husky.sh"
    
    echo 'Ready to commit changes!'
    
    echo 'Format'
    
    dotnet husky run --name dotnet-format-staged-files
    
    echo 'Build'
    
    dotnet build --no-restore
    
    echo 'Test'
    
    dotnet test --no-restore
    
    echo 'Completed pre-commit changes'
    

    Yes, I know that when you run the dotnet test command, you also build the solution, but I prefer having two separate steps just for clarity!

    Ah, and don’t remove the #!/bin/sh at the beginning of the script!

    How to skip Git hooks

    To trigger the hook, just run git commit -m "message". Before completing the commit, the hook will run all the commands. If one of them fails, the whole commit operation is aborted.

    There are cases when you have to skip the validation: for example, when your integration tests rely on an external source that is currently offline. In that case, some tests will fail, and you will be able to commit your code only once the external system is up and running again.

    You can skip the commit validation by adding the --no-verify flag:

    git commit -m "my message" --no-verify
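    

    Also, Husky.NET itself honors the HUSKY environment variable: if I’m not mistaken, setting it to 0 (for example, in your CI pipelines) makes the hook scripts exit immediately, effectively skipping every hook:

    HUSKY=0 git commit -m "my message"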
    

    Further readings

    Husky.NET is a porting of the Husky tool we already used in a previous article, using it as an NPM dependency. In that article, we also learned how to customize Conventional Commits using Git hooks.

    🔗 How to customize Conventional Commits in a .NET application using GitHooks | Code4IT

    As we learned, there are many more Git hooks that we can use. You can see the complete list on the Git documentation:

    🔗 Customizing Git – Git Hooks | Git docs

    This article first appeared on Code4IT 🐧

    Of course, if you want to get the best out of Husky.NET, I suggest you have a look at the official documentation:

    🔗 Husky.Net documentation

    One last thing: we installed Husky.NET using dotnet tools. If you want to learn more about this topic, I found an excellent article online that you might want to read:

    🔗 Using dotnet tools | Gustav Ehrenborg

    Wrapping up

    In this article, we learned how to create a pre-commit Git hook and validate all our changes before committing them to our Git repository.

    We also focused on the formatting of our code: how can we format only the files we have changed without impacting the whole solution?

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧






  • [ENG] .NET 5, 6, 7, and 8 for busy developers | .NET Community Austria


