Tag: Application

  • How to customize Conventional Commits in a .NET application using GitHooks | Code4IT



    Using Conventional Commits you can define a set of rules useful for writing meaningful commit messages. Using NPM. Yes, in a .NET application!


    Setting teams conventions is a crucial step to have the project prepared to live long and prosper 🖖

    A good way to set some clarity is by enforcing rules on GIT commit messages: you can require devs to specify the reason behind each code change so that you can understand the history and the reason for each of those commits. Also, if you have well-crafted commit messages, Pull Requests become easier to understand, leading to better code.

    Conventional Commits help you set such rules, and help you level up your commit history. In this article, we will learn how to add Conventional Commits in a .NET application.

    Conventional Commits

    Conventional Commits are a set of rules that help you write commit messages using a format that has multiple purposes:

    • they help developers understand the history of a git branch;
    • they help PR reviewers focus on the Pull Request by understanding the changes proposed by the developer;
    • using automated tools, they help with versioning the application – this is useful when using Semantic Versioning;
    • they allow you to create automated Changelog files.

    So, what does an average Conventional Commit look like?

    There’s not just one way to specify such formats.

    For example, you can specify that you’ve added a new feature (feat) to your APIs and describe it shortly:

    feat(api): send an email to the customer
    

    Or you can explain that you’ve fixed a bug (using fix) and add a full description of the scope of the commit.

    fix: prevent race condition
    
    Introduce a request id and a reference to latest request. Dismiss
    incoming responses other than from latest request.
    

    There are several types of commits that you can support, such as:

    • feat, used when you add a new feature to the application;
    • fix, when you fix a bug;
    • docs, used to add or improve documentation to the project;
    • refactor, used – well – after some refactoring;
    • test, when adding tests or fixing broken ones

    All of this prevents developers from writing commit messages such as “something”, “fixed bug”, “some stuff”.

    Some Changes

    So, now, it’s time to include Conventional Commits in our .NET applications.

    What is our goal?

    For the sake of this article, I’m going to add Conventional Commits in a .NET 7 API project. The same approach works for all the other types of .NET projects: as long as you have a Solution to work with, I’ve got you covered.

    Well, actually, the following approach can be used by every project, not only those based on .NET: the reason I wrote this article is that many dotnet developers are not confident in using and configuring NPM packages, so my personal goal with this article is to give you the basics of such tools and configurations.

    For the sake of this article, I’m going to explain how to add Conventional Commits with a custom format.

    Say that you want to associate each commit to a Jira task. As you may know, Jira tasks have an ID composed of a project prefix and a numeric Id. So, for a project named FOO, you can have a task with Id FOO-123.

    The goal of this article is, then, to force developers to create Commit messages such as

    feat/FOO-123: commit short description
    

    or, if you want to add a full description of the commit,

    feat/FOO-123: commit short description
    
    Here we can have the full description of the task.
    And it can also be on multiple lines.
    

    We are going to work at Solution level; you don’t even need an IDE: just Notepad and a Terminal are fine. Before continuing, open your solution folder and a Console pointing to the same folder.

    Install NPM in your folder

    Yes, even if the main application is built with .NET, we are gonna need some NPM packages to set up our Conventional Commits.

    First things first: head to the Command Line and run
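
    npm init
    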

    After specifying some configurations (Package name? Licence? Author?), you will have a brand new package.json file.

    Now we can move on and add a GIT Hook.

    Husky: integrate GIT Hooks to improve commit messages

    To use Conventional Commits we have to “intercept” our GIT actions: we will need to run a specific tool right after a commit message has been written; we have to validate it and, in case it does not follow the rules we’ve set, abort the operation.

    We will use Husky 🔗: it’s a facility package that allows us to do stuff with our commit messages and, in general, integrate work with Git Hooks.

    Head to the terminal, and install Husky by running

    npm install husky --save-dev
    

    This command will add Husky as a dev dependency, as you can see from the new item listed in the package.json file:

    "devDependencies": {
        "husky": "^8.0.3"
    }
    

    Finally, to enable Git Hooks, we have to run

    npm pkg set scripts.prepare="husky install"
    

    and notice the new section in the package.json.

    "scripts": {
        "prepare": "husky install"
    },
    

    Even with just these simple steps, we can see a first result: if you run git commit you will see a text editor open. Here you can write your commit message.

    Git commit message editor

    Save and close the file. The commit message has been applied, as you can see by running git log --oneline.

    CommitLint: a package to validate Commit messages

    We need to install and configure CommitLint, the NPM package that does the dirty job.

    On the same terminal as before, run

    npm install --save-dev @commitlint/config-conventional @commitlint/cli
    

    to install both @commitlint/config-conventional, which adds the generic functionality, and @commitlint/cli, which allows us to run the scripts via CLI.

    You will see both packages listed in your package.json file:

    "devDependencies": {
        "@commitlint/cli": "^17.4.2",
        "@commitlint/config-conventional": "^17.4.2",
        "husky": "^8.0.3"
    }
    

    Next step: scaffold the file that holds the configuration describing how we want our Commit Messages to be structured.

    On the root, create a brand new file, commitlint.config.js, and paste this snippet:

    module.exports = {
      extends: ["@commitlint/config-conventional"],
    }
    

    This snippet tells Commitlint to use the default conventions, such as feat(api): send an email.

    To test the default rules without issuing any real commit, we have to install the previous packages globally, so that they can be accessed outside the scope of the git hooks:

    npm install -g @commitlint/cli @commitlint/config-conventional
    

    and, in a console, we can run

    echo 'foo: a message with wrong format' | commitlint
    

    and see the error messages

    Testing commitlint with errors

    At this point, we still don’t have CommitLint ready to validate our commit messages. In fact, if you try to commit your changes with an invalid message, you will see that the message passes the checks (because there are no checks!), and your changes get committed.

    We need to do some more steps.

    First of all, we have to create a folder named .husky that will be used by Husky to understand which commands are supported.

    Notice: you have to keep the dot at the beginning of the folder name: it’s .husky, not husky.

    Then we need to add a new file within that folder to tell Husky that it needs to run CommitLint.

    npx husky add .husky/commit-msg  'npx --no -- commitlint --edit ${1}'
    

    We’re almost ready: everything is set, but we need to activate the functionality. So you just have to run
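
    npx husky install
    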

    to see it working:

    CommitLint correctly validates the commit message

    Commitlint.config.js: defining explicit rules on Git Messages

    Now, remember that we want to enforce certain rules on the commit message.

    We don’t want them to be like

    feat(api): send an email to the customer when a product is shipped
    

    but rather like

    feat/FOO-123: commit short description
    
    Here we can have the full description of the task.
    And it can also be on multiple lines.
    

    This means that we have to configure the commitlint.config.js file to override default values.

    Let’s have a look at a valid Commitlint file:

    module.exports = {
      extends: ["./node_modules/@commitlint/config-conventional"],
      parserPreset: {
        parserOpts: {
          headerPattern: /^(\w*)\/FOO-(\w*): (.*)$/,
          headerCorrespondence: ["type", "scope", "subject"],
        },
      },
      rules: {
        "type-enum": [2, "always", ["feat", "fix", "hot", "chore"]],
        "header-min-length": [2, "always", 10],
        "header-max-length": [2, "always", 50],
        "body-max-line-length": [2, "always", 72],
        "subject-case": [
          2,
          "never",
          ["sentence-case", "start-case", "pascal-case", "upper-case"],
        ],
      },
    }
    

    Time to deep dive into those sections:

    The parserOpts section: define how CommitLint should parse text

    The first part tells the parser how to parse the header message:

    parserOpts: {
        headerPattern: /^(\w*)\/FOO-(\w*): (.*)$/,
        headerCorrespondence: ["type", "scope", "subject"],
    },
    

    It’s a regular expression, where every matching part has its correspondence in the headerCorrespondence array.

    So, in the message hello/FOO-123: my tiny message, we will have type=hello, scope=123, subject=my tiny message.

    Rules: define specific rules for each message section

    The rules section defines the rules to be applied to each part of the message structure.

    rules:
    {
        "type-enum": [2, "always", ["feat", "fix", "hot", "chore"]],
        "header-min-length": [2, "always", 10],
        "header-max-length": [2, "always", 50],
        "body-max-line-length": [2, "always", 72],
        "subject-case": [
            2,
            "never",
            ["sentence-case", "start-case", "pascal-case", "upper-case"],
        ],
    },
    

    The first value is a number that expresses the severity of the rule:

    • 0: the rule is disabled;
    • 1: show a warning;
    • 2: it’s an error.

    The second value defines if the rule must be applied (using always), or if it must be reversed (using never).

    The third value provides generic arguments for the related rule. For example, "header-max-length": [2, "always", 50] states that the header must always be at most 50 characters long.
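
    To sanity-check these custom rules without creating a real commit, you can reuse the echo trick from before: commitlint picks up the commitlint.config.js file from the current folder. The message below is just a sample that satisfies the rules above:

    echo 'feat/FOO-123: add email to customer' | commitlint
    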

    You can read more about each and every configuration on the official documentation 🔗.

    Setting the commit structure using .gitmessage

    Now that everything is set, we can test it.

    But not before helping devs with a simple trick! As you remember, when you run git commit without specifying the message, an editor appears with some hints about the structure of the commit message.

    Default commit editor

    You can set your own text with hints about the structure of the messages.

    You just need to create a file named .gitmessage and put some text in it, such as:

    # <type>/FOO-<jira-ticket-id>: <title>
    # YOU CAN WRITE WHATEVER YOU WANT HERE
    # allowed types: feat | fix | hot | chore
    # Example:
    #
    # feat/FOO-01: first commit
    #
    # No more than 50 chars. #### 50 chars is here:  #
    
    # Remember blank line between title and body.
    
    # Body: Explain *what* and *why* (not *how*)
    # Wrap at 72 chars. ################################## which is here:  #
    #
    

    Now, we have to tell Git to use that file as a template:

    git config commit.template ./.gitmessage
    

    and... TA-DAH! Here’s your message template!

    Customized message template

    Putting all together

    Finally, we have everything in place: git hooks, commit template, and template hints.

    If we run git commit, we will see an IDE open and the message we’ve defined before. Now, type A message with wrong format, save, close the editor, and you’ll see that the commit is aborted.

    Commit message with wrong format gets rejected

    Now, if you run git commit again, the editor opens once more; type feat/FOO-123: a valid message, and you’ll see it work.

    Further readings

    Conventional Commits is a project that lists a set of specifications for writing such good messages. You can read more here:

    🔗 Conventional Commits

    As we saw before, there are a lot of configurations that you can set for your commits. You can see the full list here:

    🔗 CommitLint rules

    This article first appeared on Code4IT 🐧

    This new kind of commit message works well with Semantic Versioning, which can be useful to publish package versions with a meaningful version number, such as 2.0.1:
    🔗 Semantic Versioning

    And, to close the loop, Semantic Versioning can be easily integrated with CI pipelines. If you use .NET APIs and want to deploy your APIs to Azure using GitHub Actions, you can start from this article and add SemVer:
    🔗 How to deploy .NET APIs on Azure using GitHub actions

    Wrapping up

    In this article, we’ve learned what Conventional Commits are, how to add them using Husky and NPM, and how to configure our folder to use such tools.

    The steps we’ve seen before work for every type of application, even not related to dotnet.

    So, to recap everything, we have to:

    1. Install NPM: npm init;
    2. Install Husky: npm install husky --save-dev;
    3. Enable Husky: npm pkg set scripts.prepare="husky install";
    4. Install CommitLint: npm install --save-dev @commitlint/config-conventional @commitlint/cli;
    5. Create the commitlint.config.js file: module.exports = { extends: ["@commitlint/config-conventional"] };
    6. Create the Husky folder: mkdir .husky;
    7. Link Husky and CommitLint: npx husky add .husky/commit-msg 'npx --no -- commitlint --edit ${1}';
    8. Activate the whole functionality: npx husky install;

    Then, you can customize the commitlint.config.js file and, if you want, create a better .gitmessage file.

    I hope you enjoyed this article! Let’s keep in touch on Twitter or on LinkedIn, if you want! 🤜🤛

    Happy coding!

    🐧






  • How to add Dependency Injection, Configurations, and Logging in a .NET 7 Console Application | Code4IT


    By default, you cannot use Dependency Injection, custom logging, and configurations from settings in a Console Application. Unless you create a custom Host!


    Sometimes, you just want to create a console application to run a complex script. Just because it is a “simple” console application, it doesn’t mean that you should not use best practices, such as using Dependency Injection.

    Also, you might want to test the code: Dependency Injection allows you to test the behavior of a class without having a strict dependency on the referenced concrete classes: you can use stubs and mocks, instead.

    In this article, we’re going to learn how to add Dependency Injection in a .NET 7 console application. The same approach can be used for other versions of .NET. We will also add logging, using Serilog, and configurations coming from an appsettings.json file.

    We’re going to start small, with the basic parts, and gradually move on to more complex scenarios. We’re gonna create a simple, silly console application: we will inject a bunch of services, and print a message on the console.

    We have a root class:

    public class NumberWorker
    {
        private readonly INumberService _service;
    
        public NumberWorker(INumberService service) => _service = service;
    
        public void PrintNumber()
        {
            var number = _service.GetPositiveNumber();
            Console.WriteLine($"My wonderful number is {number}");
        }
    }
    

    that injects an INumberService, implemented by NumberService:

    public interface INumberService
    {
        int GetPositiveNumber();
    }
    
    public class NumberService : INumberService
    {
        private readonly INumberRepository _repo;
    
        public NumberService(INumberRepository repo) => _repo = repo;
    
        public int GetPositiveNumber()
        {
            int number = _repo.GetNumber();
            return Math.Abs(number);
        }
    }
    

    which, in turn, uses an INumberRepository implemented by NumberRepository:

    public interface INumberRepository
    {
        int GetNumber();
    }
    
    public class NumberRepository : INumberRepository
    {
        public int GetNumber()
        {
            return -42;
        }
    }
    

    The console application will create a new instance of NumberWorker and call the PrintNumber method.
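
    As mentioned earlier, this layout pays off when testing: here’s a minimal sketch (assuming xUnit; the stub class is purely illustrative) of how you could test NumberService without the real repository:

    using Xunit;

    public class StubNumberRepository : INumberRepository
    {
        // Always return a negative value so we can verify the Abs logic
        public int GetNumber() => -5;
    }

    public class NumberServiceTests
    {
        [Fact]
        public void GetPositiveNumber_ReturnsAbsoluteValue()
        {
            var service = new NumberService(new StubNumberRepository());
            Assert.Equal(5, service.GetPositiveNumber());
        }
    }
    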

    Now, we have to build the dependency tree and inject such services.

    How to create an IHost to use a host for a Console Application

    The first step to take is to install some NuGet packages that will allow us to add a custom IHost container so that we can add Dependency Injection and all the customization we usually add in projects that have a StartUp (or a Program) class, such as .NET APIs.

    We need to install two NuGet packages, Microsoft.Extensions.Hosting.Abstractions and Microsoft.Extensions.Hosting: they will be used to create a new IHost that builds the dependency tree.

    If you open your csproj file, you should see something like this:

    <ItemGroup>
        <PackageReference Include="Microsoft.Extensions.Hosting" Version="7.0.1" />
        <PackageReference Include="Microsoft.Extensions.Hosting.Abstractions" Version="7.0.0" />
    </ItemGroup>
    

    Now we are ready to go! First, add the following using statements:

    using Microsoft.Extensions.DependencyInjection;
    using Microsoft.Extensions.Hosting;
    

    and then, within the Program class, add this method:

    private static IHost CreateHost() =>
      Host.CreateDefaultBuilder()
          .ConfigureServices((context, services) =>
          {
              services.AddSingleton<INumberRepository, NumberRepository>();
              services.AddSingleton<INumberService, NumberService>();
          })
          .Build();
    

    Host.CreateDefaultBuilder() creates the default IHostBuilder – similar to the IWebHostBuilder, but without any reference to web components.

    Then we add all the dependencies, using services.AddSingleton<TService, TImplementation>. Notice that it’s not necessary to add services.AddSingleton<NumberWorker>: when we resolve the concrete instance, the whole dependency tree is resolved, without needing to register the root itself.

    Finally, once we have everything in place, we call Build() to create a new instance of IHost.

    Now, we just have to run it!

    In the Main method, create the IHost instance by calling CreateHost(). Then, by using the ActivatorUtilities class (coming from the Microsoft.Extensions.DependencyInjection namespace), create a new instance of NumberWorker, so that you can call PrintNumber():

    private static void Main(string[] args)
    {
      IHost host = CreateHost();
      NumberWorker worker = ActivatorUtilities.CreateInstance<NumberWorker>(host.Services);
      worker.PrintNumber();
    }
    

    Now you are ready to run the application, and see the message on the console:

    Basic result on Console

    Read configurations from appsettings.json for a Console Library

    We want to make our system configurable and place our configurations in an appsettings.json file.

    As we saw in a recent article 🔗, we can use IOptions<T> to inject configurations in the constructor. For the sake of this article, I’m gonna use a POCO class, NumberConfig, that is mapped to a configuration section and injected into the classes.

    public class NumberConfig
    {
        public int DefaultNumber { get; set; }
    }
    

    Now we need to manually create an appsettings.json file within the project folder, and add a new section that will hold the values of the configuration:

    {
      "Number": {
        "DefaultNumber": -899
      }
    }
    

    and now we can add the configuration binding in our CreateHost() method, within the ConfigureServices section:

    services.Configure<NumberConfig>(context.Configuration.GetSection("Number"));
    

    Finally, we can update the NumberRepository to accept the configurations as input and use them to return the value:

    public class NumberRepository : INumberRepository
    {
        private readonly NumberConfig _config;
    
        public NumberRepository(IOptions<NumberConfig> options) => _config = options.Value;
    
        public int GetNumber() => _config.DefaultNumber;
    }
    

    Run the project to admire the result, and… BOOM! It will not work! You should see the message “My wonderful number is 0”, even though the number we set on the config file is -899.

    This happens because we must include the appsettings.json file in the result of the compilation. Right-click on that file, select the Properties menu, and set the “Copy to Output Directory” to “Copy always”:

    Copy always the appsettings file to the Output Directory

    Now, build and run the project, and you’ll see the correct message: “My wonderful number is 899”.

    Clearly, the same values can be accessed via IConfiguration.

    Add Serilog logging to log on Console and File

    Finally, we can add Serilog logs to our console applications – as well as define Sinks.

    To add Serilog, you first have to install these NuGet packages:

    • Serilog.Extensions.Hosting and Serilog.Formatting.Compact to add the basics of Serilog;
    • Serilog.Settings.Configuration to read logging configurations from settings (if needed);
    • Serilog.Sinks.Console and Serilog.Sinks.File to add the Console and the File System as Sinks.

    Let’s get back to the CreateHost() method, and add a new section right after ConfigureServices:

    .UseSerilog((context, services, configuration) => configuration
        .ReadFrom.Configuration(context.Configuration)
        .ReadFrom.Services(services)
        .Enrich.FromLogContext()
        .WriteTo.Console()
        .WriteTo.File($"report-{DateTimeOffset.UtcNow.ToString("yyyy-MM-dd-HH-mm-ss")}.txt", restrictedToMinimumLevel: LogEventLevel.Warning)
        )
    

    Here we’re saying that we need to read the config from Settings, enrich logs with context, and write both to the Console and to a File (the latter only if the log message level is greater than or equal to Warning).
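
    For this snippet to compile, you’ll also need a couple of using statements: the UseSerilog extension is exposed in the Serilog namespace (it comes with Serilog.Extensions.Hosting), while LogEventLevel lives in Serilog.Events.

    using Serilog;
    using Serilog.Events;
    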

    Then, add an ILogger here and there, and admire the final result:

    Serilog Logging is visible on the Console

    Final result

    To wrap up, here’s the final implementation of the Program class and the CreateHost method:

    private static void Main(string[] args)
    {
        IHost host = CreateHost();
        NumberWorker worker = ActivatorUtilities.CreateInstance<NumberWorker>(host.Services);
        worker.PrintNumber();
    }
    
    private static IHost CreateHost() =>
      Host
      .CreateDefaultBuilder()
      .ConfigureServices((context, services) =>
      {
          services.Configure<NumberConfig>(context.Configuration.GetSection("Number"));
    
          services.AddSingleton<INumberRepository, NumberRepository>();
          services.AddSingleton<INumberService, NumberService>();
      })
      .UseSerilog((context, services, configuration) => configuration
          .ReadFrom.Configuration(context.Configuration)
          .ReadFrom.Services(services)
          .Enrich.FromLogContext()
          .WriteTo.Console()
          .WriteTo.File($"report-{DateTimeOffset.UtcNow.ToString("yyyy-MM-dd-HH-mm-ss")}.txt", restrictedToMinimumLevel: LogEventLevel.Warning)
          )
      .Build();
    

    Further readings

    As always, a few resources to learn more about the topics discussed in this article.

    First and foremost, have a look at this article with a full explanation of Generic Hosts in a .NET Core application:

    🔗 .NET Generic Host in ASP.NET Core | Microsoft docs

    Then, if you recall, we’ve already learned how to print Serilog logs to the Console:

    🔗 How to log to Console with .NET Core and Serilog | Code4IT

    This article first appeared on Code4IT 🐧

    Lastly, we accessed configurations using IOptions<NumberConfig>. Did you know that there are other ways to access config?

    🔗 Understanding IOptions, IOptionsMonitor, and IOptionsSnapshot in .NET 7 | Code4IT

    as well as defining configurations for your project?

    🔗 3 (and more) ways to set configuration values in .NET | Code4IT

    Wrapping up

    In this article, we’ve learned how we can customize a .NET Console application to use dependency injection, external configurations, and Serilog logging.

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧






  • How to integrate Feature Flags stored on Azure App Configuration in an ASP.NET Core Application | Code4IT


    Learn how to use Feature Flags in ASP.NET Core apps and read values from Azure App Configuration. Understand how to use filters, like the Percentage filter, to control feature activation, and learn how to take full control of the cache expiration of the values.


    Feature Flags let you remotely control the activation of features without code changes. They help you to test, release, and manage features safely and quickly by driving changes using centralized configurations.

    In a previous article, we learned how to integrate Feature Flags in ASP.NET Core applications. Also, a while ago, we learned how to integrate Azure App Configuration in an ASP.NET Core application.

    In this article, we are going to join the two streams in a single article: in fact, we will learn how to manage Feature Flags using Azure App Configuration to centralize our configurations.

    It’s a sort of evolution from the previous article. Instead of changing the static configurations and redeploying the whole application, we are going to move the Feature Flags to Azure so that you can enable or disable those flags in just one click.

    A recap of Feature Flags read from the appsettings file

    Let’s reuse the example shown in the previous article.

    We have an ASP.NET Core application (in that case, we were building a Razor application, but it’s not important for the sake of this article), with some configurations defined in the appsettings file under the FeatureManagement key:

    {
      "FeatureManagement": {
        "Header": true,
        "Footer": true,
        "PrivacyPage": false,
        "ShowPicture": {
          "EnabledFor": [
            {
              "Name": "Percentage",
              "Parameters": { "Value": 60 }
            }
          ]
        }
      }
    }
    

    We already took a deep dive into Feature Flags in an ASP.NET Core application in the previous article. However, let me summarize it.

    First of all, you have to define your flags in the appsettings.json file using the structure we saw before.

    To use Feature Flags in ASP.NET Core you have to install the Microsoft.FeatureManagement.AspNetCore NuGet package.

    Then, you have to tell ASP.NET to use Feature Flags by calling:

    builder.Services.AddFeatureManagement();
    

    Finally, you are able to consume those flags in three ways:

    • inject the IFeatureManager interface and call IsEnabled or IsEnabledAsync (see the sketch below);
    • use the FeatureGate attribute on a Controller class or a Razor model;
    • use the <feature> tag in a Razor page to show or hide a portion of HTML
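
    As a minimal sketch of the first approach (the controller and flag name here are just illustrative), injecting IFeatureManager looks like this:

    using Microsoft.AspNetCore.Mvc;
    using Microsoft.FeatureManagement;

    public class HomeController : Controller
    {
        private readonly IFeatureManager _featureManager;

        public HomeController(IFeatureManager featureManager)
            => _featureManager = featureManager;

        public async Task<IActionResult> Index()
        {
            // The flag name matches a key under the "FeatureManagement" section
            bool showHeader = await _featureManager.IsEnabledAsync("Header");
            return View(model: showHeader);
        }
    }
    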

    How to create Feature Flags on Azure App Configuration

    We are ready to move our Feature Flags to Azure App Configuration. Needless to say, you need an Azure subscription 😉

    Log in to the Azure Portal, head to “Create a resource”, and create a new App Configuration:

    Azure App configuration in the Marketplace

    I’m going to reuse the same instance I created in the previous article – you can see the full details in the How to create an Azure App Configuration instance section.

    Now we have to configure the same keys defined in the appsettings file: Header, Footer, and PrivacyPage.

    Open the App Configuration instance and locate the “Feature Manager” menu item in the left panel. This is the central place for creating, removing, and managing your Feature Flags. Here, you can see that I have already added the Header and Footer, and you can see their current state: “Footer” is enabled, while “Header” is not.

    Feature Flags manager dashboard

    How can I add the PrivacyPage flag? It’s elementary: click the “Create” button and fill in the fields.

    You have to define a Name and a Key (they can also be different), and if you want, you can add a Label and a Description. You can also define whether the flag should be active by checking the “Enable feature flag” checkbox.

    Feature Flag definition form

    Read Feature Flags from Azure App Configuration in an ASP.NET Core application

    It’s time to integrate Azure App Configuration with our ASP.NET Core application.

    Before moving to the code, we have to locate the connection string and store it somewhere.

    Head back to the App Configuration resource and locate the “Access keys” menu item under the “Settings” section.

    Access Keys page with connection strings

    From here, copy the connection string (I suggest that you use the Read-only Keys) and store it somewhere.

    Before proceeding, you have to install the Microsoft.Azure.AppConfiguration.AspNetCore NuGet package.

    Now, we can add Azure App Configuration as a source for our configurations by connecting to the connection string and by declaring that we are going to use Feature Flags:

    builder.Configuration.AddAzureAppConfiguration(options =>
        options.Connect(connectionString).UseFeatureFlags()
    );
    

    That’s not enough. We need to tell ASP.NET that we are going to consume these configurations by adding such functionalities to the Services property.

    builder.Services.AddAzureAppConfiguration();
    
    builder.Services.AddFeatureManagement();
    

    Finally, once we have built our application with the usual builder.Build(), we have to add the Azure App Configuration middleware:

    app.UseAzureAppConfiguration();
    

    To try it out, run the application and validate that the flags are being applied. You can enable or disable those flags on Azure, restart the application, and check that the changes to the flags are being applied. Otherwise, you can wait 30 seconds to have the flag values refreshed and see the changes applied to your application.

    Using the Percentage filter on Azure App Configuration

    Suppose you want to enable a functionality only to a percentage of sessions (sessions, not users!). In that case, you can use the Percentage filter.

    The previous article had a specific section dedicated to the PercentageFilter, so you might want to check it out.

    As a recap, we defined the flag as:

    {
      "ShowPicture": {
        "EnabledFor": [
          {
            "Name": "Percentage",
            "Parameters": {
              "Value": 60
            }
          }
        ]
      }
    }
    

    And added the PercentageFilter to ASP.NET with:

    builder.Services.AddFeatureManagement()
        .AddFeatureFilter<PercentageFilter>();
    

    Clearly, we can define such flags on Azure as well.

    Head back to the Azure Portal. This time, instead of creating a plain flag, you have to add a new Feature Filter to an existing flag. Even though the PercentageFilter comes out-of-the-box with the FeatureManagement NuGet package, it is not predefined on the Azure portal.

    You have to define the filter with the following values:

    • Filter Type must be “Custom”;
    • Custom filter name must be “Percentage”;
    • You must add a new key, “Value”, and set its value to “60”.

    Custom filter used to create Percentage Filter

    The configuration we just added reflects the JSON value we previously had in the appsettings file: 60% of the requests will activate the flag, while the remaining 40% will not.

    Define the cache expiration interval for Feature Flags

    By default, Feature Flags are stored in an internal cache for 30 seconds.

    Sometimes, it’s not the best choice for your project; you may prefer a longer duration to avoid additional calls to the App Configuration platform; other times, you’d like to have the changes immediately available.

    You can then define the cache expiration interval you need by configuring the options for the Feature Flags:

    builder.Configuration.AddAzureAppConfiguration(options =>
        options.Connect(connectionString).UseFeatureFlags(featureFlagOptions =>
        {
            featureFlagOptions.CacheExpirationInterval = TimeSpan.FromSeconds(10);
        })
    );
    

    This way, Feature Flag values are stored in the internal cache for 10 seconds. Then, when you reload the page, the configurations are reread from Azure App Configuration and the flags are applied with the new values.

    Further readings

    This is the final article of a path I built during these months to explore how to use configurations in ASP.NET Core.

    We started by learning how to set configuration values in an ASP.NET Core application, as explained here:

    🔗 3 (and more) ways to set configuration values in ASP.NET Core

    Then, we learned how to read and use them with the IOptions family:

    🔗 Understanding IOptions, IOptionsMonitor, and IOptionsSnapshot in ASP.NET Core

    From here, we learned how to read the same configurations from Azure App Configuration, to centralize our settings:

    🔗 Azure App Configuration and ASP.NET Core API: a smart and secure way to manage configurations | Code4IT

    Then, we configured our applications to automatically refresh the configurations using a Sentinel value:

    🔗 How to automatically refresh configurations with Azure App Configuration in ASP.NET Core

    Finally, we introduced Feature Flags in our apps:

    🔗 Feature Flags 101: A Guide for ASP.NET Core Developers | Code4IT

    And then we got to this article!

    This article first appeared on Code4IT 🐧

    Wrapping up

    In this article, we have configured an ASP.NET Core application to read the Feature Flags stored on Azure App Configuration.

    Here’s the minimal code you need to add Feature Flags for ASP.NET Core API Controllers:

    var builder = WebApplication.CreateBuilder(args);
    
    string connectionString = "my connection string";
    
    builder.Services.AddControllers();
    
    builder.Configuration.AddAzureAppConfiguration(options =>
        options.Connect(connectionString)
        .UseFeatureFlags(featureFlagOptions =>
            {
                featureFlagOptions.CacheExpirationInterval = TimeSpan.FromSeconds(10);
            }
        )
    );
    
    builder.Services.AddAzureAppConfiguration();
    
    builder.Services.AddFeatureManagement()
        .AddFeatureFilter<PercentageFilter>();
    
    var app = builder.Build();
    
    app.UseRouting();
    app.UseAzureAppConfiguration();
    app.MapControllers();
    app.Run();
    

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧






  • Pre-commit hooks with Husky.NET – build, format, and test your .NET application before a Git commit | Code4IT


    A Git commit represents the status of a system. Learn how to validate that your code builds, is well-formatted, and all the tests pass by adding a Git hook!


    If you need to run operations before completing a Git commit, you can rely on Git Hooks.

    Git hooks are scripts that run automatically whenever a particular event occurs in a Git repository. They let you customize Git’s internal behaviour and trigger customizable actions at key points in the development life cycle.

    Extending Git hooks allows you to plug in custom functionalities to the regular Git flow, such as Git message validation, code formatting, etc.

    I’ve already described how to use Husky with NPM, but here I’m gonna use Husky.NET, the version of Husky created for .NET-based applications.

    Git hooks: a way to extend Git operations

    As we said, Git hooks are actions that run during specific phases of Git operations.

    Git hooks fall into 4 categories:

    • client-side hooks related to the committing workflow: they execute when you run git commit on your local repository;
    • client-side hooks related to the email workflow: they are executed when running git am, which is a command that allows you to integrate mails and Git repositories (I’ve never used it. If you are interested in this functionality, here’s the official documentation);
    • client-side hooks related to other operations: these hooks run on your local repository when performing operations like git rebase;
    • server-side hooks: they run after a commit is received on the remote repository, and they can reject a git push operation.

    Let’s focus on the client-side hooks that run when you commit changes using git commit.

    • pre-commit: the first hook invoked by git commit (if you don’t use the -m flag, it is invoked before asking you to insert a commit message); it can be used to inspect the snapshot that is about to be committed.
    • prepare-commit-msg: invoked by git commit; it can be used to edit the default commit message when it is generated by an automated tool.
    • commit-msg: invoked by git commit; it can be used to validate or modify the commit message after it is entered by the user.
    • post-commit: invoked after git commit has run correctly; it is generally used to fire notifications.

    How to install Husky.NET and its dependencies in a .NET Application

    Husky.NET must be installed in the root folder of the solution.

    You first have to create a tool-manifest file in the root folder by running:
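
    dotnet new tool-manifest
    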

    This command creates a file named dotnet-tools.json under the .config folder: here you can see the list of external tools used by dotnet.

    After running the command, you will see that the dotnet-tools.json file contains this element:

    {
      "version": 1,
      "isRoot": true,
      "tools": {}
    }
    

    Now you can add Husky as a dotnet tool by running:

    dotnet tool install Husky
    

    After running the command, the file will contain something like this:

    {
      "version": 1,
      "isRoot": true,
      "tools": {
        "husky": {
          "version": "0.6.2",
          "commands": ["husky"]
        }
      }
    }
    

    Now that we have added it to our dependencies, we can add Husky to an existing .NET application by running:
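
    dotnet husky install
    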

    If you open the root folder, you should be able to see these 3 folders:

    • .git, which contains the info about the Git repository;
    • .config that contains the description of the tools, such as dotnet-tools;
    • .husky that contains the files we are going to use to define our Git hooks.

    Finally, you can add a new hook by running, for example,

    dotnet husky add pre-commit -c "echo 'Hello world!'"
    git add .husky/pre-commit
    

    This command creates a new file, pre-commit (without file extension), under the .husky folder. By default, it appears like this:

    #!/bin/sh
    . "$(dirname "$0")/_/husky.sh"
    
    ## husky task runner examples -------------------
    ## Note : for local installation use 'dotnet' prefix. e.g. 'dotnet husky'
    
    ## run all tasks
    #husky run
    
    ### run all tasks with group: 'group-name'
    #husky run --group group-name
    
    ## run task with name: 'task-name'
    #husky run --name task-name
    
    ## pass hook arguments to task
    #husky run --args "$1" "$2"
    
    ## or put your custom commands -------------------
    #echo 'Husky.Net is awesome!'
    
    echo 'Hello world!'
    

    The default content is pretty useless; it’s time to customize that hook.

    Notice that the last command has also generated a task-runner.json file; we will use it later.

    Your first pre-commit hook

    To customize the script, open the file located at .husky/pre-commit.

    Here, you can add whatever you want.

    In the example below, I run commands that compile the code, format the text (using dotnet format with the rules defined in the .editorconfig file), and then run all the tests.

    #!/bin/sh
    . "$(dirname "$0")/_/husky.sh"
    
    echo 'Building code'
    dotnet build
    
    echo 'Formatting code'
    dotnet format
    
    echo 'Running tests'
    dotnet test
    

    Then, add it to Git, and you are ready to go. 🚀 But wait…

    3 ways to manage dotnet format with Husky.NET

    There is a problem with the approach in the example above.

    Let’s simulate a usage flow:

    1. you modify a C# class;
    2. you run git commit -m "message";
    3. the pre-commit hook runs dotnet build;
    4. the pre-commit hook runs dotnet format;
    5. the pre-commit hook runs dotnet test;
    6. after the hooks, the commit is created.

    What is the final result?

    Since dotnet format modifies the source files, and given that the snapshot has already been created before executing the hook, all the modified files will not be part of the final commit!

    Also, dotnet format executes linting on every file in the solution, not only those that are part of the current snapshot. The operation might then take a lot of time, depending on the size of the repository, and most of the time, it will not update any file (because you’ve already formatted everything in a previous run).

    We have to work out a way to fix this issue. I’ll suggest three approaches.

    Include all the changes using Git add

    The first approach is quite simple: run git add . after dotnet format.

    So, the flow becomes:

    1. you modify a C# class;
    2. you run git commit -m "message";
    3. the pre-commit hook runs dotnet build;
    4. the pre-commit hook runs dotnet format;
    5. the pre-commit hook runs git add .;
    6. the pre-commit hook runs dotnet test;
    7. Git creates the commit.

    This is the most straightforward approach, but it has some downsides:

    • dotnet format is executed on every file in the solution. The more your project grows, the slower your commits become;
    • git add . adds to the current snapshot all the files modified, even those you did not add to this commit on purpose (maybe because you have updated many files and want to create two distinct commits).

    So, it works, but we can do better.

    Execute a dry run of dotnet-format

    You can add the --verify-no-changes flag to the dotnet format command: this flag makes the command return an error if at least one file needs to be updated because of a formatting rule.
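
    In the pre-commit hook, the format step would then become:

    dotnet format --verify-no-changes
    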

    Let’s see how the flow changes if one file needs to be formatted.

    1. you modify a C# class;
    2. you run git commit -m "message";
    3. the pre-commit hook runs dotnet build;
    4. the pre-commit hook runs dotnet format --verify-no-changes;
    5. the pre-commit hook returns an error and aborts the operation;
    6. you run dotnet format on the whole solution to fix all the formatting issues;
    7. you run git add .;
    8. you run git commit -m "message";
    9. the pre-commit hook runs dotnet build;
    10. the pre-commit hook runs dotnet format --verify-no-changes. Now, there is nothing to format, and we can proceed;
    11. the pre-commit hook runs dotnet test;
    12. Git creates the commit.

    Notice that, this way, if there is something to format, the whole commit is aborted. You will then have to run dotnet format on the entire solution, fix the errors, add the changes to the snapshot, and restart the flow.

    It’s a longer process, but it allows you to have complete control over the formatted files.

    Also, you won’t risk including in the snapshot the files you want to keep staged in order to add them to a subsequent commit.

    Run dotnet-format only on the staged files using Husky.NET Task Runner

    The third approach is the most complex but with the best result.

    If you recall, during the initialization, Husky added two files in the .husky folder: pre-commit and task-runner.json.

    The key to this solution is the task-runner.json file. This file allows you to create custom scripts with a name, a group, the command to be executed, and its related parameters.

    By default, you will see this content:

    {
      "tasks": [
        {
          "name": "welcome-message-example",
          "command": "bash",
          "args": ["-c", "echo Husky.Net is awesome!"],
          "windows": {
            "command": "cmd",
            "args": ["/c", "echo Husky.Net is awesome!"]
          }
        }
      ]
    }
    

    To make sure that dotnet format runs only on the staged files, you must create a new task like this:

    {
      "name": "dotnet-format-staged-files",
      "group": "pre-commit-operations",
      "command": "dotnet",
      "args": ["format", "--include", "${staged}"],
      "include": ["**/*.cs"]
    }
    

    Here, we have specified a name, dotnet-format-staged-files, the command to run, dotnet, with some parameters listed in the args array. Notice that we can filter the list of files to be formatted by using the ${staged} parameter, which is populated by Husky.NET.

    We have also added this task to a group named pre-commit-operations that we can use to reference a list of tasks to be executed together.

    If you want to run a specific task, you can use dotnet husky run --name taskname. In our example, the command would be dotnet husky run --name dotnet-format-staged-files.

    If you want to run a set of tasks belonging to the same group, you can run dotnet husky run --group groupname. In our example, the command would be dotnet husky run --group pre-commit-operations.

    The last step is to call these tasks from within our pre-commit file. So, replace the old dotnet format command with one of the above commands.

    Final result and optimizations of the pre-commit hook

    Now that everything is in place, we can improve the script to make it faster.

    Let’s see which parts we can optimize.

    The first step is the build phase. For sure, we have to run dotnet build to see if the project builds correctly. You can consider adding the --no-restore flag to skip the restore step before building.

    Then we have the format phase: we can avoid formatting every file using one of the steps defined before. I’ll replace the plain dotnet format with the execution of the script defined in the Task Runner (it’s the third approach we saw).

    Then, we have the test phase. We can add both the --no-restore and the --no-build flag to the command since we have already built everything before. But wait! The format phase updated the content of our files, so we still have to build the whole solution. Unless we swap the build and the format phases.

    So, here we have the final pre-commit file:

    #!/bin/sh
    . "$(dirname "$0")/_/husky.sh"
    
    echo 'Ready to commit changes!'
    
    echo 'Format'
    
    dotnet husky run --name dotnet-format-staged-files
    
    echo 'Build'
    
    dotnet build --no-restore
    
    echo 'Test'
    
    dotnet test --no-restore
    
    echo 'Completed pre-commit changes'
    

    Yes, I know that when you run the dotnet test command, you also build the solution, but I prefer having two separate steps just for clarity!

    Ah, and don’t remove the #!/bin/sh at the beginning of the script!

    How to skip Git hooks

    To trigger the hook, just run git commit -m "message". Before completing the commit, the hook will run all the commands. If one of them fails, the whole commit operation is aborted.

    There are cases when you have to skip the validation. For example, if you have integration tests that rely on an external source currently offline. In that case, some tests will fail, and you will be able to commit your code only once the external system gets working again.

    You can skip the commit validation by adding the --no-verify flag:

    git commit -m "my message" --no-verify
    

    Further readings

    Husky.NET is a porting of the Husky tool we already used in a previous article, using it as an NPM dependency. In that article, we also learned how to customize Conventional Commits using Git hooks.

    🔗 How to customize Conventional Commits in a .NET application using GitHooks | Code4IT

    As we learned, there are many more Git hooks that we can use. You can see the complete list on the Git documentation:

    🔗 Customizing Git – Git Hooks | Git docs

    This article first appeared on Code4IT 🐧

    Of course, if you want to get the best out of Husky.NET, I suggest you have a look at the official documentation:

    🔗 Husky.Net documentation

    One last thing: we installed Husky.NET using dotnet tools. If you want to learn more about this topic, I found an excellent article online that you might want to read:

    🔗 Using dotnet tools | Gustav Ehrenborg

    Wrapping up

    In this article, we learned how to create a pre-commit Git hook and validate all our changes before committing them to our Git repository.

    We also focused on the formatting of our code: how can we format only the files we have changed without impacting the whole solution?

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧






  • Dynamic column chooser example to enhance web application


    Dynamic Column Chooser Tutorial.

    Unlock the potential of your web applications with our comprehensive guide to implementing a dynamic column chooser. This blog post dives into the step-by-step process of building an interactive column selector using HTML, CSS, and JavaScript. Whether you’re looking to enhance the user experience by providing customizable table views or streamlining data presentation, our tutorial covers everything you need to know.

    Explore the intricacies of:

    • Setting up a flexible and responsive HTML table structure.
    • Styling your table and column chooser for a clean, user-friendly interface.
    • Adding JavaScript functionality to toggle column visibility seamlessly.

    With practical code examples and detailed explanations, you’ll be able to integrate a column chooser into your projects effortlessly. Perfect for web developers aiming to create user-centric solutions that cater to diverse needs and preferences. Elevate your web development skills and improve your application’s usability with this essential feature!

    Example:

    <!DOCTYPE html>
    <html lang="en">
    <head>
        <meta charset="UTF-8">
        <meta name="viewport" content="width=device-width, initial-scale=1.0">
        <title>Column Chooser Example</title>
        <style>
            table {
                width: 100%;
                border-collapse: collapse;
            }
            th, td {
                border: 1px solid black;
                padding: 8px;
                text-align: left;
            }
            .column-chooser {
                margin-bottom: 20px;
            }
        </style>
    </head>
    <body>
        <div class="column-chooser">
            <label><input type="checkbox" checked data-column="name"> Name</label>
            <label><input type="checkbox" checked data-column="age"> Age</label>
            <label><input type="checkbox" checked data-column="email"> Email</label>
        </div>
        <table>
            <thead>
                <tr>
                    <th class="name">Name</th>
                    <th class="age">Age</th>
                    <th class="email">Email</th>
                </tr>
            </thead>
            <tbody>
                <tr>
                    <td class="name">John Doe</td>
                    <td class="age">30</td>
                    <td class="email">john@example.com</td>
                </tr>
                <tr>
                    <td class="name">Jane Smith</td>
                    <td class="age">25</td>
                    <td class="email">jane@example.com</td>
                </tr>
            </tbody>
        </table>
        <script>
            document.querySelectorAll('.column-chooser input[type="checkbox"]').forEach(checkbox => {
                checkbox.addEventListener('change', (event) => {
                    const columnClass = event.target.getAttribute('data-column');
                    const isChecked = event.target.checked;
                    document.querySelectorAll(`.${columnClass}`).forEach(cell => {
                        cell.style.display = isChecked ? '' : 'none';
                    });
                });
            });
        </script>
    </body>
    </html>
    
    Explanation:
    1. HTML Structure:
      • A div with the class column-chooser contains checkboxes for each column.
      • A table is defined with thead and tbody sections.
      • Each column and cell have a class corresponding to the column name (name, age, email).
    2. CSS:
      • Basic styling is applied to the table and its elements for readability.
    3. JavaScript:
      • Adds an event listener to each checkbox in the column chooser.
      • When a checkbox is toggled, the corresponding column cells are shown or hidden by changing their display style.

    This example provides a simple, interactive way for users to choose which columns they want to display in a table. You can expand this by adding more functionality or integrating it into a larger application as needed.

     





  • How to log to Azure Application Insights using ILogger in ASP.NET Core | Code4IT


    Application Insights is a great tool for handling high volumes of logs. How can you configure an ASP.NET application to send logs to Azure Application Insights? And what can you do to have Application Insights log your exceptions?


    Logging is crucial for any application. However, generating logs is not enough: you must store them somewhere to access them.

    Application Insights is one of the tools that allows you to store your logs in a cloud environment. It provides a UI and a query editor that allows you to drill down into the details of your logs.

    In this article, we will learn how to integrate Azure Application Insights with an ASP.NET Core application and how Application Insights treats log properties such as Log Levels and exceptions.

    For the sake of this article, I’m working on an API project with HTTP Controllers and a single endpoint. The same approach can be used for other types of applications.

    How to retrieve the Azure Application Insights connection string

    Azure Application Insights can be accessed via any browser by using the Azure Portal.

    Once you have an instance ready, you can simply get the value of the connection string for that resource.

    You can retrieve it in two ways.

    You can get the connection string by looking at the Connection String property in the resource overview panel:

    Azure Application Insights overview panel

    The alternative is to navigate to the Configure > Properties page and locate the Connection String field.

    Azure Application Insights connection string panel

    How to add Azure Application Insights to an ASP.NET Core application

    Now that you have the connection string, you can place it in the configuration file or, in general, store it in a place that is accessible from your application.
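
    For example, here is a minimal sketch of reading it from configuration, assuming you keep it in appsettings.json under an ApplicationInsights:ConnectionString key (the key name and file layout here are illustrative, not mandated by the SDK):

    // appsettings.json (hypothetical layout):
    // {
    //   "ApplicationInsights": {
    //     "ConnectionString": "InstrumentationKey=..."
    //   }
    // }
    
    var builder = WebApplication.CreateBuilder(args);
    
    // Read the connection string from configuration instead of hard-coding it.
    string? connectionString = builder.Configuration["ApplicationInsights:ConnectionString"];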

    To configure ASP.NET Core to use Application Insights, you must first install the Microsoft.Extensions.Logging.ApplicationInsights NuGet package.
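
    For example, using the .NET CLI:

    dotnet add package Microsoft.Extensions.Logging.ApplicationInsights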

    Now you can add a new configuration to the Program class (or wherever you configure your services and the ASP.NET Core pipeline):

    builder.Logging.AddApplicationInsights(
        configureTelemetryConfiguration: (config) =>
            config.ConnectionString = "InstrumentationKey=your-connection-string",
        configureApplicationInsightsLoggerOptions: (options) => { }
    );

    The configureApplicationInsightsLoggerOptions parameter allows you to configure some additional properties: TrackExceptionsAsExceptionTelemetry, IncludeScopes, and FlushOnDispose. These properties all default to true, so you probably don’t want to change the default behaviour (except for one, which we’ll modify later).
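
    For reference, here is a sketch that spells those defaults out explicitly; the values shown are the defaults, so this block is equivalent to leaving the options lambda empty (connectionString is assumed to be read from your configuration):

    builder.Logging.AddApplicationInsights(
        configureTelemetryConfiguration: (config) =>
            config.ConnectionString = connectionString,
        configureApplicationInsightsLoggerOptions: (options) =>
        {
            options.IncludeScopes = true;                       // attach ILogger scopes to the telemetry
            options.FlushOnDispose = true;                      // flush buffered telemetry when the provider is disposed
            options.TrackExceptionsAsExceptionTelemetry = true; // we’ll set this to false later in this article
        });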

    And that’s it! You have Application Insights ready to be used.

    How log levels are stored and visualized on Application Insights

    I have this API endpoint that does nothing fancy: it just returns a random number.

    [Route("api/[controller]")]
    [ApiController]
    public class MyDummyController(ILogger<DummyController> logger) : ControllerBase
    {
     private readonly ILogger<DummyController> _logger = logger;
    
        [HttpGet]
        public async Task<IActionResult> Get()
        {
            int number = Random.Shared.Next();
            return Ok(number);
        }
    }
    

    We can use it to run experiments on how logs are treated using Application Insights.

    First, let’s add some simple log messages in the Get endpoint:

    [HttpGet]
    public IActionResult Get()
    {
        int number = Random.Shared.Next();
    
        _logger.LogDebug("A debug log");
        _logger.LogTrace("A trace log");
        _logger.LogInformation("An information log");
        _logger.LogWarning("A warning log");
        _logger.LogError("An error log");
        _logger.LogCritical("A critical log");
    
        return Ok(number);
    }
    

    These are just plain messages. Let’s search for them in Application Insights!

    You first have to run the application – duh! – and wait for a couple of minutes for the logs to be ready on Azure. So, remember not to close the application immediately: you have to give it a few seconds to send the log messages to Application Insights.

    Then, you can open the logs panel and access the logs stored in the traces table.

    Log levels displayed on Azure Application Insights

    As you can see, the messages appear in the query result.

    There are three important things to notice:

    • in .NET, the log level is called “Log Level”, while on Application Insights it’s called “severity level”;
    • the log levels lower than Information are ignored by default, so you cannot see them in the query result (see the sketch after this list for how to capture them);
    • the Log Levels are exposed as numbers in the severityLevel column: the higher the value, the higher the log level.
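
    If you do want Debug and Trace entries to reach Application Insights, you can lower the minimum level for this specific provider through the standard logging filter API. A minimal sketch (the empty category string applies the filter to every log category):

    using Microsoft.Extensions.Logging.ApplicationInsights;
    
    // Capture everything, down to Trace, for the Application Insights provider only.
    builder.Logging.AddFilter<ApplicationInsightsLoggerProvider>("", LogLevel.Trace);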

    So, if you want to update the query to show only the log messages that are at least Warnings, you can do something like this:

    traces
    | where severityLevel >= 2
    | order by timestamp desc
    | project timestamp, message, severityLevel
    

    How to log exceptions on Application Insights

    In the previous example, we logged errors like this:

    _logger.LogError("An error log");
    

    Fortunately, ILogger exposes an overload that accepts an exception as input and logs all its details.

    Let’s try it by throwing an exception (I chose AbandonedMutexException because it makes no sense in this simple context, so it’s easy to spot).

    private void SomethingWithException(int number)
    {
        try
        {
            _logger.LogInformation("In the Try block");
    
            throw new AbandonedMutexException("An exception message");
        }
        catch (Exception ex)
        {
            _logger.LogInformation("In the Catch block");
            _logger.LogError(ex, "Unable to complete the operation");
        }
        finally
        {
            _logger.LogInformation("In the Finally block");
        }
    }
    

    So, when calling it, we expect to see 4 log entries, one of which contains the details of the AbandonedMutexException exception.
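
    For completeness, this is how the endpoint might invoke it (the call site below is assumed, since the snippet above only shows the private method):

    [HttpGet]
    public IActionResult Get()
    {
        int number = Random.Shared.Next();
    
        // Produces 3 Information entries plus 1 Error entry carrying the exception.
        SomethingWithException(number);
    
        return Ok(number);
    }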

    The Exception message in Application Insights

    Hey, where is the exception message??

    It turns out that ILogger, when creating log entries like _logger.LogError("An error log");, generates objects of type TraceTelemetry. However, the overload that accepts an exception as its first parameter (_logger.LogError(ex, "Unable to complete the operation");) is internally handled as an ExceptionTelemetry object. Since it’s a different type of Telemetry object, it gets ignored by default.

    To enable logging exceptions, you have to update the way you add Application Insights to your application by setting the TrackExceptionsAsExceptionTelemetry property to false:

    builder.Logging.AddApplicationInsights(
        configureTelemetryConfiguration: (config) =>
            config.ConnectionString = connectionString,
        configureApplicationInsightsLoggerOptions: (options) =>
            options.TrackExceptionsAsExceptionTelemetry = false);
    

    This way, ExceptionTelemetry objects are treated as TraceTelemetry logs, making them available in Application Insights logs:

    The Exception log appears in Application Insights

    Then, to access the details of the exception like the message and the stack trace, you can look into the customDimensions element of the log entry:

    Details of the Exception log

    Even though this change is necessary to have exception logging work, it is barely described in the official documentation.

    Further readings

    It’s not the first time we have written about logging in this blog.

    For example, suppose you don’t want to use Application Insights but prefer an open-source, vendor-independent log sink. In that case, my suggestion is to try Seq:

    🔗 Easy logging management with Seq and ILogger in ASP.NET | Code4IT

    Logging manually is nice, but you may be interested in automatically logging all the data related to incoming HTTP requests and their responses.

    🔗 HTTP Logging in ASP.NET: how to automatically log all incoming HTTP requests (and its downsides!) | Code4IT

    This article first appeared on Code4IT 🐧

    You can read the official documentation here (even though I find it rather incomplete: it does not show the results you should expect):

    🔗 Application Insights logging with .NET | Microsoft docs

    Wrapping up

    This article taught us how to set up Azure Application Insights in an ASP.NET application.
    We touched on the basics, discussing log levels and error handling. In future articles, we’ll delve into some other aspects of logging, such as correlating logs, understanding scopes, and more.

    I hope you enjoyed this article! Let’s keep in touch on LinkedIn, Twitter or BlueSky! 🤜🤛

    Happy coding!

    🐧





    Source link