Tag: .NET

  • How to log to Console with .NET Core and Serilog | Code4IT


    Serilog is a famous logger for .NET projects. In this article, we will learn how to integrate it in a .NET API project and output the logs on a Console.


    Having meaningful logs is crucial for any application: without logs, we would not be able to see if errors occur, what’s the status of the application, if there are strange behaviors that should worry us, and so on.

    To define a good logging strategy, we need two parts, equally important: adding logs to our code and analyzing the data produced by our logs.

    In this article, we will see how to add Serilog, a popular logger library, to our .NET projects: we will learn how to configure it to print the logs on a Console.

    Why logging on console

    I can guess what you’re thinking:

    why should we write logs on Console? We should store them somewhere, to analyze them!

    And… you’d be right!

    But still, printing logs on Console can be useful in many ways.

    First of all, by printing on Console you can check that the logging is actually working, and you haven’t missed a configuration.

    Then, writing on Console is great when debugging locally: just spin up your application, run the code you need, and check what happened on the logs; in this way you can understand the internal state of the application, which warnings and errors occurred, and more.

    Lastly, because of an odd strategy that I’ve seen implemented in many projects: print the logs on Console, add an agent that reads them and stores them in memory, and then send all the logs to the destination platform at once; in this way, you’ll perform fewer HTTP requests against those platforms, which saves money and avoids hitting the connection limit of the destination platform.

    Now that we have good reasons to log on Console, well… let’s do it!

    Adding Serilog on Program class

    For this article, we will add Serilog logs to a simple .NET API project.

    Create a new API project – you know, the one with the WeatherForecast controller.

    Then, navigate to the Program class: by default, it should look like this:

    public class Program
    {
        public static void Main(string[] args)
        {
            CreateHostBuilder(args).Build().Run();
        }
    
        public static IHostBuilder CreateHostBuilder(string[] args) =>
            Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder.UseStartup<Startup>();
            });
    }
    

    There are no references to any logger and, of course, none to Serilog.

    So the first thing to do is to install it: via NuGet, install Serilog.AspNetCore and Serilog.Extensions.Logging. The first one allows you to add Serilog to an ASP.NET project, while the second one lets you use the native .NET logger in your code while keeping all the capabilities provided by Serilog.

    Then, we need to add the logger to our project:

    public class Program
    {
        public static void Main(string[] args)
        {
    +        Log.Logger = new LoggerConfiguration()
    +                .CreateLogger();
    
            CreateHostBuilder(args).Build().Run();
        }
    
        public static IHostBuilder CreateHostBuilder(string[] args) =>
            Host.CreateDefaultBuilder(args)
    +        .UseSerilog((hostingContext, loggerConfiguration) =>
    +                    loggerConfiguration.ReadFrom.Configuration(hostingContext.Configuration))
                .ConfigureWebHostDefaults(webBuilder =>
                {
                    webBuilder.UseStartup<Startup>();
                });
    }
    

    There are two snippets to understand:

    Log.Logger = new LoggerConfiguration().CreateLogger();
    

    creates a new logger with the specified configurations (in our case, we use the default values), and then assigns the newly created logger to the globally-shared logger Log.Logger.

    Log.Logger lives in the Serilog namespace, so you have to add it to the using list:
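
    using Serilog;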

    Then, we have this second part:

    .UseSerilog((hostingContext, loggerConfiguration) =>
            loggerConfiguration.ReadFrom.Configuration(hostingContext.Configuration)
        )
    

    This snippet defines where to get the Serilog configurations (in this case, from the same place used by the hosting context), and then sets Serilog as the logging provider.

    Inject the logger into constructors

    Since we have bound the Serilog logger to the native .NET one – the one coming from Microsoft.Extensions.Logging – we can use the native logger everywhere in the project.

    Add a dependency to ILogger<T> in your constructor, where T is the name of the class itself:

    public class WeatherForecastController : ControllerBase
    {
    
        private readonly ILogger<WeatherForecastController> _logger;
    
        public WeatherForecastController(ILogger<WeatherForecastController> logger)
        {
            _logger = logger;
        }
    }
    

    So that you can use the different levels of logging and the Structured Data (see links below) to add more info:

    _logger.LogInformation("Getting random items. There are {AvailableItems} possible values", Summaries.Count());
    
    _logger.LogWarning("This is a warning");
    
    try
    {
        throw new ArgumentException();
    }
    catch (Exception ex)
    {
        _logger.LogError(ex, "And this is an error");
    }
    

    Update the AppSettings file

    But that’s not enough: we haven’t yet told the application that our logs should be printed on the Console. To do that, we must update the appsettings.json file and add some new configurations.

    "Serilog": {
        "Using": [ "Serilog.Sinks.Console" ],
        "MinimumLevel": {
            "Default": "Verbose",
            "Override": {
                "Microsoft": "Warning",
                "Microsoft.AspNetCore": "Warning",
                "System": "Error"
            }
        },
        "WriteTo": [
            {
            "Name": "Async",
            "Args": {
                "configure": [
                {
                    "Name": "Console",
                    "Args": {
                        "formatter": "Serilog.Formatting.Compact.RenderedCompactJsonFormatter, Serilog.Formatting.Compact"
                    }
                }
                ]
            }
            }
        ]
    }
    

    As usual, let’s break it down.

    The first thing to notice is the root of the JSON section: Serilog. This value is the default when defining the configuration values for Serilog (remember the loggerConfiguration.ReadFrom.Configuration(hostingContext.Configuration)? It binds the settings automagically!)

    The Using section defines the types of Sinks that will be used. A Sink is just the destination of the logs. So, just download the Serilog.Sinks.Console NuGet package and add that value to the Using array to use the Console as a Sink.

    Then, we have the MinimumLevel object: it defines the minimum levels of logs that will be taken into consideration. Here the default value is Verbose, but you’ll probably want it to be Warning in your production environment: in this way, all the logs with a level lower than Warning will be ignored.

    Lastly, we have the WriteTo section, which defines the exact configurations of the sinks. Notice the Async value: writing logs to the Console is a blocking operation, so we wrap the Console sink in the Async sink to avoid slowing down the application while logging. So, after you’ve installed the Serilog.Sinks.Async NuGet package, you must add the Async value to that object. And then you can configure the different Sinks: here I’m adding a simple JSON Formatter to the Console Sink.

    Run the application

    We’re finally ready to run our application.

    Just run it with the usual IIS profile and… nothing happens! Where is the Console??

    With IIS you cannot see any Console, since it simply does not exist – if the application runs as a web application, we don’t need the Console.

    So, you have to change the running profile and select the name of your application (in my case, SerilogLoggingOnConsole).

    Use the correct running profile

    Then you can run the application, navigate to an endpoint, and see the logs!

    Serilog logs as plain text

    But I don’t like how logs are displayed, too many details!

    Let me add a theme: in the AppSettings file, I can add a theme configuration:

    "Args": {
        "configure": [
        {
            "Name": "Console",
            "Args": {
    +        "theme": "Serilog.Sinks.SystemConsole.Themes.AnsiConsoleTheme::Code, Serilog.Sinks.Console",
            "formatter": "Serilog.Formatting.Compact.RenderedCompactJsonFormatter, Serilog.Formatting.Compact"
            }
        }
        ]
    }
    

    This makes Serilog show the logs with a different shape:

    Serilog logs with a simple theme

    So, just by updating the AppSettings file, you can fine-tune the behavior and the output of the logger. In this way, you can customize Release builds to update the AppSettings file and define custom properties for every deployment environment.

    Further reading

    If you want to learn more about the different topics discussed in this article:

    🔗 Serilog Structured Data | Code4IT

    🔗 Serilog Console Sink | GitHub

    🔗 How to integrate Serilog and Seq | Code4IT

    Wrapping up

    In this article, we’ve seen how to integrate Serilog in a .NET application to print the logs on the application Console.

    Time to recap the key points:

    • install the Serilog, Serilog.AspNetCore, and Serilog.Extensions.Logging NuGet packages to integrate the basic functionalities of Serilog
    • download the Serilog.Sinks.Console and Serilog.Sinks.Async NuGet packages to use the Console as a destination of your logs
    • update the Program class to specify that the application must use Serilog
    • use ILogger<T> instead of Serilog.ILogger
    • define the settings in the appsettings.json file instead of directly in the code

    Finally, if you want to see the full example, here’s the GitHub repository used for this article

    Happy coding!

    🐧




  • How to resolve dependencies in .NET APIs based on current HTTP Request


    Did you know that in .NET you can resolve specific dependencies using Factories? We’ll use them to switch between concrete classes based on the current HTTP Request


    Say that you have an interface and that you want to specify its concrete class at runtime using the native Dependency Injection engine provided by .NET.

    For instance, imagine that you have a .NET API project and that the flag that tells the application which dependency to use is set in the HTTP Request.

    Can we do it? Of course, yes – otherwise I wouldn’t be here writing this article 😅 Let’s learn how!

    Why use different dependencies?

    But first: does all of this make sense? Is there any case when you want to inject different services at runtime?

    Let me share with you a story: once I had to create an API project which exposed just a single endpoint: Process(string ID).

    That endpoint read the item with that ID from a DB – an object composed of some data and some hundreds of children IDs – and then called an external service to download an XML file for every child ID in the object; then, every downloaded XML file would be saved on the file system of the server where the API was deployed to. Finally, a TXT file with the list of the items correctly saved on the file system was generated.

    Quite an easy task: read from DB, call some APIs, store the file, store the report file. Nothing more.

    But, how to run it locally without saving hundreds of files for every HTTP call?

    I decided to add a simple Query Parameter to the HTTP path and let .NET understand whether to use the concrete class or a fake one. Let’s see how.

    Define the services on ConfigureServices

    As you may know, the dependencies are defined in the ConfigureServices method inside the Startup class.

    Here we can define our dependencies. For this example, we have an interface, IFileSystemAccess, which is implemented by two classes: FakeFileSystemAccess and RealFileSystemAccess.

    So, to define those mutable dependencies, you can follow this snippet:

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddControllers();
    
        services.AddHttpContextAccessor();
    
        services.AddTransient<FakeFileSystemAccess>();
        services.AddTransient<RealFileSystemAccess>();
    
        services.AddScoped<IFileSystemAccess>(provider =>
        {
            var context = provider.GetRequiredService<IHttpContextAccessor>();
    
            var useFakeFileSystemAccess = context.HttpContext?.Request?.Query?.ContainsKey("fake-fs") ?? false;
    
            if (useFakeFileSystemAccess)
                return provider.GetRequiredService<FakeFileSystemAccess>();
            else
                return provider.GetRequiredService<RealFileSystemAccess>();
        });
    }
    

    As usual, let’s break it down:

    Inject dependencies using a Factory

    Let’s begin with the king of the article:

    services.AddScoped<IFileSystemAccess>(provider =>
    {
    }
    

    We can define our dependencies by using a factory. For instance, now we are using the AddScoped Extension Method (wanna know some interesting facts about Extension Methods?):

    //
    // Summary:
    //     Adds a scoped service of the type specified in TService with a factory specified
    //     in implementationFactory to the specified Microsoft.Extensions.DependencyInjection.IServiceCollection.
    //
    // Parameters:
    //   services:
    //     The Microsoft.Extensions.DependencyInjection.IServiceCollection to add the service
    //     to.
    //
    //   implementationFactory:
    //     The factory that creates the service.
    //
    // Type parameters:
    //   TService:
    //     The type of the service to add.
    //
    // Returns:
    //     A reference to this instance after the operation has completed.
    public static IServiceCollection AddScoped<TService>(this IServiceCollection services, Func<IServiceProvider, TService> implementationFactory) where TService : class;
    

    This Extension Method allows us to get the information about the services already injected in the current IServiceCollection instance and use them to define how to instantiate the actual dependency for the TService – in our case, IFileSystemAccess.

    Why is this a Scoped dependency? As you might remember from a previous article, in .NET we have 3 lifetimes for dependencies: Singleton, Scoped, and Transient. Scoped dependencies are the ones that get created once per HTTP request: therefore, they are the best choice for this specific example.
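
    For reference, here’s a quick sketch of how the three lifetimes are registered – the service and class names below are just hypothetical placeholders:

    // Singleton: a single instance shared across the whole application
    services.AddSingleton<IClock, SystemClock>();
    
    // Scoped: one instance per HTTP request
    services.AddScoped<ICurrentUserService, CurrentUserService>();
    
    // Transient: a new instance every time the service is resolved
    services.AddTransient<IEmailBuilder, EmailBuilder>();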

    Reading from Query String

    Since we need to read a value from the query string, we need to access the HttpRequest object.

    That’s why we have:

    var context = provider.GetRequiredService<IHttpContextAccessor>();
    var useFakeFileSystemAccess = context.HttpContext?.Request?.Query?.ContainsKey("fake-fs") ?? false;
    

    Here I’m getting the HTTP Context and checking if the fake-fs key is defined. Yes, I know, I’m not checking its actual value: I’m just checking whether the key exists or not.

    IHttpContextAccessor is the key part of this snippet: this is a service that acts as a wrapper around the HttpContext object. You can inject it everywhere in your code, but under one condition: you have to define it in the ConfigureServices method.

    How? Well, that’s simple:

    services.AddHttpContextAccessor();
    

    Injecting the dependencies based on the request

    Finally, we can define which dependency must be injected for the current HTTP Request:

    if (useFakeFileSystemAccess)
        return provider.GetRequiredService<FakeFileSystemAccess>();
    else
        return provider.GetRequiredService<RealFileSystemAccess>();
    

    Remember that we are inside a factory method: this means that, depending on the value of useFakeFileSystemAccess, we are defining the concrete class of IFileSystemAccess.

    GetRequiredService<T> returns the instance of type T injected in the DI engine. This implies that we have to inject the two different services before accessing them. That’s why you see:

    services.AddTransient<FakeFileSystemAccess>();
    services.AddTransient<RealFileSystemAccess>();
    

    Those two lines of code serve two different purposes:

    1. they make those services available to the GetRequiredService method;
    2. they let the DI engine resolve all the dependencies injected into those services

    Running the example

    Now that we have everything in place, it’s time to put it into practice.

    First of all, we need a Controller with the endpoint we will call:

    [ApiController]
    [Route("[controller]")]
    public class StorageController : ControllerBase
    {
        private readonly IFileSystemAccess _fileSystemAccess;
    
        public StorageController(IFileSystemAccess fileSystemAccess)
        {
            _fileSystemAccess = fileSystemAccess;
        }
    
        [HttpPost]
        public async Task<IActionResult> SaveContent([FromBody] FileInfo content)
        {
            string filename = $"file-{Guid.NewGuid()}.txt";
            var saveResult = await _fileSystemAccess.WriteOnFile(filename, content.Content);
            return Ok(saveResult);
        }
    
        public class FileInfo
        {
            public string Content { get; set; }
        }
    }
    

    Nothing fancy: this POST endpoint receives an object with some text, and calls IFileSystemAccess to store the file. Then, it returns the result of the operation.

    Then, we have the interface:

    public interface IFileSystemAccess
    {
        Task<FileSystemSaveResult> WriteOnFile(string fileName, string content);
    }
    
    public class FileSystemSaveResult
    {
        public FileSystemSaveResult(string message)
        {
            Message = message;
        }
    
        public string Message { get; set; }
    }
    

    which is implemented by the two classes:

    public class FakeFileSystemAccess : IFileSystemAccess
    {
        public Task<FileSystemSaveResult> WriteOnFile(string fileName, string content)
        {
            return Task.FromResult(new FileSystemSaveResult("Used mock File System access"));
        }
    }
    

    and

    public class RealFileSystemAccess : IFileSystemAccess
    {
        public async Task<FileSystemSaveResult> WriteOnFile(string fileName, string content)
        {
            await File.WriteAllTextAsync(fileName, content);
            return new FileSystemSaveResult("Used real File System access");
        }
    }
    

    As you might have imagined, only RealFileSystemAccess actually writes on the file system. But both of them return an object with a message that tells us which class completed the operation.

    Let’s see it in practice:

    First of all, let’s call the endpoint without anything in Query String:

    Without specifying the flag in Query String, we are using the real file system access

    And, then, let’s add the key:

    By adding the flag, we are using the mock class, so that we don’t create real files

    As expected, depending on the query string, we can see two different results.

    Of course, you can use this strategy not only with values from the Query String, but also from HTTP Headers, cookies, and whatever comes with the HTTP Request.
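
    For example, here’s a minimal sketch of the same factory driven by an HTTP Header instead – the x-fake-fs header name is just an assumption for this example:

    services.AddScoped<IFileSystemAccess>(provider =>
    {
        var context = provider.GetRequiredService<IHttpContextAccessor>();
    
        // same idea as before, but reading from the HTTP Headers instead of the Query String
        var useFakeFileSystemAccess = context.HttpContext?.Request?.Headers?.ContainsKey("x-fake-fs") ?? false;
    
        return useFakeFileSystemAccess
            ? provider.GetRequiredService<FakeFileSystemAccess>()
            : (IFileSystemAccess)provider.GetRequiredService<RealFileSystemAccess>();
    });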

    Further readings

    If you remember, we’ve defined the dependency to IFileSystemAccess as Scoped. Why? What are the other lifetimes native on .NET?

    🔗 Dependency Injection lifetimes in .NET | Code4IT

    Also, AddScoped is the Extension Method that we used to build our dependencies thanks to a Factory. Here’s an article about some advanced topics about Extension Methods:

    🔗 How you can create Extension Methods in C# | Code4IT

    Finally, the repository for the code used for this article:

    🔗 DependencyInjectionByHttpRequest project | GitHub

    Wrapping up

    In this article, we’ve seen that we can use a Factory to define at runtime which class will be used when resolving a Dependency.

    We’ve used a simple calculation based on the current HTTP request, but of course, there are many other ways to achieve a similar result.

    What would you use instead? Have you ever used a similar approach? And why?

    Happy coding!

    🐧




  • Profiling .NET code with MiniProfiler | Code4IT


    Is your application slow? How do you find bottlenecks? You can use MiniProfiler to profile a .NET API application and analyze the timings of the different operations.


    Sometimes your project does not perform as well as you would expect. Bottlenecks occur, and it can be hard to understand where and why.

    So, the best thing you should do is to profile your code and analyze the execution time to understand which parts impact your application’s performance the most.

    In this article, we will learn how to use Miniprofiler to profile code in a .NET 5 API project.

    Setting up the project

    For this article, I’ve created a simple project. This project tells you the average temperature of a place by specifying the country code (eg: IT), and the postal code (eg: 10121, for Turin).

    There is only one endpoint, /Weather, that accepts in input the CountryCode and the PostalCode, and returns the temperature in Celsius.

    To retrieve the data, the application calls two external free services: Zippopotam to get the current coordinates, and OpenMeteo to get the daily temperature using those coordinates.

    Sequence diagram

    Let’s see how to profile the code to see the timings of every operation.

    Installing MiniProfiler

    As usual, we need to install a NuGet package: since we are working on a .NET 5 API project, you can install the MiniProfiler.AspNetCore.Mvc package, and you’re good to go.

    MiniProfiler provides tons of packages you can use to profile your code: for example, you can profile Entity Framework, Redis, PostgreSql, and more.

    MiniProfiler packages on NuGet

    Once you’ve installed it, we can add it to our project by updating the Startup class.

    In the Configure method, you can simply add MiniProfiler to the ASP.NET pipeline:
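
    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        // enable the MiniProfiler middleware
        app.UseMiniProfiler();
    
        // more...
    }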

    Then, you’ll need to configure it in the ConfigureServices method:

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMiniProfiler(options =>
            {
                options.RouteBasePath = "/profiler";
                options.ColorScheme = StackExchange.Profiling.ColorScheme.Dark;
            });
    
        services.AddControllers();
        // more...
    }
    

    As you might expect, the king of this method is AddMiniProfiler. It allows you to set MiniProfiler up by configuring an object of type MiniProfilerOptions. There are lots of things you can configure; you can see the full list on GitHub.

    For this example, I’ve updated the color scheme to use Dark Mode, and I’ve defined the base path of the page that shows the results. The default is mini-profiler-resources, so the results would be available at /mini-profiler-resources/results. With this setting, the result is available at /profiler/results.

    Defining traces

    Time to define our traces!

    When you fire up the application, a MiniProfiler object is created and shared across the project. This object exposes several methods. The most used is Step: it allows you to define a portion of code to profile, by wrapping it into a using block.

    using (MiniProfiler.Current.Step("Getting lat-lng info"))
    {
        (latitude, longitude) = await _locationService.GetLatLng(countryCode, postalCode);
    }
    

    The snippet above defines a step, giving it a name (“Getting lat-lng info”), and profiles everything that happens within those lines of code.

    You can also use nested steps by simply adding a parent step:

    using (MiniProfiler.Current.Step("Get temperature for specified location"))
    {
        using (MiniProfiler.Current.Step("Getting lat-lng info"))
        {
            (latitude, longitude) = await _locationService.GetLatLng(countryCode, postalCode);
        }
    
        using (MiniProfiler.Current.Step("Getting temperature info"))
        {
            temperature = await _weatherService.GetTemperature(latitude, longitude);
        }
    }
    

    In this way, you can create a better structure of traces and perform better analyses. Of course, this method doesn’t know what happens inside the GetLatLng method. If there’s another Step, it will be taken into consideration too.

    You can also use inline steps to trace an operation and return its value on the same line:

    var response = await MiniProfiler.Current.Inline(() => httpClient.GetAsync(fullUrl), "Http call to OpenMeteo");
    

    Inline traces the operation and returns the return value from that method. Notice that it works even for async methods! 🤩

    Viewing the result

    Now that we have everything in place, we can run our application.

    To get better data, you should run the application in a specific way.

    First of all, use the RELEASE configuration. You can change it in the project properties, heading to the Build tab:

    Visual Studio tab for choosing the build configuration

    Then, you should run the application without the debugger attached. You can simply hit Ctrl+F5, or head to the Debug menu and click Start Without Debugging.

    Visual Studio menu to run the application without debugger

    Now, run the application and call the endpoint. Once you’ve got the result, you can navigate to the report page.

    Remember the options.RouteBasePath = "/profiler" option? It’s the one that specifies the path to this page.

    If you head to /profiler/results, you will see a page similar to this one:

    MiniProfiler results

    On the left column, you can see the hierarchy of the messages we’ve defined in the code. On the right column, you can see the timings for each operation.

    Association of every MiniProfiler call to the related result

    Noticed that Show trivial button on the bottom-right corner of the report? It displays the operations that took such a small amount of time that they can be easily ignored. By clicking on that button, you’ll see many things, such as all the operations that the .NET engine performs to handle your HTTP requests, like the Action Filters.

    Trivial operations on MiniProfiler

    Lastly, the More columns button shows, well… more columns! You will see the aggregate timing (the operation + all its children), and the timing from the beginning of the request.

    More Columns showed on MiniProfiler

    The mystery of x-miniprofiler-ids

    Now, there’s one particular thing that I haven’t understood of MiniProfiler: the meaning of x-miniprofiler-ids.

    This value is an array of IDs that represent every time we’ve profiled something using MiniProfiler during this session.

    You can find this array in the HTTP response headers:

    x-miniprofiler-ids HTTP header

    I noticed that every time you perform a call to that endpoint, it adds some values to this array.

    My question is: so what? What can we do with those IDs? Can we use them to filter data, or to see the results in some particular ways?

    If you know how to use those IDs, please drop a message in the comments section 👇

    If you want to run this project and play with MiniProfiler, I’ve shared this project on GitHub.

    🔗 ProfilingWithMiniprofiler repository | GitHub

    In this project, I’ve used Zippopotam to retrieve latitude and longitude given a location

    🔗 Zippopotam

    Once I retrieved the coordinates, I used Open Meteo to get the weather info for that position.

    🔗 Open Meteo documentation | OpenMeteo

    And then, obviously, I used MiniProfiler to profile my code.

    🔗 MiniProfiler repository | GitHub

    I’ve already used MiniProfiler for analyzing the performance of an application, and thanks to this library I was able to improve the response time from 14 seconds (yes, seconds!) to less than 3. I’ve explained all the steps in 2 articles.

    🔗 How I improved the performance of an endpoint by 82% – part 1 | Code4IT

    🔗 How I improved the performance of an endpoint by 82% – part 2 | Code4IT

    Wrapping up

    In this article, we’ve seen how we can profile .NET applications using MiniProfiler.

    This NuGet Package works for almost every version of .NET, from the dear old .NET Framework to the most recent one, .NET 6.

    A suggestion: configure it in a way that you can turn it off easily, maybe using some environment variables. This will give you the possibility to turn it off when this tracing is no longer required, speeding up the application.
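
    For instance, here’s a minimal sketch of that idea, assuming a custom ENABLE_PROFILING environment variable:

    // read an (assumed) environment variable to decide whether to enable profiling
    bool profilingEnabled = Environment.GetEnvironmentVariable("ENABLE_PROFILING") == "true";
    
    if (profilingEnabled)
    {
        services.AddMiniProfiler(options => options.RouteBasePath = "/profiler");
    }
    
    // and, later, in the Configure method:
    // if (profilingEnabled) { app.UseMiniProfiler(); }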

    Ever used it? Any alternative tools?

    And, most of all, what the f**k is that x-miniprofiler-ids array??😶

    Happy coding!

    🐧




  • How to access the HttpContext in .NET API



    If your application is exposed on the Web, I guess that you get some values from the HTTP Requests, don’t you?


    If you are building an application that is exposed on the Web, you will probably need to read some data from the current HTTP Request or set some values on the HTTP Response.

    In a .NET API, all the info related to both HTTP Request and HTTP Response is stored in a global object called HttpContext. How can you access it?

    In this article, we will learn how to get rid of the old HttpContext.Current and what we can do to write more testable code.

    Why not HttpContext directly

    Years ago, we used to access the HttpContext directly in our code.

    For example, if we had to access the Cookies collection, we used to do

    var cookies = HttpContext.Current.Request.Cookies;
    

    It worked, all right. But this approach has a big problem: it makes our tests hard to set up.

    In fact, we were using a static instance that added a direct dependency between the client class and the HttpContext.

    That’s why the .NET team has decided to abstract the retrieval of that class: we now need to use IHttpContextAccessor.

    Add IHttpContextAccessor

    Now, I have this .NET project that exposes an endpoint, /WeatherForecast, that returns the current weather for a particular city, whose name is stored in the HTTP Header “data-location”.

    The real calculation (well, real… everything’s fake, here 😅) is done by the WeatherService. In particular, by the GetCurrentWeather method.

    public WeatherForecast GetCurrentWeather()
    {
        string currentLocation = GetLocationFromContext();
    
        var rng = new Random();
    
        return new WeatherForecast
        {
            TemperatureC = rng.Next(-20, 55),
            Summary = Summaries[rng.Next(Summaries.Length)],
            Location = currentLocation
        };
    }
    

    We have to retrieve the current location.

    As we said, we can no longer rely on the old HttpContext.Current.Request.

    Instead, we need to inject IHttpContextAccessor in the constructor, and use it to access the Request object:

    public WeatherService(IHttpContextAccessor httpContextAccessor)
    {
        _httpContextAccessor = httpContextAccessor;
    }
    

    Once we have the instance of IHttpContextAccessor, we can use it to retrieve the info from the current HttpContext headers:

    string currentLocation = "";
    
    if (_httpContextAccessor.HttpContext.Request.Headers.TryGetValue("data-location", out StringValues locationHeaders) && locationHeaders.Any())
    {
        currentLocation = locationHeaders.First();
    }
    
    return currentLocation;
    

    Easy, right? We’re almost done.

    Configure Startup class

    If you run the application in this way, you will not be able to access the current HTTP request.

    That’s because we haven’t specified that we want to add IHttpContextAccessor as a service in our application.

    To do that, we have to update the ConfigureServices method by adding this instruction:

    services.AddHttpContextAccessor();
    

    Which comes from the Microsoft.Extensions.DependencyInjection namespace.

    Now we can run the project!

    If we call the endpoint specifying a City in the data-location header, we will see its value in the returned WeatherForecast object, in the Location field:

    Location is taken from the HTTP Headers

    Further improvements

    Is it enough?

    Is it really enough?

    If we use it this way, every class that needs to access the HTTP Context will have tests that are quite difficult to set up, because you will need to mock several objects.

    In fact, for mocking HttpContext.Request.Headers, we need to create mocks for HttpContext, for Request, and for Headers.

    This makes our tests harder to write and understand.

    So, my suggestion is to wrap the HttpContext access in a separate class and expose only the methods you actually need.

    For instance, you could wrap the access to HTTP Request Headers in the GetValueFromRequestHeader method of an IHttpContextWrapper service:

    public interface IHttpContextWrapper
    {
        string GetValueFromRequestHeader(string key, string defaultValue);
    }
    

    That will be the only service that accesses the IHttpContextAccessor instance.

    public class HttpContextWrapper : IHttpContextWrapper
    {
        private readonly IHttpContextAccessor _httpContextAccessor;
    
        public HttpContextWrapper(IHttpContextAccessor httpContextAccessor)
        {
            _httpContextAccessor = httpContextAccessor;
        }
    
        public string GetValueFromRequestHeader(string key, string defaultValue)
        {
            if (_httpContextAccessor.HttpContext.Request.Headers.TryGetValue(key, out StringValues headerValues) && headerValues.Any())
            {
                return headerValues.First();
            }
    
            return defaultValue;
        }
    }
    

    In this way, you will be able to write better tests both for the HttpContextWrapper class, by focusing on the building of the HttpRequest, and for the WeatherService class, so that you can write tests without worrying about setting up complex structures just for retrieving a value.
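
    As a sketch, here’s what such a test could look like, assuming WeatherService is refactored to depend on IHttpContextWrapper and that you’re using Moq and xUnit:

    using Moq;
    using Xunit;
    
    public class WeatherServiceTests
    {
        [Fact]
        public void GetCurrentWeather_ReadsLocationFromWrapper()
        {
            // mock only the thin wrapper, not the whole HttpContext hierarchy
            var wrapper = new Mock<IHttpContextWrapper>();
            wrapper.Setup(w => w.GetValueFromRequestHeader("data-location", It.IsAny<string>()))
                   .Returns("Turin");
    
            var service = new WeatherService(wrapper.Object);
    
            var forecast = service.GetCurrentWeather();
    
            Assert.Equal("Turin", forecast.Location);
        }
    }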

    But pay attention to the dependency lifetime! HTTP Request info lives within – guess what? – its HTTP Request. So, when defining the dependencies in the Startup class, remember to inject the IHttpContextWrapper as Transient or, even better, as Scoped. If you don’t remember the difference, I got you covered here!

    Wrapping up

    In this article, we’ve learned that you can access the current HTTP request by using IHttpContextAccessor. Of course, you can use it to update the Response too, for instance by adding an HTTP Header.

    Happy coding!

    🐧




  • How to improve Serilog logging in .NET 6 by using Scopes | Code4IT



    Logs are important. Properly structured logs can be the key to resolving some critical issues. With Serilog’s Scopes, you can enrich your logs with info about the context where they happened.


    Even though it’s not one of the first things we usually set up when creating a new application, logging is a real game-changer in the long run.

    When an error occurs, proper logging gives us more info about the context where it happened, so that we can easily identify the root cause.

    In this article, we will use Scopes, one of the functionalities of Serilog, to create better logs for our .NET 6 application. In particular, we’re going to create a .NET 6 API application in the form of Minimal APIs.

    We will also use Seq, just to show you the final result.

    Adding Serilog in our Minimal APIs

    We’ve already explained what Serilog and Seq are in a previous article.

    To summarize, Serilog is an open-source .NET library for logging. One of the best features of Serilog is that messages are in the form of a template (called Structured Logs), and you can enrich the logs with some values calculated automatically, such as the method name or exception details.

    To add Serilog to your application, you simply have to run dotnet add package Serilog.AspNetCore.

    Since we’re using Minimal APIs, we don’t have the Startup file anymore; instead, we will need to add it to the Program.cs file:

    builder.Host.UseSerilog((ctx, lc) => lc
        .WriteTo.Console());
    

    Then, to create those logs, you will need to add a specific dependency in your classes:

    public class ItemsRepository : IItemsRepository
    {
        private readonly ILogger<ItemsRepository> _logger;
    
        public ItemsRepository(ILogger<ItemsRepository> logger)
        {
            _logger = logger;
        }
    }
    

    As you can see, we’re injecting an ILogger<ItemsRepository>: specifying the related class automatically adds some more context to the logs that we will generate.

    Installing Seq and adding it as a Sink

    Seq is a logging platform that is a perfect fit for Serilog logs. If you don’t have it already installed, head to their download page and install it locally (you can even install it as a Docker container 🤩).

    In the installation wizard, you can select the HTTP port that will expose its UI. Once everything is in place, you can open that page on your localhost and see a page like this:

    Seq empty page on localhost

    On this page, we will see all the logs we write.

    But wait! ⚠ We still have to add Seq as a sink for Serilog.

    A sink is nothing but a destination for the logs. When using .NET APIs we can define our sinks both on the appsettings.json file and on the Program.cs file. We will use the second approach.

    First of all, you will need to install a NuGet package to add Seq as a sink: dotnet add package Serilog.Sinks.Seq.

    Then, you have to update the Serilog definition we’ve seen before by adding a .WriteTo.Seq instruction:

    builder.Host.UseSerilog((ctx, lc) => lc
        .WriteTo.Console()
        .WriteTo.Seq("http://localhost:5341")
        );
    

    Notice that we’ve also specified the port that exposes our Seq instance.

    Now, every time we log something, we will see our logs both on the Console and on Seq.

    How to add scopes

    The time has come: we can finally learn how to add Scopes using Serilog!

    Setting up the example

    For this example, I’ve created a simple controller, ItemsController, which exposes two endpoints: Get and Add. With these two endpoints, we are able to add and retrieve items stored in an in-memory collection.

    This class has 2 main dependencies: IItemsRepository and IUsersItemsRepository. Each of these interfaces has its own concrete class, each with a private logger injected in the constructor:

    public ItemsRepository(ILogger<ItemsRepository> logger)
    {
        _logger = logger;
    }
    

    and, similarly

    public UsersItemRepository(ILogger<UsersItemRepository> logger)
    {
        _logger = logger;
    }
    

    How do those classes use their own _logger instances?

    For example, the UsersItemRepository class exposes an AddItem method that adds a specific item to the list of items already possessed by a specific user.

    public void AddItem(string username, Item item)
    {
        if (!_usersItems.ContainsKey(username))
        {
            _usersItems.Add(username, new List<Item>());
            _logger.LogInformation("User was missing from the list. Just added");
        }
        _usersItems[username].Add(item);
        _logger.LogInformation("Added item for to the user's catalogue");
    }
    

    We are logging some messages, such as “User was missing from the list. Just added”.

    Something similar happens in the ItemsRepository class, where we have a GetItem method that returns the required item if it exists, and null otherwise.

    public Item GetItem(int itemId)
    {
        _logger.LogInformation("Retrieving item {ItemId}", itemId);
        return _allItems.FirstOrDefault(i => i.Id == itemId);
    }
    

    Finally, who’s gonna call these methods?

    [HttpPost(Name = "AddItems")]
    public IActionResult Add(string userName, int itemId)
    {
        var item = _itemsRepository.GetItem(itemId);
    
        if (item == null)
        {
            _logger.LogWarning("Item does not exist");
    
            return NotFound();
        }
        _usersItemsRepository.AddItem(userName, item);
    
        return Ok(item);
    }
    

    Ok then, we’re ready to run the application and see the result.

    When I call that endpoint by passing “davide” as userName and “1” as itemId, we can see these logs:

    Simple logging on Seq

    We can see the 3 log messages, but they are unrelated to each other. In fact, if we expand the logs to see the actual values we’ve logged, we can see that only the “Retrieving item 1” log has some information about the item ID we want to associate with the user.

    Expanding logs on Seq

    Using BeginScope with Serilog

    Finally, it’s time to define the Scope.

    It’s as easy as adding a simple using statement; see how I added the scope to the Add method in the Controller:

    [HttpPost(Name = "AddItems")]
    public IActionResult Add(string userName, int itemId)
    {
        using (_logger.BeginScope("Adding item {ItemId} for user {UserName}", itemId, userName))
        {
            var item = _itemsRepository.GetItem(itemId);
    
            if (item == null)
            {
                _logger.LogWarning("Item does not exist");
    
                return NotFound();
            }
            _usersItemsRepository.AddItem(userName, item);
    
            return Ok(item);
        }
    }
    

    Here’s the key!

    using (_logger.BeginScope("Adding item {ItemId} for user {UserName}", itemId, userName))
    

    With this single instruction, we are actually performing 2 operations:

    1. we are adding a Scope to each message – “Adding item 1 for user davide”
    2. we are adding ItemId and UserName to each log entry that falls in this block, in every method in the method chain.

    Let’s run the application again, and we will see this result:

    Expanded logs on Seq with Scopes

    So, now you can use these new properties to get some info about the context in which this log happened, and you can use the ItemId and UserName fields to search for other related logs.

    You can also nest scopes, of course.
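
    For example, here’s a minimal sketch of nested scopes – the property names are just hypothetical:

    using (_logger.BeginScope("Processing order {OrderId}", orderId))
    {
        using (_logger.BeginScope("Validating payment {PaymentId}", paymentId))
        {
            // this entry carries both OrderId and PaymentId
            _logger.LogInformation("Payment validated");
        }
    }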

    Why scopes instead of Correlation ID?

    You might be thinking

    Why can’t I just use correlation IDs?

    Well, the answer is pretty simple: correlation IDs are meant to correlate different logs in a specific request, and, often, across services. You generally use Correlation IDs that represent a specific call to your API and act as a Request ID.

    For sure, that can be useful. But, sometimes, not enough.

    Using scopes you can also “correlate” distinct HTTP requests that have something in common.

    If I call the AddItem endpoint twice, I can filter both by UserName and by ItemId and see all the related logs across distinct HTTP calls.

    Let’s see a real example: I have called the endpoint with different values

    • id=1, username=“davide”
    • id=1, username=“luigi”
    • id=2, username=“luigi”

    Since the scope references both properties, we can filter by UserName and discover that Luigi has added both Item 1 and Item 2.

    Filtering logs by UserName

    At the same time, we can filter by ItemId and discover that the item with id = 2 has been added only once.

    Filtering logs by ItemId

    Ok, then, in the end, Scopes or Correlation IDs? The answer is simple:

    Both is good

    This article first appeared on Code4IT

    Read more

    As always, the best place to find the info about a library is its documentation.

    🔗 Serilog website

    If you prefer some more practical articles, I’ve already written one to help you get started with Serilog and Seq (and with Structured Logs):

    🔗 Logging with Serilog and Seq | Code4IT

    as well as one about adding Serilog to Console applications (which is slightly different from adding Serilog to .NET APIs)

    🔗 How to add logs on Console with .NET Core and Serilog | Code4IT

    Then, you might want to deep dive into Serilog’s BeginScope. Here’s a neat article by Nicholas Blumhardt. Also, have a look at the comments; you’ll find interesting points to consider.

    🔗 The semantics of ILogger.BeginScope | Nicholas Blumhardt

    Finally, two must-read articles about logging best practices.

    The first one is by Thiago Nascimento Figueiredo:

    🔗 Logs – Why, good practices, and recommendations | Dev.to

    and the second one is by Llron Tal:

    🔗 9 Logging Best Practices Based on Hands-on Experience | Loom Systems

    Wrapping up

    In this article, we’ve added Scopes to our logs to enrich them with some common fields that can be useful to investigate in case of errors.

    Remember to read the last 3 links I’ve shared above, they’re pure gold – you’ll thank me later 😎

    Happy coding!

    🐧




  • How to log Correlation IDs in .NET APIs with Serilog | Code4IT



    APIs often call other APIs to perform operations. If an error occurs in one of them, how can you understand the context that caused that error? You can use Correlation IDs in your logs!


    Correlation IDs are values that are passed across different systems to correlate the operations performed during a “macro” operation.

    Most of the time they are passed as HTTP Headers – of course in systems that communicate via HTTP.

    In this article, we will learn how to log those Correlation IDs using Serilog, a popular library that helps handle logs in .NET applications.

    Setting up the demo dotNET project

    This article is heavily code-oriented. So, let me first describe the demo project.

    Overview of the project

    To demonstrate how to log Correlation IDs and how to correlate logs generated by different systems, I’ve created a simple solution that handles bookings for a trip.

    The “main” project, BookingSystem, fetches data from external systems by calling some HTTP endpoints; it then manipulates the data and returns an aggregate object to the caller.

    BookingSystem depends on two projects, placed within the same solution: CarRentalSystem, which returns data about the available cars in a specified date range, and HotelsSystem, which does the same for hotels.

    So, this is the data flow:

    Operations sequence diagram

    If an error occurs in any of those systems, can we understand the full story of the failed request? No. Unless we use Correlation IDs!

    Let’s see how to add them and how to log them.

    We need to propagate HTTP Headers. You could implement it from scratch, as we’ve seen in a previous article. Or we could use a native library that does it all for us.

    Of course, let’s go with the second approach.

    For every project that will propagate HTTP headers, we have to follow these steps.

    First, we need to install Microsoft.AspNetCore.HeaderPropagation: this NuGet package allows us to add the .NET classes needed to propagate HTTP headers.

    Next, we have to update the part of the project that we use to configure our application. For .NET projects with Minimal APIs, it’s the Program class.

    Here we need to add the capability to read the HTTP Context, by using

    builder.Services.AddHttpContextAccessor();
    

    As you can imagine, this is needed because, to propagate HTTP Headers, we need to know what the incoming HTTP Headers are. And they can be read from the HttpContext object.

    Next, we need to specify, as a generic behavior, which headers must be propagated. For instance, to propagate the “my-custom-correlation-id” header, you must add

    builder.Services.AddHeaderPropagation(options => options.Headers.Add("my-custom-correlation-id"));
    

    Then, for every HttpClient that will propagate those headers, you have to add AddHeaderPropagation(), like this:

    builder.Services.AddHttpClient("cars_system", c =>
        {
            c.BaseAddress = new Uri("https://localhost:7159/");
        }).AddHeaderPropagation();
    

    Finally, one last instruction that tells the application that it needs to use the Header Propagation functionality:

    app.UseHeaderPropagation();
    

    To summarize, here’s the minimal configuration to add HTTP Header propagation in a dotNET API.

    public static void Main(string[] args)
    {
        var builder = WebApplication.CreateBuilder(args);
    
        builder.Services.AddControllers();
        builder.Services.AddHttpContextAccessor();
        builder.Services.AddHeaderPropagation(options => options.Headers.Add("my-custom-correlation-id"));
    
        builder.Services.AddHttpClient("cars_system", c =>
        {
            c.BaseAddress = new Uri("https://localhost:7159/");
        }).AddHeaderPropagation();
    
        var app = builder.Build();
        app.UseHeaderPropagation();
        app.MapControllers();
        app.Run();
    }
    

    We’re almost ready to go!

    But we’re missing the central point of this article: logging an HTTP Header as a Correlation ID!

    Initializing Serilog

    We’ve already met Serilog several times in this blog, so I won’t repeat how to install it and how to define logs the best way possible.

    We will write our logs on Seq, and we’re overriding the minimum level to skip the noise generated by .NET:

    builder.Host.UseSerilog((ctx, lc) => lc
        .WriteTo.Seq("http://localhost:5341")
        .MinimumLevel.Information()
        .MinimumLevel.Override("Microsoft", LogEventLevel.Warning)
        .MinimumLevel.Override("Microsoft.AspNetCore", LogEventLevel.Warning)
        .Enrich.FromLogContext());
    

    Since you probably know what’s going on, let me go straight to the point.

    Install Serilog Enricher for Correlation IDs

    We’re gonna use a specific library to log HTTP Headers treating them as Correlation IDs. To use it, you have to install the Serilog.Enrichers.CorrelationId package available on NuGet.

    Therefore, you can simply run

    dotnet add package Serilog.Enrichers.CorrelationId
    

    in every .NET project that will use this functionality.

    Once we have that NuGet package ready, we can add its functionality to our logger by adding this line:

    .Enrich.WithCorrelationIdHeader("my-custom-correlation-id")
    

    This simple line tells dotnet that, when we see an HTTP Header named “my-custom-correlation-id”, we should log it as a Correlation ID.
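
    Putting the pieces together, the full logger configuration now looks like this:

    builder.Host.UseSerilog((ctx, lc) => lc
        .WriteTo.Seq("http://localhost:5341")
        .MinimumLevel.Information()
        .MinimumLevel.Override("Microsoft", LogEventLevel.Warning)
        .MinimumLevel.Override("Microsoft.AspNetCore", LogEventLevel.Warning)
        .Enrich.FromLogContext()
        .Enrich.WithCorrelationIdHeader("my-custom-correlation-id"));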

    Run it all together

    Now we have everything in place – it’s time to run it!

    We have to run all 3 services at the same time (you can do it with Visual Studio, or you can run them separately from the command line), and we need to have Seq installed on our local machine.

    You will see 3 instances of Swagger, and each instance is running under a different port.

    Swagger pages of our systems

    Once we have all 3 applications up and running, we can call the /Bookings endpoint, passing it a date range and an HTTP Header with key “my-custom-correlation-id” and value = “123” (or whatever we want).

    How to use HTTP Headers with Postman

    If everything worked as expected, we can open Seq and see all the logs we’ve written in our applications:

    All logs in SEQ

    Open one of them and have a look at the attributes of the logs: you will see a CorrelationId field with the value set to “123”.

    Our HTTP Header is now treated as Correlation ID

    Now, to better demonstrate how it works, call the endpoint again, but this time set “789” as my-custom-correlation-id, and specify a different date range. You should be able to see another set of logs generated by this second call.

    You can now apply filters to see which logs are related to a specific Correlation ID: open one log, click on the tick button and select “Find”.

    Filter button on SEQ

    You will then see all and only logs that were generated during the call with header my-custom-correlation-id set to “789”.

    List of all logs related to a specific Correlation ID

    Further readings

    That’s it. With just a few lines of code, you can dramatically improve your logging strategy.

    You can download and run the whole demo here:

    🔗 LogCorrelationId demo | GitHub

    To run this project you have to install both Serilog and Seq. You can do that by following this step-by-step guide:

    🔗 Logging with Serilog and Seq | Code4IT

    For this article, we’ve used the Microsoft.AspNetCore.HeaderPropagation package, which is ready to use. Are you interested in building your own solution – or, at least, learning how you can do that?

    🔗 How to propagate HTTP Headers (and Correlation IDs) using HttpClients in C# | Code4IT

    Lastly, why not use Serilog’s Scopes? And what are they? Check it out here:

    🔗 How to improve Serilog logging in .NET 6 by using Scopes | Code4IT

    Wrapping up

    This article concludes a sort of imaginary path that taught us how to use Serilog, how to correlate different logs within the same application using Scopes, and how to correlate logs from different services using Correlation IDs.

    Using these capabilities, you will be able to write logs that can help you understand the context in which a specific log occurred, thus helping you fix errors more efficiently.

    This article first appeared on Code4IT

    Happy coding!

    🐧




  • The 2 secret endpoints I create in my .NET APIs | Code4IT

    The 2 secret endpoints I create in my .NET APIs | Code4IT


    In this article, I will show you two simple tricks that help me understand the deployment status of my .NET APIs


    When I create Web APIs with .NET I usually add two “secret” endpoints that I can use to double-check the status of the deployment.

    I generally expose two endpoints: one that shows me some info about the current environment, and another one that lists all the application settings defined after the deployment.

    In this article, we will see how to create those two endpoints, how to update the values when building the application, and how to hide those endpoints.

    Project setup

    For this article, I will use a simple .NET 6 API project built with Minimal APIs, loading the application’s configuration values from the appsettings.json file.

    Since we are using Minimal APIs, you will have the endpoints defined in the Main method within the Program class.

    To expose an endpoint that accepts the GET HTTP method, you can write

    app.MapGet("/say-hello", async context =>
    {
       await context.Response.WriteAsync("Hello, everybody!");
    });
    

    That’s all you need to know about .NET Minimal APIs for the sake of this article. Let’s move to the main topics ⏩

    How to show environment info in .NET APIs

    Let’s say that your code execution depends on the current Environment definition. Typical examples: if you’re running in production, you may want to hide some endpoints that are visible in the other environments, or you may want to use a different error page when an unhandled exception is thrown.

    Once the application has been deployed, how can you retrieve the info about the running environment?

    Here we go:

    app.MapGet("/env", async context =>
    {
        IWebHostEnvironment? hostEnvironment = context.RequestServices.GetRequiredService<IWebHostEnvironment>();
        var thisEnv = new
        {
            ApplicationName = hostEnvironment.ApplicationName,
            Environment = hostEnvironment.EnvironmentName,
        };
    
        var jsonSerializerOptions = new JsonSerializerOptions { WriteIndented = true };
        await context.Response.WriteAsJsonAsync(thisEnv, jsonSerializerOptions);
    });
    

    This endpoint is quite simple.

    The context variable, which is of type HttpContext, exposes some properties. Among them, the RequestServices property allows us to retrieve the services that have been injected when starting up the application. We can then use GetRequiredService to get a service by its type and store it into a variable.

    💡 GetRequiredService throws an exception if the service cannot be found; GetService, on the contrary, returns null. I usually prefer GetRequiredService, but, as always, it depends on how you’re using it.
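
    A quick sketch of the difference, reusing the service we’re already resolving:

    // GetRequiredService throws an InvalidOperationException if the service is not registered
    IWebHostEnvironment hostEnvironment = context.RequestServices.GetRequiredService<IWebHostEnvironment>();

    // GetService returns null instead, so the result must be checked before use
    IWebHostEnvironment? maybeEnvironment = context.RequestServices.GetService<IWebHostEnvironment>();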

    Then, we create an anonymous object with the information of our interest and finally return it as indented JSON.

    It’s time to run it! Open a terminal, navigate to the API project folder (in my case, SecretEndpoint), and run dotnet run. The application will compile and start; you can then navigate to /env and see the default result:

    The JSON result of the env endpoint

    How to change the Environment value

    While the applicationName does not change – it is the name of the running assembly, so any other value would stop your application from running – you can (and, maybe, want to) change the Environment value.

    When running the application using the command line, you can use the --environment flag to specify the Environment value.

    So, running

    dotnet run --environment MySplendidCustomEnvironment
    

    will produce this result:

    MySplendidCustomEnvironment is now set as Environment

    There’s another way to set the environment: update the launchSettings.json and run the application using Visual Studio.

    To do that, open the launchSettings.json file and update the profile you are using by specifying the Environment name. In my case, the current profile section will be something like this:

    "profiles": {
    "SecretEntpoints": {
        "commandName": "Project",
        "dotnetRunMessages": true,
        "launchBrowser": true,
        "launchUrl": "swagger",
        "applicationUrl": "https://localhost:7218;http://localhost:5218",
        "environmentVariables":
            {
                "ASPNETCORE_ENVIRONMENT": "EnvByProfile"
            }
        },
    }
    

    As you can see, the ASPNETCORE_ENVIRONMENT variable is set to EnvByProfile.

    If you run the application using Visual Studio using that profile you will see the following result:

    EnvByProfile as defined in the launchSettings file

    How to list all the configurations in .NET APIs

    In my current company, we deploy applications using CI/CD pipelines.

    This means that the final set of configuration values comes from the sum of 3 sources:

    • the project’s appsettings file
    • the release pipeline
    • the deployment environment

    You can easily understand how difficult it is to debug those applications without knowing the exact values for the configurations. That’s why I came up with these endpoints.

    To print all the configurations, we’re gonna use an approach similar to the one we’ve used in the previous example.

    The endpoint will look like this:

    app.MapGet("/conf", async context =>
    {
        IConfiguration? allConfig = context.RequestServices.GetRequiredService<IConfiguration>();
    
        IEnumerable<KeyValuePair<string, string>> configKv = allConfig.AsEnumerable();
    
        var jsonSerializerOptions = new JsonSerializerOptions { WriteIndented = true };
        await context.Response.WriteAsJsonAsync(configKv, jsonSerializerOptions);
    });
    

    What’s going on? We are retrieving the IConfiguration object, which contains all the configurations loaded at startup; then, we’re listing all the configurations as key-value pairs, and finally, we’re returning the list to the client.

    As an example, here’s my current appsettings.json file:

    {
      "ApiService": {
        "BaseUrl": "something",
        "MaxPage": 5,
        "NestedValues": {
          "Skip": 10,
          "Limit": 56
        }
      },
      "MyName": "Davide"
    }
    

    When I run the application and call the /conf endpoint, I will see the following result:

    Configurations printed as key-value pairs

    Notice how the structure of the configuration values changes. The value

    {
      "ApiService": {
        "NestedValues": {
          "Limit": 56
        }
      }
    }
    

    is transformed into

    {
        "Key": "ApiService:NestedValues:Limit",
        "Value": "56"
    },
    

    That endpoint shows a lot more than you can imagine: take some time to have a look at those configurations – you’ll thank me later!
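
    As a side note, that colon-separated notation is also how you can read a nested value directly from the IConfiguration object. A tiny sketch, reusing the allConfig variable from the snippet above:

    // read the nested value through its flattened, colon-separated key
    string? limit = allConfig["ApiService:NestedValues:Limit"]; // "56", as a string

    // or bind it to a typed value (GetValue comes from Microsoft.Extensions.Configuration.Binder)
    int limitAsNumber = allConfig.GetValue<int>("ApiService:NestedValues:Limit");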

    How to change the value of a variable

    There are many ways to set the value of your variables.

    The most common one is by creating an environment-specific appsettings file that overrides some values.

    So, if your environment is called “EnvByProfile”, as we’ve defined in the previous example, the file will be named appsettings.EnvByProfile.json.

    There are actually some other ways to override application variables: we will learn them in the next article, so stay tuned! 😎

    3 ways to hide your endpoints from malicious eyes

    Ok then, we have our endpoints up and running, but they are visible to anyone who correctly guesses their addresses. And you don’t want to expose such sensitive info to malicious eyes, right?

    There are, at least, 3 simple ways to hide those endpoints:

    • Use a non-guessable endpoint: you can use an existing word, such as “housekeeper”, use random letters, such as “lkfrmlvkpeo”, or use a Guid, such as “E8E9F141-6458-416E-8412-BCC1B43CCB24”;
    • Specify a key on query string: if that key is not found or it has an invalid value, return a 404-not found result
    • Use an HTTP header, and, again, return 404 if it is not valid.

    Both query strings and HTTP headers are available in the HttpContext object injected in the route definition.
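
    For instance, here’s a minimal sketch of the third approach applied to our /conf endpoint – the header name and its expected value are, of course, made up for this example:

    app.MapGet("/conf", async context =>
    {
        // hypothetical guard: return 404 unless the secret header carries the expected value
        if (context.Request.Headers["X-Secret-Key"] != "my-secret-value")
        {
            context.Response.StatusCode = StatusCodes.Status404NotFound;
            return;
        }

        // ...then list the configurations as shown before
    });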

    Now it’s your turn to find an appropriate way to hide these endpoints. How would you do that? Drop a comment below 📩

    Edit 2022-10-10: I thought it was quite obvious, but apparently it is not: these endpoints expose critical information about your applications and your infrastructure, so you should not expose them unless it is strictly necessary! If you have strong authentication in place, use it to secure those endpoints. If you don’t, hide those endpoints the best you can, and show only necessary data, and not everything. Strip out sensitive content. And, as soon as you don’t need that info anymore, remove those endpoints (comment them out or generate them only if a particular flag is set at compilation time). Another possible way is by using feature flags. In the end, take that example with a grain of salt: learn that you can expose them, but keep in mind that you should not expose them.

    Further readings

    We’ve used a fairly recent way to build and develop APIs with .NET, called “Minimal APIs”. You can read more here:

    🔗 Minimal APIs | Microsoft Learn

    If you are not using Minimal APIs, you still might want to create such endpoints. We’ve talked about accessing the HttpContext to get info about the HTTP headers and query string. When using Controllers, accessing the HttpContext requires some more steps. Here’s an article that you may find interesting:

    🔗 How to access the HttpContext in .NET API | Code4IT

    This article first appeared on Code4IT

    Wrapping up

    In this article, we’ve seen how two simple endpoints can help us understand the deployment status of our applications.

    It’s a simple trick that you can consider adding to your projects.

    Do you have some utility endpoints?

    Happy coding!

    🐧




  • 3 (and more) ways to set configuration values in .NET | Code4IT

    3 (and more) ways to set configuration values in .NET | Code4IT


    Every application relies on some configurations. Many devs set them up using only the appsettings file. But there’s more!


    Needless to say, almost every application needs to deal with some configurations. There are tons of use cases, and you already have some of them in mind, don’t you?

    If you’re working with .NET, you’ve probably already used the appsettings.json file. It’s a good starting point, but it may not be enough in the case of complex applications (and complex deployments).

    In this article, we will learn some ways to set configurations in a .NET API application. We will use the appsettings file, of course, and some other ways such as the dotnet CLI. Let’s go! 🚀

    Project setup

    First things first: let’s set up the demo project.

    I have created a simple .NET 6 API application using Minimal APIs. This is my whole application (yes, less than 50 lines!)

    using Microsoft.Extensions.Options;
    
    namespace HowToSetConfigurations
    {
        public class Program
        {
            public static void Main(string[] args)
            {
                WebApplicationBuilder builder = WebApplication.CreateBuilder(args);
    
                builder.Services.Configure<MyRootConfig>(
                    builder.Configuration.GetSection("RootConfig")
                );
    
                builder.Services.Configure<JsonOptions>(o =>
                {
                    o.SerializerOptions.WriteIndented = true;
                });
    
                WebApplication app = builder.Build();
    
                app.MapGet("/config", (IOptionsSnapshot<MyRootConfig> options) =>
                {
                    MyRootConfig config = options.Value;
                    return config;
                });
    
                app.Run();
            }
        }
    
        public class MyRootConfig
        {
            public MyNestedConfig Nested { get; set; }
            public string MyName { get; set; }
        }
    
        public class MyNestedConfig
        {
            public int Skip { get; set; }
            public int Limit { get; set; }
        }
    }
    

    Nothing else! 🤩

    In short, I scaffold the WebApplicationBuilder, configure it so that the settings section named RootConfig is mapped to my class of type MyRootConfig, and then run the application.

    I then expose a single endpoint, /config, which returns the current configurations, wrapped within an IOptionsSnapshot<MyRootConfig> object.

    Where is the source of the application’s configurations?

    As stated on the Microsoft docs website, here 🔗, the WebApplicationBuilder

    Loads app configuration in the following order from:

    • appsettings.json
    • appsettings.{Environment}.json
    • User secrets when the app runs in the Development environment using the entry assembly
    • Environment variables
    • Command-line arguments

    So, yeah, we have several possible sources, and the order does matter.
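
    If it helps to visualize the layering, this is roughly what the WebApplicationBuilder wires up under the hood – a simplified sketch (using Microsoft.Extensions.Configuration), not the actual implementation:

    // simplified sketch: sources added later override the values of earlier ones
    string envName = "Development"; // assumption: the current environment name
    IConfiguration config = new ConfigurationBuilder()
        .AddJsonFile("appsettings.json", optional: true)
        .AddJsonFile($"appsettings.{envName}.json", optional: true)
        .AddEnvironmentVariables()
        .AddCommandLine(args)
        .Build();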

    Let’s see a bunch of them.

    Define settings within the appsettings.json file

    The most common way is by using the appsettings.json file. Here, in a structured and hierarchical way, you can define all the settings used as a baseline for your application.

    A typical example is this one:

    {
      "Logging": {
        "LogLevel": {
          "Default": "Information",
          "Microsoft.AspNetCore": "Warning"
        }
      },
      "AllowedHosts": "*",
      "RootConfig": {
        "MyName": "Davide",
        "Nested": {
          "Skip": 2,
          "Limit": 3
        }
      }
    }
    

    With this file, all the fields within the RootConfig element will be mapped to the MyRootConfig class at startup. That object can then be returned using the /config endpoint.

    Running the application (using Visual Studio or the dotnet CLI) you will be able to call that endpoint and see the expected result.

    Configuration results from plain Appsettings file

    Use environment-specific appsettings.json

    Now, you probably know that you can use other appsettings files with a name such as appsettings.Development.json.

    appsettings.Development file

    With that file, you can override specific configurations using the same structure, but ignoring all the configs that don’t need to be changed.

    Let’s update the Limit field defined in the “base” appsettings. You don’t need to recreate the whole structure just for one key; you can use this JSON instead:

    {
      "RootConfig": {
        "Nested": {
          "Limit": 9
        }
      }
    }
    

    Now, if we run the application using VS we will see this result:

    The key defined in the appsettings.Development.json file is replaced in the final result

    Ok, but what made .NET understand that I wanted to use that file? It’s a matter of Environment variables and Launch profiles.

    How to define profiles within the launchSettings.json file

    Within the Properties folder in your project, you can see a launchSettings.json file. As you might expect, that file describes how you can launch the application.

    launchSettings file location in the solution

    Here we have some Launch profiles, and each of them specifies an ASPNETCORE_ENVIRONMENT variable. By default, its value is set to Development.

    "profiles": {
        "HowToSetConfigurations": {
          "commandName": "Project",
          "dotnetRunMessages": true,
          "launchBrowser": true,
          "launchUrl": "config",
          "applicationUrl": "https://localhost:7280;http://localhost:5280",
          "environmentVariables": {
            "ASPNETCORE_ENVIRONMENT": "Development"
          }
        },
    }
    

    Now, recall that the environment-specific appsettings file name is defined as appsettings.{Environment}.json. Therefore, by running your application with Visual Studio using the HowToSetConfigurations launch profile, you’re gonna replace that {Environment} with Development, thus using the appsettings.Development.json.

    It goes without saying that you can use every value you prefer – such as Staging, MyCustomEnvironmentName, and so on.

    How to define the current Environment with the CLI

    If you are using the dotnet CLI you can set that environment variable as

    dotnet run --ASPNETCORE_ENVIRONMENT=Development
    

    or, in a simpler way, you can use

    dotnet run --environment Development
    

    and get the same result.

    How do nested configurations get resolved?

    As we’ve seen in a previous article, even if we are using configurations defined in a hierarchical structure, in the end, they are transformed into key-value pairs.

    The Limit key as defined here:

    {
      "RootConfig": {
        "Nested": {
          "Limit": 9
        }
      }
    }
    

    is transformed into

    {
        "Key": "RootConfig:Nested:Limit",
        "Value": "9"
    },
    

    with the : separator. We will use this info shortly.

    Define configurations in the launchSettings file

    As we’ve seen before, each profile defined in the launchSettings file describes a list of environment variables:

    "environmentVariables": {
      "ASPNETCORE_ENVIRONMENT": "Development"
    }
    

    This means that we can also define our configurations here, and have them loaded when using this specific profile.

    From these configurations

    "RootConfig": {
        "MyName": "Davide",
        "Nested": {
          "Skip": 2,
          "Limit": 3
        }
      }
    

    I want to update the MyName field.

    I can then update the current profile as such:

    "environmentVariables": {
      "ASPNETCORE_ENVIRONMENT": "Development",
      "RootConfig:MyName": "Mr Bellone"
    }
    

    so that, when I run the application using that profile, I will get this result:

    The RootConfig:MyName is replaced, its value is taken from the launchSettings file

    Have you noticed the key RootConfig:MyName? 😉

    🔎 Notice that now we have both MyName = Mr Bellone, as defined in the launchSettings file, and Limit = 9, since we’re still using the appsettings.Development.json file (because of that “ASPNETCORE_ENVIRONMENT”: “Development”).

    How to define the current profile with the CLI

    Clearly, we can use the dotnet CLI to load the whole environment profile. We just need to specify it using the --launch-profile flag:

    dotnet run --launch-profile=HowToSetConfigurations
    

    Define application settings using the dotnet CLI

    Lastly, we can specify config values directly using the CLI.

    It’s just a matter of specifying the key-value pairs as such:

    dotnet run --RootConfig:Nested:Skip=55
    

    And – TAH-DAH! – you will see this result:

    JSON result with the key specified on the CLI

    ❓ A question for you! Notice that, even though I specified only the Skip value, both Limit and MyName have the value defined before. Do you know why it happens? Drop a message below if you know the answer! 📩

    Further readings

    As always, there’s more!

    If you want to know more about how .NET APIs load and start, you should have a look at this page:

    🔗 ASP.NET Core Web Host | Microsoft Docs

    Ok, now you know different approaches for setting configurations.
    How do you know the exact values that are set in your application?

    🔗 The 2 secret endpoints I create in my .NET APIs | Code4IT

    This article first appeared on Code4IT

    Wrapping up

    Ok then, in this article we’ve seen different approaches you can use to define configurations in your .NET API projects.

    Knowing what you can do with the CLI can be helpful especially when using CI/CD, in case you need to run the application using specific keys.

    Do you know any other ways to define configs?

    Happy coding!

    🐧




  • How to deploy .NET APIs on Azure using GitHub actions | Code4IT

    How to deploy .NET APIs on Azure using GitHub actions | Code4IT


    Building APIs with .NET is easy. Deploying them on Azure is easy too, with GitHub Actions!


    With Continuous Delivery (CD), you can deploy your code in a fast-paced and stable way.

    To deploy applications, you’ll need workflows that run and automate the process. In that way, you don’t have to perform repetitive tasks and the whole process becomes less error-prone.

    In this article, we will learn how to implement CD pipelines using GitHub Actions. In particular, we will focus on the case of a .NET API application that will be deployed on Azure.

    Create a .NET API project

    Since the focus of this article is on the deployment part, we won’t create complex APIs. Just a simple Hello World is enough.

    To do that, we’re gonna use dotnet Minimal API – a way to create APIs without scaffolding lots of files and configurations.

    Our API, the BooksAPI, has a single endpoint: /, the root, which simply returns “Hello World!”.

    All our code is stored in the Program file:

    var builder = WebApplication.CreateBuilder(args);
    
    var app = builder.Build();
    
    app.UseHttpsRedirection();
    
    app.MapGet("/", () => "Hello World!");
    
    app.Run();
    

    Nothing fancy: run the application locally, and navigate to the root. You will see the Hello World message.

    Lastly, put your code on GitHub: initialize a repository and publish it on GitHub – it can either be a public or a private repository.

    Create an App Service on Azure

    Now, to deploy an application, we need to define its destination. We’re going to deploy it on Azure, so you need an Azure account before moving on.

    Open the Azure Portal, navigate to the App Service section, and create a new one.

    Configure it as you wish, and then proceed until you have it up and running.

    Once everything is done, you should have something like this:

    Azure App Service overview

    Now the application is ready to be used: we now need to deploy our code here.

    Generate the GitHub Action YAML file for deploying .NET APIs on Azure

    It’s time to create our Continuous Delivery pipeline.

    Luckily, GitHub already provides lots of templates for GitHub Actions. We will need one specific for our .NET APIs.

    On GitHub, navigate to your repository, head to the Actions menu, and select New workflow.

    New Workflow button on GitHub

    You will see several predefined actions that allow you to do stuff with your repository. We are now interested in the one called “Deploy a .NET Core app to an Azure Web App”:

    Template for deploying the .NET Application on Azure

    Clicking on “Configure” you will see a template. Read carefully the instructions, as they will guide you to the correct configuration of the GitHub action.

    In particular, you will have to update the environment variables specified in this section:

    env:
      AZURE_WEBAPP_NAME: your-app-name # set this to the name of your Azure Web App
      AZURE_WEBAPP_PACKAGE_PATH: "." # set this to the path to your web app project, defaults to the repository root
      DOTNET_VERSION: "5" # set this to the .NET Core version to use
    

    Clearly, AZURE_WEBAPP_NAME must match the name you’ve defined on Azure, while DOTNET_VERSION must match the version you’re using to create your dotnet APIs.

    For my specific project, I’ve replaced that section with

    env:
      AZURE_WEBAPP_NAME: BooksAPI<myName> # set this to the name of your Azure Web App
      AZURE_WEBAPP_PACKAGE_PATH: "." # set this to the path to your web app project, defaults to the repository root
      DOTNET_VERSION: "6.0" # set this to the .NET Core version to use
    

    🟧 DOTNET_VERSION also requires the minor version of dotnet. Setting 6 will not work: you need to specify 6.0. 🟧

    Now you can save your YAML file in your repository: it will be saved under ./.github/workflows.

    So, as a reference, here’s the full YAML file I’m using to deploy my APIs:

    name: Build and deploy ASP.Net Core app to an Azure Web App
    
    env:
      AZURE_WEBAPP_NAME: BooksAPI<myName>
      AZURE_WEBAPP_PACKAGE_PATH: "."
      DOTNET_VERSION: "6.0"
    
    on:
      push:
        branches: ["master"]
      workflow_dispatch:
    
    permissions:
      contents: read
    
    jobs:
      build:
        runs-on: ubuntu-latest
    
        steps:
          - uses: actions/checkout@v3
    
          - name: Set up .NET Core
            uses: actions/setup-dotnet@v2
            with:
              dotnet-version: ${{ env.DOTNET_VERSION }}
    
          - name: Set up dependency caching for faster builds
            uses: actions/cache@v3
            with:
              path: ~/.nuget/packages
              key: ${{ runner.os }}-nuget-${{ hashFiles('**/packages.lock.json') }}
              restore-keys: |
                ${{ runner.os }}-nuget-
    
          - name: Build with dotnet
            run: dotnet build --configuration Release
    
          - name: dotnet publish
            run: dotnet publish -c Release -o ${{env.DOTNET_ROOT}}/myapp
    
          - name: Upload artifact for deployment job
            uses: actions/upload-artifact@v3
            with:
              name: .net-app
              path: ${{env.DOTNET_ROOT}}/myapp
    
      deploy:
        permissions:
          contents: none
        runs-on: ubuntu-latest
        needs: build
        environment:
          name: "Development"
          url: ${{ steps.deploy-to-webapp.outputs.webapp-url }}
    
        steps:
          - name: Download artifact from build job
            uses: actions/download-artifact@v3
            with:
              name: .net-app
    
          - name: Deploy to Azure Web App
            id: deploy-to-webapp
            uses: azure/webapps-deploy@v2
            with:
              app-name: ${{ env.AZURE_WEBAPP_NAME }}
              publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
              package: ${{ env.AZURE_WEBAPP_PACKAGE_PATH }}
    

    As you can see, we have 2 distinct jobs: build and deploy.

    In the build phase, we check out our code, restore the NuGet dependencies, build the project, pack it and store the final result as an artifact.

    In the deploy job, we retrieve the newly created artifact and publish it on Azure.

    Store the Publish profile as GitHub Secret

    As you can see in the instructions of the workflow file, you have to

    Create a secret in your repository named AZURE_WEBAPP_PUBLISH_PROFILE, paste the publish profile contents as the value of the secret.

    That Create a secret in your repository named AZURE_WEBAPP_PUBLISH_PROFILE statement was not clear to me: I thought you had to create that key within your .NET project. Turns out you can create secrets related to repositories on GitHub (so, it’s language-agnostic).

    A Publish profile is a file that contains information and settings used to deploy applications to Azure. It’s nothing but an XML file that lists the possible ways to deploy your application, such as FTP, Web Deploy, Zip Deploy, and so on.

    We have to get our publish profile and save it into GitHub secrets.

    To retrieve the Publish profile, head to the Azure App Service page and click Get publish profile to download the file.

    Get Publish Profile button on Azure Portal

    Now, get back to GitHub and head to Settings > Security > Secrets > Actions.

    Here you can create a new secret related to your repository.

    Create a new one, name it AZURE_WEBAPP_PUBLISH_PROFILE, and paste the content of the Publish profile file you’ve just downloaded.

    You will then see something like this:

    GitHub secret for Publish profile

    Notice that the secret name must be AZURE_WEBAPP_PUBLISH_PROFILE. That constraint is set because we are accessing the Publish profile by key:

    - name: Deploy to Azure Web App
        id: deploy-to-webapp
        uses: azure/webapps-deploy@v2
        with:
            app-name: ${{ env.AZURE_WEBAPP_NAME }}
            publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
            package: ${{ env.AZURE_WEBAPP_PACKAGE_PATH }}
    

    In particular, notice the publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }} part.

    Clearly, the two names must match: nothing stops you from changing the name of the secret in both the YAML file and the GitHub Secret page.

    Final result

    It’s time to see the final result.

    Update the application code (I’ve slightly modified the Hello world message), and push your changes to GitHub.

    Under the Actions tab, you will see your CD pipeline run.

    CD workflow run

    Once it’s completed, you can head to your application root and see the final result.

    Final result of the API

    Further readings

    Automating repetitive tasks allows you to perform more actions with fewer errors. Generally speaking, the more stuff you can automate, the better.

    My own blog heavily relies on automation: scaffolding content, tracking ideas, and publishing online…

    If you want to peek at what I do, here are my little secrets:

    🔗 From idea to publishing, and beyond: how I automated my blogging workflow with GitHub, PowerShell, and Azure | Code4IT

    In this article, we’ve only built and deployed our application. We can do more: run tests and keep track of code coverage. If you want to learn how you can do it using Azure DevOps, here we go:

    🔗 Cobertura, YAML, and Code Coverage Protector: how to view Code Coverage report on Azure DevOps | Code4IT

    This article first appeared on Code4IT 🐧

    Wrapping up

    I have to admit that I struggled a lot in setting up the CD pipeline. I was using the one proposed by default on Visual Studio – but it didn’t work.

    Using the template found on GitHub worked almost instantly – I just had to figure out what they meant by repository secrets.

    Now we have everything in place. Since the workflow is stored in a text file within my repository, if I have to create and deploy a new API project I can simply do that by copying that file and fixing the references.

    Nice and easy, right? 😉

    Happy coding!

    🐧




  • Book Review: C# 11 and .NET 7

    Book Review: C# 11 and .NET 7


    It’s time to review this book!

    • Book title: C# 11 and .NET 7 – Modern Cross-Platform Development Fundamentals
    • Author(s): Mark J. Price (LinkedIn, Twitter)
    • Published year: 2022
    • Publisher: Packt
    • Links: Amazon, Packt

    What can we say?

    “C# 11 and .NET 7 – Modern Cross-Platform Development Fundamentals” is a HUGE book – ~750 pages – that guides readers from the very basics of C# and dotnet to advanced topics and approaches.

    This book starts from the very beginning, explaining the history of C# and .NET, then moving to C# syntax and exploring OOP topics.

    If you already have some experience with C#, you might be tempted to skip those chapters. Don’t skip them! Yes, they’re oriented to newbies, but you’ll find some gems that you might find interesting or that you might have ignored before.

    Then, things get really interesting: some of my favourite topics were:

    • how to build and distribute packages;
    • how to publish Console Apps;
    • Entity Framework (which I used in the past, before ASP.NET Core, so it was an excellent way to see how things evolved);
    • Blazor

    What I liked

    • the content is well explained;
    • you have access to the code examples to follow along (also, the author explains how to install and configure everything necessary to follow along, such as SQLite when talking about Entity Framework)
    • it also teaches you how to use Visual Studio

    What I did not like

    • in the printed version, some images are not that readable
    • experienced developers might find some chapters boring (well, they’re not the target audience of the book, so it makes sense 🤷‍♂️ )

    This article first appeared on Code4IT 🐧

    Wrapping up

    Have you read it? What do you think of this book?

    I hope you enjoyed this article! Let’s keep in touch on LinkedIn or Twitter! 🤜🤛

    Happy coding!

    🐧




