
  • How to run SonarQube analysis locally with Docker | Code4IT



    The quality of a project can be measured by having a look at how the code is written. SonarQube can help you by running static code analysis and letting you spot the pain points. Let’s learn how to install and run it locally with Docker.


    Code quality is important, and having the right tool can be terribly beneficial for an application’s long-term success.

    Although maintainability problems often come from module separation and cannot be solved by making a single class cleaner, a tool like SonarQube can pave the way to a cleaner codebase.

    In this article, we will learn how to download and install SonarQube Community using Docker. We will see how to configure it and run your very first code analysis on a .NET-based application.

    Scaffold a dummy ASP.NET Core API project

    To try it out, you need – of course! – a repository to analyse.

    In this article, I will set up SonarQube to analyse a tiny, dummy ASP.NET Core API project. You are probably already familiar with this API project: it’s the default one created by Visual Studio – the one with the Weather Forecast.

    I chose to use Controllers instead of Minimal APIs so that we could analyse some more code.

    Have a look at the code: you will notice that the default implementation of the WeatherForecastController injects an instance of ILogger, stores it, and then never references it in other places. This sounds like a good maintainability issue that SonarQube should be able to identify.
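
    For reference, the relevant part of that default controller looks roughly like this (a simplified sketch; the exact template varies slightly between .NET versions, and the Summaries array is omitted):

    [ApiController]
    [Route("[controller]")]
    public class WeatherForecastController : ControllerBase
    {
        // The logger is injected and stored, but never used afterwards:
        // this is exactly the kind of code smell we expect SonarQube to flag.
        private readonly ILogger<WeatherForecastController> _logger;

        public WeatherForecastController(ILogger<WeatherForecastController> logger)
        {
            _logger = logger;
        }

        [HttpGet(Name = "GetWeatherForecast")]
        public IEnumerable<WeatherForecast> Get()
        {
            return Enumerable.Range(1, 5).Select(index => new WeatherForecast
            {
                Date = DateOnly.FromDateTime(DateTime.Now.AddDays(index)),
                TemperatureC = Random.Shared.Next(-20, 55),
                Summary = "Sunny"
            })
            .ToArray();
        }
    }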

    To better locate which files SonarQube is creating, I decided to put this project under source control, but only locally. This way, when we run the SonarQube analysis, we will be able to see the files created and modified by SonarQube.

    Clearly, the first step is to have SonarQube installed on your machine.

    I’m going to install SonarQube Community Build. It contains almost all the functionalities of SonarQube, and it’s available for free (of course, to have additional functionalities, you have to pick the proper pricing tier).

    🔗 SonarQube Community Build

    SonarQube Community Build can be installed via Docker: this way, SonarQube can run in a containerised environment, regardless of your Operating System.

    To do that, you can run the following command:

    docker run --name sonarqube-community -p 9001:9000 sonarqube:community
    

    This Docker command downloads the latest version of the sonarqube:community Docker Image, and runs it locally, making it available at localhost:9001.

    As briefly explained in an old article, the -p 9001:9000 part of the CLI command means that you are exposing the port 9000 of the “inner” container to the world via the port 9001 of the host.

    Once the command has finished downloading all the dependencies and loading all the resources, you will be able to access SonarQube on localhost:9001.

    You will be asked to log in: the default username is admin, and the password is (again) admin.

    SonarQube login form

    After the first login, you will be asked to change your password.

    Create a SonarQube Project

    It’s time to link SonarQube to your repository.

    To do that, you have to create a so-called Project. Ideally, you may want to integrate SonarQube into your CI pipeline, but having it run locally is fine for trying it out.

    So, on the Projects page, you can create a new project. Click on “Create a local project” and follow the wizard.

    “Create a local project” button

    First, create a new Project by defining the Display name (in my case, code4it-sonarqube-local) and the project key (code4it-sonarqube-local-project-key). The Project Key is used in the command line to execute the code analysis using the rules defined in this project.

    Also, you have to specify the name of the branch that you will be using as a baseline: generally, it’s either “main” or “master”, but it can be anything.

    Create new project Form

    Follow the wizard, choosing some configurations (I suggest you start with the default values), and you’ll end up with a Project ready to be initialised.

    SonarQube wizard: choose analysis method

    Then, you will have to generate a token to run the analysis (I know, it feels like there are too many similar steps. But bear with me; we’re almost ready to run the analysis).

    Generate the Token

    By hitting the “generate” button, you’ll see a new token like this: sqp_fd71f97760c84539b579713f18a07c790432cfe8. Remember to store it somewhere, as you’re going to use it later.

    The last step is to make sure that you have sonarscanner available as a .NET Core Global Tool on your machine.

    Just open a terminal as an administrator and run:

    dotnet tool install --global dotnet-sonarscanner
    

    Run the SonarQube analysis on your local repository

    Finally, we are ready to run the first analysis of the code!

    I suggest you commit all your changes so that you’ll see the files generated by SonarQube.

    Open a Terminal, navigate to the root of the Solution, and follow these steps.

    Prepare the SonarQube analysis

    You first have to instruct SonarQube about the configurations to be used for the current analysis.

    The command to run is something like this:

    dotnet sonarscanner begin /k:"<your key here>" /d:sonar.host.url="<your-host-root-url>"  /d:sonar.token="<your-project-token>"
    

    For my specific execution context, using the values you can see in this article, I have to run the command with the following parameters:

    dotnet sonarscanner begin /k:"code4it-sonarqube-local-project-key" /d:sonar.host.url="http://localhost:9001"  /d:sonar.token="sqp_fd71f97760c84539b579713f18a07c790432cfe8"
    

    The flags represent the configurations of SonarQube:

    /k is the Project Key, as defined before: it contains the rules to be used;
    /d:sonar.host.url is the url that will receive the result of the analysis, allowing SonarQube to aggregate the issues and display them on a UI;
    /d:sonar.token is the Token you created before.

    After the command completes, you’ll see that SonarQube created some files to prepare the code analysis. These files contain all the rules under code analysis and their related severity.

    SonarQube files generated after initialization

    From now on, SonarQube will be able to run the analysis and understand how to treat each issue.

    Build the solution

    Now you have to build the whole solution, running:
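
    dotnet build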

    You can, of course, choose to run the command specifying the solution file to build.

    Even if it seems trivial, this step is crucial for SonarQube: in fact, it generates some new metadata files that list all the files that have to be taken into account when running the analysis, as well as the path to the output folder:

    Files generated by SonarQube after the build

    Run the actual SonarQube analysis

    Finally, it’s time to run the actual analysis.

    Again, head to the root of the application, and on a terminal run the following command:

    dotnet sonarscanner end /d:sonar.token="<your-token>"
    

    In my case, the full command is

    dotnet sonarscanner end /d:sonar.token="sqp_fd71f97760c84539b579713f18a07c790432cfe8"
    

    The analysis time depends on the size of the project: for this simple project, it took 7 seconds; for a huge project I worked on, it took almost 2 hours.

    Also, the run time depends on the amount of new code to be analyzed: the very first run is the slowest one, and then all the subsequent analyses will focus on the latest code. In fact, most of the issues are stored in a cache.

    No new files are created, as the result is directly sent to the SonarQube server.

    The result is now available at localhost!

    Open a browser, open the website at the port you defined before, and get ready to navigate the status of the static analysis.

    SonarQube analysis overview

    As I was expecting, the project passed the so-called Quality Gates – the minimum level set to consider a project “good”.

    Yet, as you can see under the “Issues” tab, there are actually two issues. For example, there’s a suggested improvement telling us to remove the _logger field because it is never used:

    SonarQube issue details

    Of course, in a more complex project, you’ll find more issues, with different severity.

    Further readings

    This article first appeared on Code4IT 🐧

    In this article, I assumed you know the basics of Docker. If not, or if you want to brush up on them, here’s an article for you.

    🔗 First steps with Docker: download and run MongoDB locally | Code4IT

    All in all, remember that having clean code is only one of the concerns you should care about when writing code. But what should you really focus on?

    🔗 Code opinion: performance or clean code?

    Wrapping up

    SonarQube is a tool, not the solution to your problems.

    Just like with Code Coverage, having your code without SonarQube issues does not mean that your code is future-proof and maintainable.

    Maybe the single line of code or the single class has no issues. However, the code may still be a mess, preventing you from applying changes easily.

    I hope you enjoyed this article! Let’s keep in touch on LinkedIn, Twitter or BlueSky! 🤜🤛

    Happy coding!

    🐧





    Source link

  • Why reaching 100% Code Coverage must NOT be your testing goal (with examples in C#) | Code4IT



    Average teams aim at 100% Code Coverage just to reach the number. Great teams don’t. Why?


    Code Coverage is a valuable metric in software development, especially when it comes to testing. It provides insights into how much of your codebase is exercised by your test suite.

    However, we must recognize that Code Coverage alone should not be the ultimate goal of your testing strategy. It has some known limitations, and 100% Code Coverage does not guarantee that your code is bug-free.

    In this article, we’ll explore why Code Coverage matters, its limitations, and how to balance achieving high coverage and effective testing. We’ll use C# to demonstrate when Code Coverage works well and how you can cheat on the result.

    What Is Code Coverage?

    Code Coverage measures the percentage of code lines, branches, or statements executed during testing. It helps answer questions like:

    • How much of my code is tested?
    • Are there any untested paths or dead code?
    • Which parts of the application need additional test coverage?

    In C#, tools like Cobertura, dotCover, and Visual Studio’s built-in coverage analysis provide Code Coverage reports.

    You may be tempted to think that the higher the coverage, the better the quality of your tests. However, we will soon demonstrate why this assumption is misleading.

    Why Code Coverage Matters

    Clearly, if you write valuable tests, Code Coverage is a great ally.

    A high value of Code Coverage helps you with:

    1. Risk mitigation: High Code Coverage reduces the risk of undiscovered defects. If a piece of code isn’t covered, it is more likely to hide undetected bugs.
    2. Preventing regressions: code is destined to evolve over time. If you ensure that most of your code is covered by tests, whenever you add more code you will discover which parts of the existing system are impacted by your changes. If you update the production code and no test fails, it might be a bad sign: you probably need to cover the code you are modifying with enough tests.
    3. Quality assurance: Code Coverage ensures that critical parts of your application are tested thoroughly. Good tests focus on the functional aspects of the code (what) rather than on the technical aspects (how). A good test suite is a safety net against regressions.
    4. Guidance for Testing Efforts: Code Coverage highlights areas that need more attention. It guides developers in writing additional tests where necessary.

    The Limitations of Code Coverage

    While Code Coverage is valuable, it has limitations:

    1. False Sense of Security: Achieving 100% coverage doesn’t guarantee bug-free software. It’s possible to have well-covered code that still contains subtle defects. This is especially true when mocking dependencies.
    2. Focus on Lines, Not Behavior: Code Coverage doesn’t consider the quality of the tests. It doesn’t guarantee that the tests cover all possible scenarios.
    3. Ignored Edge Cases: Some code paths (exception handling, rare conditions) are complex to cover. High coverage doesn’t necessarily mean thorough testing.

    3 Practical reasons why Code Coverage percentage can be misleading

    For the sake of this article, I’ve created a dummy .NET API project with the typical three layers: controller, service, and repository.

    It contains a Controller with two endpoints:

    [ApiController]
    [Route("[controller]")]
    public class UniversalWeatherForecastController : ControllerBase
    {
        private readonly IWeatherService _weatherService;
    
        public UniversalWeatherForecastController(IWeatherService weatherService)
        {
            _weatherService = weatherService;
        }
    
        [HttpGet]
        public IEnumerable<Weather> Get(int locationId)
        {
            var forecast = _weatherService.ForecastsByLocation(locationId);
            return forecast.ToList();
        }
    
        [HttpGet("minByPlanet")]
        public Weather GetMinByPlanet(Planet planet)
        {
            return _weatherService.MinTemperatureForPlanet(planet);
        }
    }
    

    The Controller uses the Service:

    public class WeatherService : IWeatherService
    {
        private readonly IWeatherForecastRepository _repository;
    
        public WeatherService(IWeatherForecastRepository repository)
        {
            _repository = repository;
        }
    
        public IEnumerable<Weather> ForecastsByLocation(int locationId)
        {
            ArgumentOutOfRangeException.ThrowIfLessThanOrEqual(locationId, 0);
    
            Location? searchedLocation = _repository.GetLocationById(locationId);
    
            if (searchedLocation == null)
                throw new LocationNotFoundException(locationId);
    
            return searchedLocation.WeatherForecasts;
        }
    
        public Weather MinTemperatureForPlanet(Planet planet)
        {
            var allCitiesInPlanet = _repository.GetLocationsByPlanet(planet);
            int minTemperature = int.MaxValue;
            Weather minWeather = null;
            foreach (var city in allCitiesInPlanet)
            {
                int temperature =
                    city.WeatherForecasts.MinBy(c => c.TemperatureC).TemperatureC;
    
                if (temperature < minTemperature)
                {
                    minTemperature = temperature;
                    minWeather = city.WeatherForecasts.MinBy(c => c.TemperatureC);
                }
            }
            return minWeather;
        }
    }
    

    Finally, the Service calls the Repository, omitted for brevity (it’s just a bunch of items in an in-memory List).

    I then created an NUnit test project to generate the unit tests, focusing on the WeatherService:

    
    public class WeatherServiceTests
    {
        private readonly Mock<IWeatherForecastRepository> _mockRepository;
        private WeatherService _sut;
    
        public WeatherServiceTests() => _mockRepository = new Mock<IWeatherForecastRepository>();
    
        [SetUp]
        public void Setup() => _sut = new WeatherService(_mockRepository.Object);
    
        [TearDown]
        public void Teardown() =>_mockRepository.Reset();
    
        // Tests
    
    }
    

    This class covers two cases, both related to the ForecastsByLocation method of the Service.

    Case 1: when the location exists in the repository, this method must return the related info.

    [Test]
    public void ForecastByLocation_Should_ReturnForecast_When_LocationExists()
    {
        //Arrange
        var forecast = new List<Weather>
            {
                new Weather{
                    Date = DateOnly.FromDateTime(DateTime.Now.AddDays(1)),
                    Summary = "sunny",
                    TemperatureC = 30
                }
            };
    
        var location = new Location
        {
            Id = 1,
            WeatherForecasts = forecast
        };
    
        _mockRepository.Setup(r => r.GetLocationById(1)).Returns(location);
    
        //Act
        var resultForecast = _sut.ForecastsByLocation(1);
    
        //Assert
        CollectionAssert.AreEquivalent(forecast, resultForecast);
    }
    

    Case 2: when the location does not exist in the repository, the method should throw a LocationNotFoundException.

    [Test]
    public void ForecastByLocation_Should_Throw_When_LocationDoesNotExists()
    {
        //Arrange
        _mockRepository.Setup(r => r.GetLocationById(1)).Returns<Location?>(null);
    
        //Act + Assert
        Assert.Catch<LocationNotFoundException>(() => _sut.ForecastsByLocation(1));
    }
    

    We then can run the Code Coverage report and see the result:

    Initial Code Coverage

    Tests cover 16% of lines and 25% of branches, as shown in the report displayed above.

    Delving into the details of the WeatherService class, we can see that we have reached 100% Code Coverage for the ForecastsByLocation method.

    Code Coverage Details for the Service

    Can we assume that the method is bug-free? Not at all!

    Not all cases may be covered by tests

    Let’s review the method under test.

    public IEnumerable<Weather> ForecastsByLocation(int locationId)
    {
        ArgumentOutOfRangeException.ThrowIfLessThanOrEqual(locationId, 0);
    
        Location? searchedLocation = _repository.GetLocationById(locationId);
    
        if (searchedLocation == null)
            throw new LocationNotFoundException(locationId);
    
        return searchedLocation.WeatherForecasts;
    }
    

    Our tests only covered two cases:

    • the location exists;
    • the location does not exist.

    However, these tests do not cover the following cases:

    • the locationId is less than zero;
    • the locationId is exactly zero (are we sure that 0 is an invalid locationId?)
    • the _repository throws an exception (right now, that exception is not handled);
    • the location does exist, but it has no weather forecast info; is this a valid result? Or should we have thrown another custom exception?

    So, well, we have 100% Code Coverage for this method, yet we have plenty of uncovered cases.
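
    For instance, here is a minimal sketch (my own addition, not from the original article) of a test covering the negative-id case, reusing the same NUnit + Moq setup shown above:

    [Test]
    public void ForecastByLocation_Should_Throw_When_LocationIdIsNegative()
    {
        // No repository setup is needed: the guard clause should reject the input
        // before the repository is ever called.
        Assert.Catch<ArgumentOutOfRangeException>(() => _sut.ForecastsByLocation(-1));
    }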

    You can cheat on the result by adding pointless tests

    There’s a simple way to have high Code Coverage without worrying about the quality of the tests: calling the methods and ignoring the result.

    To demonstrate it, we can create one single test method to reach 100% coverage for the Repository, without even knowing what it actually does:

    public class WeatherForecastRepositoryTests
    {
        private readonly WeatherForecastRepository _sut;
    
        public WeatherForecastRepositoryTests() =>
            _sut = new WeatherForecastRepository();
    
        [Test]
        public void TotallyUselessTest()
        {
            _ = _sut.GetLocationById(1);
            _ = _sut.GetLocationsByPlanet(Planet.Jupiter);
    
            Assert.That(1, Is.EqualTo(1));
        }
    }
    

    Here we are: we have reached 53% of total Code Coverage by adding one single test, which does not provide any value!

    We reached 53% Code Coverage without adding useful methods

    As you can see, in fact, the WeatherForecastRepository has now reached 100% Code Coverage.

    The whole class has 100% Code Coverage, even without useful tests

    Great job! Or is it?

    You can cheat by excluding parts of the code

    In C# there is a handy attribute that you can apply to methods and classes: ExcludeFromCodeCoverage.

    While this attribute can be useful for classes that you cannot test, it can be used to inflate the Code Coverage percentage by applying it to classes and methods you don’t want to test (maybe because you are lazy?).

    We can, in fact, add that attribute to every single class like this:

    
    [ApiController]
    [Route("[controller]")]
    [ExcludeFromCodeCoverage]
    public class UniversalWeatherForecastController : ControllerBase
    {
        // omitted
    }
    
    [ExcludeFromCodeCoverage]
    public class WeatherService : IWeatherService
    {
        // omitted
    }
    
    [ExcludeFromCodeCoverage]
    public class WeatherForecastRepository : IWeatherForecastRepository
    {
        // omitted
    }
    

    You can then add the same attribute to all the other classes – even the Program class! – to reach 100% Code Coverage without writing lots of tests.

    100% Code Coverage, but without any test

    Note: to reach 100% I had to exclude everything but the tests on the Repository: otherwise, if I had exactly zero methods under test, the final Code Coverage would’ve been 0.

    Beyond Code Coverage: Effective Testing Strategies

    As we saw, high Code Coverage is not enough. It’s a good starting point, but it must not be the final goal.

    We can, indeed, focus our efforts in different areas:

    1. Test Quality: Prioritize writing meaningful tests over chasing high coverage. Focus on edge cases, boundary values, and scenarios that matter to users.
    2. Exploratory Testing: Manual testing complements automated tests. Exploratory testing uncovers issues that automated tests might miss.
    3. Mutation Testing: Instead of just measuring coverage, consider mutation testing. It introduces artificial defects and checks if tests catch them.

    Finally, my suggestion is to focus on integration tests rather than on unit tests: this testing strategy is called Testing Diamond.

    Further readings

    To generate Code Coverage reports, I used Coverlet, as I explained in this article (which refers to Visual Studio 2019, but the steps are still valid with newer versions).

    🔗 How to view Code Coverage with Coverlet and Visual Studio | Code4IT

    In my opinion, we should not focus all our efforts on Unit Tests. On the contrary, we should write more Integration Tests to ensure that the functionality, as a whole, works correctly.

    This way of defining tests is called Testing Diamond, and I explained it here:

    🔗 Testing Pyramid vs Testing Diamond (and how they affect Code Coverage)

    This article first appeared on Code4IT 🐧

    Finally, I talked about Code Coverage on YouTube as a guest on the Visual Studio Toolbox channel. Check it out here!

    https://www.youtube.com/watch?v=R80G3LJ6ZWc

    Wrapping up

    Code Coverage is a useful metric but should not be the end goal. Aim for a balance: maintain good coverage while ensuring effective testing. Remember that quality matters more than mere numbers. Happy testing! 🚀

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧





    Source link

  • Getting started with Load testing with K6 on Windows 11 | Code4IT



    Can your system withstand heavy loads? You can answer this question by running Load Tests. Maybe, using K6 as a free tool.


    Understanding how your system reacts to incoming network traffic is crucial to determining whether it’s stable, able to meet the expected SLO, and if the underlying infrastructure and architecture are fine.

    How can we simulate many incoming requests? How can we harvest the results of our API calls?

    In this article, we will learn how to use K6 to run load tests and display the final result locally in Windows 11.

    This article will be the foundation of future content, in which I’ll explore more topics related to load testing, performance tips, and more.

    What is Load Testing?

    Load testing simulates real-world usage conditions to ensure the software can handle high traffic without compromising performance or user experience.

    The importance of load testing lies in its ability to identify bottlenecks and weak points in the system that could lead to slow response times, errors, or crashes when under stress.

    By conducting load testing, developers can make necessary optimizations and improvements, ensuring the software is robust, reliable, and scalable. It’s an essential step in delivering a quality product that meets user expectations and maintains business continuity during peak usage times. If you think about it, a system unable to handle the incoming traffic may entirely or partially fail, leading to user dissatisfaction, loss of revenue, and damage to the company’s reputation.

    Ideally, you should plan to have automatic load tests in place in your Continuous Delivery pipelines, or, at least, ensure that you run Load tests in your production environment now and then. You then want to compare the test results with the previous ones to ensure that you haven’t introduced bottlenecks in the last releases.

    The demo project

    For the sake of this article, I created a simple .NET API project: it exposes just one endpoint, /randombook, which returns info about a random book stored in an in-memory Entity Framework DB context.

    int requestCount = 0;
    int concurrentExecutions = 0;
    object _lock = new();
    app.MapGet("/randombook", async (CancellationToken ct) =>
    {
        Book? thisBook = default;
        var delayMs = Random.Shared.Next(10, 10000);
        try
        {
            lock(_lock)
            {
                requestCount++;
                concurrentExecutions++;
                app.Logger.LogInformation("Request {Count}. Concurrent Executions {Executions}. Delay: {DelayMs}ms", requestCount, concurrentExecutions, delayMs);
            }
            using(ApiContext context = new ApiContext())
            {
                await Task.Delay(delayMs);
                if (ct.IsCancellationRequested)
                {
                    app.Logger.LogWarning("Cancellation requested");
                    throw new OperationCanceledException();
                }
                var allbooks = await context.Books.ToArrayAsync(ct);
                thisBook = Random.Shared.GetItems(allbooks, 1).First();
            }
        }
        catch (Exception ex)
        {
            app.Logger.LogError(ex, "An error occurred");
            return Results.Problem(ex.Message);
        }
        finally
        {
            lock(_lock)
            {
                concurrentExecutions--;
            }
        }
        return TypedResults.Ok(thisBook);
    });
    

    There are some details that I want to highlight before moving on with the demo.

    As you can see, I added a random delay to simulate a random RTT (round-trip time) for accessing the database:

    var delayMs = Random.Shared.Next(10, 10000);
    // omit
    await Task.Delay(delayMs);
    

    I then added a thread-safe counter to keep track of the active operations. I increase the value when the request begins, and decrease it when the request completes. The log message is defined in the lock section to avoid concurrency issues.

    lock (_lock)
    {
        requestCount++;
        concurrentExecutions++;
    
        app.Logger.LogInformation("Request {Count}. Concurrent Executions {Executions}. Delay: {DelayMs}ms",
            requestCount,
            concurrentExecutions,
            delayMs
     );
    }
    
    // and then
    
    lock (_lock)
    {
        concurrentExecutions--;
    }
    

    Of course, it’s not a perfect solution: it just fits my need for this article.
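
    As a side note, a lock-free alternative could rely on the Interlocked class. Here is a minimal sketch of that idea (my own code, not part of the original demo):

    using System.Threading;

    public class RequestCounters
    {
        private int _requestCount;
        private int _concurrentExecutions;

        // Interlocked keeps both counters thread-safe without taking a lock on every request.
        public (int Total, int Concurrent) OnRequestStarted() =>
            (Interlocked.Increment(ref _requestCount), Interlocked.Increment(ref _concurrentExecutions));

        public void OnRequestCompleted() => Interlocked.Decrement(ref _concurrentExecutions);
    }

    The trade-off is that the two counters are updated independently, so a log line reading both values may observe them in a slightly inconsistent state.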

    Install and configure K6 on Windows 11

    With K6, you can run the Load Tests by defining the endpoint to call, the number of requests per minute, and some other configurations.

    It’s a free tool, and you can install it using Winget:

    winget install k6 --source winget
    

    You can ensure that you have installed it correctly by opening a Bash (and not a PowerShell) and executing the following command.

    Note: You can actually use PowerShell, but you have to modify some system keys to make K6 recognizable as a command.
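
    k6 --version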

    The --version flag prints the installed version and the id of the Git commit the installed package was built from. For example, you will see k6.exe v0.50.0 (commit/f18209a5e3, go1.21.8, windows/amd64).

    Now, we can initialize the tool. Open a Bash and run the following command:
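
    # Assumption: in recent K6 versions, the scaffolding command is "k6 new"
    k6 new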

    This command generates a script.js file, which you will need to configure in order to set up the Load Testing configurations.

    Here’s the scaffolded file (I removed the comments that refer to parts we are not going to cover in this article):

    import http from "k6/http"
    import { sleep } from "k6"
    
    export const options = {
      // A number specifying the number of VUs to run concurrently.
      vus: 10, // A string specifying the total duration of the test run.
      duration: "30s",
    }
    
    export default function () {
      http.get("https://test.k6.io")
      sleep(1)
    }
    

    Let’s analyze the main parts:

    • vus: 10: VUs are the Virtual Users: they simulate the incoming requests that can be executed concurrently.
    • duration: '30s': this value represents the duration of the whole test run;
    • http.get('https://test.k6.io');: it’s the main function. We are going to call the specified endpoint and keep track of the responses, metrics, timings, and so on;
    • sleep(1): it’s the sleep time between each iteration.

    To run it, you need to call:
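
    k6 run script.js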

    Understanding Virtual Users (VUs) in K6

    VUs, Iterations, Sleep time… how do they work together?

    I updated the script.js file to clarify how K6 works, and how it affects the API calls.

    The new version of the file is this:

    import http from "k6/http"
    import { sleep } from "k6"
    
    export const options = {
      vus: 1,
      duration: "30s",
    }
    
    export default function () {
      http.get("https://localhost:7261/randombook")
      sleep(1)
    }
    

    We are saying “Run the load testing for 30 seconds. I want only ONE execution to exist at a time. After each execution, sleep for 1 second”.

    Make sure to run the API project, and then run k6 run script.js.

    Let’s see what happens:

    1. K6 starts, and immediately calls the API.
    2. On the API side, we can see the first incoming call. Once the response arrives, K6 sleeps for 1 second and then sends the next request.

    By having a look at the logs printed from the application, we can see that we had no more than one concurrent request:

    Logs from 1 VU

    From the result screen, we can see that we have run our application for 30 seconds (plus another 30 seconds for graceful-stop) and that the max number of VUs was set to 1.

    Load Tests results with 1 VU

    Here, you can find the same results as plain text, making it easier to follow.

    execution: local
    script: script.js
    output: -
    
    scenarios: (100.00%) 1 scenario, 1 max VUs, 1m0s max duration (incl. graceful stop):
     * default: 1 looping VUs for 30s (gracefulStop: 30s)
    
    
    data_received..................: 2.8 kB 77 B/s
    data_sent......................: 867 B   24 B/s
    http_req_blocked...............: avg=20.62ms   min=0s       med=0s     max=123.77ms p(90)=61.88ms   p(95)=92.83ms
    http_req_connecting............: avg=316.64µs min=0s       med=0s     max=1.89ms   p(90)=949.95µs p(95)=1.42ms
    http_req_duration..............: avg=4.92s     min=125.65ms med=5.37s max=9.27s     p(90)=8.04s     p(95)=8.66s
    { expected_response:true }...: avg=4.92s     min=125.65ms med=5.37s max=9.27s     p(90)=8.04s     p(95)=8.66s
    http_req_failed................: 0.00%   ✓ 0         ✗ 6
    http_req_receiving.............: avg=1.12ms   min=0s       med=0s     max=6.76ms   p(90)=3.38ms   p(95)=5.07ms
    http_req_sending...............: avg=721.55µs min=0s       med=0s     max=4.32ms   p(90)=2.16ms   p(95)=3.24ms
    http_req_tls_handshaking.......: avg=13.52ms   min=0s       med=0s     max=81.12ms   p(90)=40.56ms   p(95)=60.84ms
    http_req_waiting...............: avg=4.92s     min=125.65ms med=5.37s max=9.27s     p(90)=8.03s     p(95)=8.65s
    http_reqs......................: 6       0.167939/s
    iteration_duration.............: avg=5.95s     min=1.13s     med=6.38s max=10.29s   p(90)=9.11s     p(95)=9.7s
    iterations.....................: 6       0.167939/s
    vus............................: 1       min=1       max=1
    vus_max........................: 1       min=1       max=1
    
    
    running (0m35.7s), 0/1 VUs, 6 complete and 0 interrupted iterations
    default ✓ [======================================] 1 VUs   30s
    

    Now, let me run the same script but update the VUs. We are going to run this configuration:

    export const options = {
      vus: 3,
      duration: "30s",
    }
    

    The result is similar, but this time we performed 16 requests instead of 6. That’s because, as you can see, there were up to 3 concurrent users accessing our APIs.

    Logs from 3 VU

    The final duration was still 30 seconds. However, we managed to serve 3x the users without impacting performance and without returning errors.

    Load Tests results with 3 VU

    Customize Load Testing properties

    We have just covered the surface of what K6 can do. Of course, there are many resources in the official K6 documentation, so I won’t repeat everything here.

    There are some parts, though, that I want to showcase here (so that you can deep dive into the ones you need).

    HTTP verbs

    In the previous examples, we used the get HTTP method. As you can imagine, there are other methods that you can use.

    Each HTTP method has a corresponding Javascript function. For example, we have

    • get() for the GET method
    • post() for the POST method
    • put() for the PUT method
    • del() for the DELETE method.

    Stages

    You can create stages to define the different parts of the execution:

    export const options = {
      stages: [
        { duration: "30s", target: 20 },
        { duration: "1m30s", target: 10 },
        { duration: "20s", target: 0 },
      ],
    }
    

    With the previous example, I defined three stages:

    1. the first one lasts 30 seconds, and brings the load to 20 VUs;
    2. next, during the following 90 seconds, the number of VUs decreases to 10;
    3. finally, in the last 20 seconds, it slowly shuts down the remaining calls.

    Load Tests results with complex Stages

    As you can see from the result, the total duration was 2m20s (which corresponds to the sum of the stages), and the max number of VUs was 20 (the target defined in the first stage).

    Scenarios

    Scenarios allow you to define the details of how request iterations are executed.

    We always use a scenario, even if we don’t create one: in fact, we use the default scenario that gives us a predetermined time for the gracefulStop value, set to 30 seconds.

    We can define custom scenarios to tweak the different parameters used to define how the test should act.

    A scenario is nothing but a JSON element where you define arguments like duration, VUs, and so on.

    By defining a scenario, you can also decide to run tests on the same endpoint but using different behaviours: you can create a scenario for a gradual growth of users, one for an immediate peak, and so on.

    A glimpse to the final report

    Now, we can focus on the meaning of the data returned by the tool.

    Let’s use again the image we saw after running the script with the complex stages:

    Load Tests results with complex Stages

    We can see lots of values whose names are mostly self-explaining.

    We can see, for example, data_received and data_sent, which tell you the size of the data sent and received.

    We have information about the duration and response of HTTP requests (http_req_duration, http_req_sending, http_reqs), as well as information about the several phases of an HTTP connection, like http_req_tls_handshaking.

    We finally have information about the configurations set in K6, such as iterations, vus, and vus_max.

    You can see the average value, the min and max, and some percentiles for most of the values.

    Wrapping up

    K6 is a nice tool for getting started with load testing.

    You can see more examples in the official documentation. I suggest you take some time to explore all the possibilities provided by K6.

    This article first appeared on Code4IT 🐧

    As I said before, this is just the beginning: in future articles, we will use K6 to understand how some technical choices impact the performance of the whole application.

    I hope you enjoyed this article! Let’s keep in touch on LinkedIn or Twitter! 🤜🤛

    Happy coding!





    Source link

  • Path.Combine and Path.Join are similar but way different. | Code4IT




    When you need to compose the path to a folder or file location, you can rely on the Path class. It provides several static methods to create, analyze, and modify strings that represent file system paths.

    Path.Join and Path.Combine look similar, yet they have some important differences that you should know to get the result you are expecting.

    Path.Combine: take from the last absolute path

    Path.Combine concatenates several strings into a single string that represents a file path.

    Path.Combine("C:", "users", "davide");
    // C:\users\davide
    

    However, there’s a tricky behaviour: if any argument other than the first contains an absolute path, all the previous parts are discarded, and the returned string starts with the last absolute path:

    Path.Combine("foo", "C:bar", "baz");
    // C:bar\baz
    
    Path.Combine("foo", "C:bar", "baz", "D:we", "ranl");
    // D:we\ranl
    

    Path.Join: take everything

    Path.Join does not try to return an absolute path, but it just joins the string using the OS path separator:

    Path.Join("C:", "users", "davide");
    // C:\users\davide
    

    This means that if there is an absolute path in any argument position, all the previous parts are not discarded:

    Path.Join("foo", "C:bar", "baz");
    // foo\C:bar\baz
    
    Path.Join("foo", "C:bar", "baz", "D:we", "ranl");
    // foo\C:bar\baz\D:we\ranl
    

    Final comparison

    As you can see, the behaviour is slightly different.

    Let’s see a table where we call the two methods using the same input strings:

    Input                                    | Path.Combine    | Path.Join
    ["singlestring"]                         | singlestring    | singlestring
    ["foo", "bar", "baz"]                    | foo\bar\baz     | foo\bar\baz
    ["foo", " bar ", "baz"]                  | foo\ bar \baz   | foo\ bar \baz
    ["C:", "users", "davide"]                | C:\users\davide | C:\users\davide
    ["foo", " ", "baz"]                      | foo\ \baz       | foo\ \baz
    ["foo", "C:bar", "baz"]                  | C:bar\baz       | foo\C:bar\baz
    ["foo", "C:bar", "baz", "D:we", "ranl"]  | D:we\ranl       | foo\C:bar\baz\D:we\ranl
    ["C:", "/users", "/davide"]              | /davide         | C:/users/davide
    ["C:", "users/", "/davide"]              | /davide         | C:\users//davide
    ["C:", "\users", "\davide"]              | \davide         | C:\users\davide

    Have a look at some specific cases:

    • neither method handles whitespace or empty values: ["foo", " ", "baz"] is transformed into foo\ \baz. Similarly, ["foo", " bar ", "baz"] is combined into foo\ bar \baz, without removing the leading and trailing whitespace. So, always remove whitespace and empty values (see the sketch after this list)!
    • Path.Join handles the case of a part starting with / or \ in a not-so-obvious way: if a part starts with \, it is included in the final path; if it starts with /, you can end up with a double separator (//), as in the C:\users//davide example above. This behaviour depends on the path separator used by the OS: in my case, I’m running these methods on Windows 11.
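
    Here is a minimal sketch of that kind of sanitization (my own code; the helper name BuildPath is just an example):

    using System;
    using System.IO;
    using System.Linq;

    static string BuildPath(params string[] parts)
    {
        string[] cleaned = parts
            .Select(p => (p ?? string.Empty).Trim())   // remove leading/trailing whitespace
            .Where(p => p.Length > 0)                  // drop empty or blank segments
            .Select(p => p.Trim('/', '\\'))            // strip leading/trailing path separators
            .ToArray();

        return Path.Combine(cleaned);
    }

    Console.WriteLine(BuildPath("foo", " bar ", "", "baz")); // foo\bar\baz on Windows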

    Finally, always remember that the path separator depends on the Operating System that is running the code. Don’t assume that it will always be /: this assumption may be correct for one OS but wrong for another one.

    This article first appeared on Code4IT 🐧

    Wrapping up

    As we have learned, Path.Combine and Path.Join look similar but have profound differences.

    Dealing with path building may look easy, but it hides some complexity. Always remember to:

    • validate and clean your input before using either of these methods (remove empty values, white spaces, and head or trailing path separators);
    • always write some Unit Tests to cover all the necessary cases;

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧





    Source link

  • Seeding in-memory Entity Framework with realistic data with Bogus | Code4IT



    You don’t need a physical database to experiment with ORMs. You can use an in-memory DB and seed the database with realistic data generated with Bogus.


    Sometimes, you want to experiment with some features or create a demo project, but you don’t want to instantiate a real database instance.

    Also, you might want to use some realistic data – not just “test1”, 123, and so on. These values are easy to set but not very practical when demonstrating functionalities.

    In this article, we’re going to solve this problem by using Bogus and Entity Framework: you will learn how to generate realistic data and how to store them in an in-memory database.

    Bogus, a C# library for generating realistic data

    Bogus is a popular library for generating realistic data for your tests. It allows you to choose the category of dummy data that best suits your needs.

    It all starts by installing Bogus via NuGet by running Install-Package Bogus.

    From here, you can define the so-called Fakers, whose purpose is to generate dummy instances of your classes by auto-populating their fields.

    Let’s see a simple example. We have this POCO class named Book:

    public class Book
    {
        public Guid Id { get; set; }
        public string Title { get; set; }
        public int PagesCount { get; set; }
        public Genre[] Genres { get; set; }
        public DateOnly PublicationDate { get; set; }
        public string AuthorFirstName { get; set; }
        public string AuthorLastName { get; set; }
    }
    
    public enum Genre
    {
        Thriller, Fantasy, Romance, Biography
    }
    

    Note: for the sake of simplicity, I used a dumb approach: author’s first and last name are part of the Book info itself, and the Genres property is treated as an array of enums and not as a flagged enum.

    From here, we can start creating our Faker by specifying the referenced type:

    Faker<Book> bookFaker = new Faker<Book>();
    

    We can add one or more RuleFor methods to create rules used to generate each property.

    The simplest approach is to use the overload where the first parameter is an expression pointing to the property to be populated, and the second is a function that calls the methods provided by Bogus to create dummy data.

    Think of it as this pseudocode:

    faker.RuleFor(sm => sm.SomeProperty, f => f.SomeKindOfGenerator.GenerateSomething());
    

    Another approach is to pass as the first argument the name of the property like this:

    faker.RuleFor("myName", f=> f.SomeKindOfGenerator.GenerateSomething())
    

    A third approach is to define a generator for a specific type, saying “every time you’re trying to map a property with this type, use this generator”:

    bookFaker.RuleForType(typeof(DateOnly), f => f.Date.PastDateOnly());
    

    Let’s dive deeper into Bogus, generating data for common types.

    Generate random IDs with Bogus

    We can generate random GUIDs like this:

    bookFaker.RuleFor(b => b.Id, f => f.Random.Guid());
    

    In a similar way, you can generate Uuid by calling f.Random.Uuid().

    Generate random text with Bogus

    We can generate random text following the Lorem Ipsum structure, picking either a single word or a longer piece of text.

    Using Text, you generate a random chunk of text:

    bookFaker.RuleFor(b => b.Title, f => f.Lorem.Text());
    

    However, you can use several other methods to generate text with different lengths, such as Letter, Word, Paragraphs, Sentences, and more.

    Working with Enums with Bogus

    If you have an enum, you can rely again on the Random property of the Faker and get a random subset of the enums like this:

    bookFaker.RuleFor(b => b.Genres, f => f.Random.EnumValues<Genre>(2));
    

    As you can see, I specified the number of random items to use (in this case, 2). If you don’t set it, it will take a random number of items.

    However, the previous method returns an array of elements. If you want to get a single enum, you should use f.Random.Enum<Genre>().

    One of the most exciting features of Bogus is the ability to generate realistic data for common entities, such as a person.

    In particular, you can use the Person property to generate data related to the first name, last name, Gender, UserName, Phone, Website, and much more.

    You can use it this way:

    bookFaker.RuleFor(b => b.AuthorFirstName, f => f.Person.FirstName);
    bookFaker.RuleFor(b => b.AuthorLastName, f => f.Person.LastName);
    

    Generate final class instances with Bogus

    We can generate the actual items now that we’ve defined our rules.

    You just need to call the Generate method; you can also specify the number of items to generate by passing a number as a first parameter:

    List<Book> books = bookFaker.Generate(2);
    

    Suppose you want to generate a random quantity of items. In that case, you can use the GenerateBetween method, specifying the top and bottom limit:

    List<Book> books = bookFaker.GenerateBetween(2, 5);
    

    Wrapping up the Faker example

    Now that we’ve learned how to generate a Faker, we can refactor the code to make it easier to read:

    private List<Book> GenerateBooks(int count)
    {
        Faker<Book> bookFaker = new Faker<Book>()
            .RuleFor(b => b.Id, f => f.Random.Guid())
            .RuleFor(b => b.Title, f => f.Lorem.Text())
            .RuleFor(b => b.Genres, f => f.Random.EnumValues<Genre>())
            .RuleFor(b => b.AuthorFirstName, f => f.Person.FirstName)
            .RuleFor(b => b.AuthorLastName, f => f.Person.LastName)
            .RuleFor(nameof(Book.PagesCount), f => f.Random.Number(100, 800))
            .RuleForType(typeof(DateOnly), f => f.Date.PastDateOnly());
    
        return bookFaker.Generate(count);
    }
    

    If we run it, we can see it generates the following items:

    Bogus-generated data

    Seeding InMemory Entity Framework with dummy data

    Entity Framework is among the most famous ORMs in the .NET ecosystem. Even though it supports many integrations, sometimes you just want to store your items in memory without relying on any specific database implementation.

    Using Entity Framework InMemory provider

    To add this in-memory provider, you must install the Microsoft.EntityFrameworkCore.InMemory NuGet Package.

    Now you can add a new DbContext – which is a sort of container of all the types you store in your database – ensuring that the class inherits from DbContext.

    public class BooksDbContext : DbContext
    {
        public DbSet<Book> Books { get; set; }
    }
    

    You then have to declare the type of database you want to use by defining it in the OnConfiguring method:

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        optionsBuilder.UseInMemoryDatabase("BooksDatabase");
    }
    

    Note: even though it’s an in-memory database, you still need to declare the database name.

    Seeding the database with data generated with Bogus

    You can seed the database using the data generated by Bogus by overriding the OnModelCreating method:

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        base.OnModelCreating(modelBuilder);
    
        var booksFromBogus = BogusBookGenerator.GenerateBooks(15);
    
        modelBuilder.Entity<Book>().HasData(booksFromBogus);
    }
    

    Notice that we first create the items and then, using modelBuilder.Entity<Book>().HasData(booksFromBogus), we set the newly generated items as content for the Books DbSet.

    Consume dummy data generated with EF Core

    To wrap up, here’s the complete implementation of the DbContext:

    public class BooksDbContext : DbContext
    {
        public DbSet<Book> Books { get; set; }
    
        protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        {
          optionsBuilder.UseInMemoryDatabase("BooksDatabase");
        }
    
        protected override void OnModelCreating(ModelBuilder modelBuilder)
        {
            base.OnModelCreating(modelBuilder);
    
            var booksFromBogus = BogusBookGenerator.GenerateBooks(15);
    
            modelBuilder.Entity<Book>().HasData(booksFromBogus);
        }
    }
    

    We are now ready to instantiate the DbContext, ensure that the Database has been created and seeded with the correct data, and perform the operations needed.

    using var dbContext = new BooksDbContext();
    dbContext.Database.EnsureCreated();
    
    var allBooks = await dbContext.Books.ToListAsync();
    
    var thrillerBooks = dbContext.Books
            .Where(b => b.Genres.Contains(Genre.Thriller))
            .ToList();
    

    Further readings

    In this blog, we’ve already discussed the Entity Framework. In particular, we used it to perform CRUD operations on a PostgreSQL database.

    🔗 How to perform CRUD operations with Entity Framework Core and PostgreSQL | Code4IT

    This article first appeared on Code4IT 🐧

    I suggest you explore the potentialities of Bogus: there are a lot of functionalities that I didn’t cover in this article, and they may make your tests and experiments meaningful and easier to understand.

    🔗 Bogus repository | GitHub

    Wrapping up

    Bogus is a great library for creating unit and integration tests. However, I find it useful to generate dummy data for several purposes, like creating a stub of a service, populating a UI with realistic data, or trying out other tools and functionalities.

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧





    Source link

  • Mark a class as Sealed to prevent subclasses creation | Code4IT




    The O in SOLID stands for the Open-closed principle: according to the official definition, “software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification.”

    To extend a class, you usually create a subclass, overriding or implementing methods from the parent class.

    Extend functionalities by using subclasses

    The most common way to extend a class is to mark it as abstract:

    public abstract class MyBaseClass
    {
      public DateOnly Date { get; init; }
      public string Title { get; init; }
    
      public abstract string GetFormattedString();
    
      public virtual string FormatDate() => Date.ToString("yyyy-MM-dd");
    }
    

    Then, to extend it, you create a subclass and define the implementations of the extension points declared in the parent class:

    public class ConcreteClass : MyBaseClass
    {
      public override string GetFormattedString() => $"{Title} | {FormatDate()}";
    }
    

    As you know, this is the simplest example: overriding and implementing methods from an abstract class.

    You can override methods from a concrete class:

    public class MyBaseClass2
    {
      public DateOnly Date { get; init; }
      public string Title { get; init; }
    
      public string GetFormattedString() => $"{Title} ( {FormatDate()} )";
    
      public string FormatDate() => Date.ToString("yyyy-MM-dd");
    }
    
    public class ConcreteClass2 : MyBaseClass2
    {
      public new string GetFormattedString() => $"{Title} | {FormatDate()}";
    }
    

    Notice that even though there are no virtual or abstract methods in the base class, you can replace the implementation of a method by hiding it with the new keyword (this is method hiding, not overriding).
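
    A quick sketch (my own, reusing the classes above) shows the practical difference: since GetFormattedString is hidden rather than overridden, the compile-time type of the reference decides which implementation runs.

    MyBaseClass2 asBase = new ConcreteClass2 { Title = "Title", Date = new DateOnly(2024, 1, 1) };
    ConcreteClass2 asConcrete = new ConcreteClass2 { Title = "Title", Date = new DateOnly(2024, 1, 1) };

    Console.WriteLine(asBase.GetFormattedString());     // Title ( 2024-01-01 )
    Console.WriteLine(asConcrete.GetFormattedString()); // Title | 2024-01-01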

    Prevent the creation of subclasses using the sealed keyword

    Especially when exposing classes via NuGet, you want to prevent consumers from creating subclasses and accessing the internal state of the structures you have defined.

    To prevent classes from being extended, you must mark your class as sealed:

    public sealed class MyBaseClass3
    {
      public DateOnly Date { get; init; }
      public string Title { get; init; }
    
      public string GetFormattedString() => $"{Title} ( {FormatDate()} )";
    
      public string FormatDate() => Date.ToString("yyyy-MM-dd");
    }
    
    public class ConcreteClass3 : MyBaseClass3
    {
    }
    

    This way, even if you declare ConcreteClass3 as a subclass of MyBaseClass3, you won’t be able to compile the application:

    Compilation error when trying to extend a sealed class

    4 reasons to mark a class as sealed

    Ok, it’s easy to prevent a class from being extended by a subclass. But what are the benefits of having a sealed class?

    Marking a C# class as sealed can be beneficial for several reasons:

    1. Security by design: By marking a class as sealed, you prevent consumers from creating subclasses that can alter or extend critical functionalities of the base class in unintended ways.
    2. Performance improvements: The compiler and the JIT can optimize sealed classes more effectively (for example, by devirtualizing method calls), because they know there are no subclasses. This will not bring substantial performance improvements, but it can still help if every nanosecond is important.
    3. Explicit design intent: Sealing the class communicates to other developers that the class is not intended to be extended or modified. If they want to use it, they accept that they cannot modify or extend it, because it has been designed that way on purpose.

    This article first appeared on Code4IT 🐧

    Wrapping up

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧





    Source link

  • 2 ways to use custom equality rules in a HashSet | Code4IT

    2 ways to use custom equality rules in a HashSet | Code4IT


    With HashSet, you can get a list of different items in a performant way. What if you need a custom way to define when two objects are equal?



    Sometimes, object instances can be considered equal even though some of their properties are different. Consider a movie translated into different languages: the Italian and French versions are different, but the movie is the same.

    If we want to store unique values in a collection, we can use a HashSet<T>. But how can we store items in a HashSet when we must follow a custom rule to define if two objects are equal?

    In this article, we will learn two ways to add custom equality checks when using a HashSet.

    Let’s start with a dummy class: Pirate.

    public class Pirate
    {
        public int Id { get; }
        public string Name { get; }
    
        public Pirate(int id, string username)
        {
            Id = id;
            Name = username;
        }
    }
    

    I’m going to add some instances of Pirate to a HashSet. Please note that there are two pirates whose Id is 4:

    List<Pirate> mugiwara = new List<Pirate>()
    {
        new Pirate(1, "Luffy"),
        new Pirate(2, "Zoro"),
        new Pirate(3, "Nami"),
        new Pirate(4, "Sanji"), // This ...
        new Pirate(5, "Chopper"),
        new Pirate(6, "Robin"),
        new Pirate(4, "Duval"), // ... and this
    };
    
    
    HashSet<Pirate> hashSet = new HashSet<Pirate>();
    
    
    foreach (var pirate in mugiwara)
    {
        hashSet.Add(pirate);
    }
    
    
    _output.WriteAsTable(hashSet);
    

    (I really hope you’ll get the reference 😂)

    Now, what will we print on the console? (PS: _output is just a wrapper around some functionality provided by Spectre.Console, which I used here to print a table.)

    HashSet result when no equality rule is defined

    As you can see, we have both Sanji and Duval: even though their Ids are the same, those are two distinct objects.

    After all, we haven’t told the HashSet that the Id property must be used as a discriminator, so it falls back to the default, reference-based equality.

    Define a custom IEqualityComparer in a C# HashSet

    In order to tell the HashSet that two objects must be treated as equal, we can define a custom equality comparer: it’s nothing but a class that implements the IEqualityComparer<T> interface, where T is the type we are working with.

    public class PirateComparer : IEqualityComparer<Pirate>
    {
        bool IEqualityComparer<Pirate>.Equals(Pirate? x, Pirate? y)
        {
            Console.WriteLine($"Equals: {x.Name} vs {y.Name}");
            return x.Id == y.Id;
        }
    
        int IEqualityComparer<Pirate>.GetHashCode(Pirate obj)
        {
            Console.WriteLine("GetHashCode " + obj.Name);
            return obj.Id.GetHashCode();
        }
    }
    

    The first method, Equals, compares two instances of a class to tell if they are equal, following the custom rules we write.

    The second method, GetHashCode, defines a way to build an object’s hash code given its internal status. In this case, I’m saying that the hash code of a Pirate object is just the hash code of its Id property.

    To use this custom comparer, you must pass a new instance of PirateComparer to the HashSet constructor:

    HashSet<Pirate> hashSet = new HashSet<Pirate>(new PirateComparer());
    

    Let’s rerun the example, and admire the result:

    HashSet result with custom comparer

    As you can see, there is only one item whose Id is 4: Sanji.

    Let’s focus a bit on the messages printed when executing Equals and GetHashCode.

    GetHashCode Luffy
    GetHashCode Zoro
    GetHashCode Nami
    GetHashCode Sanji
    GetHashCode Chopper
    GetHashCode Robin
    GetHashCode Duval
    Equals: Sanji vs Duval
    

    Every time we insert an item, the HashSet calls the GetHashCode method to generate an internal hash code, used to check whether that item already exists.

    As stated by Microsoft’s documentation,

    Two objects that are equal return hash codes that are equal. However, the reverse is not true: equal hash codes do not imply object equality, because different (unequal) objects can have identical hash codes.

    This means that if a hash code is already in use, it’s not guaranteed that the objects are equal. That’s why we also need to implement the Equals method (hint: do not just compare the hash codes of the two objects!).
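
    To see this in practice, here’s a minimal sketch (the comparer name is made up for this example): a deliberately coarse comparer that hashes pirates by the length of their name. Zoro and Nami produce the same hash code, but Equals still compares the full Name, so a HashSet using this comparer keeps both of them.

    public class PirateComparerByNameLength : IEqualityComparer<Pirate>
    {
        // Coarse hash: every pirate whose name has the same length gets the same hash code
        public int GetHashCode(Pirate obj) => obj.Name.Length.GetHashCode();
    
        // Equals resolves the collisions by comparing the actual names
        public bool Equals(Pirate? x, Pirate? y) => x?.Name == y?.Name;
    }
    
    The colliding hash codes only tell the HashSet that it must double-check those items with Equals; by themselves, they never cause an item to be discarded.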

    Is implementing a custom IEqualityComparer the best choice?

    As always, it depends.

    On the one hand, using a custom IEqualityComparer allows different HashSets to behave differently, depending on the comparer passed to the constructor; on the other hand, you are now forced to pass an instance of IEqualityComparer everywhere you use a HashSet, and if you forget one, you’ll end up with inconsistent behaviour across the system.

    There must be a way to ensure consistency throughout the whole codebase.

    Implement the IEquatable interface

    It makes sense to implement the equality checks directly inside the type passed as a generic type to the HashSet.

    To do that, you need to have that class implement the IEquatable<T> interface, where T is the class itself.

    Let’s rework the Pirate class, letting it implement the IEquatable<Pirate> interface.

    public class Pirate : IEquatable<Pirate>
    {
        public int Id { get; }
        public string Name { get; }
    
        public Pirate(int id, string username)
        {
            Id = id;
            Name = username;
        }
    
        bool IEquatable<Pirate>.Equals(Pirate? other)
        {
            Console.WriteLine($"IEquatable Equals: {this.Name} vs {other.Name}");
            return this.Id == other.Id;
        }
    
        public override bool Equals(object? obj)
        {
            Console.WriteLine($"Override Equals {this.Name} vs {(obj as Pirate)?.Name}");
            // Delegate to the IEquatable<Pirate> implementation (the cast is needed because it's implemented explicitly)
            return ((IEquatable<Pirate>)this).Equals(obj as Pirate);
        }
    
        public override int GetHashCode()
        {
            Console.WriteLine($"GetHashCode {this.Id}");
            return (Id).GetHashCode();
        }
    }
    

    The IEquatable interface forces you to implement the Equals method. So, now we have two implementations of Equals (the one for IEquatable and the one that overrides the default implementation). Which one is correct? Is the GetHashCode really used?

    Let’s see what happens in the next screenshot:

    HashSet result with a class that implements IEquatable

    As you could’ve imagined, the Equals method called in this case is the one needed to implement the IEquatable interface.

    Please note that, as we don’t need to use the custom comparer, the HashSet initialization becomes:

    HashSet<Pirate> hashSet = new HashSet<Pirate>();
    

    What has the precedence: IEquatable or IEqualityComparer?

    What happens when we use both IEquatable and IEqualityComparer?

    Let’s quickly demonstrate it.

    First of all, keep the previous implementation of the Pirate class, where the equality check is based on the Id property:

    public class Pirate : IEquatable<Pirate>
    {
        public int Id { get; }
        public string Name { get; }
    
        public Pirate(int id, string username)
        {
            Id = id;
            Name = username;
        }
    
        bool IEquatable<Pirate>.Equals(Pirate? other)
        {
            Console.WriteLine($"IEquatable Equals: {this.Name} vs {other.Name}");
            return this.Id == other.Id;
        }
    
        public override int GetHashCode()
        {
            Console.WriteLine($"GetHashCode {this.Id}");
            return (Id).GetHashCode();
        }
    }
    

    Now, create a new IEqualityComparer where the equality is based on the Name property.

    public class PirateComparerByName : IEqualityComparer<Pirate>
    {
        bool IEqualityComparer<Pirate>.Equals(Pirate? x, Pirate? y)
        {
            Console.WriteLine($"Equals: {x.Name} vs {y.Name}");
            return x.Name == y.Name;
        }
        int IEqualityComparer<Pirate>.GetHashCode(Pirate obj)
        {
            Console.WriteLine("GetHashCode " + obj.Name);
            return obj.Name.GetHashCode();
        }
    }
    

    Now we have custom checks on both the Name and the Id.

    It’s time to add a new pirate to the list and to initialize the HashSet by passing an instance of PirateComparerByName to its constructor.

    List<Pirate> mugiwara = new List<Pirate>()
    {
        new Pirate(1, "Luffy"),
        new Pirate(2, "Zoro"),
        new Pirate(3, "Nami"),
        new Pirate(4, "Sanji"), // Id = 4
        new Pirate(5, "Chopper"), // Name = Chopper
        new Pirate(6, "Robin"),
        new Pirate(4, "Duval"), // Id = 4
        new Pirate(7, "Chopper") // Name = Chopper
    };
    
    
    HashSet<Pirate> hashSet = new HashSet<Pirate>(new PirateComparerByName());
    
    
    foreach (var pirate in mugiwara)
    {
        hashSet.Add(pirate);
    }
    

    We now have two pirates with ID = 4 and two other pirates with Name = Chopper.

    Can you foresee what will happen?

    HashSet items when defining both IEqualityComparer and IEquatable

    The checks on the Id are totally ignored: in fact, the final result contains both Sanji and Duval, even though their Ids are the same. The custom IEqualityComparer takes precedence over the IEquatable interface.

    This article first appeared on Code4IT 🐧

    Wrapping up

    This started as a short article but turned out to be a more complex topic.

    There is actually more to discuss, like performance considerations, code readability, and more. Maybe we’ll tackle those topics in a future article.

    I hope you enjoyed this article! Let’s keep in touch on LinkedIn or Twitter! 🤜🤛

    Happy coding!

    🐧





    Source link

  • IEnumerable vs ICollection, and why it matters | Code4IT

    IEnumerable vs ICollection, and why it matters | Code4IT



    Defining the best return type is crucial to creating a shared library whose behaviour is totally under your control.

    You should give the consumers of your libraries just the right amount of freedom to integrate and use the classes and structures you have defined.

    That’s why it is important to know the differences between interfaces like IEnumerable<T> and ICollection<T>: these interfaces are often used together but have totally different meanings.

    IEnumerable: loop through the items in the collection

    Suppose that IAmazingInterface is an interface you expose so that clients can interact with it without knowing the internal behaviour.

    You have defined it this way:

    public interface IAmazingInterface
    {
        IEnumerable<int> GetNumbers(int[] numbers);
    }
    

    As you can see, the GetNumbers method returns an IEnumerable<int>: this means that (unless they do some particular tricks like using reflection), clients will only be able to loop through the collection of items.

    Clients don’t know that, behind the scenes, AmazingClass uses a custom class MySpecificEnumberable.

    public class AmazingClass: IAmazingInterface
    {
        public IEnumerable<int> GetNumbers(int[] numbers)
            => new MySpecificEnumberable(numbers);
    }
    

    MySpecificEnumberable is a custom class whose purpose is to store the initial values in a sorted way. It implements IEnumerable<int>, so the only operations you have to support are the two implementations of GetEnumerator() – pay attention to the returned data type!

    public class MySpecificEnumberable : IEnumerable<int>
    {
        private readonly int[] _numbers;
    
        public MySpecificEnumberable(int[] numbers)
        {
            _numbers = numbers.OrderBy(_ => _).ToArray();
        }
    
        public IEnumerator<int> GetEnumerator()
        {
            foreach (var number in _numbers)
            {
                yield return number;
            }
        }
    
        IEnumerator IEnumerable.GetEnumerator()
            => _numbers.GetEnumerator();
    }
    

    Clients will then be able to loop all the items in the collection:

    IAmazingInterface something = new AmazingClass();
    var numbers = something.GetNumbers([1, 5, 6, 9, 8, 7, 3]);
    
    foreach (var number in numbers)
    {
        Console.WriteLine(number);
    }
    

    But you cannot add or remove items from it.

    ICollection: list, add, and remove items

    As we saw, IEnumerable<T> only allows you to loop through all the elements. However, you cannot add or remove items from an IEnumerable<T>.

    To do so, you need something that implements ICollection<T>, like the following class (I haven’t implemented any of these methods: I want you to focus on the operations provided, not on the implementation details).

    class MySpecificCollection : ICollection<int>
    {
        public int Count => throw new NotImplementedException();
    
        public bool IsReadOnly => throw new NotImplementedException();
    
        public void Add(int item) => throw new NotImplementedException();
    
        public void Clear() => throw new NotImplementedException();
    
        public bool Contains(int item) => throw new NotImplementedException();
    
        public void CopyTo(int[] array, int arrayIndex) => throw new NotImplementedException();
    
        public IEnumerator<int> GetEnumerator() => throw new NotImplementedException();
    
        public bool Remove(int item) => throw new NotImplementedException();
    
        IEnumerator IEnumerable.GetEnumerator() => throw new NotImplementedException();
    }
    

    ICollection<T> is a subtype of IEnumerable<T>, so everything we said before is still valid.

    However, having a class that implements ICollection<T> gives you full control over how items can be added or removed from the collection, allowing you to define custom behaviour. For instance, you can define that the Add method adds an integer only if it’s an odd number.
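
    Just to give the idea, here is a minimal sketch of such a collection (the class name is mine, and I’m assuming the usual System.Collections and System.Collections.Generic usings): it wraps a List<int> and silently discards even numbers.

    public class OddNumbersCollection : ICollection<int>
    {
        private readonly List<int> _items = new();
    
        public int Count => _items.Count;
        public bool IsReadOnly => false;
    
        // The custom behaviour lives here: even numbers are simply ignored
        public void Add(int item)
        {
            if (item % 2 != 0)
                _items.Add(item);
        }
    
        public void Clear() => _items.Clear();
        public bool Contains(int item) => _items.Contains(item);
        public void CopyTo(int[] array, int arrayIndex) => _items.CopyTo(array, arrayIndex);
        public bool Remove(int item) => _items.Remove(item);
        public IEnumerator<int> GetEnumerator() => _items.GetEnumerator();
        IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();
    }
    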

    Why knowing the difference actually matters

    Classes and interfaces are meant to be used. If you are like me, you work on both the creation of the class and its consumption.

    So, if an interface must return a sequence of items, you most probably use the List shortcut: define the return type of the method as List<Item>, and then use it, regardless of whether the consumer only loops through it or also adds items to the sequence.

    // in the interface
    public interface ISomething
    {
        List<Item> PerformSomething(int[] numbers);
    }
    
    
    // in the consumer class
    ISomething instance = //omitted
    List<Item> myItems = instance.PerformSomething([2, 3, 4, 5]);
    

    Everything works fine, but it works because we are in control of both the definition and the consumer.

    What if you have to expose the library to something outside your control?

    You have to consider two elements:

    • consumers should not be able to tamper with your internal implementation (for example, by adding items when they are not supposed to);
    • you should be able to change the internal implementation as you wish without breaking changes.

    So, if you want your users to just enumerate the items within a collection, you may start this way:

    // in the interface
    public interface ISomething
    {
        IEnumerable<Item> PerformSomething(int[] numbers);
    }
    
    // in the implementation
    
    IEnumerable<Item> PerformSomething(int[] numbers)
    {
        return numbers.Select(x => new Item(x)).ToList();
    }
    
    // in the consumer class
    
    ISomething instance = //omitted
    IEnumerable<Item> myItems = instance.PerformSomething([2, 3, 4, 5]);
    

    Then, when the time comes, you can change the internal implementation of PerformSomething with a more custom class:

    // custom IEnumerable definition
    public class MyCustomEnumberable : IEnumerable<Item> { /*omitted*/ }
    
    // in the implementation
    IEnumerable<Item> PerformSomething(int[] numbers)
    {
        MyCustomEnumberable customEnumerable = new MyCustomEnumberable();
        customEnumerable.DoSomething(numbers);
        return customEnumerable;
    }
    

    And the consumer will not notice the difference. Again, unless they try to use tricks to tamper with your code!

    This article first appeared on Code4IT 🐧

    Wrapping up

    While understanding the differences between IEnumerable and ICollection is trivial, understanding why you should care about them is not.

    IEnumerable and ICollection hierarchy

    I hope this article helped you understand that yes, you can take the easy way and return a List everywhere, but it’s a choice that you cannot always apply to a project, and one that will probably make breaking changes more frequent in the long run.

    I hope you enjoyed this article! Let’s keep in touch on LinkedIn or Twitter! 🤜🤛

    Happy coding!

    🐧





    Source link

  • Enhancing Retry Patterns with a bit of randomness | Code4IT

    Enhancing Retry Patterns with a bit of randomness | Code4IT


    Operations may fail for transient reasons. How can you implement retry patterns? And how can a simple Jitter help you stabilize the system?



    When building complex systems, you may encounter situations where you have to retry an operation several times before giving up due to transient errors.

    How can you implement proper retry strategies? And how can a little thing called “Jitter” help avoid the so-called “Thundering Herd” problem?

    Retry Patterns and their strategies

    Retry patterns are strategies for retrying operations that fail for transient, temporary reasons, such as packet loss or a temporarily unavailable resource.

    Suppose you have a database that can handle up to 3 requests per second (yay! so performant!).

    Accidentally, three clients try to execute an operation at the exact same instant. What happens now?

    Well, the DB becomes temporarily unavailable, and it won’t be able to serve those requests. So, since this issue occurred by chance, you just have to wait and retry.

    How long should we wait before the next attempt?

    You can imagine that the wait time between one attempt and the next follows a mathematical function, where the delay (called Backoff) depends on the attempt number:

    Backoff = f(RetryAttemptNumber)
    

    With that in mind, we can think of two main retry strategies: linear backoff retries and exponential backoff retries.
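
    In the simplest case, the function is just a constant, f(n) = c (a fixed delay between attempts), while for the exponential strategy it is f(n) = c · 2^n, where c is the base delay in seconds and n is the attempt number.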

    Linear backoff retries

    The simplest way to handle retries is with Linear backoff.

    Let’s continue with the mathematical function analogy. In this case, the function we can use is a linear function.

    We can simplify the idea by saying that, regardless of the attempt number, the delay between one retry and the next one stays constant (strictly speaking, a linear backoff can also grow the delay linearly with the attempt number, but a constant delay is the simplest form, and it’s the one used in the example below).

    Linear backoff

    Let’s see an example in C#. Say that you have defined an operation that may fail randomly, stored in an Action instance. You can call the following RetryOperationWithLinearBackoff method to execute the operation passed in input with a linear retry.

    static void RetryOperationWithLinearBackoff(Action operation)
    {
        int maxRetries = 5;
        double delayInSeconds = 5.0;
    
        for (int attempt = 0; attempt < maxRetries; attempt++)
        {
            try
            {
                operation();
                return;
            }
            catch (Exception e)
            {
                Console.WriteLine($"Retrying in {delayInSeconds:F2} seconds...");
                Thread.Sleep(TimeSpan.FromSeconds(delayInSeconds));
            }
        }
    }
    

    The input operation will be retried up to 5 times, and every time it fails, the system waits 5 seconds before the next retry.
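
    As a quick usage sketch (the failure simulation is mine, just to have an operation that fails randomly):

    Random random = new Random();
    
    RetryOperationWithLinearBackoff(() =>
    {
        // Simulate a transient failure roughly 70% of the time
        if (random.NextDouble() < 0.7)
            throw new InvalidOperationException("Transient failure");
    
        Console.WriteLine("Operation completed successfully!");
    });
    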

    Linear backoff is simple to implement, as you just saw. However, it falls short when the system is in a faulty state and takes a long time to get back to work. Having linear retries and a fixed maximum number of retries limits the timespan in which an operation can be retried: you can end up exhausting your attempts while the downstream system is still recovering.

    There may be better ways.

    Exponential backoff retries

    An alternative is to use Exponential Backoff.

    With this approach, the backoff becomes longer after every attempt: usually, it doubles at every retry, which is why it is called “exponential” backoff.

    This way, if the downstream system takes a long time to recover, the top-level operation has a better chance of being completed successfully.

    Exponential Backoff

    Of course, the downside of this approach is that, to get a response from the operation (did it complete? did it fail?), you may have to wait much longer, depending on the number of retries.
    So, the top-level operation may even go into timeout while it keeps trying to access the resource, because the retries become increasingly spread out.

    A simple implementation in C# would be something like this:

    static void RetryOperationWithExponentialBackoff(Action operation)
    {
        int maxRetries = 5;
        double baseDelayInSeconds = 2.0;
    
        for (int attempt = 0; attempt < maxRetries; attempt++)
        {
            try
            {
                operation();
                return;
            }
            catch (Exception e)
            {
                double exponentialDelay = baseDelayInSeconds * Math.Pow(2, attempt);
                Console.WriteLine($"Retrying in {exponentialDelay:F2} seconds...");
                Thread.Sleep(TimeSpan.FromSeconds(exponentialDelay));
            }
        }
    }
    

    The key to understanding the exponential backoff is how the delay is calculated:

    double exponentialDelay = baseDelayInSeconds * Math.Pow(2, attempt);
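    // e.g., with baseDelayInSeconds = 2.0, the delays are 2, 4, 8, 16 and 32 seconds for attempts 0 through 4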
    

    Understanding the Thundering Herd problem

    The “basic” versions of these retry patterns are effective in overcoming temporary service unavailability, but they can inadvertently cause a thundering herd problem. This occurs when multiple clients retry simultaneously, overwhelming the system with a surge of requests, potentially leading to further failures.

    Suppose that a hypothetical downstream system becomes unavailable if 5 or more requests occur simultaneously.

    What happens when five requests start at the exact same moment? They start, overwhelm the system, and they all fail.

    Their retries will always be in sync, since the backoff is deterministic (yes, it can grow over time, but every client computes exactly the same value).

    So, all five requests will wait for a fixed amount of time before the next retry. This means that they will always stay in sync.

    Let’s make it more clear with these simple diagrams, where each color represents a different client trying to perform the operation, and the number inside the star represents the attempt number.

    In the case of linear backoff, all the requests are always in sync.

    Multiple retries with linear backoff

    The same happens when using exponential backoff: even if the backoff grows exponentially, all the requests stay in sync, making the system unstable.

    Multiple retries with exponential backoff

    What is Jitter?

    Jitter refers to the introduction of randomness into timing mechanisms: the term was first adopted when talking about network communications, but it is now used in other areas of system design as well.

    Jitter helps mitigate the risk of synchronized retries that can lead to spikes in server load, by forcing clients that try to access a resource simultaneously to perform their operations with a slightly randomized delay.

    In fact, by randomizing the delay intervals between retries, jitter ensures that retries are spread out over time, reducing the likelihood of overwhelming a service.

    Benefits of Jitter in Distributed Systems

    This is where Jitter comes in handy: it adds a random offset around the moment a retry should happen, so that concurrent retries do not stay in sync.

    Exponential Backoff with Jitter

    Jitter introduces randomness to the delay intervals between retries. By staggering these retries, jitter helps distribute the load more evenly over time.

    This reduces the risk of server overload and allows backend systems to recover and process requests efficiently. Implementing jitter can transform a simple retry mechanism into a robust strategy that enhances system reliability and performance.

    Incorporating jitter into your system design offers several advantages:

    • Reduced Load Spikes: by spreading out retries, Jitter minimizes sudden surges in traffic, preventing server overload.
    • Avoided Synchronization: Jitter prevents multiple clients from retrying at the same exact moment.
    • Improved Resource Utilization: a more even distribution of requests keeps the load on servers more consistent, so resources are used more efficiently.
    • Enhanced Resilience: systems become more resilient to transient errors and network fluctuations, reducing the likelihood of cascading failures.

    Let’s review the retry methods we defined before.

    static void RetryOperationWithLinearBackoffAndJitter(Action operation)
    {
        int maxRetries = 5;
        double baseDelayInSeconds = 5.0;
    
        Random random = new Random();
    
        for (int attempt = 0; attempt < maxRetries; attempt++)
        {
            try
            {
                operation();
                return;
            }
            catch (Exception e)
            {
                double jitter = random.NextDouble() * 4 - 2; // Random jitter between -2 and 2 seconds
                double delay = baseDelayInSeconds + jitter;
                Console.WriteLine($"Retrying in {delay:F2} seconds...");
                Thread.Sleep(TimeSpan.FromSeconds(delay));
            }
        }
    }
    

    And, for Exponential Backoff,

    static void RetryOperationWithExponentialBackoffAndJitter(Action operation)
    {
        int maxRetries = 5;
        double baseDelayInSeconds = 2.0;
    
        Random random = new Random();
    
        for (int attempt = 0; attempt < maxRetries; attempt++)
        {
            try
            {
                operation();
                return;
            }
            catch (Exception e)
            {
                // Exponential backoff with jitter
                double exponentialDelay = baseDelayInSeconds * Math.Pow(2, attempt);
                double jitter = random.NextDouble() * (exponentialDelay / 2);
                double delay = exponentialDelay + jitter;
                Console.WriteLine($"Retrying in {delay:F2} seconds...");
                Thread.Sleep(TimeSpan.FromSeconds(delay));
            }
        }
    }
    

    In both cases, the key is in creating the delay variable: a random value (the Jitter) is added to the delay.

    Notice that, in the linear example, the Jitter can even be a negative value!

    Further readings

    Retry patterns and Jitter make your system more robust, but if badly implemented, they can make your code a mess. So, a question arises: should you focus on improving performance or on writing cleaner code?

    🔗 Code opinion: performance or clean code? | Code4IT

    This article first appeared on Code4IT 🐧

    Clearly, if the downstream system is not able to handle too many requests, you may need to implement a way to limit the number of incoming requests in a timeframe. You can choose between 4 well-known algorithms to implement Rate Limiting.

    🔗 4 algorithms to implement Rate Limiting, with comparison | Code4IT

    Wrapping up

    While adding jitter may seem like a minor tweak, its impact on distributed systems can be significant. By introducing randomness into retry patterns, jitter helps create a more balanced, efficient, and robust system.

    As we continue to build and scale our systems, incorporating jitter is a best practice that can prevent cascading failures and optimize performance. All in all, a little randomness can be just what your system needs to thrive.

    I hope you enjoyed this article! Let’s keep in touch on LinkedIn, Twitter or BlueSky! 🤜🤛

    Happy coding!

    🐧





    Source link

  • Easy logging management with Seq and ILogger in ASP.NET | Code4IT

    Easy logging management with Seq and ILogger in ASP.NET | Code4IT


    Seq is one of the best Log Sinks out there: it’s easy to install and configure, and can be added to an ASP.NET application with just a line of code.



    Logging is one of the most essential parts of any application.

    Wouldn’t it be great if we could scaffold and use a logging platform with just a few lines of code?

    In this article, we are going to learn how to install and use Seq as a destination for our logs, and how to make an ASP.NET 8 API application send its logs to Seq by using the native logging implementation.

    Seq: a sink and dashboard to manage your logs

    In the context of logging management, a “sink” is a receiver of the logs generated by one or many applications; it can be a cloud-based system, but it’s not mandatory: even a file on your local file system can be considered a sink.

    Seq is a Sink, and works by exposing a server that stores logs and events generated by an application. Clearly, other than just storing the logs, Seq allows you to view them, access their details, perform queries over the collection of logs, and much more.

    It’s free for individual use, and comes with several pricing plans, depending on the usage and the size of the team.

    Let’s start small and install the free version.

    We have two options:

    1. Download it locally, using an installer (here’s the download page);
    2. Use Docker: pull the datalust/seq image locally and run the container on your Docker engine.

    Both ways will give you the same result.

    However, if you already have experience with Docker, I suggest you use the second approach.

    Once you have Docker installed and running locally, open a terminal.

    First, you have to pull the Seq image locally (I know, it’s not mandatory, but I prefer doing it in a separate step):
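
    docker pull datalust/seq:latest
    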

    Then, when you have it downloaded, you can start a new instance of Seq locally, exposing the UI on a specific port.

    docker run --name seq -d --restart unless-stopped -e ACCEPT_EULA=Y -p 5341:80 datalust/seq:latest
    

    Let’s break down the previous command:

    • docker run: This command is used to create and start a new Docker container.
    • --name seq: This option assigns the name seq to the container. Naming containers can make them easier to manage.
    • -d: This flag runs the container in detached mode, meaning it runs in the background.
    • --restart unless-stopped: This option ensures that the container will always restart unless it is explicitly stopped. This is useful for ensuring that the container remains running even after a reboot or if it crashes.
    • -e ACCEPT_EULA=Y: This sets an environment variable inside the container. In this case, it sets ACCEPT_EULA to Y, which indicates that you accept the End User License Agreement (EULA) for Seq.
    • -p 5341:80: This maps port 5341 on your host machine to port 80 in the container. This allows you to access the service running on port 80 inside the container via port 5341 on your host.
    • datalust/seq:latest: This specifies the Docker image to use for the container. datalust/seq is the image name, and latest is the tag, indicating that you want to use the latest version of this image.

    So, this command runs a container named seq in the background, ensures it restarts unless stopped, sets an environment variable to accept the EULA, maps a host port to a container port, and uses the latest version of the datalust/seq image.

    It’s important to pay attention to the port you use: by default, Seq exposes both the UI and the API on port 5341, which is why we mapped that port on the host. If you prefer to use another port, feel free to do that – just remember that you’ll need some additional configuration.

    Now that Seq is installed on your machine, you can access its UI. Guess what? It’s on localhost:5341!

    Seq brand new instance

    However, Seq is “just” a container for our logs – but we have to produce them.

    A sample ASP.NET API project

    I’ve created a simple API project that exposes CRUD operations for a data model stored in memory (we don’t really care about the details).

    [ApiController]
    [Route("[controller]")]
    public class BooksController : ControllerBase
    {
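        // In-memory catalogue of books; assumption: in the real project it is populated with some sample data (omitted here)
        private readonly List<Book> booksCatalogue = new();
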
        public BooksController()
        {
    
        }
    
        [HttpGet("{id}")]
        public ActionResult<Book> GetBook([FromRoute] int id)
        {
    
            Book? book = booksCatalogue.SingleOrDefault(x => x.Id == id);
            return book switch
            {
                null => NotFound(),
                _ => Ok(book)
            };
        }
    }
    

    As you can see, the details here are not important.

    Even the Main method is the default one:

    var builder = WebApplication.CreateBuilder(args);
    
    builder.Services.AddControllers();
    
    builder.Services.AddEndpointsApiExplorer();
    builder.Services.AddSwaggerGen();
    
    var app = builder.Build();
    
    if (app.Environment.IsDevelopment())
    {
        app.UseSwagger();
        app.UseSwaggerUI();
    }
    
    app.UseHttpsRedirection();
    
    app.MapControllers();
    
    app.Run();
    

    We have the Controllers, we have Swagger… well, nothing fancy.

    Let’s mix it all together.

    How to integrate Seq with an ASP.NET application

    If you want to use Seq in an ASP.NET application (may it be an API application or whatever else), you have to add it to the startup pipeline.

    First, you have to install the proper NuGet package: Seq.Extensions.Logging.

    The Seq.Extensions.Logging NuGet package

    Then, you have to add it to your Services, calling the AddSeq() method:

    var builder = WebApplication.CreateBuilder(args);
    
    builder.Services.AddControllers();
    
    builder.Services.AddEndpointsApiExplorer();
    builder.Services.AddSwaggerGen();
    
    + builder.Services.AddLogging(lb => lb.AddSeq());
    
    var app = builder.Build();
    

    Now, every log produced by the application will be sent to Seq, which is listening on the specified port (remember, in our case, we are using the default one: 5341).
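
    If you exposed Seq on a different port, you can point the logger to that address: if I recall correctly, the AddSeq() extension method also accepts the Seq server URL as a parameter (treat the exact signature as an assumption and double-check the package documentation).

    // Hypothetical example: Seq exposed on port 5342 instead of the default 5341
    builder.Services.AddLogging(lb => lb.AddSeq("http://localhost:5342"));
    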

    We can try it out by adding an ILogger to the BooksController constructor:

    private readonly ILogger<BooksController> _logger;
    
    public BooksController(ILogger<BooksController> logger)
    {
        _logger = logger;
    }
    

    Now we can use the _logger instance to write logs as we need, using the appropriate Log Level:

    [HttpGet("{id}")]
    public ActionResult<Book> GetBook([FromRoute] int id)
    {
        _logger.LogInformation("I am Information");
        _logger.LogWarning("I am Warning");
        _logger.LogError("I am Error");
        _logger.LogCritical("I am Critical");
    
        Book? book = booksCatalogue.SingleOrDefault(x => x.Id == id);
        return book switch
        {
            null => NotFound(),
            _ => Ok(book)
        };
    }
    

    Log messages on Seq

    Using Structured Logging with ILogger and Seq

    One of the best things about Seq is that it automatically handles Structured Logging.

    [HttpGet("{id}")]
    public ActionResult<Book> GetBook([FromRoute] int id)
    {
        _logger.LogInformation("Looking if in my collection with {TotalBooksCount} books there is one with ID {SearchedId}"
     , booksCatalogue.Count, id);
    
        Book? book = booksCatalogue.SingleOrDefault(x => x.Id == id);
        return book switch
        {
            null => NotFound(),
            _ => Ok(book)
        };
    }
    

    Have a look at this line:

    _logger.LogInformation("Looking if in my collection with {TotalBooksCount} books there is one with ID {SearchedId}"
     , booksCatalogue.Count, id);
    

    This line generates a string message, replaces all the placeholders, and, on top of that, creates two properties, SearchedId and TotalBooksCount; you can now define queries using these values.
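
    For example, typing a filter like SearchedId = 2 in the Seq filter bar should return only the requests that searched for that specific book (I’m quoting the filter syntax from memory, so double-check it against the Seq documentation).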

    Structured Logs in Seq allow you to view additional logging properties

    Further readings

    I have to admit it: logging management is one of my favourite topics.

    I’ve already written a sort of introduction to Seq in the past, but at that time, I did not use the native ILogger, but Serilog, a well-known logging library that added some more functionalities on top of the native logger.

    🔗 Logging with Serilog and Seq | Code4IT

    This article first appeared on Code4IT 🐧

    In particular, Serilog can be useful for propagating Correlation IDs across multiple services so that you can fetch all the logs generated by a specific operation, even though they belong to separate applications.

    🔗 How to log Correlation IDs in .NET APIs with Serilog

    Feel free to search through my blog all the articles related to logging – I’m sure you will find interesting stuff!

    Wrapping up

    I think Seq is the best tool for local development: it’s easy to download and install, supports structured logging, and can be easily added to an ASP.NET application with just a line of code.

    I usually add it to my private projects, especially when the operations I run are complex enough to require some well-structured log.

    Given how it’s easy to install, sometimes I use it for my work projects too: when I have to fix a bug, but I don’t want to use the centralized logging platform (since it’s quite complex to use), I add Seq as a destination sink, run the application, and analyze the logs in my local machine. Then, of course, I remove its reference, as I want it to be just a discardable piece of configuration.

    I hope you enjoyed this article! Let’s keep in touch on LinkedIn, Twitter or BlueSky! 🤜🤛

    Happy coding!

    🐧





    Source link