Tag: run

  • Use TestCase to run similar unit tests with NUnit | Code4IT




    In my opinion, Unit tests should be well structured and written even better than production code.

    In fact, Unit Tests act as a first level of documentation of what your code does and, if written properly, can be the key to fixing bugs quickly and without adding regressions.

    One way to improve readability is by grouping similar tests that only differ by the initial input but whose behaviour is the same.

    Let’s use a dummy example: some tests on a simple Calculator class that only performs sums on int values.

    public static class Calculator
    {
        public static int Sum(int first, int second) => first + second;
    }
    

    One way to create tests is by creating one test for each possible combination of values:

    public class SumTests
    {
    
        [Test]
        public void SumPositiveNumbers()
        {
            var result = Calculator.Sum(1, 5);
            Assert.That(result, Is.EqualTo(6));
        }
    
        [Test]
        public void SumNegativeNumbers()
        {
            var result = Calculator.Sum(-1, -5);
            Assert.That(result, Is.EqualTo(-6));
        }
    
        [Test]
        public void SumWithZero()
        {
            var result = Calculator.Sum(1, 0);
            Assert.That(result, Is.EqualTo(1));
        }
    }
    

    However, it’s not a good idea: you’ll end up with lots of nearly identical tests (DRY, remember?) that add little to no value to the test suite. Also, this approach forces you to add a new test method for every new kind of test that pops into your mind.

    When possible, we should generalise them. With NUnit, we can use the TestCase attribute to specify the list of parameters passed as input to our test method, including the expected result.

    We can then simplify the whole test class by creating a single method that accepts the different cases as input and runs the same assertions on those values.

    [Test]
    [TestCase(1, 5, 6)]
    [TestCase(-1, -5, -6)]
    [TestCase(1, 0, 1)]
    public void SumWorksCorrectly(int first, int second, int expected)
    {
        var result = Calculator.Sum(first, second);
        Assert.That(result, Is.EqualTo(expected));
    }
    

    By using TestCase, you can cover a new scenario by simply adding another attribute, without creating new methods.
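
    As a side note, NUnit also lets you move the expected value out of the parameter list by using the ExpectedResult property and returning the computed value from the test method. Here’s a minimal sketch of that variant, built on the same Calculator class (the method name is mine, just for illustration):

    // With ExpectedResult, the assertion is implicit: NUnit compares
    // the returned value with the expected one, so no Assert call is needed.
    [TestCase(1, 5, ExpectedResult = 6)]
    [TestCase(-1, -5, ExpectedResult = -6)]
    [TestCase(1, 0, ExpectedResult = 1)]
    public int Sum_ReturnsExpectedValue(int first, int second)
        => Calculator.Sum(first, second);
    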

    Clearly, don’t abuse it: use it only to group methods with similar behaviour – and don’t add if statements in the test method!

    There is a more advanced way to create a TestCase in NUnit, named TestCaseSource – but we will talk about it in a future C# tip 😉
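
    Just to give you a feeling of what it looks like, here’s a minimal sketch (the SumCases member name is mine, just for illustration; it needs System.Collections.Generic in scope):

    // TestCaseSource points to a static member that yields the test cases.
    // TestCaseData.Returns sets the expected result, so the test method
    // simply returns the computed value.
    private static IEnumerable<TestCaseData> SumCases()
    {
        yield return new TestCaseData(1, 5).Returns(6);
        yield return new TestCaseData(-1, -5).Returns(-6);
        yield return new TestCaseData(1, 0).Returns(1);
    }

    [TestCaseSource(nameof(SumCases))]
    public int Sum_WorksCorrectly_FromSource(int first, int second)
        => Calculator.Sum(first, second);
    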

    Further readings

    If you are using NUnit, I suggest you read this article about custom equality checks – you might find it handy in your code!

    🔗 C# Tip: Use custom Equality comparers in Nunit tests | Code4IT

    This article first appeared on Code4IT 🐧

    Wrapping up

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧






  • How to run SonarQube analysis locally with Docker | Code4IT



    The quality of a project can be measured by having a look at how the code is written. SonarQube can help you by running static code analysis and letting you spot the pain points. Let’s learn how to install and run it locally with Docker.



    Code quality is important, and having the right tool can be terribly beneficial for an application’s long-term success.

    Although maintainability problems often come from how modules are separated, and cannot be solved by making a single class cleaner, a tool like SonarQube can pave the way to a cleaner codebase.

    In this article, we will learn how to download and install SonarQube Community using Docker. We will see how to configure it and run your very first code analysis on a .NET-based application.

    Scaffold a dummy ASP.NET Core API project

    To try it out, you need – of course! – a repository to analyse.

    In this article, I will set up SonarQube to analyse a tiny, dummy ASP.NET Core API project. You are probably already familiar with this API project: it’s the default one created by Visual Studio – the one with the Weather Forecast.

    I chose to use Controllers instead of Minimal APIs so that we could analyse some more code.

    Have a look at the code: you will notice that the default implementation of the WeatherForecastController injects an instance of ILogger, stores it, and then never references it in other places. This sounds like a good maintainability issue that SonarQube should be able to identify.
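
    For reference, the relevant part of that default controller looks roughly like this (trimmed and slightly adapted; the exact template depends on your Visual Studio / .NET version, and the WeatherForecast model comes from the same template):

    [ApiController]
    [Route("[controller]")]
    public class WeatherForecastController : ControllerBase
    {
        // The logger is injected and stored...
        private readonly ILogger<WeatherForecastController> _logger;

        public WeatherForecastController(ILogger<WeatherForecastController> logger)
        {
            _logger = logger;
        }

        // ...but _logger is never read again: this is the maintainability
        // smell we expect SonarQube to report.
        [HttpGet(Name = "GetWeatherForecast")]
        public IEnumerable<WeatherForecast> Get() =>
            Enumerable.Range(1, 5).Select(index => new WeatherForecast
            {
                Date = DateOnly.FromDateTime(DateTime.Now.AddDays(index)),
                TemperatureC = Random.Shared.Next(-20, 55),
                Summary = "Mild"
            });
    }
    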

    To better locate which files SonarQube is creating, I decided to put this project under source control, but only locally. This way, when we run the SonarQube analysis, we will be able to see the files created and modified by SonarQube.

    Clearly, the first step is to have SonarQube installed on your machine.

    I’m going to install SonarQube Community Build. It contains almost all the features of SonarQube, and it’s available for free (of course, to get the additional ones, you have to pick the proper pricing tier).

    🔗 SonarQube Community Build

    SonarQube Community Build can be installed via Docker: this way, SonarQube can run in a containerised environment, regardless of your Operating System.

    To do that, you can run the following command:

    docker run --name sonarqube-community -p 9001:9000 sonarqube:community
    

    This Docker command downloads the latest version of the sonarqube:community Docker Image, and runs it locally, making it available at localhost:9001.

    As briefly explained in an old article, the -p 9001:9000 part of the CLI command means that you are exposing the port 9000 of the “inner” container to the world via the port 9001 of the host.

    Once the command has finished downloading all the dependencies and loading all the resources, you will be able to access SonarQube on localhost:9001.
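
    With this one-liner, everything SonarQube stores lives inside the container. If you want your projects and analysis history to survive removing and recreating the container, a possible variant is to mount the volumes declared by the official image (the paths below are the ones documented for the sonarqube image – double-check them against the tag you pull):

    docker run --name sonarqube-community \
      -p 9001:9000 \
      -v sonarqube_data:/opt/sonarqube/data \
      -v sonarqube_logs:/opt/sonarqube/logs \
      -v sonarqube_extensions:/opt/sonarqube/extensions \
      sonarqube:community
    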

    You will be asked to log in: the default username is admin, and the password is (again) admin.

    SonarQube login form

    After the first login, you will be asked to change your password.

    Create a SonarQube Project

    It’s time to link SonarQube to your repository.

    To do that, you have to create a so-called Project. Ideally, you may want to integrate SonarQube into your CI pipeline, but having it run locally is fine for trying it out.

    So, on the Projects page, you can create a new project. Click on “Create a local project” and follow the wizard.

    “Create a local project” button

    First, create a new Project by defining the Display name (in my case, code4it-sonarqube-local) and the project key (code4it-sonarqube-local-project-key). The Project Key is used in the command line to execute the code analysis using the rules defined in this project.

    Also, you have to specify the name of the branch that you will be using as a baseline: generally, it’s either “main” or “master”, but it can be anything.

    Create new project Form

    Follow the wizard, choosing some configurations (I suggest you start with the default values), and you’ll end up with a Project ready to be initialised.

    SonarQube wizard: choose analysis method

    Then, you will have to generate a token to run the analysis (I know, it feels like there are too many similar steps. But bear with me; we’re almost ready to run the analysis).

    Generate the Token

    By hitting the “generate” button, you’ll see a new token like this: sqp_fd71f97760c84539b579713f18a07c790432cfe8. Remember to store it somewhere, as you’re going to use it later.

    The last step is to make sure that you have sonarscanner available as a .NET Core Global Tool on your machine.

    Just open a terminal as an administrator and run:

    dotnet tool install --global dotnet-sonarscanner
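
    To double-check that the scanner is now available, you can list the globally installed .NET tools and look for dotnet-sonarscanner in the output:

    dotnet tool list --global
    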
    

    Run the SonarQube analysis on your local repository

    Finally, we are ready to run the first analysis of the code!

    I suggest you commit all your changes so that you’ll see the files generated by SonarQube.

    Open a Terminal, navigate to the root of the Solution, and follow these steps.

    Prepare the SonarQube analysis

    You first have to instruct SonarQube on the configurations to be used for the current analysis.

    The command to run is something like this:

    dotnet sonarscanner begin /k:"<your key here>" /d:sonar.host.url="<your-host-root-url>"  /d:sonar.token="<your-project-token>"
    

    For my specific execution context, using the values you can see in this article, I have to run the command with the following parameters:

    dotnet sonarscanner begin /k:"code4it-sonarqube-local-project-key" /d:sonar.host.url="http://localhost:9001"  /d:sonar.token="sqp_fd71f97760c84539b579713f18a07c790432cfe8"
    

    The flags represent the configurations of SonarQube:

    /k is the Project Key, as defined before: it contains the rules to be used;
    /d:sonar.host.url is the URL that will receive the result of the analysis, allowing SonarQube to aggregate the issues and display them on a UI;
    /d:sonar.token is the Token you created before.

    After the command completes, you’ll see that SonarQube has created some files to prepare for the code analysis. These files contain all the rules applied during the analysis and their related severity.

    SonarQube files generated after initialization

    From now on, SonarQube will be able to run the analysis and understand how to treat each issue.

    Build the solution

    Now you have to build the whole solution, running:

    dotnet build
    

    You can, of course, choose to run the command specifying the solution file to build.

    Even if it seems trivial, this step is crucial for SonarQube: in fact, it generates some new metadata files that list all the files that have to be taken into account when running the analysis, as well as the path to the output folder:

    Files generated by SonarQube after the build

    Run the actual SonarQube analysis

    Finally, it’s time to run the actual analysis.

    Again, head to the root of the application, and on a terminal run the following command:

    dotnet sonarscanner end /d:sonar.token="<your-token>"
    

    In my case, the full command is

    dotnet sonarscanner end /d:sonar.token="sqp_fd71f97760c84539b579713f18a07c790432cfe8"
    

    Depending on the size of the project, it will take different amounts of time. For this simple project, it took 7 seconds. For a huge project I worked on, it took almost 2 hours.

    Also, the run time depends on the amount of new code to be analysed: the very first run is the slowest one, while all the subsequent analyses focus on the latest changes, since most of the previously found issues are stored in a cache.

    No new files are created, as the result is directly sent to the SonarQube server.

    The result is now available at localhost!

    Open a browser, head to the port you defined before, and get ready to explore the results of the static analysis.

    SonarQube analysis overview

    As I was expecting, the project passed the so-called Quality Gate – the minimum bar set to consider a project “good”.

    Yet, as you can see under the “Issues” tab, there are actually two issues. For example, there’s a suggested improvement that tells you to remove the _logger field, since it is never used:

    SonarQube issue details
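
    If you decide to act on that suggestion and you really don’t need logging in this controller, a minimal sketch of the fix is simply to drop the unused field together with the constructor that sets it:

    [ApiController]
    [Route("[controller]")]
    public class WeatherForecastController : ControllerBase
    {
        // No ILogger field and no constructor any more: the unused private
        // field disappears, and so does the related SonarQube issue.
        [HttpGet(Name = "GetWeatherForecast")]
        public IEnumerable<WeatherForecast> Get() =>
            Enumerable.Range(1, 5).Select(index => new WeatherForecast
            {
                Date = DateOnly.FromDateTime(DateTime.Now.AddDays(index)),
                TemperatureC = Random.Shared.Next(-20, 55),
                Summary = "Mild"
            });
    }
    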

    Of course, in a more complex project, you’ll find more issues, with different severity.

    Further readings

    This article first appeared on Code4IT 🐧

    In this article, I assumed you know the basics of Docker. If not, or if you want to brush up your knowledge about the basics of Docker, here’s an article for you.

    🔗 First steps with Docker: download and run MongoDB locally | Code4IT

    All in all, remember that having clean code is only one of the concerns you should care about when writing code. But what should you really focus on?

    🔗 Code opinion: performance or clean code?

    Wrapping up

    SonarQube is a tool, not the solution to your problems.

    Just like with Code Coverage, having no SonarQube issues does not mean that your code is future-proof and maintainable.

    Maybe the single line of code or the single class has no issues; however, the codebase as a whole may still be a mess, preventing you from applying changes easily.

    I hope you enjoyed this article! Let’s keep in touch on LinkedIn, Twitter or BlueSky! 🤜🤛

    Happy coding!

    🐧




