Tag: testing

  • a (better?) alternative to Testing Diamond and Testing Pyramid | Code4IT



    The Testing Pyramid focuses on Unit Tests; the Testing Diamond focuses on Integration Tests; and what about the Testing Vial?


    Testing is crucial in any kind of application, even in applications that are meant to be thrown away: with a proper testing strategy, you can ensure that the application does exactly what you expect it to do. Instead of running it over and over again to fix the broken parts, a few targeted tests will speed up the development of that throwaway project.

    The most common testing strategies are the Testing Pyramid and the Testing Diamond. They are both useful, but I think that they are not perfect.

    That’s why I came up with a new testing strategy that I called “the Testing Vial”: in this article, I’m going to introduce it and explain the general idea.

    Since it’s a new idea, I’d like to hear your honest feedback. Don’t be afraid to tell me that this is a terrible idea – let’s work on it together!

    The Testing Pyramid: the focus is on Unit Tests

    The Testing Pyramid is a testing strategy where the focus is on Unit Tests.

    Unit Tests are usually easy to write (whether they actually are depends on how messy your codebase is!) and fast to execute, so they provide immediate feedback.

    The testing pyramid

    So, the focus here is on technical details: if you create a class named Foo, most probably you will have its sibling class FooTests. And the same goes for each (public) method in it.

    Yes, I know: unit tests can operate across several methods of the same class, as long as it is considered a “unit”. But let’s be real: most of the time, we write tests against each single public method. And, even worse, we are overusing mocks.

    Problems with the Testing Pyramid

    The Testing Pyramid relies too much on unit tests.

    But Unit Tests are not perfect:

    1. They often rely too much on mocks: tests might not reflect the real execution of the system;
    2. They are too closely coupled with the related class and method: if you add a single parameter to one method, you will most probably have to update dozens of test methods;
    3. They do not reflect the business operations: you might end up creating the strongest code ever, but missing the point of the whole business meaning. Maybe, because you focused too much on technical details and forgot to evaluate all the acceptance criteria.

    Now, suppose that you have to change something big, like

    • add OpenTelemetry support on the whole system;
    • replace SQL with MongoDB;
    • refactor a component, replacing a huge internal switch-case block with the Chain Of Responsibility pattern.

    Well, in this case, you will have to update or delete a lot of Unit Tests. And, still, you might not be sure you haven’t added regressions. This is one of the consequences of focusing too much on Unit Tests.

    The Testing Diamond: the focus is on Integration Tests

    The Testing Diamond emphasises the importance of Integration Tests.

    The Testing Diamond

    So, when using this testing strategy, you are expected to write many more Integration Tests and way fewer Unit Tests.

    In my opinion, this is a better approach to testing: this way, you can focus more on the business value and less on the technical details.

    Using this approach, you may refactor huge parts of the system without worrying too much about regressions and huge changes in tests: in fact, Integration Tests will give you a sort of safety net, ensuring that the system still works as expected.

    So, if I had to choose, I’d go with the Testing Diamond: implementations may change, while the overall application functionality will still be preserved.

    Problems with the Testing Diamond

    Depending on the size of the application and on how it is structured, Integration Tests may be time-consuming and hard to spin up.

    Maybe you have a gigantic monolith that takes minutes to start up: in this case, running Integration Tests may take literally hours.

    Also, there is a problem with data: if you are going to write data to a database (or an external resource), how can you ensure that the operation does not insert duplicate or dirty data?

    For this problem, there are several solutions, such as:

    • using Ephemeral Environments specifically to run these tests;
    • using TestContainers to create a sandbox environment (see the sketch after this list);
    • replacing some specific operations (like saving data on the DB or sending HTTP requests) by using a separate, standalone service (as we learned in this article, where we customised a WebApplicationFactory).

    Those approaches may not be easy to implement, I know.
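
    Still, to give you an idea of the TestContainers option, here is a minimal sketch using Testcontainers for .NET with the Testcontainers.PostgreSql package (the image tag and database name are illustrative assumptions):

    using Testcontainers.PostgreSql;

    // Minimal sketch: spin up a throwaway PostgreSQL instance for one test run.
    PostgreSqlContainer postgres = new PostgreSqlBuilder()
        .WithImage("postgres:16-alpine")
        .WithDatabase("testdb")
        .Build();

    await postgres.StartAsync();

    // Point your repositories (or DbContext) at the container's connection string.
    string connectionString = postgres.GetConnectionString();

    // ... run your Integration Tests against connectionString ...

    // The container, and all the data it holds, is destroyed afterwards.
    await postgres.DisposeAsync();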

    Also, Integration Tests alone may not cover all the edge cases, making your application less robust.

    Introducing the Testing Vial: the focus is on business entities

    Did you notice? Both the Testing Pyramid and the Testing Diamond focus on the technical aspects of the tests, and not on the meaning for the business.

    I think that is a wrong approach, and that we should really shift our focus from the number of tests of a specific type (more Unit Tests or more Integration Tests?) to the organisational value they bring: that’s why I came up with the idea of the Testing Vial.

    The Testing Vial

    You can imagine tests to be organised into sealed vials.

    In each vial, you have

    • E2E tests: to cover at least the most critical flows;
    • Integration tests: to cover at least all the business requirements as they are described in the Acceptance Criteria of your User Stories (or, in general, all the Happy Paths and the most common Unhappy Paths);
    • Unit Tests: to cover at least all the edge cases that are hard to reproduce with Integration Tests.

    So, using the Testing Vial, you don’t have to worry about the number of tests of a specific type: you only care that, regardless of their number, tests are focused on Business concerns.

    But, ok, nothing fancy: it’s just common sense.

    To make the Testing Vial effective, there are two more parts to add.

    Architectural tests, to validate that the system design hasn’t changed

    After you have all these tests, in whatever number is actually helpful for you, you also write some Architectural Tests, for example by using ArchUnit for Java or ArchUnit.NET for .NET applications.

    This way, other than focusing on the business value (regardless of this goal being achieved by Unit Tests or Integration Tests), you also validate that the system hasn’t changed in unexpected ways. For example, you might have added a dependency between modules, making the system more coupled and less maintainable.
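
    As a minimal sketch of such a rule, using ArchUnit.NET with its NUnit integration package (the MyShop.* namespaces are made up for illustration):

    using ArchUnitNET.Domain;
    using ArchUnitNET.Fluent;
    using ArchUnitNET.Loader;
    using NUnit.Framework;
    using static ArchUnitNET.Fluent.ArchRuleDefinition;

    public class ArchitectureTests
    {
        // Load the assemblies to analyse once, and reuse them across all the rules.
        private static readonly Architecture Architecture =
            new ArchLoader().LoadAssemblies(typeof(Program).Assembly).Build();

        [Test]
        public void CartModule_ShouldNotDependOn_UserModule()
        {
            IArchRule rule = Types().That().ResideInNamespace("MyShop.Cart")
                .Should().NotDependOnAny(Types().That().ResideInNamespace("MyShop.User"));

            rule.Check(Architecture);
        }
    }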

    Generally speaking, Architectural Tests should be written in the initial phases of a project, so that, by running them from time to time, they can ensure that nothing has changed.

    With Architectural Tests, which act as a cap for the vial, you ensure that the tests are complete and valid, and that the architectural maintainability of the system is preserved.

    But that’s not enough!

    Categories, to identify and isolate areas of your application

    All of this makes sense if you add one or more tags to your tests: these tags should identify the business entity the test is referring to. For example, in an e-shop application, you should add categories about “Product”, “Cart”, “User”, and so on. This is way easier if you already do DDD, clearly.

    In C#, you can categorise tests by using TestCategory if you use MSTest, Category if you use NUnit, or Trait if you use xUnit.

    [TestCategory("Cart")]
    [TestCategory("User")]
    public async Task User_Should_DoSomethingWithCart(){}
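
    As a quick sketch, the xUnit equivalent uses Trait with a key/value pair:

    [Trait("Category", "Cart")]
    [Trait("Category", "User")]
    [Fact]
    public async Task User_Should_DoSomethingWithCart(){}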
    

    Ok, but why?

    Well, categorising tests allows you to keep track of the impacts of a change more broadly. Especially at the beginning, you might notice that too many tests are marked with too many categories: this might be a sign of a poor design, and you might want to work to improve it.

    Also, by grouping by category, you can have a complete view of everything that happens in the system about that specific Entity, regardless of the type of test.

    Did you know that in Visual Studio you can group tests by Category (called Traits), so that you can see and execute all the tests related to a specific Category?

    Tests grouped by Category in Visual Studio 2022
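
    You can do the same from the command line: as a sketch, dotnet test can filter by category (the filter key is TestCategory for MSTest and NUnit, and the Trait name, Category in this example, for xUnit):

    dotnet test --filter "TestCategory=Cart"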

    By using Code Coverage tools wisely – executing them in combination with tests of a specific category – you can identify all the parts of the application that are affected by such tests. This is especially true if you have many Integration Tests: just by looking at the executed methods, you can have a glimpse of all the parts touched by that test. This simple trick can also help you out with reorganising the application (maybe by moving from monolith to modular monolith).

    Finally, having tests tagged allows you to build a catalogue of all the Entities and their dependencies. And, in case you need to work on a specific activity that changes something about an Entity, you can perform better analyses and find potentially overlooked impacts.

    Further readings

    There is a lot of content about tests and testing strategies, so here are some of them.

    End-to-End Testing vs Integration Testing | Testim

    This article first appeared on Code4IT 🐧

    In this article I described how I prefer the Testing Diamond over the Testing Pyramid.

    Testing Pyramid vs Testing Diamond (and how they affect Code Coverage) | Code4IT

    Then, I clearly changed my mind and came up with the idea of the Testing Vial.

    Wrapping up

    With the Testing Vial approach, the focus shifts from technical to business concerns: you don't really care whether you've written more Unit Tests or more Integration Tests; you only care that you have covered everything the business requires. And, by using Architectural Tests and Test Categories, you can make sure that you are not introducing unwanted dependencies between modules, improving maintainability.

    Vials are meant to be standalone: by accessing the content of a vial, you can see everything related to it: its dependencies, its architecture, main use cases and edge cases.

    Yzma

    Clearly, the same test may appear in multiple vials, but that’s not a problem.

    I came up with this idea recently, so I want to hear from you what you think about it. I’m sure there are areas of improvement!

    Let me know!

    I hope you enjoyed this article! Let’s keep in touch on LinkedIn, Twitter or BlueSky! 🤜🤛

    Happy coding!

    🐧





    Source link

  • injecting and testing the current time with TimeProvider and FakeTimeProvider | Code4IT




    Things that depend on concrete external resources are difficult to handle when testing. Think of the file system: for tests to work properly, you have to ensure that the file system is structured exactly as you expect it to be.

    A similar issue occurs with dates: if you create tests based on the current date, they will fail the next time you run them.

    In short, you should find a way to abstract these functionalities, to make them usable in the tests.

    In this article, we are going to focus on the handling of dates: we’ll learn what the TimeProvider class is, how to use it and how to mock it.

    The old way for handling dates: a custom interface

    Back in the day, the most straightforward approach to adding an abstraction around date management was to manually create an interface, or an abstract class, to wrap the access to the current date:

    public interface IDateTimeWrapper
    {
      DateTime GetCurrentDate();
    }
    

    Then, a concrete class implemented the interface, returning the UTC date:

    public class DateTimeWrapper : IDateTimeWrapper
    {
      public DateTime GetCurrentDate() => DateTime.UtcNow;
    }
    

    A similar approach is to have an abstract class instead:

    public abstract class DateTimeWrapper
    {
      public virtual DateTime GetCurrentDate() => DateTime.UtcNow;
    }
    

    Easy: you then register an instance of it in the DI engine, and you are good to go.

    The only problem? You have to do it for every project you are working on. Quite a waste of time!

    How to use TimeProvider in a .NET application to get the current date

    Along with .NET 8, the .NET team released an abstract class named TimeProvider. This abstract class, beyond providing an abstraction for local time, exposes methods for working with high-precision timestamps and TimeZones.
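
    For example, here is a small sketch of the timestamp-based API, which lets you measure elapsed time without touching the wall clock (DoSomeWork is a hypothetical method, added just to have something to measure):

    long start = TimeProvider.System.GetTimestamp();

    DoSomeWork(); // hypothetical operation to measure

    TimeSpan elapsed = TimeProvider.System.GetElapsedTime(start);
    Console.WriteLine($"Operation took {elapsed.TotalMilliseconds} ms");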

    It’s important to notice that dates are returned as DateTimeOffset, and not as DateTime instances.

    TimeProvider comes out-of-the-box with a .NET Console application, accessible as a singleton:

    static void Main(string[] args)
    {
      Console.WriteLine("Hello, World!");
      
      DateTimeOffset utc = TimeProvider.System.GetUtcNow();
      Console.WriteLine(utc);
    
      DateTimeOffset local = TimeProvider.System.GetLocalNow();
      Console.WriteLine(local);
    }
    

    If, instead, you need to use Dependency Injection (for example, in .NET APIs), you have to register it as a singleton, like this:

    builder.Services.AddSingleton(TimeProvider.System);
    

    So that you can use it like this:

    public class SummerVacationCalendar
    {
      private readonly TimeProvider _timeProvider;
    
      public SummerVacationCalendar(TimeProvider timeProvider)
     {
        this._timeProvider = timeProvider;
     }
    
      public bool ItsVacationTime()
     {
        var today = _timeProvider.GetLocalNow();
        return today.Month == 8;
     }
    }
    

    How to test TimeProvider with FakeTimeProvider

    Now, how can we test the ItsVacationTime method of the SummerVacationCalendar class?

    We can use the Microsoft.Extensions.TimeProvider.Testing NuGet library, also provided by Microsoft, which contains a FakeTimeProvider class that acts as a stub for the TimeProvider abstract class:

    TimeProvider.Testing NuGet package

    By using the FakeTimeProvider class, you can set the current UTC and Local time, as well as configure the other options provided by TimeProvider.

    Here’s an example:

    [Fact]
    public void WhenItsAugust_ShouldReturnTrue()
    {
     // Arrange
      var fakeTime = new FakeTimeProvider();
      fakeTime.SetUtcNow(new DateTimeOffset(2025, 8, 14, 22, 24, 12, TimeSpan.Zero));
      var sut = new SummerVacationCalendar(fakeTime);
    
     // Act
      var isVacation = sut.ItsVacationTime();
    
     // Assert
      Assert.True(isVacation);
    }
    
    [Fact]
    public void WhenItsNotAugust_ShouldReturnFalse()
    {
     // Arrange
      var fakeTime = new FakeTimeProvider();
      fakeTime.SetUtcNow(new DateTimeOffset(2025, 3, 14, 22, 24, 12, TimeSpan.Zero));
      var sut = new SummerVacationCalendar(fakeTime);
    
     // Act
      var isVacation = sut.ItsVacationTime();
    
     // Assert
      Assert.False(isVacation);
    }
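
    FakeTimeProvider also exposes an Advance method to move the fake clock forward during a test, which is handy for time-dependent logic. A small sketch (note that FakeTimeProvider uses UTC as its local time zone by default):

    [Fact]
    public void WhenClockCrossesIntoAugust_ShouldReturnTrue()
    {
      var fakeTime = new FakeTimeProvider();
      fakeTime.SetUtcNow(new DateTimeOffset(2025, 7, 31, 23, 59, 0, TimeSpan.Zero));
      var sut = new SummerVacationCalendar(fakeTime);

      // Move the fake clock forward by two minutes: it's now August 1st.
      fakeTime.Advance(TimeSpan.FromMinutes(2));

      Assert.True(sut.ItsVacationTime());
    }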
    

    Further readings

    Actually, TimeProvider provides way more functionality than just returning the UTC and the Local time.

    Maybe we’ll explore them in the future. But for now, do you know how the DateTimeKind enumeration impacts the way you create new DateTimes?

    🔗 C# tip: create correct DateTimes with DateTimeKind | Code4IT

    This article first appeared on Code4IT 🐧

    However, always remember to test the code not against the actual time but against static values. And, if for some reason you cannot add TimeProvider to your classes, there are other, less intrusive strategies that you can use (and that work for other types of dependencies as well, like the file system):

    🔗 3 ways to inject DateTime and test it | Code4IT

    Wrapping up

    I hope you enjoyed this article! Let’s keep in touch on LinkedIn, Twitter or BlueSky! 🤜🤛

    Happy coding!

    🐧





    Source link

  • Try Cross-browser Testing! (For Free!)



    TLDR: You can cross-browser test your website in real browsers for free without installing anything by using Browserling. It runs all browsers (Chrome, Firefox, Safari, Edge, etc) on all systems so you don’t need to download them or keep your own browser stack.

    What Is Cross-browser Testing?

    Cross-browser testing means checking how a website looks and works in different browsers. Every browser, like Chrome, Firefox, Edge, or Safari, shows websites a little differently. Sometimes your site looks fine in one but breaks in another. Cross-browser testing makes sure your site works for everyone.

    Why Do I Need It?

    Because your visitors don’t all use the same browser. Some people are on Chrome, others on Safari or Firefox, and some still use Internet Explorer. If your site only works on one browser, you’ll lose visitors. Cross-browser testing helps you catch bugs before your users do.

    Can I Test Mobile Browsers Too?

    Yes, cross-browser testing tools like Browserling let you check both desktop and mobile versions. You can quickly switch between screen sizes and devices to see how your site looks on phones, tablets, and desktops.

    Do I Have to Install Different Browsers?

    Nope! That’s the best part. You don’t need to clutter your computer with ten different browsers. Instead, cross-browser testing runs them in the cloud. You just pick the browser you want and test right from your own browser window.

    Is It Safe?

    Totally. You’re not installing anything shady, and you’re not downloading random browsers from sketchy websites. Everything runs on Browserling’s secure servers.

    What If I Just Want to Test a Quick Fix?

    That’s exactly what the free version is for. Got a CSS bug? A weird layout issue? Just load up the browser you need, test your page, and see how it behaves.

    How Is This Different From Developer Tools?

    Dev tools are built into browsers and help you inspect your site, but they can’t show you how your site looks in browsers you don’t have. Cross-browser testing lets you actually run your site in those missing browsers and see the real deal.

    Is It Good for Developers and Testers?

    For sure. Developers use cross-browser testing to make websites look right across platforms. QA testers use it to make sure new releases don’t break old browsers. Even hobbyists can use it to make their personal sites look better.

    Is It Free?

    Yes, Browserling has a free plan with limited time per session. If you need more testing power, they also have paid options. But for quick checks, the free plan is usually enough.

    What Is Browserling?

    Browserling is a free cloud-based cross-browser testing service. It lets you open real browsers on real machines and test your sites instantly. The latest geo-browsing feature allows you to route your tests through 20+ countries to see how websites behave across regions or to bypass sites that try to block datacenter traffic. Plus, the latest infrastructure update added admin rights, WSL with Ubuntu/Kali, build tools, custom resolutions, and more.

    Who Uses Browserling?

    Browserling is trusted by developers, IT teams, schools, banks, and even governments. Anyone who needs websites to “just work” across browsers uses Browserling. Millions of people test their sites on it every month.

    Happy testing!



    Source link


  • Why reaching 100% Code Coverage must NOT be your testing goal (with examples in C#) | Code4IT



    Average teams aim at 100% Code Coverage just to reach the number. Great teams don’t. Why?


    Code Coverage is a valuable metric in software development, especially when it comes to testing. It provides insights into how much of your codebase is exercised by your test suite.

    However, we must recognize that Code Coverage alone should not be the ultimate goal of your testing strategy. It has some known limitations, and 100% Code Coverage does not guarantee that your code is bug-free.

    In this article, we’ll explore why Code Coverage matters, its limitations, and how to balance achieving high coverage and effective testing. We’ll use C# to demonstrate when Code Coverage works well and how you can cheat on the result.

    What Is Code Coverage?

    Code Coverage measures the percentage of code lines, branches, or statements executed during testing. It helps answer questions like:

    • How much of my code is tested?
    • Are there any untested paths or dead code?
    • Which parts of the application need additional test coverage?

    In C#, tools like Cobertura, dotCover, and Visual Studio’s built-in coverage analysis provide Code Coverage reports.

    You may be tempted to think that the higher the coverage, the better the quality of your tests. However, we will soon demonstrate why this assumption is misleading.

    Why Code Coverage Matters

    Clearly, if you write valuable tests, Code Coverage is a great ally.

    A high value of Code Coverage helps you with:

    1. Risk mitigation: High Code Coverage reduces the risk of undiscovered defects. If a piece of code isn't covered by tests, any bugs it contains will go undetected.
    2. Preventing regressions: code is destined to evolve over time. If you ensure that most of your code is covered by tests, whenever you add more code you will discover which parts of the existing system are impacted by your changes. If you update the production code and no test fails, it might be a bad sign: you probably need to cover the code you are modifying with more tests.
    3. Quality assurance: Code Coverage ensures that critical parts of your application are tested thoroughly. Good tests focus on the functional aspects of the code (what) rather than on the technical aspects (how). A good test suite is a safety net against regressions.
    4. Guidance for Testing Efforts: Code Coverage highlights areas that need more attention. It guides developers in writing additional tests where necessary.

    The Limitations of Code Coverage

    While Code Coverage is valuable, it has limitations:

    1. False Sense of Security: Achieving 100% coverage doesn’t guarantee bug-free software. It’s possible to have well-covered code that still contains subtle defects. This is especially true when mocking dependencies.
    2. Focus on Lines, Not Behavior: Code Coverage doesn't consider the quality of tests. It doesn't guarantee that the tests cover all possible scenarios.
    3. Ignored Edge Cases: Some code paths (exception handling, rare conditions) are complex to cover. High coverage doesn’t necessarily mean thorough testing.

    3 Practical reasons why Code Coverage percentage can be misleading

    For the sake of this article, I’ve created a dummy .NET API project with the typical three layers: controller, service, and repository.

    It contains a Controller with two endpoints:

    [ApiController]
    [Route("[controller]")]
    public class UniversalWeatherForecastController : ControllerBase
    {
        private readonly IWeatherService _weatherService;
    
        public UniversalWeatherForecastController(IWeatherService weatherService)
        {
            _weatherService = weatherService;
        }
    
        [HttpGet]
        public IEnumerable<Weather> Get(int locationId)
        {
            var forecast = _weatherService.ForecastsByLocation(locationId);
            return forecast.ToList();
        }
    
        [HttpGet("minByPlanet")]
        public Weather GetMinByPlanet(Planet planet)
        {
            return _weatherService.MinTemperatureForPlanet(planet);
        }
    }
    

    The Controller uses the Service:

    public class WeatherService : IWeatherService
    {
        private readonly IWeatherForecastRepository _repository;
    
        public WeatherService(IWeatherForecastRepository repository)
        {
            _repository = repository;
        }
    
        public IEnumerable<Weather> ForecastsByLocation(int locationId)
        {
            ArgumentOutOfRangeException.ThrowIfLessThanOrEqual(locationId, 0);
    
            Location? searchedLocation = _repository.GetLocationById(locationId);
    
            if (searchedLocation == null)
                throw new LocationNotFoundException(locationId);
    
            return searchedLocation.WeatherForecasts;
        }
    
        public Weather MinTemperatureForPlanet(Planet planet)
        {
            var allCitiesInPlanet = _repository.GetLocationsByPlanet(planet);
            int minTemperature = int.MaxValue;
            Weather minWeather = null;
            foreach (var city in allCitiesInPlanet)
            {
                int temperature =
                    city.WeatherForecasts.MinBy(c => c.TemperatureC).TemperatureC;
    
                if (temperature < minTemperature)
                {
                    minTemperature = temperature;
                    minWeather = city.WeatherForecasts.MinBy(c => c.TemperatureC);
                }
            }
            return minWeather;
        }
    }
    

    Finally, the Service calls the Repository, omitted for brevity (it’s just a bunch of items in an in-memory List).

    I then created an NUnit test project to write the unit tests, focusing on the WeatherService:

    
    public class WeatherServiceTests
    {
        private readonly Mock<IWeatherForecastRepository> _mockRepository;
        private WeatherService _sut;
    
        public WeatherServiceTests() => _mockRepository = new Mock<IWeatherForecastRepository>();
    
        [SetUp]
        public void Setup() => _sut = new WeatherService(_mockRepository.Object);
    
        [TearDown]
        public void Teardown() =>_mockRepository.Reset();
    
        // Tests
    
    }
    

    This class covers two cases, both related to the ForecastsByLocation method of the Service.

    Case 1: when the location exists in the repository, this method must return the related info.

    [Test]
    public void ForecastByLocation_Should_ReturnForecast_When_LocationExists()
    {
        //Arrange
        var forecast = new List<Weather>
            {
                new Weather{
                    Date = DateOnly.FromDateTime(DateTime.Now.AddDays(1)),
                    Summary = "sunny",
                    TemperatureC = 30
                }
            };
    
        var location = new Location
        {
            Id = 1,
            WeatherForecasts = forecast
        };
    
        _mockRepository.Setup(r => r.GetLocationById(1)).Returns(location);
    
        //Act
        var resultForecast = _sut.ForecastsByLocation(1);
    
        //Assert
        CollectionAssert.AreEquivalent(forecast, resultForecast);
    }
    

    Case 2: when the location does not exist in the repository, the method should throw a LocationNotFoundException.

    [Test]
    public void ForecastByLocation_Should_Throw_When_LocationDoesNotExists()
    {
        //Arrange
        _mockRepository.Setup(r => r.GetLocationById(1)).Returns<Location?>(null);
    
        //Act + Assert
        Assert.Catch<LocationNotFoundException>(() => _sut.ForecastsByLocation(1));
    }
    

    We then can run the Code Coverage report and see the result:

    Initial Code Coverage

    Tests cover 16% of lines and 25% of branches, as shown in the report displayed above.

    Delving into the details of the WeatherService class, we can see that we have reached 100% Code Coverage for the ForecastsByLocation method.

    Code Coverage Details for the Service

    Can we assume that the method is bug-free? Not at all!

    Not all cases may be covered by tests

    Let’s review the method under test.

    public IEnumerable<Weather> ForecastsByLocation(int locationId)
    {
        ArgumentOutOfRangeException.ThrowIfLessThanOrEqual(locationId, 0);
    
        Location? searchedLocation = _repository.GetLocationById(locationId);
    
        if (searchedLocation == null)
            throw new LocationNotFoundException(locationId);
    
        return searchedLocation.WeatherForecasts;
    }
    

    Our tests only covered two cases:

    • the location exists;
    • the location does not exist.

    However, these tests do not cover the following cases:

    • the locationId is less than zero;
    • the locationId is exactly zero (are we sure that 0 is an invalid locationId?)
    • the _repository throws an exception (right now, that exception is not handled);
    • the location does exist, but it has no weather forecast info; is this a valid result? Or should we have thrown another custom exception?

    So, well, we have 100% Code Coverage for this method, yet we have plenty of uncovered cases.
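
    As a quick sketch, covering just the first two of those cases would require tests like these (the names are illustrative; both rely on the ThrowIfLessThanOrEqual guard we saw in the method):

    [Test]
    public void ForecastByLocation_Should_Throw_When_LocationIdIsNegative()
    {
        Assert.Catch<ArgumentOutOfRangeException>(() => _sut.ForecastsByLocation(-1));
    }

    [Test]
    public void ForecastByLocation_Should_Throw_When_LocationIdIsZero()
    {
        Assert.Catch<ArgumentOutOfRangeException>(() => _sut.ForecastsByLocation(0));
    }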

    You can cheat on the result by adding pointless tests

    There’s a simple way to have high Code Coverage without worrying about the quality of the tests: calling the methods and ignoring the result.

    To demonstrate it, we can create one single test method to reach 100% coverage for the Repository, without even knowing what it actually does:

    public class WeatherForecastRepositoryTests
    {
        private readonly WeatherForecastRepository _sut;
    
        public WeatherForecastRepositoryTests() =>
            _sut = new WeatherForecastRepository();
    
        [Test]
        public void TotallyUselessTest()
        {
            _ = _sut.GetLocationById(1);
            _ = _sut.GetLocationsByPlanet(Planet.Jupiter);
    
            Assert.That(1, Is.EqualTo(1));
        }
    }
    

    Here we are: we have reached 53% of total Code Coverage by adding one single test, which does not provide any value!

    We reached 53% Code Coverage without adding useful methods

    As you can see, in fact, the WeatherForecastRepository has now reached 100% Code Coverage.

    The whole class has 100% Code Coverage, even without useful tests

    Great job! Or is it?

    You can cheat by excluding parts of the code

    In C# there is a handy attribute that you can apply to methods and classes: ExcludeFromCodeCoverage.

    While this attribute can be useful for classes that you cannot test, it can also be misused to inflate the Code Coverage percentage, by applying it to classes and methods you don't want to test (maybe because you are lazy?).

    We can, in fact, add that attribute to every single class like this:

    
    [ApiController]
    [Route("[controller]")]
    [ExcludeFromCodeCoverage]
    public class UniversalWeatherForecastController : ControllerBase
    {
        // omitted
    }
    
    [ExcludeFromCodeCoverage]
    public class WeatherService : IWeatherService
    {
        // omitted
    }
    
    [ExcludeFromCodeCoverage]
    public class WeatherForecastRepository : IWeatherForecastRepository
    {
        // omitted
    }
    

    You can then add the same attribute to all the other classes – even the Program class! – to reach 100% Code Coverage without writing lots of tests.

    100% Code Coverage, but without any test

    Note: to reach 100%, I had to exclude everything but the tests on the Repository: otherwise, if I had exactly zero methods under test, the final Code Coverage would've been 0.

    Beyond Code Coverage: Effective Testing Strategies

    As we saw, high Code Coverage is not enough. It’s a good starting point, but it must not be the final goal.

    We can, indeed, focus our efforts in different areas:

    1. Test Quality: Prioritize writing meaningful tests over chasing high coverage. Focus on edge cases, boundary values, and scenarios that matter to users.
    2. Exploratory Testing: Manual testing complements automated tests. Exploratory testing uncovers issues that automated tests might miss.
    3. Mutation Testing: Instead of just measuring coverage, consider mutation testing. It introduces artificial defects and checks if tests catch them (see the sketch right after this list).
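
    To make the mutation testing idea concrete, here is a sketch of what a tool like Stryker.NET might do to the MinTemperatureForPlanet method we saw earlier:

    // Original condition in MinTemperatureForPlanet:
    if (temperature < minTemperature)

    // A typical mutant produced by the tool flips the operator:
    if (temperature >= minTemperature)

    // If no test fails, the mutant "survives": the line is covered,
    // but its behaviour is not actually asserted by any test.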

    Finally, my suggestion is to focus on integration tests rather than on unit tests: this testing strategy is called Testing Diamond.

    Further readings

    To generate Code Coverage reports, I used Coverlet, as I explained in this article (which refers to Visual Studio 2019, but the steps are still valid with newer versions).

    🔗 How to view Code Coverage with Coverlet and Visual Studio | Code4IT

    In my opinion, we should not focus all our efforts on Unit Tests. On the contrary, we should write more Integration Tests to ensure that the functionality, as a whole, works correctly.

    This way of defining tests is called Testing Diamond, and I explained it here:

    🔗 Testing Pyramid vs Testing Diamond (and how they affect Code Coverage)

    This article first appeared on Code4IT 🐧

    Finally, I talked about Code Coverage on YouTube as a guest on the VisualStudio Toolbox channel. Check it out here!

    https://www.youtube.com/watch?v=R80G3LJ6ZWc

    Wrapping up

    Code Coverage is a useful metric but should not be the end goal. Aim for a balance: maintain good coverage while ensuring effective testing. Remember that quality matters more than mere numbers. Happy testing! 🚀

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧





    Source link

  • Getting started with Load testing with K6 on Windows 11 | Code4IT



    Can your system withstand heavy loads? You can answer this question by running Load Tests. Maybe, using K6 as a free tool.


    Understanding how your system reacts to incoming network traffic is crucial to determining whether it’s stable, able to meet the expected SLO, and if the underlying infrastructure and architecture are fine.

    How can we simulate many incoming requests? How can we harvest the results of our API calls?

    In this article, we will learn how to use K6 to run load tests and display the final result locally in Windows 11.

    This article will be the foundation of future content, in which I’ll explore more topics related to load testing, performance tips, and more.

    What is Load Testing?

    Load testing simulates real-world usage conditions to ensure the software can handle high traffic without compromising performance or user experience.

    The importance of load testing lies in its ability to identify bottlenecks and weak points in the system that could lead to slow response times, errors, or crashes when under stress.

    By conducting load testing, developers can make necessary optimizations and improvements, ensuring the software is robust, reliable, and scalable. It's an essential step in delivering a quality product that meets user expectations and maintains business continuity during peak usage times. If you think about it, a system unable to handle the incoming traffic may entirely or partially fail, leading to user dissatisfaction, loss of revenue, and damage to the company's reputation.

    Ideally, you should plan to have automatic load tests in place in your Continuous Delivery pipelines, or, at least, ensure that you run Load tests in your production environment now and then. You then want to compare the test results with the previous ones to ensure that you haven’t introduced bottlenecks in the last releases.

    The demo project

    For the sake of this article, I created a simple .NET API project: it exposes just one endpoint, /randombook, which returns info about a random book stored in an in-memory Entity Framework DB context.

    int requestCount = 0;
    int concurrentExecutions = 0;
    object _lock = new();
    app.MapGet("/randombook", async (CancellationToken ct) =>
    {
        Book? thisBook = default;
        var delayMs = Random.Shared.Next(10, 10000);
        try
        {
            lock(_lock)
            {
                requestCount++;
                concurrentExecutions++;
                app.Logger.LogInformation("Request {Count}. Concurrent Executions {Executions}. Delay: {DelayMs}ms", requestCount, concurrentExecutions, delayMs);
            }
            using(ApiContext context = new ApiContext())
            {
                await Task.Delay(delayMs);
                if (ct.IsCancellationRequested)
                {
                    app.Logger.LogWarning("Cancellation requested");
                    throw new OperationCanceledException();
                }
                var allbooks = await context.Books.ToArrayAsync(ct);
                thisBook = Random.Shared.GetItems(allbooks, 1).First();
            }
        }
        catch (Exception ex)
        {
            app.Logger.LogError(ex, "An error occurred");
            return Results.Problem(ex.Message);
        }
        finally
        {
            lock(_lock)
            {
                concurrentExecutions--;
            }
        }
        return TypedResults.Ok(thisBook);
    });
    

    There are some details that I want to highlight before moving on with the demo.

    As you can see, I added a random delay to simulate a random RTT (round-trip time) for accessing the database:

    var delayMs = Random.Shared.Next(10, 10000);
    // omit
    await Task.Delay(delayMs);
    

    I then added a thread-safe counter to keep track of the active operations. I increase the value when the request begins, and decrease it when the request completes. The log message is defined in the lock section to avoid concurrency issues.

    lock (_lock)
    {
        requestCount++;
        concurrentExecutions++;
    
        app.Logger.LogInformation("Request {Count}. Concurrent Executions {Executions}. Delay: {DelayMs}ms",
            requestCount,
            concurrentExecutions,
            delayMs
     );
    }
    
    // and then
    
    lock (_lock)
    {
        concurrentExecutions--;
    }
    

    Of course, it’s not a perfect solution: it just fits my need for this article.

    Install and configure K6 on Windows 11

    With K6, you can run Load Tests by defining the endpoint to call, the number of concurrent virtual users, the test duration, and some other configurations.

    It’s a free tool, and you can install it using Winget:

    winget install k6 --source winget
    

    You can ensure that you have installed it correctly by opening a Bash (and not a PowerShell) and executing the following command:

    k6 --version

    Note: You can actually use PowerShell, but you have to modify some system keys to make K6 recognizable as a command.

    The --version flag prints the version installed and the ID of the latest Git commit belonging to the installed package. For example, you will see k6.exe v0.50.0 (commit/f18209a5e3, go1.21.8, windows/amd64).

    Now, we can initialize the tool. Open a Bash and run the following command:

    k6 new

    This command generates a script.js file, which you will need to configure in order to set up the Load Testing configurations.

    Here’s the scaffolded file (I removed the comments that refer to parts we are not going to cover in this article):

    import http from "k6/http"
    import { sleep } from "k6"
    
    export const options = {
      // A number specifying the number of VUs to run concurrently.
      vus: 10, // A string specifying the total duration of the test run.
      duration: "30s",
    }
    
    export default function () {
      http.get("https://test.k6.io")
      sleep(1)
    }
    

    Let’s analyze the main parts:

    • vus: 10: VUs are the Virtual Users: they simulate the concurrent users that send requests to the endpoint.
    • duration: '30s': this value represents the duration of the whole test run;
    • http.get('https://test.k6.io');: it’s the main function. We are going to call the specified endpoint and keep track of the responses, metrics, timings, and so on;
    • sleep(1): it’s the sleep time between each iteration.

    To run it, you need to call:

    k6 run script.js

    Understanding Virtual Users (VUs) in K6

    VUs, Iterations, Sleep time… how do they work together?

    I updated the script.js file to clarify how K6 works, and how it affects the API calls.

    The new version of the file is this:

    import http from "k6/http"
    import { sleep } from "k6"
    
    export const options = {
      vus: 1,
      duration: "30s",
    }
    
    export default function () {
      http.get("https://localhost:7261/randombook")
      sleep(1)
    }
    

    We are saying “Run the load testing for 30 seconds. I want only ONE execution to exist at a time. After each execution, sleep for 1 second”.

    Make sure to run the API project, and then run k6 run script.js.

    Let’s see what happens:

    1. K6 starts, and immediately calls the API.
    2. On the API, we can see the first incoming call. K6 waits for the response, sleeps for 1 second, and then sends the next request.

    By having a look at the logs printed from the application, we can see that we had no more than one concurrent request:

    Logs from 1 VU

    From the result screen, we can see that the test ran for 30 seconds (plus up to another 30 seconds of graceful stop) and that the max number of VUs was set to 1.

    Load Tests results with 1 VU

    Here, you can find the same results as plain text, making it easier to follow.

    execution: local
    script: script.js
    output: -
    
    scenarios: (100.00%) 1 scenario, 1 max VUs, 1m0s max duration (incl. graceful stop):
     * default: 1 looping VUs for 30s (gracefulStop: 30s)
    
    
    data_received..................: 2.8 kB 77 B/s
    data_sent......................: 867 B   24 B/s
    http_req_blocked...............: avg=20.62ms   min=0s       med=0s     max=123.77ms p(90)=61.88ms   p(95)=92.83ms
    http_req_connecting............: avg=316.64µs min=0s       med=0s     max=1.89ms   p(90)=949.95µs p(95)=1.42ms
    http_req_duration..............: avg=4.92s     min=125.65ms med=5.37s max=9.27s     p(90)=8.04s     p(95)=8.66s
    { expected_response:true }...: avg=4.92s     min=125.65ms med=5.37s max=9.27s     p(90)=8.04s     p(95)=8.66s
    http_req_failed................: 0.00%   ✓ 0         ✗ 6
    http_req_receiving.............: avg=1.12ms   min=0s       med=0s     max=6.76ms   p(90)=3.38ms   p(95)=5.07ms
    http_req_sending...............: avg=721.55µs min=0s       med=0s     max=4.32ms   p(90)=2.16ms   p(95)=3.24ms
    http_req_tls_handshaking.......: avg=13.52ms   min=0s       med=0s     max=81.12ms   p(90)=40.56ms   p(95)=60.84ms
    http_req_waiting...............: avg=4.92s     min=125.65ms med=5.37s max=9.27s     p(90)=8.03s     p(95)=8.65s
    http_reqs......................: 6       0.167939/s
    iteration_duration.............: avg=5.95s     min=1.13s     med=6.38s max=10.29s   p(90)=9.11s     p(95)=9.7s
    iterations.....................: 6       0.167939/s
    vus............................: 1       min=1       max=1
    vus_max........................: 1       min=1       max=1
    
    
    running (0m35.7s), 0/1 VUs, 6 complete and 0 interrupted iterations
    default ✓ [======================================] 1 VUs   30s
    

    Now, let me run the same script but update the VUs. We are going to run this configuration:

    export const options = {
      vus: 3,
      duration: "30s",
    }
    

    The result is similar, but this time we performed 16 requests instead of 6. That's because, as you can see, there were up to 3 concurrent users accessing our API.

    Logs from 3 VU

    The total duration was still 30 seconds. However, we managed to serve 3x the users without impacting performance and without returning errors.

    Load Tests results with 3 VU

    Customize Load Testing properties

    We have just covered the surface of what K6 can do. Of course, there are many resources in the official K6 documentation, so I won’t repeat everything here.

    There are some parts, though, that I want to showcase here (so that you can deep dive into the ones you need).

    HTTP verbs

    In the previous examples, we used the get HTTP method. As you can imagine, there are other methods that you can use.

    Each HTTP method has a corresponding JavaScript function (see the sketch after this list). For example, we have

    • get() for the GET method
    • post() for the POST method
    • put() for the PUT method
    • del() for the DELETE method.
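
    As a sketch, a POST call looks like this (the /books endpoint and the payload are made up for illustration):

    import http from "k6/http"

    export default function () {
      const payload = JSON.stringify({ title: "A new book" })
      const params = { headers: { "Content-Type": "application/json" } }

      // post() takes the URL, the request body, and an optional params object
      http.post("https://localhost:7261/books", payload, params)
    }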

    Stages

    You can create stages to define the different parts of the execution:

    export const options = {
      stages: [
        { duration: "30s", target: 20 },
        { duration: "1m30s", target: 10 },
        { duration: "20s", target: 0 },
      ],
    }
    

    With the previous example, I defined three stages:

    1. the first one lasts 30 seconds, and brings the load to 20 VUs;
    2. next, during the following 90 seconds, the number of VUs decreases to 10;
    3. finally, in the last 20 seconds, it ramps down to 0 VUs, slowly shutting down the remaining calls.

    Load Tests results with complex Stages

    As you can see from the result, the total duration was 2m20s (which corresponds to the sum of the stages), and the maximum number of VUs was 20 (the target defined in the first stage).

    Scenarios

    Scenarios allow you to define the details of requests iteration.

    We always use a scenario, even if we don’t create one: in fact, we use the default scenario that gives us a predetermined time for the gracefulStop value, set to 30 seconds.

    We can define custom scenarios to tweak the different parameters used to define how the test should act.

    A scenario is nothing but a JSON element where you define arguments like duration, VUs, and so on.

    By defining a scenario, you can also decide to run tests on the same endpoint but using different behaviours: you can create a scenario for a gradual growth of users, one for an immediate peak, and so on.
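
    As a sketch (scenario names, executors, and numbers are arbitrary), two scenarios hitting the same endpoint with different behaviours could look like this:

    export const options = {
      scenarios: {
        gradual_growth: {
          executor: "ramping-vus",
          startVUs: 0,
          stages: [{ duration: "1m", target: 20 }],
        },
        immediate_peak: {
          executor: "constant-vus",
          vus: 50,
          duration: "30s",
          startTime: "1m", // start once the first scenario is done
        },
      },
    }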

    A glimpse to the final report

    Now, we can focus on the meaning of the data returned by the tool.

    Let’s use again the image we saw after running the script with the complex stages:

    Load Tests results with complex Stages

    We can see lots of values whose names are mostly self-explanatory.

    We can see, for example, data_received and data_sent, which tell you the size of the data sent and received.

    We have information about the duration and response of HTTP requests (http_req_duration, http_req_sending, http_reqs), as well as information about the several phases of an HTTP connection, like http_req_tls_handshaking.

    We finally have information about the configurations set in K6, such as iterations, vus, and vus_max.

    You can see the average value, the min and max, and some percentiles for most of the values.

    Wrapping up

    K6 is a nice tool for getting started with load testing.

    You can find more examples in the official documentation. I suggest taking some time to explore all the possibilities provided by K6.

    This article first appeared on Code4IT 🐧

    As I said before, this is just the beginning: in future articles, we will use K6 to understand how some technical choices impact the performance of the whole application.

    I hope you enjoyed this article! Let’s keep in touch on LinkedIn or Twitter! 🤜🤛

    Happy coding!





    Source link