Author: post Bina

  • Is Random.GetItems the best way to get random items in C# 12? | Code4IT


    You have a collection of items. You want to retrieve N elements randomly. Which alternatives do we have?


    One of the most common operations when dealing with collections of items is to retrieve a subset of these elements taken randomly.

    Before .NET 8, the most common way to retrieve random items was to order the collection using a random value and then take the first N items of the now sorted collection.

    From .NET 8 on, we have a new method in the Random class: GetItems.

    So, should we use this method or stick to the previous version? Are there other alternatives?

    For the sake of this article, I created a simple record type, CustomRecord, which just contains two properties.

    public record CustomRecord(int Id, string Name);
    

    I then stored a collection of such elements in an array. This article’s final goal is to find the best way to retrieve a random subset of such items. Spoiler alert: it all depends on your definition of best!
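For reference, here is a minimal sketch of the setup assumed by the snippets below; the names Items and TotalItemsToBeRetrieved mirror the ones used in the benchmark class shown later in this article.

    // Assumed setup: a small array of CustomRecord instances.
    CustomRecord[] Items = Enumerable.Range(0, 100)
        .Select(i => new CustomRecord(i, $"Name {i}"))
        .ToArray();

    int TotalItemsToBeRetrieved = 10; // illustrative value; the benchmark picks it randomly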

    Method #1: get random items with Random.GetItems

    Starting from .NET 8, released in 2023, we now have a new method belonging to the Random class: GetItems.

    There are three overloads:

    public T[] GetItems<T>(T[] choices, int length);
    public T[] GetItems<T>(ReadOnlySpan<T> choices, int length);
    public void GetItems<T>(ReadOnlySpan<T> choices, Span<T> destination);
    

We will focus on the first overload, which accepts an array of items (choices) as input and returns an array of size length.

    We can use it as such:

    CustomRecord[] randomItems = Random.Shared.GetItems(Items, TotalItemsToBeRetrieved);
    

    Simple, neat, efficient. Or is it?

    Method #2: get the first N items from a shuffled copy of the initial array

Another approach is to shuffle the whole initial array using Random.Shuffle. It takes an array as input and shuffles its items in place.

Random.Shared.Shuffle(Items);
CustomRecord[] randomItems = Items.Take(TotalItemsToBeRetrieved).ToArray();
    

    If you need to preserve the initial order of the items, you should create a copy of the initial array and shuffle only the copy. You can do this by using this syntax:

    CustomRecord[] copy = [.. Items];
    

    If you just need some random items and don’t care about the initial array, you can shuffle it without making a copy.

    Once we’ve shuffled the array, we can pick the first N items to get a subset of random elements.

    Method #3: order by Guid, then take N elements

    Before .NET 8, one of the most used approaches was to order the whole collection by a random value, usually a newly generated Guid, and then take the first N items.

    var randomItems = Items
        .OrderBy(_ => Guid.NewGuid()) // THIS!
        .Take(TotalItemsToBeRetrieved)
        .ToArray();
    

This approach works fine but has the disadvantage that it instantiates a new Guid for every item in the collection, which is an expensive operation in terms of memory.

    Method #4: order by Number, then take N elements

Another approach is to generate a random number to use as a discriminator to order the collection; then, again, we take the first N items.

    var randomItems = Items
        .OrderBy(_ => Random.Shared.Next()) // THIS!
        .Take(TotalItemsToBeRetrieved)
        .ToArray();
    

    This approach is slightly better because generating a random integer is way faster than generating a new Guid.

    Benchmarks of the different operations

    It’s time to compare the approaches.

    I used BenchmarkDotNet to generate the reports and ChartBenchmark to represent the results visually.

    Let’s see how I structured the benchmark.

    [MemoryDiagnoser]
    public class RandomItemsBenchmark
    {
        [Params(100, 10_000, 1_000_000)]
        public int Size;
    
        private CustomRecord[] Items;
        private int TotalItemsToBeRetrieved;
        private CustomRecord[] Copy;
    
        [IterationSetup]
        public void Setup()
        {
            var ids = Enumerable.Range(0, Size).ToArray();
            Items = ids.Select(i => new CustomRecord(i, $"Name {i}")).ToArray();
            Copy = [.. Items];
    
            TotalItemsToBeRetrieved = Random.Shared.Next(Size);
        }
    
        [Benchmark(Baseline = true)]
        public void WithRandomGetItems()
        {
            CustomRecord[] randomItems = Random.Shared.GetItems(Items, TotalItemsToBeRetrieved);
            _ = randomItems.Length;
        }
    
        [Benchmark]
        public void WithRandomGuid()
        {
            CustomRecord[] randomItems = Items
                .OrderBy(_ => Guid.NewGuid())
                .Take(TotalItemsToBeRetrieved)
                .ToArray();
    
            _ = randomItems.Length;
        }
    
        [Benchmark]
        public void WithRandomNumber()
        {
            CustomRecord[] randomItems = Items
                .OrderBy(_ => Random.Shared.Next())
                .Take(TotalItemsToBeRetrieved)
                .ToArray();
    
            _ = randomItems.Length;
        }
    
        [Benchmark]
        public void WithShuffle()
        {
            CustomRecord[] copy = [.. Items];
    
            Random.Shared.Shuffle(copy);
            CustomRecord[] randomItems = copy.Take(TotalItemsToBeRetrieved).ToArray();
    
            _ = randomItems.Length;
        }
    
        [Benchmark]
        public void WithShuffleNoCopy()
        {
            Random.Shared.Shuffle(Copy);
            CustomRecord[] randomItems = Copy.Take(TotalItemsToBeRetrieved).ToArray();
    
            _ = randomItems.Length;
        }
    }
    

    We are going to run the benchmarks on arrays with different sizes. We will start with a smaller array with 100 items and move to a bigger one with one million items.

    We generate the initial array of CustomRecord instances for every iteration and store it in the Items property. Then, we randomly choose the number of items to get from the Items array and store it in the TotalItemsToBeRetrieved property.

    We also generate a copy of the initial array at every iteration; this way, we can run Random.Shuffle without modifying the original array.

    Finally, we define the body of the benchmarks using the implementations we saw before.

    Notice: I marked the benchmark for the GetItems method as a baseline, using [Benchmark(Baseline = true)]. This way, we can easily see the results ratio for the other methods compared to this specific method.

    When we run the benchmark, we can see this final result (for simplicity, I removed the Error, StdDev, and Median columns):

Method              Size       Mean            Ratio  Allocated    Alloc Ratio
WithRandomGetItems  100        6.442 us        1.00   424 B        1.00
WithRandomGuid      100        39.481 us       6.64   3576 B       8.43
WithRandomNumber    100        22.219 us       3.67   2256 B       5.32
WithShuffle         100        7.038 us        1.16   1464 B       3.45
WithShuffleNoCopy   100        4.254 us        0.73   624 B        1.47
WithRandomGetItems  10000      58.401 us       1.00   5152 B       1.00
WithRandomGuid      10000      2,369.693 us    65.73  305072 B     59.21
WithRandomNumber    10000      1,828.325 us    56.47  217680 B     42.25
WithShuffle         10000      180.978 us      4.74   84312 B      16.36
WithShuffleNoCopy   10000      156.607 us      4.41   3472 B       0.67
WithRandomGetItems  1000000    15,069.781 us   1.00   4391616 B    1.00
WithRandomGuid      1000000    319,088.446 us  42.79  29434720 B   6.70
WithRandomNumber    1000000    166,111.193 us  22.90  21512408 B   4.90
WithShuffle         1000000    48,533.527 us   6.44   11575304 B   2.64
WithShuffleNoCopy   1000000    37,166.068 us   4.57   6881080 B    1.57

    By looking at the numbers, we can notice that:

• GetItems is the most performant method, both in execution time and in memory allocation (the only exception is the no-copy shuffle, which beats it on the smallest array);
• using Guid.NewGuid is the worst approach: depending on the array size, it’s roughly 7 to 66 times slower than GetItems, and it allocates up to 59 times more memory;
• sorting by a random number is slightly better, but still far behind: it’s roughly 4 to 56 times slower than GetItems, and it allocates up to 42 times more memory;
• shuffling the array in place and taking the first N elements ranges from slightly faster than GetItems (on the smallest array) to about 4.6 times slower; if you also have to preserve the original array, you lose some time and memory performance because you must allocate the cloned array.

    Here’s the chart with the performance values. Notice that, for better readability, I used a Log10 scale.

    Results comparison for all executions

    If we move our focus to the array with one million items, we can better understand the impact of choosing one approach instead of the other. Notice that here I used a linear scale since values are on the same magnitude order.

    The purple line represents the memory allocation in bytes.

    Results comparison for one-million-items array

    So, should we use GetItems all over the place? Well, no! Let me tell you why.

    The problem with Random.GetItems: repeated elements

There’s a huge problem with the GetItems method: it samples with replacement, so the returned array can contain duplicate items. If you need to get N items without duplicates, GetItems is not the right choice.

    Here’s how you can demonstrate it.

    First, create an array of 100 distinct items. Then, using Random.Shared.GetItems, retrieve 100 items.

    The final array will have 100 items; the array may or may not contain duplicates.

    int[] source = Enumerable.Range(0, 100).ToArray();
    
    StringBuilder sb = new StringBuilder();
    
    for (int i = 1; i <= 200; i++)
    {
        HashSet<int> ints = Random.Shared.GetItems(source, 100).ToHashSet();
        sb.AppendLine($"run-{i}, {ints.Count}");
    }
    
    var finalCsv = sb.ToString();
    

To check the number of distinct elements, I put the resulting array in a HashSet<int>. Since we retrieve exactly 100 items, the final size of the HashSet also tells us the percentage of unique values.

    If the HashSet size is exactly 100, it means that GetItems retrieved each element from the original array exactly once.

    For simplicity, I formatted the result in CSV format so that I could generate plots with it.

    Unique values percentage returned by GetItems

    As you can see, on average, we have 65% of unique items and 35% of duplicate items.

    Further readings

    I used the Enumerable.Range method to generate the initial items.

I wrote an article explaining how to use it, what to consider when using it, and more.

    🔗 LINQ’s Enumerable.Range to generate a sequence of consecutive numbers | Code4IT

    This article first appeared on Code4IT 🐧

    Wrapping up

You should not replace your current way of getting random items from an array with Random.GetItems. Well, unless you are okay with having duplicates.

    If you need unique values, you should rely on other methods, such as Random.Shuffle.
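Another option, not covered above, is a partial Fisher–Yates shuffle: instead of shuffling the whole array, you only fix up the first N positions, which is enough to get N distinct items. A minimal sketch, assuming the same Items array as before:

    static T[] TakeRandomDistinct<T>(T[] source, int count)
    {
        // Work on a copy so the original array is preserved.
        T[] copy = [.. source];

        // Partial Fisher-Yates: after i iterations, the first i slots hold
        // a uniform random sample of the source, without duplicates.
        for (int i = 0; i < count; i++)
        {
            int j = Random.Shared.Next(i, copy.Length);
            (copy[i], copy[j]) = (copy[j], copy[i]);
        }

        return copy[..count];
    }

    CustomRecord[] uniqueRandomItems = TakeRandomDistinct(Items, TotalItemsToBeRetrieved);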

    All in all, always remember to validate your assumptions by running experiments on the methods you are not sure you can trust!

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧





    Source link

  • No Visuals, No Time, No Problem: Launching OXI Instruments / ONE MKII in 2 Weeks



    Two weeks. No 3D Visuals. No panic.
    We built the OXI ONE MKII website using nothing but structure and type. All to meet the deadline for the product launch and its debut in Berlin.

    The Challenge

    Creating a website for the launch of a new flagship product is already a high-stakes task; doing it in under 14 days, with no flawless renders, raises the bar even higher. When OXI Instruments approached us, the ONE MKII was entering its final development stage. The product was set to premiere in Berlin, and the website had to be live by that time, no extensions, no room for delay. At the same time, there was no finalized imagery, no video, and no product renders ready for use.

We had to:

    • Build a bold, functional website without relying on visual assets
    • Reflect the character and philosophy of the ONE MKII — modular, live, expressive
    • Craft a structure that would be clear to musicians and intuitive across devices
    • Work in parallel with the OXI team, adjusting to changes and updates in real time

    This wasn’t just about speed. It was about designing clarity under pressure, with a strict editorial mindset, where every word, margin, and interaction had to work harder than usual. These are the kinds of things you’d never guess as an outside observer or a potential customer. But constraints like these are truly a test of resilience.

    The Approach

    If you’ve seen other websites we’ve launched with various teams, you’ll notice they often include 3D graphics or other rich visual layers. This project, however, was a rare exception.

    It was crucial to make the right call early on and to hit expectations spot-on during the concept stage. A couple of wrong turns wouldn’t be fatal, but too many missteps could easily lead to missing the deadline and delivering an underwhelming result.

    We focused on typography, photography, and rhythm. Fortunately, we were able to shape the art direction for the photos in parallel with the design process. Big thanks to Candace Janee (OXI project manager) who coordinated between me, the photographers, and everyone involved to quickly arrange compositions, lighting setups, and other details for the shoot.

Another layer of complexity was planning the broader interface and future platform in tandem with this launch. While we were only releasing two core pages at this stage, we knew the site would eventually evolve into a full eCommerce platform. Every design choice had to consider the long game, from homepage and support pages to product detail layouts and checkout flows. That also meant thinking ahead about how systems like Webflow, WordPress, WooCommerce, and email automation would integrate down the line.

    Typography

With no graphics to lean on, typography had to carry more weight than usual, not just in terms of legibility, but in how it communicates tone, energy, and brand attitude. We opted for a bold, editorial rhythm. Headlines drive momentum across the layout, while smaller supporting text helps guide the eye without clutter.

We selected both typefaces from Wei Huang, a type designer from Australia: Work Sans for headlines and body copy, and Fragment Mono for supporting labels and detailed descriptions. The two fonts complement each other well and are completely free to use, which allowed us to rely on Google Fonts without worrying about file formats or load sizes.

    CMS System

    Even though we were only launching two pages initially, the CMS was built with a full content ecosystem in mind. Product specs, updates, videos, and future campaigns all had a place in the structure. Instead of hardcoding static blocks, we built flexible content types that could evolve alongside the product line.

    The idea was simple: avoid rework later. The CMS wasn’t just a backend; it was the foundation of a scalable platform. Whether we were thinking of Webflow’s CMS collections or potential integrations with WordPress and WooCommerce, the goal was to create a system that was clean, extensible, and future-ready.

    Sketches. Early explorations.

    I really enjoy the concept phase. It’s the moment where different directions emerge and key patterns begin to form. Whether it’s alignment, a unique sense of ornamentation, asymmetry, or something else entirely. This stage is where the visual language starts to take shape.

    Here’s a look at some of the early concepts we explored. The OXI website could’ve turned out very differently.

    We settled on a dark version of the design partly due to the founder’s preference, and partly because the brand’s core colors (which were off-limits for changes) worked well with it. Additionally, cutting out the device from photos made it easier to integrate visuals into the layout and mask any imperfections.

    Rhythm & Layout

When planning the rhythm and design, it’s important not to go overboard with creativity. As designers, we often want to add that “wow” factor, but sometimes the business just doesn’t need it.

    The target audience, people in the music world, already get their visual overload during performances by their favorite artists. But when they’re shopping for a new device, they’re not looking for spectacle. They want to see the product. The details. The specs. Everything that matters.

    All of it needs to be delivered clearly and accessibly. We chose the simplest approach: alternating between center-aligned and left-aligned sections, giving us the flexibility to structure the layout intuitively. Photography helps break up the technical content, and icons quickly draw attention to key features. People don’t read, they scan. We designed with that in mind.

    A few shots highlighting some of my favorite sections.

    Result

The results were genuinely rewarding. The team felt a boost in motivation, and the brand’s audience and fans immediately noticed the shift, highlighting how the update pushed OXI in a more professional direction.

    According to my information, the pre-orders for the device sold out in less than a week. It’s always a great feeling when you’re proud of the outcome, the team is happy, and the audience responds positively. That’s what matters most.

    Looking Ahead / Part Two

    This was just the beginning. The second part of the project (a full eCommerce experience) is currently in the works. The core will expand, but the principles will remain the same.

I hope you’ll find the full relaunch of OXI Instruments just as exciting. Stay tuned for updates.





    Source link

  • [ENG] Improving Your Code Coverage | Microsoft Visual Studio YouTube channel






    Source link

  • The Quick Guide to Dijkstra's Algorithm






    Source link

  • Building a Physics-Based Character Controller with the Help of AI



Creating a third-person character controller involves more than just moving an object around a 3D scene. Realistic movement, grounded physics, responsive jumping, and animation blending are essential for a polished feel. This article explores how these elements can be assembled — not through traditional manual coding, but via AI-assisted development using Bolt.new, a browser-based tool that generates web code from natural language prompts, backed by the Claude 3.7 Sonnet and Claude 3.5 Sonnet LLMs. It provides a lightweight environment where developers can focus on describing functionality rather than writing boilerplate.

    For this character controller, Bolt handled tasks like setting up physics, integrating animations, and managing input systems, making it easier to test ideas and iterate quickly without switching between tools or writing everything from scratch.

    If you’re curious to learn more, check out this article on Codrops, which also explores the platform’s capabilities and showcases another real-world project built entirely with AI.

    The final project is powered by React Three Fiber, Three.js, and Rapier, and showcases how a designer or developer can create complex, interactive 3D experiences by guiding AI — focusing on behavior and structure rather than syntax.

    Step 1: Setting Up Physics with a Capsule and Ground

    The character controller begins with a simple setup: a capsule collider for the player and a ground plane to interact with. Rapier, a fast and lightweight physics engine built in WebAssembly, handles gravity, rigid body dynamics, and collisions. This forms the foundation for player movement and world interaction.

    The capsule shape was chosen for its stability when sliding across surfaces and climbing over small obstacles — a common pattern in real-time games.

    Step 2: Real-Time Tuning with a GUI

    To enable rapid iteration and balance gameplay feel, a visual GUI was introduced (using Leva.js). This panel exposes parameters such as:

    • Player movement speed
    • Jump force
    • Gravity scale
    • Follow camera offset
    • Debug toggles

    By integrating this directly into the experience, developers can tune the controller live without needing to edit or recompile code, speeding up testing and design decisions.
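The article doesn’t show the generated code, but with Leva such a panel boils down to a single hook call. A minimal sketch, with parameter names that are illustrative rather than taken from the project:

    import { useControls } from 'leva'

    function Player() {
      // Each entry becomes a live control in the Leva panel; changing a slider
      // re-renders the component with the new value, no recompilation needed.
      const { speed, jumpForce, gravityScale } = useControls({
        speed: { value: 5, min: 0, max: 20 },
        jumpForce: { value: 8, min: 0, max: 30 },
        gravityScale: { value: 1, min: 0, max: 5 }
      })

      // ...feed speed / jumpForce / gravityScale into the physics update
      return null // placeholder; the real component renders the capsule
    }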

    Step 3: Ground Detection with Raycasting

    A raycast is used to detect whether the player is grounded. This simple yet effective check prevents the character from jumping mid-air or triggering multiple jumps in sequence.

    The logic is executed on every frame, casting a ray downward from the base of the capsule collider. When contact is confirmed, the jump input is enabled. This technique also allows smooth transitions between grounded and falling states in the animation system.
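As a rough sketch of how such a check can look with the Rapier JS API (the playerBody and capsuleHalfHeight references are assumptions, not code from the project):

    // Run once per frame: cast a ray straight down from the capsule's center.
    const origin = playerBody.translation()
    const ray = new RAPIER.Ray(origin, { x: 0, y: -1, z: 0 })

    // Allow a small tolerance below the capsule's base.
    const maxDistance = capsuleHalfHeight + 0.1

    // In a real controller you'd also filter out the player's own collider.
    const hit = world.castRay(ray, maxDistance, true)
    const isGrounded = hit !== null

    if (isGrounded) {
      // jump input enabled; animation can blend to Idle/Run
    }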

    Step 4: Integrating a Rigged Character with Animation States

    The visual character uses a rigged GLB model via Mixamo, with three key animations: Idle, Run, and Fall. These are integrated as follows:

    • The GLB character is attached as a child of the capsule collider
    • The animation state switches dynamically based on velocity and grounded status
    • Transitions are handled via animation blending for a natural feel

    This setup keeps the visuals in sync with physics, while preserving modular control over the physical capsule.

    Step 5: World Building and Asset Integration

    The environment was arranged in Blender, then exported as a single .glb file and imported into the bolt.new project scene. This approach allows for efficient scene composition while keeping asset management simple.

    For web, using .glb keeps geometry and textures bundled together. To maintain performance, it’s recommended to keep textures at 1024×1024 resolution or other square power-of-two sizes (e.g. 256, 512, 2048). This ensures optimal GPU memory usage and faster load times across devices.

    Special thanks to KayLousberg for the low-poly 3D kit used for prototyping.

    Step 6: Cross-Platform Input Support

    The controller was designed to work seamlessly across desktop, mobile, and gamepad platforms — all built using AI-generated logic through Bolt.

    Gamepad support was added using the Gamepad API, allowing players to plug in a controller and play with analog input.

    On desktop, the controller uses standard keyboard input (WASD or arrow keys) and mouse movement for camera control.

    On mobile, AI-generated code enabled an on-screen joystick and jump button, making the game fully touch-compatible.

    All input types control the same physics-driven character, ensuring consistent behavior across devices — whether you’re playing on a laptop, touchscreen, or game controller.

    This cross-platform support was implemented entirely through natural language prompts, showcasing how AI can translate high-level intent into working input systems.
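As an illustration of the gamepad side, here is a minimal polling sketch built on the standard Gamepad API (the input object is a hypothetical shared state, not code from the project):

    function pollGamepad(input) {
      // getGamepads() returns a sparse array; entries are null until a pad connects.
      const pad = navigator.getGamepads()[0]
      if (!pad) return

      // Ignore tiny stick drift around the center.
      const deadzone = 0.15
      input.x = Math.abs(pad.axes[0]) > deadzone ? pad.axes[0] : 0
      input.y = Math.abs(pad.axes[1]) > deadzone ? pad.axes[1] : 0

      // Button 0 is "A" / "Cross" in the standard mapping.
      input.jump = pad.buttons[0].pressed
    }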

    The Role of AI in the Workflow

    What makes this controller unique isn’t the mechanics — it’s the process. Every system was generated by AI through descriptive prompts, allowing the developer to work more like a creative director than a traditional engineer.

    AI handled the boilerplate, the physics setup, the animation switching logic — all based on clear creative goals. This opens new doors for prototyping and interactive design, where iteration speed matters more than syntax.

    This character controller demo includes:

    • Capsule collider with physics
    • Grounded detection via raycast
    • State-driven animation blending
    • GUI controls for tuning
    • Environment interaction with static/dynamic objects
    • Cross-Platform Input Support

    It’s a strong starting point for creating browser-based games, interactive experiences, or prototyping new ideas — all with the help of AI.

    Check out the full game built using this setup as a base: 🎮 Demo Game

    Thanks for following along — have fun building 😊



    Source link

  • Prim's Algorithm: Quick Guide with Examples






    Source link

• IFormattable interface, to define different string formats for the same object | Code4IT



    Even when the internal data is the same, sometimes you can represent it in different ways. Think of the DateTime structure: by using different modifiers, you can represent the same date in different formats.

    DateTime dt = new DateTime(2024, 1, 1, 8, 53, 14);
    
    Console.WriteLine(dt.ToString("yyyy-MM-dddd")); //2024-01-Monday
    Console.WriteLine(dt.ToString("Y")); //January 2024
    

    Same datetime, different formats.

    You can further customise it by adding the CultureInfo:

    System.Globalization.CultureInfo italianCulture = new System.Globalization.CultureInfo("it-IT");
    
    Console.WriteLine(dt.ToString("yyyy-MM-dddd", italianCulture)); //2024-01-lunedì
    Console.WriteLine(dt.ToString("Y", italianCulture)); //gennaio 2024
    

    Now, how can we use this behaviour in our custom classes?

    IFormattable interface for custom ToString definition

    Take this simple POCO class:

    public class Person
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public DateTime BirthDate { get; set; }
    }
    

    We can make this class implement the IFormattable interface so that we can define and use the advanced ToString:

    public class Person : IFormattable
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public DateTime BirthDate { get; set; }
    
        public string ToString(string? format, IFormatProvider? formatProvider)
        {
            // Here, you define how to work with different formats
        }
    }
    

    Now, we can define the different formats. Since I like to keep the available formats close to the main class, I added a nested class that only exposes the names of the formats.

    public class Person : IFormattable
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public DateTime BirthDate { get; set; }
    
        public string ToString(string? format, IFormatProvider? formatProvider)
        {
            // Here, you define how to work with different formats
        }
    
        public static class StringFormats
        {
            public const string FirstAndLastName = "FL";
            public const string Mini = "Mini";
            public const string Full = "Full";
        }
    }
    

    Finally, we can implement the ToString(string? format, IFormatProvider? formatProvider) method, taking care of all the different formats we support (remember to handle the case when the format is not recognised!)

    public string ToString(string? format, IFormatProvider? formatProvider)
    {
        switch (format)
        {
            case StringFormats.FirstAndLastName:
                return string.Format("{0} {1}", FirstName, LastName);
            case StringFormats.Full:
            {
                FormattableString fs = $"{FirstName} {LastName} ({BirthDate:D})";
                return fs.ToString(formatProvider);
            }
            case StringFormats.Mini:
                return $"{FirstName.Substring(0, 1)}.{LastName.Substring(0, 1)}";
            default:
                return this.ToString();
        }
    }
    

    A few things to notice:

    1. I use a switch statement based on the values defined in the StringFormats subclass. If the format is empty or unrecognised, this method returns the default implementation of ToString.
2. You can use whatever technique you prefer to generate the string, such as string interpolation or more complex approaches;
    3. In the StringFormats.Full branch, I stored the string format in a FormattableString instance to apply the input formatProvider to the final result.

    Getting a custom string representation of an object

    We can try the different formatting options now that we have implemented them all.

Look at how the behaviour changes based on the formatting and input culture (hint: venerdì is Italian for Friday).

    Person person = new Person
    {
        FirstName = "Albert",
        LastName = "Einstein",
        BirthDate = new DateTime(1879, 3, 14)
    };
    
    System.Globalization.CultureInfo italianCulture = new System.Globalization.CultureInfo("it-IT");
    
    Console.WriteLine(person.ToString(Person.StringFormats.FirstAndLastName, italianCulture)); //Albert Einstein
    
    Console.WriteLine(person.ToString(Person.StringFormats.Mini, italianCulture)); //A.E
    
    Console.WriteLine(person.ToString(Person.StringFormats.Full, italianCulture)); //Albert Einstein (venerdì 14 marzo 1879)
    
    Console.WriteLine(person.ToString(Person.StringFormats.Full, null)); //Albert Einstein (Friday, March 14, 1879)
    
    Console.WriteLine(person.ToString(Person.StringFormats.Full, CultureInfo.InvariantCulture)); //Albert Einstein (Friday, 14 March 1879)
    
    Console.WriteLine(person.ToString("INVALID FORMAT", CultureInfo.InvariantCulture)); //Scripts.General.IFormattableTest+Person
    
    Console.WriteLine(string.Format("I am {0:Mini}", person)); //I am A.E
    
    Console.WriteLine($"I am not {person:Full}"); //I am not Albert Einstein (Friday, March 14, 1879)
    

    Not only that, but now the result can also depend on the Culture related to the current thread:

CultureInfo germanCulture = new CultureInfo("de-DE"); // assumed definition: not shown in the original snippet, but implied by the German output below
    
    using (new TemporaryThreadCulture(italianCulture))
    {
    {
        Console.WriteLine(person.ToString(Person.StringFormats.Full, CultureInfo.CurrentCulture)); // Albert Einstein (venerdì 14 marzo 1879)
    }
    
    using (new TemporaryThreadCulture(germanCulture))
    {
        Console.WriteLine(person.ToString(Person.StringFormats.Full, CultureInfo.CurrentCulture)); //Albert Einstein (Freitag, 14. März 1879)
    }
    

    (note: TemporaryThreadCulture is a custom class that I explained in a previous article – see below)
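The original class lives in the linked article; as a rough idea, such a helper can be sketched as an IDisposable that swaps the culture and restores it afterwards (my sketch, not necessarily the author’s exact implementation):

    // Hypothetical sketch of a TemporaryThreadCulture-like helper.
    public sealed class TemporaryThreadCulture : IDisposable
    {
        private readonly CultureInfo _previousCulture;

        public TemporaryThreadCulture(CultureInfo culture)
        {
            // Remember the current culture, then replace it.
            _previousCulture = CultureInfo.CurrentCulture;
            CultureInfo.CurrentCulture = culture;
        }

        // Restore the original culture when the using block ends.
        public void Dispose() => CultureInfo.CurrentCulture = _previousCulture;
    }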

    Further readings

    You might be thinking «wow, somebody still uses String.Format? Weird!»

    Well, even though it seems an old-style method to generate strings, it’s still valid, as I explain here:

    🔗How to use String.Format – and why you should care about it | Code4IT

    Also, how did I temporarily change the culture of the thread? Here’s how:
    🔗 C# Tip: How to temporarily change the CurrentCulture | Code4IT

    This article first appeared on Code4IT 🐧

    Wrapping up

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧





    Source link

  • Animated Product Grid Preview with GSAP & Clip-Path



    My (design) partner, Gaetan Ferhah, likes to send me his design and motion experiments throughout the week. It’s always fun to see what he’s working on, and it often sparks ideas for my own projects. One day, he sent over a quick concept for making a product grid feel a bit more creative and interactive. 💬 The idea for this tutorial came from that message.

    We’ll explore a “grid to preview” hover interaction that transforms product cards into a full preview. As with many animations and interactions, there are usually several ways to approach the implementation—ranging in complexity. It can feel intimidating (or almost impossible) to recreate a designer’s vision from scratch. But I’m a huge fan of simplifying wherever possible and leaning on optical illusions (✨ fake it ’til you make it ✨).

    For this tutorial, I knew I wanted to keep things straightforward and recreate the effect of puzzle pieces shifting into place using a combination of clip-path animation and an image overlay.

    Let’s break it down in a few steps:

1. Layout and Overlay (HTML, CSS): set up the initial layout and carefully match the position of the preview overlay to the grid.
2. Build JavaScript structure (JavaScript): create some classes to keep us organised and add some interactivity (event listeners).
3. Clip-Path Creation and Animation (CSS, JS, GSAP): add and animate the clip-path, including some calculations on resize; this forms a key part of the puzzle effect.
4. Moving Product Cards (JS, GSAP): set up animations to move the product cards towards each other on hover.
5. Preview Image Scaling (JS, GSAP): slightly scale down the preview overlay in response to the inward movement of the other elements.
6. Adding Images (HTML, JS, GSAP): enough with the solid colours, let’s add some images and a gallery animation.
7. Debouncing events (JS): debounce the mouse-enter event to prevent excessive triggering and reduce jitter.
8. Final tweaks: cross the t’s and dot the i’s with small clean-ups and improvements.

    Layout and Overlay

    At the foundation of every good tutorial is a solid HTML structure. In this step, we’ll create two key elements: the product grid and the overlay for the preview cards. Since both need a similar layout, we’ll place them inside the same container (.products).

    Our grid will consist of 8 products (4 columns by 2 rows) with a gutter of 5vw. To keep things simple, I’m only adding the corresponding li elements for the products, but not yet adding any other elements. In the HTML, you’ll notice there are two preview containers: one for the left side and one for the right. If you want to see the preview overlays right away, head to the CodePen and set the opacity of .product-preview to 1.

    Why I Opted for Two Containers

    At first, I planned to use just one preview container and move it to the opposite side of the hovered card by updating the grid-column-start. That approach worked fine—until I got to testing.

    When I hovered over a product card on the left and quickly switched to one on the right, I realised the problem: with only one container, I also had just one timeline controlling everything inside it. That made it basically impossible to manage the “in/out” transition between sides smoothly.

    So, I decided to go with two containers—one for the left side and one for the right. This way, I could animate both sides independently and avoid timeline conflicts when switching between them.

    See the Pen
    Untitled by Gwen Bogaert (@gwen-bo)
    on CodePen.

    JavaScript Set-up

    In this step, we’ll add some classes to keep things structured before adding our event listeners and initiating our timelines. To keep things organised, let’s split it into two classes: ProductGrid and ProductPreview.

    ProductGrid will be fairly basic, responsible for handling the split between left and right, and managing top-level event listeners (such as mouseenter and mouseleave on the product cards, and a general resize).

    ProductPreview is where the magic happens. ✨ This is where we’ll control everything that happens once a mouse event is triggered (enter or leave). To pass the ‘active’ product, we’ll define a setProduct method, which, in later steps, will act as the starting point for controlling our GSAP animation(s).

    Splitting Products (Left – Right)

    In the ProductGrid class, we will split all the products into left and right groups. We have 8 products arranged in 4 columns, with each row containing 4 items. We are splitting the product cards into left and right groups based on their column position.

    this.ui.products.filter((_, i) => i % 4 === 2 || i % 4 === 3)

    The logic relies on the modulo or remainder operator. The line above groups the product cards on the right. We use the index (i) to check if it’s in the 3rd (i % 4 === 2) or 4th (i % 4 === 3) position of the row (remember, indexing starts at 0). The remaining products (with i % 4 === 0 or i % 4 === 1) will be grouped on the left.
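Putting both groups together, the split could look like this (the property names productsLeft and productsRight are mine, for illustration):

    // Indices 0 and 1 of each row of four go to the left group,
    // indices 2 and 3 go to the right group.
    this.productsLeft = this.ui.products.filter((_, i) => i % 4 === 0 || i % 4 === 1)
    this.productsRight = this.ui.products.filter((_, i) => i % 4 === 2 || i % 4 === 3)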

    Now that we know which products belong to the left and right sides, we will initiate a ProductPreview for both sides and pass along the products array. This will allow us to define productPreviewRight and productPreviewLeft.

    To finalize this step, we will define event listeners. For each product, we’ll listen for mouseenter and mouseleave events, and either set or unset the active product (both internally and in the corresponding ProductPreview class). Additionally, we’ll add a resize event listener, which is currently unused but will be set up for future use.

    This is where we’re at so far (only changes in JavaScript):

    See the Pen
    Tutorial – step 2 (JavaScript structure) by Gwen Bogaert (@gwen-bo)
    on CodePen.

    Clip-path

    At the base of our effect lies the clip-path property and the ability to animate it with GSAP. If you’re not familiar with using clip-path to clip content, I highly recommend this article by Sarah Soueidan.

    Even though I’ve used clip-path in many of my projects, I often struggle to remember exactly how to define the shape I’m looking for. As before, I’ve once again turned to the wonderful tool Clippy, to get a head start on defining (or exploring) clip-path shapes. For me, it helps demystify which value influences which part of the shape.

    Let’s start with the cross (from Clippy) and modify the points to create a more mathematical-looking cross (✚) instead of the religious version (✟).

    clip-path: polygon(10% 25%, 35% 25%, 35% 0%, 65% 0%, 65% 25%, 90% 25%, 90% 50%, 65% 50%, 65% 100%, 35% 100%, 35% 50%, 10% 50%);

    Feel free to experiment with some of the values, and soon you’ll notice that with small adjustments, we can get much closer to the desired shape! For example, by stretching the horizontal arms completely to the sides (set to 10% and 90% before) and shifting everything more equally towards the center (with a 10% difference from the center — so either 40% or 60%).

    clip-path: polygon(0% 40%, 40% 40%, 40% 0%, 60% 0%, 60% 40%, 100% 40%, 100% 60%, 60% 60%, 60% 100%, 40% 100%, 40% 60%, 0% 60%);

    And bada bing, bada boom! This clip-path almost immediately creates the illusion that our single preview container is split into four parts — exactly the effect we want to achieve! Now, let’s move on to animating the clip-path to get one step closer to our final result:

    Animating Clip-paths

    The concept of animating clip-paths is relatively simple, but there are a few key things to keep in mind to ensure a smooth transition. One important consideration is that it’s best to define an equal number of points for both the start and end shapes.

    The idea is fairly straightforward: we begin with the clipped parts hidden, and by the end of the animation, we want the clip-path to disappear, revealing the entire preview container (by making the arms of the cross so thin that they’re barely visible or not visible at all). This can be achieved easily with a fromTo animation in GSAP (though it’s also supported in CSS animations).
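In its simplest, fixed-width form (before we make the arm widths gutter-aware in the next step), the animation could be sketched like this, starting from the cross polygon above and collapsing every arm onto the 50% lines:

    gsap.fromTo(
      maskedEl, // assumed reference to the element being clipped
      {
        clipPath: 'polygon(0% 40%, 40% 40%, 40% 0%, 60% 0%, 60% 40%, 100% 40%, 100% 60%, 60% 60%, 60% 100%, 40% 100%, 40% 60%, 0% 60%)'
      },
      {
        // Same 12 points, with 40%/60% collapsed to 50%: the arms vanish.
        clipPath: 'polygon(0% 50%, 50% 50%, 50% 0%, 50% 0%, 50% 50%, 100% 50%, 100% 50%, 50% 50%, 50% 100%, 50% 100%, 50% 50%, 0% 50%)',
        duration: 0.6,
        ease: 'power2.inOut'
      }
    )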

    The Catch

    You might think, “That’s it, we’re done!” — but alas, there’s a catch when it comes to using this as our puzzle effect. To make it look realistic, we need to ensure that the shape of the cross aligns with the underlying product grid. And that’s where a bit of JavaScript comes in!

    We need to factor in the gutter of our grid (5vw) to calculate the width of the arms of our cross shape. It could’ve been as simple as adding or subtracting (half!) of the gutter to/from the 50%, but… there’s a catch in the catch!

    We’re not working with a square, but with a rectangle. Since our values are percentages, subtracting 2.5vw (half of the gutter) from the center wouldn’t give us equal-sized arms. This is because there would still be a difference between the x and y dimensions, even when using the same percentage value. So, let’s take a look at how to fix that:

    onResize() {
      const { width, height } = this.container.getBoundingClientRect()
      const vw = window.innerWidth / 100
    
      const armWidthVw = 5
      const armWidthPx = armWidthVw * vw
    
      this.armWidth = {
        x: (armWidthPx / width) * 100,
        y: (armWidthPx / height) * 100
      }
    }

    In the code above (triggered on each resize), we get the width and height of the preview container (which spans 4 product cards — 2 columns and 2 rows). We then calculate what percentage 5vw would be, relative to both the width and height.

    To conclude this step, we would have something like:

    See the Pen
    Tutorial – step 3 (clip path) by Gwen Bogaert (@gwen-bo)
    on CodePen.

    Moving Product Cards

    Another step in the puzzle effect is moving the visible product cards together so they appear to form one piece. This step is fairly simple — we already know how much they need to move (again, gutter divided by 2 = 2.5vw). The only thing we need to figure out is whether a card needs to move up, down, left, or right. And that’s where GSAP comes to the rescue!

    We need to define both the vertical (y) and horizontal (x) movement for each element based on its index in the list. Since we only have 4 items, and they need to move inward, we can check whether the index is odd or even to determine the desired value for the horizontal movement. For vertical movement, we can decide whether it should move to the top or bottom depending on the position (top or bottom).

    In GSAP, many properties (like x, y, scale, etc.) can accept a function instead of a fixed value. When you pass a function, GSAP calls it for each target element individually.

Horizontal (x): cards with an even index (0, 2) shift right by 2.5vw, while the other two move left. Vertical (y): cards with an index lower than 2 (0, 1) sit at the top, so they move down, while the other two move up.

    {
      x: (i) => {
        return i % 2 === 0 ? '2.5vw' : '-2.5vw'
      },
      y: (i) => {
        return i < 2 ? '2.5vw' : '-2.5vw'
      }
    }

    See the Pen
    Tutorial – step 3 (clip path) by Gwen Bogaert (@gwen-bo)
    on CodePen.

    Preview Image (Scaling)

    Cool, we’re slowly getting there! We have our clip-path animating in and out on hover, and the cards are moving inward as well. However, you might notice that the cards and the image no longer have an exact overlap once the cards have been moved. To fix that and make everything more seamless, we’ll apply a slight scale to the preview container.

    This is where a bit of extra calculation comes in, because we want it to scale relative to the gutter. So we take into account the height and width of the container.

    onResize() {
        const { width, height } = this.container.getBoundingClientRect()
        const vw = window.innerWidth / 100
        
        // ...armWidth calculation (see previous step)
    
        const widthInVw = width / vw
        const heightInVw = height / vw
        const shrinkVw = 5
    
        this.scaleFactor = {
          x: (widthInVw - shrinkVw) / widthInVw,
          y: (heightInVw - shrinkVw) / heightInVw
        }
      }

    This calculation determines a scale factor to shrink our preview container inward, matching the cards coming together. First, the rectangle’s width/height (in pixels) is converted into viewport width units (vw) by dividing it by the pixel value of 1vw. Next, the shrink amount (5vw) is subtracted from that width/height. Finally, the result is divided by the original width in vw to calculate the scale factor (which will be slightly below 1). Since we’re working with a rectangle, the scale factor for the x and y axes will be slightly different.

    In the codepen below, you’ll see the puzzle effect coming along nicely on each container. Pink are the product cards (not moving), red and blue are the preview containers.

    See the Pen
    Tutorial – step 4 (moving cards) by Gwen Bogaert (@gwen-bo)
    on CodePen.

    Adding Pictures

    Let’s make our grid a little more fun to look at!

    In this step, we’re going to add the product images to our grid, and the product preview images inside the preview container. Once that’s done, we’ll start our image gallery on hover.

    The HTML changes are relatively simple. We’ll add an image to each product li element and… not do anything with it. We’ll just leave the image as is.

    <li class="product" >
      <img src="./assets/product-1.png" alt="alt" width="1024" height="1536" />
    </li>

    The rest of the magic will happen inside the preview container. Each container will hold the preview images of the products from the other side (those that will be visible). So, the left container will contain the images of the 4 products on the right, and the right container will contain the images of the 4 products on the left. Here’s an example of one of these:

    <div class="product-preview --left">
      <div class="product-preview__images">
        <!-- all detail images -->
        <img data-id="2" src="./assets/product-2.png" alt="product-image" width="1024" height="1536" />
        <img data-id="2" src="./assets/product-2-detail-1.png" alt="product-image" width="1024" height="1536" />
    
        <img data-id="3" src="./assets/product-3.png" alt="product-image" width="1024" height="1536" />
        <img data-id="3" src="./assets/product-3-detail-1.png" alt="product-image" width="1024" height="1536" />
    
        <img data-id="6" src="./assets/product-6.png" alt="product-image" width="1024" height="1024" />
        <img data-id="6" src="./assets/product-6-detail-1.png" alt="product-image" width="1024" height="1024" />
    
        <img data-id="7" src="./assets/product-7.png" alt="product-image" width="1024" height="1536" />
        <img data-id="7" src="./assets/product-7-detail-1.png" alt="product-image" width="1024" height="1536" />
        <!-- end of all detail images -->
      </div>
    
      <div class="product-preview__inside masked-preview">
      </div>
    </div>

Once that’s done, we can initialise things by querying those images in the constructor of ProductPreview, grouping them by their dataset.id. This will allow us to easily access the images later via the data-index attribute that each product has. To sum up, at the end of our animate-in timeline, we can call startPreviewGallery, which will handle our gallery effect.

    startPreviewGallery(id) {
      const images = this.ui.previewImagesPerID[id]
      const timeline = gsap.timeline({ repeat: -1 })
    
      // first image is already visible (do not hide)
      gsap.set([...images].slice(1), { opacity: 0 })
    
      images.forEach((image) => {
        timeline
          .set(images, { opacity: 0 }) // Hide all images
          .set(image, { opacity: 1 }) // Show only this one
          .to(image, { duration: 0, opacity: 1 }, '+=0.5') 
      })
    
      this.galleryTimeline = timeline
    }

    Debouncing

    One thing I’d like to do is debounce hover effects, especially if they are more complex or take longer to complete. To achieve this, we’ll use a simple (and vanilla) JavaScript approach with setTimeout. Each time a hover event is triggered, we’ll set a very short timer that acts as a debouncer, preventing the effect from firing if someone is just “passing by” on their way to the product card on the other side of the grid.

    I ended up using a 100ms “cooldown” before triggering the animation, which helped reduce unnecessary animation starts and minimise jitter when interacting with the cards.

    productMouseEnter(product, preview) {
      // If another timer (aka hover) was running, cancel it
      if (this.hoverDelay) {
        clearTimeout(this.hoverDelay)
        this.hoverDelay = null
      }
    
      // Start a new timer
      this.hoverDelay = setTimeout(() => {
        this.activeProduct = product
        preview.setProduct(product)
        this.hoverDelay = null // clear reference
      }, 100)
    }
    
    productMouseLeave() {
      // If user leaves before debounce completes
      if (this.hoverDelay) {
        clearTimeout(this.hoverDelay)
        this.hoverDelay = null
      }
    
      if (this.activeProduct) {
        const preview = this.getProductSide(this.activeProduct)
        preview.setProduct(null)
        this.activeProduct = null
      }
    }

    Final Tweaks

    I can’t believe we’re almost there! Next up, it’s time to piece everything together and add some small tweaks, like experimenting with easings, etc. The final timeline I ended up with (which plays or reverses depending on mouseenter or mouseleave) is:

    buildTimeline() {
      const { x, y } = this.armWidth
    
      this.timeline = gsap
        .timeline({
          paused: true,
          defaults: {
            ease: 'power2.inOut'
          }
        })
        .addLabel('preview', 0)
        .addLabel('products', 0)
        .fromTo(this.container, { opacity: 0 }, { opacity: 1 }, 'preview')
        .fromTo(this.container, { scale: 1 }, { scaleX: this.scaleFactor.x, scaleY: this.scaleFactor.y, transformOrigin: 'center center' }, 'preview')
        .to(
          this.products,
          {
            opacity: 0,
            x: (i) => {
              return i % 2 === 0 ? '2.5vw' : '-2.5vw'
            },
            y: (i) => {
              return i < 2 ? '2.5vw' : '-2.5vw'
            }
          },
          'products'
        )
        .fromTo(
          this.masked,
          {
            clipPath: `polygon(
          ${50 - x / 2}% 0%,
          ${50 + x / 2}% 0%,
          ${50 + x / 2}% ${50 - y / 2}%,
          100% ${50 - y / 2}%,
          100% ${50 + y / 2}%,
          ${50 + x / 2}% ${50 + y / 2}%,
          ${50 + x / 2}% 100%,
          ${50 - x / 2}% 100%,
          ${50 - x / 2}% ${50 + y / 2}%,
          0% ${50 + y / 2}%,
          0% ${50 - y / 2}%,
          ${50 - x / 2}% ${50 - y / 2}%
        )`
          },
          {
            clipPath: `polygon(
          50% 0%,
          50% 0%,
          50% 50%,
          100% 50%,
          100% 50%,
          50% 50%,
          50% 100%,
          50% 100%,
          50% 50%,
          0% 50%,
          0% 50%,
          50% 50%
          )`
          },
          'preview'
        )
    }

    Final Result

    📝 A quick note on usability & accessibility

    While this interaction may look cool and visually engaging, it’s important to be mindful of usability and accessibility. In its current form, this effect relies quite heavily on motion and hover interactions, which may not be ideal for all users. Here are a few things that should be considered if you’d be planning on implementing a similar effect:

    • Motion sensitivity: Be sure to respect the user’s prefers-reduced-motion setting. You can easily check this with a media query and provide a simplified or static alternative for users who prefer minimal motion.
    • Keyboard navigation: Since this interaction is hover-based, it’s not currently accessible via keyboard. If you’d like to make it more inclusive, consider adding support for focus events and ensuring that all interactive elements can be reached and triggered using a keyboard.

    Think of this as a playful, exploratory layer — not a foundation. Use it thoughtfully, and prioritise accessibility where it counts. 💛

    Acknowledgements

I am aware that this tutorial assumes an ideal scenario of only 8 products, because what happens if you have more? I didn’t test it out myself, but the important part is that the preview containers feel like an exact overlay of the product grid. If more cards are present, you could try ‘mapping’ the coordinates of the preview container to the 8 products that are completely in view. Or… go crazy with your own approach if you have another idea. That’s the beauty of it: there are always many approaches that would lead to the same (visual) outcome. 🪄

    Thank you so much for following along! A big thanks to Codrops for giving me the opportunity to contribute. I’m excited to see what you’ll create when inspired by this tutorial! If you have any questions, feel free to drop me a line!



    Source link

• How to run SonarQube analysis locally with Docker | Code4IT


    The quality of a project can be measured by having a look at how the code is written. SonarQube can help you by running static code analysis and letting you spot the pain points. Let’s learn how to install and run it locally with Docker.


    Code quality is important, and having the right tool can be terribly beneficial for an application’s long-term success.

    Although maintainability problems often come from module separation and cannot be solved by making a single class cleaner, a tool like SonarQube can pave the way to a cleaner codebase.

    In this article, we will learn how to download and install SonarQube Community using Docker. We will see how to configure it and run your very first code analysis on a .NET-based application.

    Scaffold a dummy ASP.NET Core API project

To try it out, you need – of course! – a repository to analyse.

    In this article, I will set up SonarQube to analyse a tiny, dummy ASP.NET Core API project. You are probably already familiar with this API project: it’s the default one created by Visual Studio – the one with the Weather Forecast.

    I chose to use Controllers instead of Minimal APIs so that we could analyse some more code.

    Have a look at the code: you will notice that the default implementation of the WeatherForecastController injects an instance of ILogger, stores it, and then never references it in other places. This sounds like a good maintainability issue that SonarQube should be able to identify.

    To better locate which files SonarQube is creating, I decided to put this project under source control, but only locally. This way, when we run the SonarQube analysis, we will be able to see the files created and modified by SonarQube.

Clearly, the first step is to have SonarQube installed on your machine.

    I’m going to install SonarQube Community Build. It contains almost all the functionalities of SonarQube, and it’s available for free (of course, to have additional functionalities, you have to pick the proper pricing tier).

    🔗 SonarQube Community Build

    SonarQube Community Build can be installed via Docker: this way, SonarQube can run in a containerised environment, regardless of your Operating System.

    To do that, you can run the following command:

    docker run --name sonarqube-community -p 9001:9000 sonarqube:community
    

    This Docker command downloads the latest version of the sonarqube:community Docker Image, and runs it locally, making it available at localhost:9001.

    As briefly explained in an old article, the -p 9001:9000 part of the CLI command means that you are exposing the port 9000 of the “inner” container to the world via the port 9001 of the host.
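One side note: the data inside the container is lost when the container is removed. If you want your projects and analysis history to survive restarts, you can run the container detached and mount the volumes documented for the official SonarQube image (a sketch; adapt the volume names as you prefer):

    docker run -d --name sonarqube-community \
      -p 9001:9000 \
      -v sonarqube_data:/opt/sonarqube/data \
      -v sonarqube_extensions:/opt/sonarqube/extensions \
      -v sonarqube_logs:/opt/sonarqube/logs \
      sonarqube:community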

    Once the command has finished downloading all the dependencies and loading all the resources, you will be able to access SonarQube on localhost:9001.

    You will be asked to log in: the default username is admin, and the password is (again) admin.

    SonarQube login form

    After the first login, you will be asked to change your password.

    Create a SonarQube Project

    It’s time to link SonarQube to your repository.

    To do that, you have to create a so-called Project. Ideally, you may want to integrate SonarQube into your CI pipeline, but having it run locally is fine for trying it out.

    So, on the Projects page, you can create a new project. Click on “Create a local project” and follow the wizard.

    “Create a local project” button

    First, create a new Project by defining the Display name (in my case, code4it-sonarqube-local) and the project key (code4it-sonarqube-local-project-key). The Project Key is used in the command line to execute the code analysis using the rules defined in this project.

    Also, you have to specify the name of the branch that you will be using as a baseline: generally, it’s either “main” or “master”, but it can be anything.

    Create new project Form

    Follow the wizard, choosing some configurations (I suggest you start with the default values), and you’ll end up with a Project ready to be initialised.

    SonarQube wizard: choose analysis method

    Then, you will have to generate a token to run the analysis (I know, it feels like there are too many similar steps. But bear with me; we’re almost ready to run the analysis).

    Generate the Token

    By hitting the “generate” button, you’ll see a new token like this: sqp_fd71f97760c84539b579713f18a07c790432cfe8. Remember to store it somewhere, as you’re going to use it later.

    The last step is to make sure that you have sonarscanner available as a .NET Core Global Tool on your machine.

    Just open a terminal as an administrator and run:

    dotnet tool install --global dotnet-sonarscanner
    
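    To double-check that the scanner is available, you can list the installed global tools:

    dotnet tool list --global
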

    Run the SonarQube analysis on your local repository

    Finally, we are ready to run the first analysis of the code!

    I suggest you commit all your changes so that you’ll see the files generated by SonarQube.

    Open a Terminal, navigate to the root of the Solution, and follow these steps.

    Prepare the SonarQube analysis

    You first have to instruct SonarQube on the configurations to be used for the current analysis.

    The command to run is something like this:

    dotnet sonarscanner begin /k:"<your key here>" /d:sonar.host.url="<your-host-root-url>"  /d:sonar.token="<your-project-token>"
    

    For my specific execution context, using the values you can see in this article, I have to run the command with the following parameters:

    dotnet sonarscanner begin /k:"code4it-sonarqube-local-project-key" /d:sonar.host.url="http://localhost:9001"  /d:sonar.token="sqp_fd71f97760c84539b579713f18a07c790432cfe8"
    

    The flags represent the configurations of SonarQube:

    /k is the Project Key, as defined before: it contains the rules to be used;
    /d:sonar.host.url is the URL that will receive the result of the analysis, allowing SonarQube to aggregate the issues and display them on a UI;
    /d:sonar.token is the Token you created before.

    After the command completes, you’ll see that SonarQube created some files to prepare the code analysis. These files contain all the rules used for the analysis and their related severity.

    SonarQube files generated after initialization

    From now on, SonarQube will be able to run the analysis and understand how to treat each issue.

    Build the solution

    Now you have to build the whole solution, running:

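    dotnet build
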
    You can, of course, choose to run the command specifying the solution file to build.

    Even if it seems trivial, this step is crucial for SonarQube: in fact, it generates some new metadata files that list all the files that have to be taken into account when running the analysis, as well as the path to the output folder:

    Files generated by SonarQube after the build

    Run the actual SonarQube analysis

    Finally, it’s time to run the actual analysis.

    Again, head to the root of the application, and on a terminal run the following command:

    dotnet sonarscanner end /d:sonar.token="<your-token>"
    

    In my case, the full command is

    dotnet sonarscanner end /d:sonar.token="sqp_fd71f97760c84539b579713f18a07c790432cfe8"
    

    The analysis time depends on the size of the project: this simple project took 7 seconds, while a huge project I worked on took almost 2 hours.

    The run time also depends on the amount of new code to be analysed: the very first run is the slowest one, and all the subsequent analyses focus on the latest code, since most of the previous results are cached.

    No new files are created, as the result is directly sent to the SonarQube server.
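
    To recap, the whole local analysis boils down to three commands, run from the root of the solution (using the placeholder values from this article):

    dotnet sonarscanner begin /k:"<your-project-key>" /d:sonar.host.url="http://localhost:9001" /d:sonar.token="<your-project-token>"
    dotnet build
    dotnet sonarscanner end /d:sonar.token="<your-project-token>"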

    The result is now available at localhost!

    Open a browser, navigate to the site at the port you defined before, and get ready to explore the status of the static analysis.

    SonarQube analysis overview

    As I was expecting, the project passed the so-called Quality Gates – the minimum level set to consider a project “good”.

    Yet, as you can see under the “Issues” tab, there are actually two issues. For example, there’s a suggested improvement that recommends removing the _logger field, since it is never used:

    SonarQube issue details

    Of course, in a more complex project, you’ll find more issues, with different severity.

    Further readings

    This article first appeared on Code4IT 🐧

    In this article, I assumed you know the basics of Docker. If not, or if you want to brush up on the basics of Docker, here’s an article for you.

    🔗 First steps with Docker: download and run MongoDB locally | Code4IT

    All in all, remember that having clean code is only one of the concerns you should care about when writing code. But what should you really focus on?

    🔗 Code opinion: performance or clean code?

    Wrapping up

    SonarQube is a tool, not the solution to your problems.

    Just like with Code Coverage, having your code without SonarQube issues does not mean that your code is future-proof and maintainable.

    Maybe the single line of code or the single class has no issues. However, the code may still be a mess, preventing you from applying changes easily.

    I hope you enjoyed this article! Let’s keep in touch on LinkedIn, Twitter or BlueSky! 🤜🤛

    Happy coding!

    🐧





    Source link

  • Why reaching 100% Code Coverage must NOT be your testing goal (with examples in C#) | Code4IT

    Why reaching 100% Code Coverage must NOT be your testing goal (with examples in C#) | Code4IT


    Average teams aim at 100% Code Coverage just to reach the number. Great teams don’t. Why?

    Code Coverage is a valuable metric in software development, especially when it comes to testing. It provides insights into how much of your codebase is exercised by your test suite.

    However, we must recognize that Code Coverage alone should not be the ultimate goal of your testing strategy. It has some known limitations, and 100% Code Coverage does not guarantee that your code is bug-free.

    In this article, we’ll explore why Code Coverage matters, its limitations, and how to balance achieving high coverage and effective testing. We’ll use C# to demonstrate when Code Coverage works well and how you can cheat on the result.

    What Is Code Coverage?

    Code Coverage measures the percentage of code lines, branches, or statements executed during testing. It helps answer questions like:

    • How much of my code is tested?
    • Are there any untested paths or dead code?
    • Which parts of the application need additional test coverage?

    In C#, tools like Coverlet, dotCover, and Visual Studio’s built-in coverage analysis provide Code Coverage reports.
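
    For example, if your test project references the coverlet.collector NuGet package (included by default in the most recent test project templates), you can generate a coverage report directly from the command line:

    dotnet test --collect:"XPlat Code Coverage"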

    You may be tempted to think that the higher the coverage, the better the quality of your tests. However, we will soon demonstrate why this assumption is misleading.

    Why Code Coverage Matters

    Clearly, if you write valuable tests, Code Coverage is a great ally.

    A high value of Code Coverage helps you with:

    1. Risk mitigation: High Code Coverage reduces the risk of undiscovered defects. If a piece of code isn’t covered, any bugs it contains will go undetected by your test suite.
    2. Preventing regressions: code is destined to evolve over time. If you ensure that most of your code is covered by tests, whenever you add more code you will discover which parts of the existing system are impacted by your changes. If you update the production code and no test fails, it might be a bad sign: you probably need to cover the code you are modifying with enough tests.
    3. Quality assurance: Code Coverage ensures that critical parts of your application are tested thoroughly. Good tests focus on the functional aspects of the code (what) rather than on the technical aspects (how). A good test suite is a safety net against regressions.
    4. Guidance for Testing Efforts: Code Coverage highlights areas that need more attention. It guides developers in writing additional tests where necessary.

    The Limitations of Code Coverage

    While Code Coverage is valuable, it has limitations:

    1. False Sense of Security: Achieving 100% coverage doesn’t guarantee bug-free software. It’s possible to have well-covered code that still contains subtle defects. This is especially true when mocking dependencies.
    2. Focus on Lines, Not Behavior: Code Coverage doesn’t consider the quality of tests. It doesn’t guarantee that the tests cover all possible scenarios.
    3. Ignored Edge Cases: Some code paths (exception handling, rare conditions) are complex to cover. High coverage doesn’t necessarily mean thorough testing.
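
    To make the first two points concrete, consider this tiny, hypothetical example (not part of the project used later in this article): the test below executes every line of Divide, reaching 100% coverage, yet it never exercises the one input that makes the method throw.

    public static int Divide(int a, int b) => a / b;
    
    [Test]
    public void Divide_IsFullyCovered_ButNotFullyTested()
    {
        // Every line of Divide is executed: 100% line coverage.
        Assert.That(Divide(10, 2), Is.EqualTo(5));
    
        // Yet Divide(1, 0) would throw a DivideByZeroException:
        // the coverage metric says nothing about this missing case.
    }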

    3 Practical reasons why Code Coverage percentage can be misleading

    For the sake of this article, I’ve created a dummy .NET API project with the typical three layers: controller, service, and repository.

    It contains a Controller with two endpoints:

    [ApiController]
    [Route("[controller]")]
    public class UniversalWeatherForecastController : ControllerBase
    {
        private readonly IWeatherService _weatherService;
    
        public UniversalWeatherForecastController(IWeatherService weatherService)
        {
            _weatherService = weatherService;
        }
    
        [HttpGet]
        public IEnumerable<Weather> Get(int locationId)
        {
            var forecast = _weatherService.ForecastsByLocation(locationId);
            return forecast.ToList();
        }
    
        [HttpGet("minByPlanet")]
        public Weather GetMinByPlanet(Planet planet)
        {
            return _weatherService.MinTemperatureForPlanet(planet);
        }
    }
    

    The Controller uses the Service:

    public class WeatherService : IWeatherService
    {
        private readonly IWeatherForecastRepository _repository;
    
        public WeatherService(IWeatherForecastRepository repository)
        {
            _repository = repository;
        }
    
        public IEnumerable<Weather> ForecastsByLocation(int locationId)
        {
            ArgumentOutOfRangeException.ThrowIfLessThanOrEqual(locationId, 0);
    
            Location? searchedLocation = _repository.GetLocationById(locationId);
    
            if (searchedLocation == null)
                throw new LocationNotFoundException(locationId);
    
            return searchedLocation.WeatherForecasts;
        }
    
        public Weather MinTemperatureForPlanet(Planet planet)
        {
            var allCitiesInPlanet = _repository.GetLocationsByPlanet(planet);
            int minTemperature = int.MaxValue;
            Weather minWeather = null;
            foreach (var city in allCitiesInPlanet)
            {
                int temperature =
                    city.WeatherForecasts.MinBy(c => c.TemperatureC).TemperatureC;
    
                if (temperature < minTemperature)
                {
                    minTemperature = temperature;
                    minWeather = city.WeatherForecasts.MinBy(c => c.TemperatureC);
                }
            }
            return minWeather;
        }
    }
    

    Finally, the Service calls the Repository, omitted for brevity (it’s just a bunch of items in an in-memory List).

    I then created an NUnit test project to generate the unit tests, focusing on the WeatherService:

    
    public class WeatherServiceTests
    {
        private readonly Mock<IWeatherForecastRepository> _mockRepository;
        private WeatherService _sut;
    
        public WeatherServiceTests() => _mockRepository = new Mock<IWeatherForecastRepository>();
    
        [SetUp]
        public void Setup() => _sut = new WeatherService(_mockRepository.Object);
    
        [TearDown]
        public void Teardown() =>_mockRepository.Reset();
    
        // Tests
    
    }
    

    This class covers two cases, both related to the ForecastsByLocation method of the Service.

    Case 1: when the location exists in the repository, this method must return the related info.

    [Test]
    public void ForecastByLocation_Should_ReturnForecast_When_LocationExists()
    {
        //Arrange
        var forecast = new List<Weather>
            {
                new Weather{
                    Date = DateOnly.FromDateTime(DateTime.Now.AddDays(1)),
                    Summary = "sunny",
                    TemperatureC = 30
                }
            };
    
        var location = new Location
        {
            Id = 1,
            WeatherForecasts = forecast
        };
    
        _mockRepository.Setup(r => r.GetLocationById(1)).Returns(location);
    
        //Act
        var resultForecast = _sut.ForecastsByLocation(1);
    
        //Assert
        CollectionAssert.AreEquivalent(forecast, resultForecast);
    }
    

    Case 2: when the location does not exist in the repository, the method should throw a LocationNotFoundException.

    [Test]
    public void ForecastByLocation_Should_Throw_When_LocationDoesNotExists()
    {
        //Arrange
        _mockRepository.Setup(r => r.GetLocationById(1)).Returns<Location?>(null);
    
        //Act + Assert
        Assert.Catch<LocationNotFoundException>(() => _sut.ForecastsByLocation(1));
    }
    

    We can then run the Code Coverage report and see the result:

    Initial Code Coverage

    Tests cover 16% of lines and 25% of branches, as shown in the report displayed above.

    Delving into the details of the WeatherService class, we can see that we have reached 100% Code Coverage for the ForecastsByLocation method.

    Code Coverage Details for the Service

    Can we assume that the method is bug-free? Not at all!

    Not all cases may be covered by tests

    Let’s review the method under test.

    public IEnumerable<Weather> ForecastsByLocation(int locationId)
    {
        ArgumentOutOfRangeException.ThrowIfLessThanOrEqual(locationId, 0);
    
        Location? searchedLocation = _repository.GetLocationById(locationId);
    
        if (searchedLocation == null)
            throw new LocationNotFoundException(locationId);
    
        return searchedLocation.WeatherForecasts;
    }
    

    Our tests only covered two cases:

    • the location exists;
    • the location does not exist.

    However, these tests do not cover the following cases:

    • the locationId is less than zero;
    • the locationId is exactly zero (are we sure that 0 is an invalid locationId?)
    • the _repository throws an exception (right now, that exception is not handled);
    • the location does exist, but it has no weather forecast info; is this a valid result? Or should we have thrown another custom exception?

    So, well, we have 100% Code Coverage for this method, yet we have plenty of uncovered cases.
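
    For example, the first two gaps could be closed with a test like this (a sketch assuming the same NUnit setup shown above; ThrowIfLessThanOrEqual throws an ArgumentOutOfRangeException in both cases):

    [Test]
    public void ForecastByLocation_Should_Throw_When_LocationIdIsNotPositive()
    {
        // Boundary cases the original tests ignored: zero and negative IDs.
        Assert.Catch<ArgumentOutOfRangeException>(() => _sut.ForecastsByLocation(0));
        Assert.Catch<ArgumentOutOfRangeException>(() => _sut.ForecastsByLocation(-4));
    }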

    You can cheat on the result by adding pointless tests

    There’s a simple way to have high Code Coverage without worrying about the quality of the tests: calling the methods and ignoring the result.

    To demonstrate it, we can create one single test method to reach 100% coverage for the Repository, without even knowing what it actually does:

    public class WeatherForecastRepositoryTests
    {
        private readonly WeatherForecastRepository _sut;
    
        public WeatherForecastRepositoryTests() =>
            _sut = new WeatherForecastRepository();
    
        [Test]
        public void TotallyUselessTest()
        {
            _ = _sut.GetLocationById(1);
            _ = _sut.GetLocationsByPlanet(Planet.Jupiter);
    
            Assert.That(1, Is.EqualTo(1));
        }
    }
    

    Here we are: we have reached 53% of total Code Coverage by adding one single test, which does not provide any value!

    We reached 53% Code Coverage without adding useful methods

    As you can see, in fact, the WeatherForecastRepository has now reached 100% Code Coverage.

    The whole class has 100% Code Coverage, even without useful tests

    Great job! Or is it?

    You can cheat by excluding parts of the code

    In C# there is a handy attribute that you can apply to methods and classes: ExcludeFromCodeCoverage.

    While this attribute can be useful for classes that you genuinely cannot test, it can also be abused to inflate the Code Coverage percentage by applying it to classes and methods you simply don’t want to test (maybe because you are lazy?).

    We can, in fact, add that attribute to every single class like this:

    
    [ApiController]
    [Route("[controller]")]
    [ExcludeFromCodeCoverage]
    public class UniversalWeatherForecastController : ControllerBase
    {
        // omitted
    }
    
    [ExcludeFromCodeCoverage]
    public class WeatherService : IWeatherService
    {
        // omitted
    }
    
    [ExcludeFromCodeCoverage]
    public class WeatherForecastRepository : IWeatherForecastRepository
    {
        // omitted
    }
    

    You can then add the same attribute to all the other classes – even the Program class! – to reach 100% Code Coverage without writing lots of tests.

    100% Code Coverage, but without any test

    Note: to reach 100%, I had to exclude everything but the tests on the Repository; otherwise, if I had exactly zero methods under test, the final Code Coverage would’ve been 0.

    Beyond Code Coverage: Effective Testing Strategies

    As we saw, high Code Coverage is not enough. It’s a good starting point, but it must not be the final goal.

    We can, indeed, focus our efforts in different areas:

    1. Test Quality: Prioritize writing meaningful tests over chasing high coverage. Focus on edge cases, boundary values, and scenarios that matter to users.
    2. Exploratory Testing: Manual testing complements automated tests. Exploratory testing uncovers issues that automated tests might miss.
    3. Mutation Testing: Instead of just measuring coverage, consider mutation testing. It introduces artificial defects and checks if tests catch them.

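    If you want to give mutation testing a try in .NET, Stryker.NET is the most popular option. A minimal sketch, assuming you run it from the folder of the test project:

    dotnet tool install --global dotnet-stryker
    dotnet stryker
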
    Finally, my suggestion is to focus on integration tests rather than on unit tests: this testing strategy is called Testing Diamond.

    Further readings

    To generate Code Coverage reports, I used Coverlet, as I explained in this article (which refers to Visual Studio 2019, but the steps are still valid with newer versions).

    🔗 How to view Code Coverage with Coverlet and Visual Studio | Code4IT

    In my opinion, we should not focus all our efforts on Unit Tests. On the contrary, we should write more Integration Tests to ensure that the functionality, as a whole, works correctly.

    This way of defining tests is called Testing Diamond, and I explained it here:

    🔗 Testing Pyramid vs Testing Diamond (and how they affect Code Coverage)

    This article first appeared on Code4IT 🐧

    Finally, I talked about Code Coverage on YouTube as a guest on the Visual Studio Toolbox channel. Check it out here!

    https://www.youtube.com/watch?v=R80G3LJ6ZWc

    Wrapping up

    Code Coverage is a useful metric but should not be the end goal. Aim for a balance: maintain good coverage while ensuring effective testing. Remember that quality matters more than mere numbers. Happy testing! 🚀

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧





    Source link