Author: post Bina

  • Feature Flags 101: A Guide for ASP.NET Core Developers | Code4IT

    Feature Flags are a technique that allows you to control the visibility and functionality of features in your software without changing the code. They enable you to experiment with new features, perform gradual rollouts, and revert changes quickly if needed.


    To turn functionalities on or off in an application, you can use simple if(condition) statements. That would work, of course, but it would not be flexible, and you’d have to scatter those checks all around the application.

    There is another way, though: Feature Flags. Feature Flags allow you to effortlessly enable and disable functionalities, such as Middlewares, HTML components, and API controllers. Using ASP.NET Core, you have Feature Flags almost ready to be used: it’s just a matter of installing one NuGet package and using the correct syntax.

    In this article, we are going to create and consume Feature Flags in an ASP.NET Core application. We will start from the very basics and then see how to use complex, built-in filters. We will consume Feature Flags in a generic C# code, and then we will see how to include them in a Razor application and in ASP.NET Core APIs.

    How to add the Feature Flags functionality on ASP.NET Core applications

    The very first step is to install the Microsoft.FeatureManagement.AspNetCore NuGet package:

    Microsoft.FeatureManagement.AspNetCore NuGet package

    This package contains everything you need to integrate Feature Flags in an ASP.NET application, from reading configurations from the appsettings.json file to the utility methods we will see later in this article.

    Now that we have the package installed, we can integrate it into our application. The first step is to call AddFeatureManagement on the IServiceCollection object available in the Main method:

    var builder = WebApplication.CreateBuilder(args);
    
    builder.Services.AddFeatureManagement();
    

    By default, this method looks for Feature Flags in a configuration section named FeatureManagement.

    If you want to use another name, you can specify it by accessing the Configuration object. For example, if your section name is MyWonderfulFlags, you must use this line instead of the previous one:

    builder.Services.AddFeatureManagement(builder.Configuration.GetSection("MyWonderfulFlags"));
    

    But, for now, let’s stick with the default section name: FeatureManagement.

    Define Feature Flag values in the appsettings file

    As we saw, we have to create a section named FeatureManagement in the appsettings file. This section will contain a collection of keys, each representing a Feature Flag and an associated value.

    For now, let’s say that the value is a simple boolean (we will see an advanced case later!).

    For example, we can define our section like this:

    {
      "FeatureManagement": {
        "Header": true,
        "Footer": true,
        "PrivacyPage": false
      }
    }
    

    Using Feature Flags in generic C# code

    The simplest way to use Feature Flags is by accessing the value directly in the C# code.

    By calling AddFeatureManagement, we have also injected the IFeatureManager interface, which comes in handy to check whether a flag is enabled.

    You can then inject it in a class constructor and reference it:

    private readonly IFeatureManager _featureManager;
    
    public MyClass(IFeatureManager featureManager)
    {
        _featureManager = featureManager;
    }
    
    public async Task DoSomething()
    {
        bool privacyEnabled = await _featureManager.IsEnabledAsync("PrivacyPage");
        if(privacyEnabled)
        {
            // do something specific
        }
    }
    

    This is the simplest way: it looks like nothing more than a plain if statement. Is it?
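
    The same interface works anywhere dependency injection can reach, for example inside a custom middleware, so you can switch a whole cross-cutting behaviour with a flag. Here is a minimal sketch (the BannerHeaderMiddleware name and the X-Feature-Header response header are illustrative, not part of the original example):

    using Microsoft.AspNetCore.Http;
    using Microsoft.FeatureManagement;

    public class BannerHeaderMiddleware
    {
        private readonly RequestDelegate _next;

        public BannerHeaderMiddleware(RequestDelegate next) => _next = next;

        // IFeatureManager is resolved per request via method injection.
        public async Task InvokeAsync(HttpContext context, IFeatureManager featureManager)
        {
            if (await featureManager.IsEnabledAsync("Header"))
            {
                context.Response.Headers["X-Feature-Header"] = "enabled";
            }

            await _next(context);
        }
    }

    // Registered as usual in the Main method:
    // app.UseMiddleware<BannerHeaderMiddleware>();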

    Applying a Feature Flag to a Controller or a Razor Model using the FeatureGate attribute

    When rolling out new versions of your application, you might want to enable or disable an API Controller or a whole Razor Page, depending on the value of a Feature Flag.

    There is a simple way to achieve this result: using the FeatureGate attribute.

    Suppose you want to hide the “Privacy” Razor page depending on its related flag, PrivacyPage. You then have to apply the FeatureGate attribute to the whole Model class (in our case, PrivacyModel), specifying that the flag to watch out for is PrivacyPage:

    [FeatureGate("PrivacyPage")]
    public class PrivacyModel : PageModel
    {
    
        public PrivacyModel()
        {
        }
    
        public void OnGet()
        {
        }
    }
    

    Depending on the value of the flag, we will have two results:

    • if the flag is enabled, we will see the whole page normally;
    • if the flag is disabled, we will receive a 404 – Not Found response.

    Let’s have a look at the attribute definition:

    //
    // Summary:
    //     An attribute that can be placed on MVC controllers, controller actions, or Razor
    //     pages to require all or any of a set of features to be enabled.
    [AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = true)]
    public class FeatureGateAttribute : ActionFilterAttribute, IAsyncPageFilter, IFilterMetadata
    

    As you can see, you can apply the attribute to any class or method that is related to API controllers or Razor pages. This allows you to support several scenarios:

    • add a flag on a whole API Controller by applying the attribute to the related class (see the sketch after this list);
    • add a flag on a specific Controller Action, allowing you, for example, to expose the GET Action while gating the POST Action;
    • add a flag to a whole Razor Model, hiding or showing the related page depending on the flag value.
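
    For example, here is a minimal sketch of the first scenario, gating a whole API Controller (the ReportsController name is purely illustrative):

    using Microsoft.AspNetCore.Mvc;
    using Microsoft.FeatureManagement.Mvc;

    [FeatureGate("PrivacyPage")]
    [ApiController]
    [Route("[controller]")]
    public class ReportsController : ControllerBase
    {
        // When the flag is disabled, every action in this controller returns 404 - Not Found.
        [HttpGet]
        public IActionResult Get() => Ok("Reports are enabled!");
    }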

    You can apply the attribute to a custom class or method unrelated to the MVC pipeline, but it will be ineffective.

    If you try with

    public void OnGet()
    {
        var hello = Hello();
        ViewData["message"] = hello;
    }
    
    [FeatureGate("PrivacyPage")]
    private string Hello()
    {
        return "Ciao!";
    }
    

    the Hello method will be called as usual. The same happens for the OnGet method: yes, it represents the way to access the Razor Page, but you cannot hide it; the only way is to apply the flag to the whole Model.

    You can use multiple Feature Flags on the same FeatureGate attribute. If you need to hide or show a component based on various Feature Flags, you can simply add the required keys in the attribute parameters list:

    [HttpGet]
    [FeatureGate("PrivacyPage", "Footer")]
    public IActionResult Get()
    {
        return Ok("Hey there!");
    }
    

    Now, the GET endpoint will be available only if both PrivacyPage and Footer are enabled.

    Finally, you can define that the component is available if at least one of the flags is enabled by setting the requirementType parameter to RequirementType.Any:

    [HttpGet]
    [FeatureGate(requirementType:RequirementType.Any,  "PrivacyPage", "Footer")]
    public IActionResult Get()
    {
        return Ok("Hey there!");
    }
    

    How to use Feature Flags in Razor Pages

    The Microsoft.FeatureManagement.AspNetCore NuGet package brings a lot of functionalities. Once installed, you can use Feature Flags in your Razor pages.

    To use such functionalities, though, you have to add the related tag helper: open the _ViewImports.cshtml file and add the following line:

    @addTagHelper *, Microsoft.FeatureManagement.AspNetCore
    

    Now, you can use the feature tag.

    Say you want to show an HTML tag when the Header flag is on. You can use the feature tag this way:

    <feature name="Header">
        <p>The header flag is on.</p>
    </feature>
    

    You can also show some content when the flag is off, by setting the negate attribute to true. This comes in handy when you want to display alternative content when the flag is off:

    <feature name="ShowPicture">
        <img src="image.png" />
    </feature>
    <feature name="ShowPicture" negate="true">
        <p>There should have been an image, here!</p>
    </feature>
    

    Clearly, if ShowPicture is on, it shows the image; otherwise, it displays a text message.

    Similar to the FeatureGate attribute, you can apply multiple flags and choose whether all of them or at least one must be on to show the content by setting the requirement attribute to Any (remember: the default value is All):

    <feature name="Header, Footer" requirement="All">
        <p>Both header and footer are enabled.</p>
    </feature>
    
    <feature name="Header, Footer" requirement="Any">
        <p>Either header or footer is enabled.</p>
    </feature>
    

    Conditional Feature Filters: a way to activate flags based on specific advanced conditions

    Sometimes, you want to activate features using complex conditions. For example:

    • activate a feature only for a percentage of requests;
    • activate a feature only during a specific timespan;

    Let’s see how to use the percentage filter.

    The first step is to add the related Feature Filter to the FeatureManagement functionality. In our case, we will add the Microsoft.FeatureManagement.FeatureFilters.PercentageFilter.

    builder.Services.AddFeatureManagement()
        .AddFeatureFilter<PercentageFilter>();
    

    Now we just have to define the related flag in the appsettings file. We can no longer use a simple boolean value; we need a complex object. Let’s configure the ShowPicture flag to use the Percentage filter.

    {
      "ShowPicture": {
        "EnabledFor": [
          {
            "Name": "Percentage",
            "Parameters": {
              "Value": 60
            }
          }
        ]
      }
    }
    

    Have a look at the structure. Now we have:

    • a field named EnabledFor;
    • EnabledFor is an array, not a single object;
    • every object within the array is made of two fields: Name, which must match the filter name, and Parameters, which is a generic object whose value depends on the type of filter.

    In this example, we have set "Value": 60. This means that the flag will be active in around 60% of calls. In the remaining 40%, the flag will be off.
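
    To see this behaviour in plain C#, you can evaluate the flag repeatedly with the IFeatureManager we injected earlier; on average, roughly 60 evaluations out of 100 should return true (a quick sketch, not part of the original article):

    int enabledCount = 0;

    for (int i = 0; i < 100; i++)
    {
        // Each call re-evaluates the Percentage filter, so the outcome varies call by call.
        if (await _featureManager.IsEnabledAsync("ShowPicture"))
        {
            enabledCount++;
        }
    }

    Console.WriteLine($"ShowPicture was enabled {enabledCount} times out of 100"); // roughly 60, on average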

    Now, I encourage you to toy with this filter:

    • apply it to a section or a page;
    • run the application;
    • refresh the page several times without restarting the application.

    You’ll see the component appear and disappear.

    Further readings

    We learned about setting “simple” configurations in an ASP.NET Core application in a previous article. You should read it to have a better understanding of how we can define configurations.

    🔗Azure App Configuration and ASP.NET Core API: a smart and secure way to manage configurations | Code4IT

    Here, we focused on the Feature Flags. As we saw, most functionalities come out of the box with ASP.NET Core.

    In particular, we learned how to use the <feature> tag on a Razor page. You can read more on the official documentation (even though we already covered almost everything!):

    🔗FeatureTagHelper Class | Microsoft Docs

    This article first appeared on Code4IT 🐧

    Wrapping up

    In this article, we learned how to use Feature Flags in an ASP.NET Core application, both on Razor pages and in API Controllers.

    Feature Flags can be tremendously useful when you need to activate or deactivate a feature in specific cases. For example, you can roll out a functionality in production by activating the related flag. Suppose you find an error in that functionality: in that case, you just have to turn off the flag and investigate the cause of the issue locally.

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧






  • Making Animations Smarter with Data Binding: Creating a Dynamic Gold Calculator in Rive

    Designing visuals that respond to real-time data or user input usually means switching between multiple tools — one for animation, another for logic, and yet another for implementation. This back-and-forth can slow down iteration, make small changes cumbersome, and create a disconnect between design and behavior.

    If you’ve spent any time with Rive, you know it’s built to close that gap. It lets you design, animate, and add interaction all in one place — and with features like state machines and data binding, you can make your animations respond directly to variables and user actions.

    To demonstrate how we use data binding in Rive, we built a small interactive project — a gold calculator. The task was simple: calculate the price of 5g and 10g gold bars, from 1 to 6 bars, using external data for the current gold price per gram. The gold price can be dynamic, typically coming from market data, but in this case we used a manually set value.

    Let’s break down how the calculator is built, step by step, starting with the layout and structure of the file.

    1. File Structure

    The layout is built for mobile, using a 440×900 px artboard. It’s structured around three layout groups:

    1. Title with gold price per gram
    2. Controls for choosing gold bar amount and weight
    3. Gold bar illustration

    The title section includes a text layout made of two text runs: one holds static text like the label, while the other is dynamic and connected to external data using data binding. This allows the gold price to update in real time when the data changes.

    In the controls section, we added plus and minus buttons to set the number of gold bars. These are simple layouts with icons inside. Below them, there are two buttons to switch between 5g and 10g options. They’re styled as rounded layouts with text inside.

    In the state machine, two timelines define the tab states: one for when the 10g button is active, using a solid black background and white text, and another for 5g, with reversed styles. Switching between these two updates the active tab visually.

    The total price section also uses two text runs — one for the currency icon and one for the total value. This value changes based on the selected weight and quantity, and is driven by data binding.

    2. Gold Bar Illustration

    The illustration is built using a nested artboard with a single vector gold bar. Inside the calculator layout, we duplicated this artboard to show anywhere from 1 to 6 bars depending on the user’s selection.

    Since there are two weight options, we made the gold bar resize visually — wider for 10g and narrower for 5g. To do that, we used N-Slices so that the edges stay intact and only the middle stretches. The sliced group sits inside a fixed-size layout, and the artboard is set to Hug its contents, which lets it resize automatically.

    We created two timelines to control the bar size: one where the width is 88px for 10g, and another at 74px for 5g. The switch between them is controlled by a number variable called Size-gram gold, where 5g is represented by 0 and 10g by 1, with 1 set as the default value.

    In the state machine, we connected this variable to the two timelines, with the 10g timeline set as the default: when it’s set to 0, the layout switches to 5g; when it’s 1, it switches to 10g. This makes the size update based on user selection without any manual switching. To keep the transition smooth, a 150ms animation duration is added.

    3. Visualizing 1–6 Gold Bars

    To show different quantities of gold bars in the main calculator layout, we created a tiered structure using three stacked layout groups with a vertical gap of -137. Each tier is offset vertically to form a simple pyramid arrangement, with everything positioned in the bottom-left corner of the screen.

    The first tier contains three duplicated nested artboards of a single gold bar. Each of these is wrapped in a Hug layout, which allows them to resize correctly based on the weight. The second tier includes two gold bars and an empty layout. This empty layout is used for spacing — it creates a visual shift when we need to display exactly four bars. The top tier has just one gold bar centered.

    All three tiers are bottom-centered, which keeps the pyramid shape consistent as bars are added or removed.

    To control how many bars are visible, we created 6 timelines in Animate mode — one for each quantity from 1 to 6. To hide or show each gold bar, two techniques are used: adjusting the opacity of the nested artboard (100% to show, 0% to hide) and modifying the layout that wraps it. When a bar is hidden, the layout is set to a fixed width of 0px; when visible, it uses Hug settings to restore its size automatically.

    Each timeline has its own combination of these settings depending on which bars should appear. For example, in the timeline with 4 bars, we needed to prevent the fourth bar from jumping to the center of the row. To keep it properly spaced, we assigned a fixed width of 80px to the empty layout used for shifting. On the other timelines, that same layout is hidden by setting its width to 0px.

    This system makes it easy to switch between quantities while preserving the visual structure.

    4. State Machine and Data Binding Setup

    With the visuals and layouts ready, we moved on to setting up the logic with data binding and state transitions.

    4.1 External Gold Price

    First, we created a number variable called Gold price gram. This value can be updated externally — for example, connected to a trading database — so the calculator always shows the current market price of gold. In our case, we used a static value of 151.75, which can also be updated manually by the user.

    To display this in the UI, we bound Text Run 2 in the title layout to this variable. A converter in the Strings tab called “Convert to String Price” is then created and applied to that text run. This converter formats the number correctly for display and will be reused later.

    4.2 Gold Bar Size Control

    We already had a number variable called Size-gram gold, which controls the weight of the gold bar used in the nested artboard illustration.

    In the Listeners panel, two listeners are created. The first is set to target the 5g tab, uses a Pointer Down action, and assigns Size-gram gold = 0. The second targets the 10g tab, also with a Pointer Down action, and assigns Size-gram gold = 1.

    Next, two timelines (one for each tab state) are brought into the state machine. The 10g timeline is used as the default state, with transitions added: one from 10g to 5g when Size-gram gold = 0, and one back to 10g when Size-gram gold = 1. Each transition has a duration of 100ms to keep the switching smooth.

    4.3 Gold Bar Quantity

    Next, we added another number variable, Quantity-gold, to track the number of selected bars. The default value is set to 1. In the Converters, under Numeric, two “Calculate” converters are created — one that adds “+1” and one that subtracts “-1”.

    In the Listeners panel, the plus button is assigned an action: Quantity-gold = Quantity-gold, using the “+1” converter. This way, clicking the plus button increases the count by 1. The same is done for the minus button, assigning Quantity-gold = Quantity-gold and attaching the “-1” converter. Clicking the minus button decreases the count by 1.

    Inside the state machine, six timelines are connected to represent bar counts from 1 to 6. Each transition uses the Quantity-gold value to trigger the correct timeline.

    By default, the plus button would keep increasing the value endlessly, but the goal is to limit the max to six bars. On the timeline where six gold bars are active, the plus button is disabled by setting its click area scale to 0 and lowering its opacity to create a “disabled” visual state. On all other timelines, those properties are returned to their active values.

    The same logic is applied to the minus button to prevent values lower than one. On the timeline with one bar, the button is disabled, and on all others, it returns to its active state.

    Almost there!

    4.4 Total Price Logic

    For the 5g bar price, we calculated it using this formula:

    Total Price = Gold price gram * Quantity-gold * 5

    In Converters → Numeric, a Formula converter was created and named Total Price 5g Formula to calculate the total price. In the example, it looked like:

    {{View Model Price/Gold price gram}}*{{View Model Price/Quantity-gold}}*5.0

    Since we needed to display this number as text, the Total Price number variable was also converted into a string. For that, we used an existing converter called “Convert to String Price.”

    To use both converters together, a Group of converters was created and named Total Price 5g Group, which included the Total Price 5g Formula converter followed by the Convert to String Price converter.

    Then, the text for the price variable was data bound by adding the Total Price variable in the Property field and selecting Total Price 5g Group in the Convert field.

    To handle the 10g case, which is double the price, two options are explored — either creating a new converter that multiplies by 10 or multiplying the existing result by 2.

    Eventually, a second text element is added along with a new group of converters specifically for 10g. This includes a new formula:

    Total Price = Gold price gram * Quantity-gold * 10

    A formula converter and a group with both that formula and the string converter are created and named “Total Price 10g Group.”

    Using timelines where the 5g and 10g buttons are in their active states, we adjusted the transparency of the text elements. This way, the total price connected to the 5g converters group is visible when the 5g button is selected, and the price from the 10g converters group appears when the 10g button is selected.

    It works perfectly.

    After this setup, the Gold price gram variable can be connected to live external data, allowing the gold price in the calculator to reflect the current market value in real time.

    Wrapping Up

    This gold calculator project is a simple example, but it shows how data binding in Rive can be used to connect visual design with real-time logic — without needing to jump between separate tools or write custom code. By combining state machines, variables, and converters, you can build interfaces that are not only animated but also smart and responsive.

    Whether you’re working on a product UI, a prototype, or a standalone interactive graphic, Rive gives you a way to bring together motion and behavior in a single space. If you’re already experimenting with Rive, data binding opens up a whole new layer of possibilities to explore.




  • The Role of ZTNA in Insider Threat Mitigation

    In today’s fast-evolving threat landscape, enterprises often focus heavily on external cyberattacks, overlooking one of the most potent and damaging risks: insider threats. Whether it’s a malicious employee, a careless contractor, or a compromised user account, insider threats strike from within the perimeter, making them harder to detect, contain, and mitigate.

    As organizations become more hybrid, decentralized, and cloud-driven, moving away from implicit trust is more urgent than ever. Zero Trust Network Access (ZTNA) is emerging as a critical solution, silently transforming how businesses do insider threat mitigation.

    Understanding the Insider Threat Landscape

    Insider threats are not always malicious. They can stem from:

    • Disgruntled or rogue employees intentionally leaking data
    • Well-meaning staff misconfiguring systems or falling for phishing emails
    • Contractors or third-party vendors with excessive access
    • Compromised user credentials obtained via social engineering

    According to multiple cybersecurity studies, insider incidents now account for over 30% of all breaches, and their average cost rises yearly.

    The real challenge? Traditional security models operate on implicit trust. Once inside the network, users often have wide, unchecked access, which creates fertile ground for lateral movement, privilege abuse, and data exfiltration.

    ZTNA in Action: Redefining Trust, Access, and Visibility

    Zero Trust Network Access challenges the outdated notion of “trust but verify.” Instead, it enforces “never trust, always verify”—even for users already inside the network.

    ZTNA provides access based on identity, device posture, role, and context, ensuring that every access request is continuously validated. This approach is a game-changer for insider threat mitigation.

    1. Granular Access Control

    ZTNA enforces least privilege access, meaning users only get access to the specific applications or data they need—nothing more. Even if an insider intends to exfiltrate data, their reach is limited.

    For example, a finance team member can access their accounting software, but cannot see HR or R&D files, no matter how hard they try.

    2. Micro-Segmentation for Blast Radius Reduction

    ZTNA divides the network into isolated micro-segments. This restricts lateral movement, so even if an insider compromises one segment, they cannot hop across systems undetected.

    This segmentation acts like watertight compartments in a ship, containing the damage and preventing full-scale breaches.

    3. Device and Risk Posture Awareness

    ZTNA solutions assess device health before granting access. Access can be denied or limited if an employee logs in from an outdated or jailbroken device. This becomes crucial when insider risks stem from compromised endpoints.

    4. Continuous Monitoring and Behavioral Analytics

    ZTNA enables real-time visibility into who accessed what, from where, and for how long. Any deviation from expected behavior can trigger alerts or require re-authentication. For instance:

    • A user downloading an unusually high volume of files
    • Repeated access attempts outside business hours
    • Use of shadow IT apps or unauthorized tools

    With continuous risk scoring and adaptive access, suspicious insider behavior can be curtailed before damage is done.

    Real-World Relevance: Insider Threats in Indian Enterprises

    As Indian organizations ramp up their digital transformation and cloud adoption, they face new risks tied to employee churn, contractor access, and remote work culture. With the growing compliance pressure from laws like the Digital Personal Data Protection (DPDP) Act, it has become clear that relying on static access controls is no longer an option.

    ZTNA’s dynamic, context-aware model perfectly fits this reality, offering a more resilient and regulation-ready access framework.

    How Seqrite ZTNA Helps with Insider Threat Mitigation

    Seqrite ZTNA is built to offer secure, identity-based access for modern Indian enterprises. It goes beyond authentication to deliver:

    • Role-based, micro-segmented access to specific apps and data
    • Granular control policies based on risk level, device posture, and location
    • Centralized visibility and detailed audit logs for every user action
    • Seamless experience for users, without the complexity of traditional solutions

    Whether you’re securing remote teams, contractors, or sensitive internal workflows, Seqrite ZTNA gives you the tools to limit, monitor, and respond to insider threats—without slowing down productivity.

    Final Thoughts

    Insider threats aren’t hypothetical—they’re already inside your network. And as organizations become more distributed, the threat surface only widens. Traditional access models offer little defense for insider threat mitigation.

    ZTNA isn’t just about external threats; it’s a silent guardian against internal risks. Enforcing continuous validation, granular access, and real-time visibility transforms your weakest points into strongholds.




  • Improve memory allocation by initializing collection size | Code4IT


    Sometimes just a minor change can affect performance. Here’s a simple trick: initialize your collections by specifying the initial size!


    When you initialize a collection, like a List, you create it with the default size.

    Whenever you add an item to a collection, .NET checks that there is enough capacity to hold the new item. If not, it resizes the collection by doubling the inner capacity.

    Resizing the collection takes time and memory.

    Therefore, when possible, you should initialize the collection with the expected number of items it will contain.
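
    You can observe this behaviour by watching the Capacity property while adding items. Here is a quick sketch (the exact growth steps may vary slightly between runtime versions):

    var list = new List<int>();
    int lastCapacity = list.Capacity; // 0 for a brand-new, empty list

    for (int i = 0; i < 1000; i++)
    {
        list.Add(i);

        if (list.Capacity != lastCapacity)
        {
            // Prints 4, 8, 16, 32, ...: every resize allocates a new, doubled inner array
            Console.WriteLine($"After item #{i + 1}, capacity grew from {lastCapacity} to {list.Capacity}");
            lastCapacity = list.Capacity;
        }
    }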

    Initialize a List

    In the case of a List, you can simply replace new List<T>() with new List<T>(size). By specifying the initial size in the constructor’s parameters, you’ll have a good performance improvement.

    Let’s create a benchmark using BenchmarkDotNet and .NET 8.0.100-rc.1.23455.8 (at the time of writing, .NET 8 is still in preview, but we can still get an idea of the average performance).

    The benchmark is pretty simple:

    [MemoryDiagnoser]
    public class CollectionWithSizeInitializationBenchmarks
    {
        [Params(100, 1000, 10000, 100000)]
        public int Size;
    
        [Benchmark]
        public void WithoutInitialization()
        {
            List<int> list = new List<int>();
    
            for (int i = 0; i < Size; i++)
            {
    
                list.Add(i);
            }
        }
    
        [Benchmark(Baseline = true)]
        public void WithInitialization()
        {
            List<int> list = new List<int>(Size);
    
            for (int i = 0; i < Size; i++)
            {
                list.Add(i);
            }
        }
    }
    

    The only difference is in the list initialization: in the WithInitialization, we have List<int> list = new List<int>(Size);.

    Have a look at the benchmark result, split by time and memory execution.

    Starting with the execution time, we can see that without list initialization, we have an average 1.7x performance degradation.

    | Method                | Size   | Mean           | Ratio |
    | WithoutInitialization | 100    | 299.659 ns     | 1.77  |
    | WithInitialization    | 100    | 169.121 ns     | 1.00  |
    | WithoutInitialization | 1000   | 1,549.343 ns   | 1.58  |
    | WithInitialization    | 1000   | 944.862 ns     | 1.00  |
    | WithoutInitialization | 10000  | 16,307.082 ns  | 1.80  |
    | WithInitialization    | 10000  | 9,035.945 ns   | 1.00  |
    | WithoutInitialization | 100000 | 388,089.153 ns | 1.73  |
    | WithInitialization    | 100000 | 227,040.318 ns | 1.00  |

    If we talk about memory allocation, we waste on average about 2.5x more memory compared to collections initialized with their expected size.

    | Method                | Size   | Allocated | Alloc Ratio |
    | WithoutInitialization | 100    | 1184 B    | 2.60        |
    | WithInitialization    | 100    | 456 B     | 1.00        |
    | WithoutInitialization | 1000   | 8424 B    | 2.08        |
    | WithInitialization    | 1000   | 4056 B    | 1.00        |
    | WithoutInitialization | 10000  | 131400 B  | 3.28        |
    | WithInitialization    | 10000  | 40056 B   | 1.00        |
    | WithoutInitialization | 100000 | 1049072 B | 2.62        |
    | WithInitialization    | 100000 | 400098 B  | 1.00        |

    Initialize a HashSet

    Similar to what we did with List, we can see significant improvements when we correctly initialize other data types, such as HashSet.

    Let’s run the same benchmarks, but this time, let’s initialize a HashSet<int> instead of a List<int>.

    The code is pretty similar:

     [Benchmark]
     public void WithoutInitialization()
     {
         var set = new HashSet<int>();
    
         for (int i = 0; i < Size; i++)
         {
             set.Add(i);
         }
     }
    
     [Benchmark(Baseline = true)]
     public void WithInitialization()
     {
         var set = new HashSet<int>(Size);
    
         for (int i = 0; i < Size; i++)
         {
             set.Add(i);
         }
     }
    

    What can we say about performance improvements?

    If we talk about execution time, we can see an average 2x improvement.

    | Method                | Size   | Mean           | Ratio |
    | WithoutInitialization | 100    | 1,122.2 ns     | 2.02  |
    | WithInitialization    | 100    | 558.4 ns       | 1.00  |
    | WithoutInitialization | 1000   | 12,215.6 ns    | 2.74  |
    | WithInitialization    | 1000   | 4,478.4 ns     | 1.00  |
    | WithoutInitialization | 10000  | 148,603.7 ns   | 1.90  |
    | WithInitialization    | 10000  | 78,293.3 ns    | 1.00  |
    | WithoutInitialization | 100000 | 1,511,011.6 ns | 1.96  |
    | WithInitialization    | 100000 | 810,657.8 ns   | 1.00  |

    If we look at memory allocation, not initializing the HashSet makes the application allocate roughly 3x more memory. Impressive!

    | Method                | Size   | Allocated  | Alloc Ratio |
    | WithoutInitialization | 100    | 5.86 KB    | 3.28        |
    | WithInitialization    | 100    | 1.79 KB    | 1.00        |
    | WithoutInitialization | 1000   | 57.29 KB   | 3.30        |
    | WithInitialization    | 1000   | 17.35 KB   | 1.00        |
    | WithoutInitialization | 10000  | 526.03 KB  | 3.33        |
    | WithInitialization    | 10000  | 157.99 KB  | 1.00        |
    | WithoutInitialization | 100000 | 4717.4 KB  | 2.78        |
    | WithInitialization    | 100000 | 1697.64 KB | 1.00        |

    Wrapping up

    Do you need other good reasons to initialize your collection capacity when possible? 😉
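
    The same principle applies to other collections that accept an initial capacity, such as Dictionary. Here is a minimal sketch (not part of the benchmarks above):

    int expectedItems = 10000;

    // The capacity constructor pre-sizes the inner buckets,
    // so no resizing happens while adding the expected number of items.
    var prices = new Dictionary<string, decimal>(expectedItems);

    for (int i = 0; i < expectedItems; i++)
    {
        prices.Add($"product-{i}", i * 1.5m);
    }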

    I used BenchmarkDotNet to create these benchmarks. If you want an introduction to this tool, you can have a look at how I used it to measure the performance of Enums:

    🔗 Enum.HasFlag performance with BenchmarkDotNet | Code4IT

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧






  • Hello Robo’s Rebrand: Distilling Complex Tech Into Interfaces Anyone Can Use

    Hello Robo is a New York based digital product design agency that turns complex technology into intuitive, usable interfaces. We work with forward-thinking teams to create market-ready digital products that are easy to use and hard to ignore.

    Earlier this year, the design team at Hello Robo decided to update our brand and website to speak the language of our current clients — AI, space, aviation, and robotics — after realizing the old, “startup-y” look sold us short.

    The new design and copy showcase our ability to tame complex systems with clear thinking and precise interfaces, signaling to deep-tech teams that we understand their world and can make their products make sense.

    We wanted our site to do only two things, but do them well:

    • Have the design language to appeal to our existing and new target clients
    • Let design, motion, and interaction give our visitors a sense of what we are great at, since most of our work is not allowed to be shared

    Research

    Before sketching a single screen, our design lead on this project, Daria Krauskopf, did what we do before starting any project at Hello Robo: she decided to talk with our customers. We asked every existing client two questions:

    1. What do you think we do?
    2. What’s one thing you think we’re absolutely great at?

    The replies were almost word-for-word:

    “You do excellent product design—not crazy, unachievable vision design, and not MVPs either. You’re absolutely great at taking complex, technical systems and turning them into beautiful interfaces that our users actually love to use.”

    That became the foundation for how we approached the new site.

    Design & Art Direction

    We love robots—and robotics inspires everything we do. For the new site, we moved away from soft colors and rounded corners and leaned into a more hi-tech visual language: dark backgrounds, thin lines, sharper shapes. Daria wanted the design to feel more precise, more engineered—something that would resonate with the kind of clients we work with in aviation, robotics, and defense. Every visual choice was about clarity, control, and intention.

    A few boards from the new Hello Robo brand, reimagined by our brand designer Hanna Shpak

    Animation and Interaction

    All of our interface work is rooted in interaction and motion—because real-world products aren’t static. They always change and respond to user input and actions. We wanted the site to reflect that. Not with flashy effects or distracting transitions, but with just enough subtle animation to guide, respond, and feel alive. Everything moves with purpose—quiet, responsive, and smooth.

    Case Studies

    We didn’t want our case studies to be just a scroll of pretty images. Each one is built as a story—showing not just what we made, but how it worked and why it mattered. We walk through key features, the thinking behind UX decisions, and the problems we solved for each client. It’s less about showing off visuals, and more about showing how we think.

    Final words

    In the end, we got what we set out to build: a clearer visual and verbal language that reflects who we are and who we work with. The site feels more aligned with the complexity and ambition of our clients—and with the way we approach design: thoughtful, precise, and grounded in real product work. It’s not trying to impress with noise. It’s built to resonate with the kind of teams who care about clarity, systems, and getting things right.

    Credits

    Web designer: Daria Krauskopf

    Brand design: Hanna Shpak

    UX design: Vlad Duhnov

    Webflow development: Miron Umantsev

    Design director: Shakir Dzheyranov




  • How an XDR Platform Transforms Your SOC Operations

    XDR solutions are revolutionizing how security teams handle threats by dramatically reducing false positives and streamlining operations. In fact, modern XDR platforms generate significantly fewer false positives than traditional SIEM threat analytics, allowing security teams to focus on genuine threats rather than chasing shadows. We’ve seen firsthand how security operations centers (SOCs) struggle with alert fatigue, fragmented visibility, and resource constraints. However, an XDR platform addresses these challenges by unifying information from multiple sources and providing a holistic view of threats. This integration enables organizations to operate advanced threat detection and response with fewer SOC resources, making it a cost-effective approach to modern security operations.

    An XDR platform consolidates security data into a single system, ensuring that SOC teams and surrounding departments can operate from the same information base. Consequently, this unified approach not only streamlines operations but also minimizes breach risks, making it an essential component of contemporary cybersecurity strategies.

    In this article, we’ll explore how XDR transforms SOC operations, why traditional tools fall short, and the practical benefits of implementing this technology in your security framework.

    The SOC Challenge: Why Traditional Tools Fall Short

    Security Operations Centers (SOCs) today face unprecedented challenges with their traditional security tools. While security teams strive to protect organizations, they’re increasingly finding themselves overwhelmed by fundamental limitations in their security infrastructure.

    Alert overload and analyst fatigue

    Modern SOC teams are drowning in alerts. As per Vectra AI, an overwhelming 71% of SOC practitioners worry they’ll miss real attacks buried in alert floods, while 51% believe they simply cannot keep pace with mounting security threats. The statistics paint a troubling picture.

    Siloed tools and fragmented visibility

    The tool sprawl in security operations creates massive blind spots. According to Vectra AI findings, 73% of SOCs have more than 10 security tools in place, while 45% juggle more than 20 different tools. Despite this arsenal, 47% of practitioners don’t trust their tools to work as needed.

    Many organizations struggle with siloed security data across disparate systems. Each department stores logs, alerts, and operational details in separate repositories that rarely communicate with one another. This fragmentation means threat hunting becomes guesswork because critical artifacts sit in systems that no single team can access.

    Slow response times and manual processes

    Traditional SOCs rely heavily on manual processes, significantly extending detection and response times. When investigating incidents, analysts must manually piece together information from different silos, losing precious time during active cyber incidents.

    According to research by Palo Alto Networks, automation can reduce SOC response times by up to 50%, significantly limiting breach impacts. Unfortunately, most traditional SOCs lack this capability. The workflow in traditional environments is characterized by manual processes that exacerbate alert fatigue while dealing with massive threat alert volumes.

    The complexity of investigations further slows response. When an incident occurs, analysts must combine data from various sources to understand the full scope of an attack, a time-consuming process that allows threats to linger in systems longer than necessary.

    What is an XDR Platform and How Does It Work?

    Extended Detection and Response (XDR) platforms represent the evolution of cybersecurity technology, breaking down traditional barriers between security tools. Unlike siloed solutions, XDR solutions provide a holistic approach to threat management through unified visibility and coordinated response.

    Unified data collection across endpoints, network, and cloud

    At its core, an XDR platform aggregates and correlates data from multiple security layers into a centralized repository. This comprehensive data collection encompasses:

    • Endpoints (computers, servers, mobile devices)
    • Network infrastructure and traffic
    • Cloud environments and workloads
    • Email systems and applications
    • Identity and access management

    This integration eliminates blind spots that typically plague security operations. By collecting telemetry from across the entire attack surface, XDR platforms provide security teams with complete visibility into potential threats. The system automatically ingests, cleans, and standardizes this data, ensuring consistent, high-quality information for analysis.

    Real-time threat detection using AI and ML

    XDR platforms leverage advanced analytics, artificial intelligence, and machine learning to identify suspicious patterns and anomalies that human analysts might miss. These capabilities enable:

    • Automatic correlation of seemingly unrelated events across different security layers
    • Identification of sophisticated multi-vector attacks through pattern recognition
    • Real-time monitoring and analysis of data streams for immediate threat identification
    • Reduction in false positives through contextual understanding of alerts

    The AI-powered capabilities enable XDR platforms to detect threats at a scale and speed impossible for human analysts alone. Moreover, these systems continuously learn and adapt to evolving threats through machine learning models.

    Automated response and orchestration capabilities

    Once threats are detected, XDR platforms can initiate automated responses without requiring manual intervention. This automation includes:

    • Isolation of compromised devices to contain threats
    • Blocking of malicious IP addresses and domains
    • Execution of predefined response playbooks for consistent remediation
    • Prioritization of incidents based on severity for efficient resource allocation

    Key Benefits of XDR for SOC Operations

    Implementing an XDR platform delivers immediate, measurable advantages to security operations centers struggling with traditional tools and fragmented systems. SOC teams gain specific capabilities that fundamentally transform their effectiveness against modern threats.

    Faster threat detection and reduced false positives

    The strategic advantage of XDR solutions begins with their ability to dramatically reduce alert volume. XDR tools automatically group related alerts into unified incidents, representing entire attack sequences rather than isolated events. This correlation across different security layers identifies complex attack patterns that traditional solutions might miss.

    Improved analyst productivity through automation

    As per the Tines report, 64% of analysts spend over half their time on tedious manual work, with 66% believing that half of their tasks could be automated. XDR platforms address this challenge through built-in orchestration and automation that offload repetitive tasks. Specifically, XDR can automate threat detection through machine learning, streamline incident response processes, and generate AI-powered incident reports. This automation allows SOC teams to detect sophisticated attacks with fewer resources while reducing response time.

    Centralized visibility and simplified workflows

    XDR provides a single pane view that eliminates “swivel chair integration,” where analysts manually interface across multiple security systems. This unified approach aggregates data from endpoints, networks, applications, and cloud environments into a consolidated platform. As a result, analysts gain swift investigation capabilities with instant access to all forensic artifacts, events, and threat intelligence in one location. This centralization particularly benefits teams during complex investigations, enabling them to quickly understand the complete attack story.

    Better alignment with compliance and audit needs

    XDR strengthens regulatory compliance through detailed documentation and monitoring capabilities. The platform generates comprehensive logs and audit trails of security events, user activities, and system changes, helping organizations demonstrate compliance to regulators. Additionally, XDR’s continuous monitoring adapts to new threats and regulatory changes, ensuring consistent compliance over time. Through centralized visibility and data aggregation, XDR effectively monitors data flows and access patterns, preventing unauthorized access to sensitive information.

    Conclusion

    XDR platforms clearly represent a significant advancement in cybersecurity technology. At Seqrite, we offer a comprehensive XDR platform designed to help organizations simplify their SOC operations, improve detection accuracy, and automate responses. If you are looking to strengthen your cybersecurity posture with an effective and scalable XDR solution, Seqrite XDR is built to help you stay ahead of evolving threats.

     




  • How to integrate Feature Flags stored on Azure App Configuration in an ASP.NET Core Application | Code4IT

    Learn how to use Feature Flags in ASP.NET Core apps and read values from Azure App Configuration. Understand how to use filters, like the Percentage filter, to control feature activation, and learn how to take full control of the cache expiration of the values.


    Feature Flags let you remotely control the activation of features without code changes. They help you to test, release, and manage features safely and quickly by driving changes using centralized configurations.

    In a previous article, we learned how to integrate Feature Flags in ASP.NET Core applications. Also, a while ago, we learned how to integrate Azure App Configuration in an ASP.NET Core application.

    In this article, we are going to join the two streams in a single article: in fact, we will learn how to manage Feature Flags using Azure App Configuration to centralize our configurations.

    It’s a sort of evolution from the previous article. Instead of changing the static configurations and redeploying the whole application, we are going to move the Feature Flags to Azure so that you can enable or disable those flags in just one click.

    A recap of Feature Flags read from the appsettings file

    Let’s reuse the example shown in the previous article.

    We have an ASP.NET Core application (in that case, we were building a Razor application, but it’s not important for the sake of this article), with some configurations defined in the appsettings file under the FeatureManagement key:

    {
      "FeatureManagement": {
        "Header": true,
        "Footer": true,
        "PrivacyPage": false,
        "ShowPicture": {
          "EnabledFor": [
            {
              "Name": "Percentage",
              "Parameters": { "Value": 60 }
            }
          ]
        }
      }
    }
    

    We already took a deep dive into Feature Flags in an ASP.NET Core application in the previous article. However, let me summarize it.

    First of all, you have to define your flags in the appsettings.json file using the structure we saw before.

    To use Feature Flags in ASP.NET Core you have to install the Microsoft.FeatureManagement.AspNetCore NuGet package.

    Then, you have to tell ASP.NET to use Feature Flags by calling:

    builder.Services.AddFeatureManagement();
    

    Finally, you can consume those flags in three ways (recapped in the sketch after this list):

    • inject the IFeatureManager interface and call IsEnabled or IsEnabledAsync;
    • use the FeatureGate attribute on a Controller class or a Razor model;
    • use the <feature> tag in a Razor page to show or hide a portion of HTML
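
    As a quick reminder, here is a compact sketch of the three approaches (class names are illustrative; the full walkthrough is in the previous article):

    using Microsoft.AspNetCore.Mvc;
    using Microsoft.FeatureManagement;
    using Microsoft.FeatureManagement.Mvc;

    // 1) Inject IFeatureManager and check the flag in plain C# code
    public class MyService
    {
        private readonly IFeatureManager _featureManager;

        public MyService(IFeatureManager featureManager) => _featureManager = featureManager;

        public Task<bool> IsPrivacyPageOnAsync() => _featureManager.IsEnabledAsync("PrivacyPage");
    }

    // 2) Gate a whole Controller (or a Razor Model) with the FeatureGate attribute
    [FeatureGate("PrivacyPage")]
    public class PrivacyController : Controller { }

    // 3) In a Razor view, wrap the HTML in the <feature> tag:
    //    <feature name="Header"> ... </feature>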

    How to create Feature Flags on Azure App Configuration

    We are ready to move our Feature Flags to Azure App Configuration. Needless to say, you need an Azure subscription 😉

    Log in to the Azure Portal, head to “Create a resource”, and create a new App Configuration:

    Azure App configuration in the Marketplace

    I’m going to reuse the same instance I created in the previous article – you can see the full details in the How to create an Azure App Configuration instance section.

    Now we have to configure the same keys defined in the appsettings file: Header, Footer, and PrivacyPage.

    Open the App Configuration instance and locate the “Feature Manager” menu item in the left panel. This is the central place for creating, removing, and managing your Feature Flags. Here, you can see that I have already added the Header and Footer, and you can see their current state: “Footer” is enabled, while “Header” is not.

    Feature Flags manager dashboard

    How can I add the PrivacyPage flag? It’s elementary: click the “Create” button and fill in the fields.

    You have to define a Name and a Key (they can also be different), and if you want, you can add a Label and a Description. You can also define whether the flag should be active by checking the “Enable feature flag” checkbox.

    Feature Flag definition form

    Read Feature Flags from Azure App Configuration in an ASP.NET Core application

    It’s time to integrate Azure App Configuration with our ASP.NET Core application.

    Before moving to the code, we have to locate the connection string and store it somewhere.

    Head back to the App Configuration resource and locate the “Access keys” menu item under the “Settings” section.

    Access Keys page with connection strings

    From here, copy the connection string (I suggest that you use the Read-only Keys) and store it somewhere.
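
    Where you store it is up to you. Here is a minimal sketch, assuming you keep it in your configuration (appsettings, user secrets, or an environment variable) under an illustrative key named ConnectionStrings:AppConfig:

    var builder = WebApplication.CreateBuilder(args);

    // "AppConfig" is just an illustrative key name: any configuration source will do.
    // Avoid hard-coding the connection string in source code.
    string connectionString = builder.Configuration.GetConnectionString("AppConfig");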

    Before proceeding, you have to install the Microsoft.Azure.AppConfiguration.AspNetCore NuGet package.

    Now, we can add Azure App Configuration as a source for our configurations by connecting to the connection string and by declaring that we are going to use Feature Flags:

    builder.Configuration.AddAzureAppConfiguration(options =>
        options.Connect(connectionString).UseFeatureFlags()
    );
    

    That’s not enough. We need to tell ASP.NET that we are going to consume these configurations by adding such functionalities to the Services property.

    builder.Services.AddAzureAppConfiguration();
    
    builder.Services.AddFeatureManagement();
    

    Finally, once we have built our application with the usual builder.Build(), we have to add the Azure App Configuration middleware:

    app.UseAzureAppConfiguration();
    

    To try it out, run the application and validate that the flags are being applied. You can enable or disable those flags on Azure, restart the application, and check that the changes take effect. Alternatively, you can wait 30 seconds to have the flag values refreshed and see the changes applied without restarting the application.

    Using the Percentage filter on Azure App Configuration

    Suppose you want to enable a functionality only to a percentage of sessions (sessions, not users!). In that case, you can use the Percentage filter.

    The previous article had a specific section dedicated to the PercentageFilter, so you might want to check it out.

    As a recap, we defined the flag as:

    {
      "ShowPicture": {
        "EnabledFor": [
          {
            "Name": "Percentage",
            "Parameters": {
              "Value": 60
            }
          }
        ]
      }
    }
    

    And we added the PercentageFilter to ASP.NET with:

    builder.Services.AddFeatureManagement()
        .AddFeatureFilter<PercentageFilter>();
    

    Clearly, we can define such flags on Azure as well.

    Head back to the Azure Portal and add a new Feature Flag (or open an existing one). This time, you also have to add a Feature Filter to the flag. Even though the PercentageFilter comes out of the box with the FeatureManagement NuGet package, it is not available as a built-in filter on the Azure portal.

    You have to define the filter with the following values:

    • Filter Type must be “Custom”;
    • Custom filter name must be “Percentage”;
    • You must add a new key, “Value”, and set its value to “60”.

    Custom filter used to create Percentage Filter

    The configuration we just added reflects the JSON value we previously had in the appsettings file: 60% of the requests will activate the flag, while the remaining 40% will not.

    Define the cache expiration interval for Feature Flags

    By default, Feature Flags are stored in an internal cache for 30 seconds.

    Sometimes, it’s not the best choice for your project; you may prefer a longer duration to avoid additional calls to the App Configuration platform; other times, you’d like to have the changes immediately available.

    You can then define the cache expiration interval you need by configuring the options for the Feature Flags:

    builder.Configuration.AddAzureAppConfiguration(options =>
        options.Connect(connectionString).UseFeatureFlags(featureFlagOptions =>
        {
            featureFlagOptions.CacheExpirationInterval = TimeSpan.FromSeconds(10);
        })
    );
    

    This way, Feature Flag values are stored in the internal cache for 10 seconds. Then, when you reload the page, the configurations are reread from Azure App Configuration and the flags are applied with the new values.

    Further readings

    This is the final article of a path I built during these months to explore how to use configurations in ASP.NET Core.

    We started by learning how to set configuration values in an ASP.NET Core application, as explained here:

    🔗 3 (and more) ways to set configuration values in ASP.NET Core

    Then, we learned how to read and use them with the IOptions family:

    🔗 Understanding IOptions, IOptionsMonitor, and IOptionsSnapshot in ASP.NET Core

    From here, we learned how to read the same configurations from Azure App Configuration, to centralize our settings:

    🔗 Azure App Configuration and ASP.NET Core API: a smart and secure way to manage configurations | Code4IT

    Then, we configured our applications to automatically refresh the configurations using a Sentinel value:

    🔗 How to automatically refresh configurations with Azure App Configuration in ASP.NET Core

    Finally, we introduced Feature Flags in our apps:

    🔗 Feature Flags 101: A Guide for ASP.NET Core Developers | Code4IT

    And then we got to this article!

    This article first appeared on Code4IT 🐧

    Wrapping up

    In this article, we have configured an ASP.NET Core application to read the Feature Flags stored on Azure App Configuration.

    Here’s the minimal code you need to add Feature Flags for ASP.NET Core API Controllers:

    var builder = WebApplication.CreateBuilder(args);
    
    string connectionString = "my connection string";
    
    builder.Services.AddControllers();
    
    builder.Configuration.AddAzureAppConfiguration(options =>
        options.Connect(connectionString)
        .UseFeatureFlags(featureFlagOptions =>
            {
                featureFlagOptions.CacheExpirationInterval = TimeSpan.FromSeconds(10);
            }
        )
    );
    
    builder.Services.AddAzureAppConfiguration();
    
    builder.Services.AddFeatureManagement()
        .AddFeatureFilter<PercentageFilter>();
    
    var app = builder.Build();
    
    app.UseRouting();
    app.UseAzureAppConfiguration();
    app.MapControllers();
    app.Run();
    

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧





    Source link

  • Pre-commit hooks with Husky.NET – build, format, and test your .NET application before a Git commit | Code4IT

    Pre-commit hooks with Husky.NET – build, format, and test your .NET application before a Git commit | Code4IT


    A Git commit represents the status of a system. Learn how to validate that your code builds, is well-formatted, and all the tests pass by adding a Git hook!


    If you need to run operations before completing a Git commit, you can rely on Git Hooks.

    Git hooks are scripts that run automatically whenever a particular event occurs in a Git repository. They let you customize Git’s internal behaviour and trigger customizable actions at key points in the development life cycle.

    Extending Git hooks allows you to plug in custom functionalities to the regular Git flow, such as Git message validation, code formatting, etc.

    I’ve already described how to use Husky with NPM, but here I’m gonna use Husky.NET, the version of Husky created for .NET-based applications.

    Git hooks: a way to extend Git operations

    As we said, Git hooks are actions that run during specific phases of Git operations.

    Git hooks fall into 4 categories:

    • client-side hooks related to the committing workflow: they execute when you run git commit on your local repository;
    • client-side hooks related to the email workflow: they are executed when running git am, a command that applies patches received via email (I’ve never used it. If you are interested in this functionality, here’s the official documentation);
    • client-side hooks related to other operations: these hooks run on your local repository when performing operations like git rebase;
    • server-side hooks: they run after a commit is received on the remote repository, and they can reject a git push operation.

    Let’s focus on the client-side hooks that run when you commit changes using git commit.

    • pre-commit: the first hook invoked by git commit (if you don’t use the -m flag, it is invoked before asking you to insert a commit message); it can be used to inspect the snapshot that is about to be committed.
    • prepare-commit-msg: invoked by git commit; it can be used to edit the default commit message when it is generated by an automated tool.
    • commit-msg: invoked by git commit; it can be used to validate or modify the commit message after the user enters it.
    • post-commit: invoked after git commit has completed successfully; it is generally used to fire notifications.

    How to install Husky.NET and its dependencies in a .NET Application

    Husky.NET must be installed in the root folder of the solution.

    You first have to create a tool-manifest file in the root folder by running:
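
    dotnet new tool-manifest
    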

    This command creates a file named dotnet-tools.json under the .config folder: here you can see the list of external tools used by dotnet.

    After running the command, you will see that the dotnet-tools.json file contains this element:

    {
      "version": 1,
      "isRoot": true,
      "tools": {}
    }
    

    Now you can add Husky as a dotnet tool by running:

    dotnet tool install Husky
    

    After running the command, the file will contain something like this:

    {
      "version": 1,
      "isRoot": true,
      "tools": {
        "husky": {
          "version": "0.6.2",
          "commands": ["husky"]
        }
      }
    }
    

    Now that we have added it to our dependencies, we can add Husky to an existing .NET application by running:
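
    dotnet husky install
    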

    If you open the root folder, you should be able to see these 3 folders:

    • .git, which contains the info about the Git repository;
    • .config that contains the description of the tools, such as dotnet-tools;
    • .husky that contains the files we are going to use to define our Git hooks.

    Finally, you can add a new hook by running, for example,

    dotnet husky add pre-commit -c "echo 'Hello world!'"
    git add .husky/pre-commit
    

    This command creates a new file, pre-commit (without file extension), under the .husky folder. By default, it appears like this:

    #!/bin/sh
    . "$(dirname "$0")/_/husky.sh"
    
    ## husky task runner examples -------------------
    ## Note : for local installation use 'dotnet' prefix. e.g. 'dotnet husky'
    
    ## run all tasks
    #husky run
    
    ### run all tasks with group: 'group-name'
    #husky run --group group-name
    
    ## run task with name: 'task-name'
    #husky run --name task-name
    
    ## pass hook arguments to task
    #husky run --args "$1" "$2"
    
    ## or put your custom commands -------------------
    #echo 'Husky.Net is awesome!'
    
    echo 'Hello world!'
    

    The default content is pretty useless; it’s time to customize that hook.

    Notice that the previous commands have also generated a task-runner.json file; we will use it later.

    Your first pre-commit hook

    To customize the script, open the file located at .husky/pre-commit.

    Here, you can add whatever you want.

    In the example below, I run commands that build the solution, format the code (using dotnet format with the rules defined in the .editorconfig file), and then run all the tests.

    #!/bin/sh
    . "$(dirname "$0")/_/husky.sh"
    
    echo 'Building code'
    dotnet build
    
    echo 'Formatting code'
    dotnet format
    
    echo 'Running tests'
    dotnet test
    

    Then, add it to Git, and you are ready to go. 🚀 But wait…

    3 ways to manage dotnet format with Husky.NET

    There is a problem with the approach in the example above.

    Let’s simulate a usage flow:

    1. you modify a C# class;
    2. you run git commit -m "message";
    3. the pre-commit hook runs dotnet build;
    4. the pre-commit hook runs dotnet format;
    5. the pre-commit hook runs dotnet test;
    6. after the hooks, the commit is created.

    What is the final result?

    Since dotnet format modifies the source files, and given that the snapshot has already been created before executing the hook, all the modified files will not be part of the final commit!

    Also, dotnet format executes linting on every file in the solution, not only those that are part of the current snapshot. The operation might then take a lot of time, depending on the size of the repository, and most of the time, it will not update any file (because you’ve already formatted everything in a previous run).

    We have to work out a way to fix this issue. I’ll suggest three approaches.

    Include all the changes using Git add

    The first approach is quite simple: run git add . after dotnet format.

    So, the flow becomes:

    1. you modify a C# class;
    2. you run git commit -m "message";
    3. the pre-commit hook runs dotnet build;
    4. the pre-commit hook runs dotnet format;
    5. the pre-commit hook runs git add .;
    6. the pre-commit hook runs dotnet test;
    7. Git creates the commit.

    This is the most straightforward approach, but it has some downsides:

    • dotnet format is executed on every file in the solution. The more your project grows, the slower your commits become;
    • git add . adds all the modified files to the current snapshot, even those you did not intend to include in this commit (maybe because you have updated many files and want to create two distinct commits).

    So, it works, but we can do better.

    Execute a dry run of dotnet-format

    You can add the --verify-no-changes flag to the dotnet format command: it makes the command return an error if at least one file needs to be updated because of a formatting rule.

    Let’s see how the flow changes if one file needs to be formatted.

    1. you modify a C# class;
    2. you run git commit -m "message";
    3. the pre-commit hook runs dotnet build;
    4. the pre-commit hook runs dotnet format --verify-no-changes;
    5. the pre-commit hook returns an error and aborts the operation;
    6. you run dotnet format on the whole solution to fix all the formatting issues;
    7. you run git add .;
    8. you run git commit -m "message";
    9. the pre-commit hook runs dotnet build;
    10. the pre-commit hook runs dotnet format --verify-no-changes. Now, there is nothing to format, and we can proceed;
    11. the pre-commit hook runs dotnet test;
    12. Git creates the commit.

    Notice that, this way, if there is something to format, the whole commit is aborted. You will then have to run dotnet format on the entire solution, fix the errors, add the changes to the snapshot, and restart the flow.

    It’s a longer process, but it allows you to have complete control over the formatted files.

    Also, you won’t risk including in the snapshot files you intentionally left out because you want to add them to a subsequent commit.

    Run dotnet-format only on the staged files using Husky.NET Task Runner

    The third approach is the most complex but with the best result.

    If you recall, during the initialization, Husky added two files in the .husky folder: pre-commit and task-runner.json.

    The key to this solution is the task-runner.json file. This file allows you to create custom scripts with a name, a group, the command to be executed, and its related parameters.

    By default, you will see this content:

    {
      "tasks": [
        {
          "name": "welcome-message-example",
          "command": "bash",
          "args": ["-c", "echo Husky.Net is awesome!"],
          "windows": {
            "command": "cmd",
            "args": ["/c", "echo Husky.Net is awesome!"]
          }
        }
      ]
    }
    

    To make sure that dotnet format runs only on the staged files, you must create a new task like this:

    {
      "name": "dotnet-format-staged-files",
      "group": "pre-commit-operations",
      "command": "dotnet",
      "args": ["format", "--include", "${staged}"],
      "include": ["**/*.cs"]
    }
    

    Here, we have specified a name, dotnet-format-staged-files, the command to run, dotnet, with some parameters listed in the args array. Notice that we can filter the list of files to be formatted by using the ${staged} parameter, which is populated by Husky.NET.

    We have also added this task to a group named pre-commit-operations that we can use to reference a list of tasks to be executed together.

    If you want to run a specific task, you can use dotnet husky run --name taskname. In our example, the command would be dotnet husky run --name dotnet-format-staged-files.

    If you want to run a set of tasks belonging to the same group, you can run dotnet husky run --group groupname. In our example, the command would be dotnet husky run --group pre-commit-operations.

    The last step is to call these tasks from within our pre-commit file. So, replace the old dotnet format command with one of the above commands.

    Final result and optimizations of the pre-commit hook

    Now that everything is in place, we can improve the script to make it faster.

    Let’s see which parts we can optimize.

    The first step is the build phase. For sure, we have to run dotnet build to see if the project builds correctly. You can consider adding the --no-restore flag to skip the restore step before building.

    Then we have the format phase: we can avoid formatting every file using one of the steps defined before. I’ll replace the plain dotnet format with the execution of the script defined in the Task Runner (it’s the third approach we saw).

    Then, we have the test phase. We can add both the --no-restore and the --no-build flag to the command since we have already built everything before. But wait! The format phase updated the content of our files, so we still have to build the whole solution. Unless we swap the build and the format phases.

    So, here we have the final pre-commit file:

    #!/bin/sh
    . "$(dirname "$0")/_/husky.sh"
    
    echo 'Ready to commit changes!'
    
    echo 'Format'
    
    dotnet husky run --name dotnet-format-staged-files
    
    echo 'Build'
    
    dotnet build --no-restore
    
    echo 'Test'
    
    dotnet test --no-restore
    
    echo 'Completed pre-commit changes'
    

    Yes, I know that when you run the dotnet test command, you also build the solution, but I prefer having two separate steps just for clarity!

    Ah, and don’t remove the #!/bin/sh at the beginning of the script!

    How to skip Git hooks

    To trigger the hook, just run git commit -m "message". Before completing the commit, the hook will run all the commands. If one of them fails, the whole commit operation is aborted.

    There are cases when you have to skip the validation: for example, when you have integration tests that rely on an external source that is currently offline. In that case, some tests will fail, and you will be able to commit your code only once the external system is working again.

    You can skip the commit validation by adding the --no-verify flag:

    git commit -m "my message" --no-verify
    

    Further readings

    Husky.NET is a port of the Husky tool that we already used in a previous article as an NPM dependency. In that article, we also learned how to customize Conventional Commits using Git hooks.

    🔗 How to customize Conventional Commits in a .NET application using GitHooks | Code4IT

    As we learned, there are many more Git hooks that we can use. You can see the complete list on the Git documentation:

    🔗 Customizing Git – Git Hooks | Git docs

    This article first appeared on Code4IT 🐧

    Of course, if you want to get the best out of Husky.NET, I suggest you have a look at the official documentation:

    🔗 Husky.Net documentation

    One last thing: we installed Husky.NET using dotnet tools. If you want to learn more about this topic, I found an excellent article online that you might want to read:

    🔗 Using dotnet tools | Gustav Ehrenborg

    Wrapping up

    In this article, we learned how to create a pre-commit Git hook and validate all our changes before committing them to our Git repository.

    We also focused on the formatting of our code: how can we format only the files we have changed without impacting the whole solution?

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧





    Source link

  • How to create Unit Tests for Model Validation | Code4IT



    Model validation is fundamental to any project: it brings security and robustness, acting as a first shield against an invalid state.

    You should then add Unit Tests focused on model validation. In fact, when defining the input model, you should always consider both the valid and, even more, the invalid models, making sure that all the invalid models are rejected.

    BDD is a good approach for this scenario, and you can use TDD to implement it gradually.

    Okay, but how can you validate that the models and model attributes you defined are correct?

    Let’s define a simple model:

    public class User
    {
        [Required]
        [MinLength(3)]
        public string FirstName { get; set; }
    
        [Required]
        [MinLength(3)]
        public string LastName { get; set; }
    
        [Range(18, 100)]
        public int Age { get; set; }
    }
    

    Have we defined our model correctly? Are we covering all the edge cases? A well-written Unit Test suite is our best friend here!

    We have two choices: we can write Integration Tests to send requests to our system, which is running an in-memory server, and check the response we receive. Or we can use the internal Validator class, the one used by ASP.NET to validate input models, to create slim and fast Unit Tests. Let’s use the second approach.

    Here’s a utility method we can use in our tests:

    public static IList<ValidationResult> ValidateModel(object model)
    {
        var results = new List<ValidationResult>();
    
        var validationContext = new ValidationContext(model, null, null);
    
        Validator.TryValidateObject(model, validationContext, results, true);
    
        if (model is IValidatableObject validatableModel)
           results.AddRange(validatableModel.Validate(validationContext));
    
        return results;
    }
    

    In short, we create a validation context without any external dependency, focused only on the input model: new ValidationContext(model, null, null).

    Next, we validate each field by calling TryValidateObject and store the validation results in the results list.

    Finally, if the Model implements the IValidatableObject interface, which exposes the Validate method, we call that Validate() method and store the returned validation errors in the final result list created before.

    As you can see, we can handle both validation coming from attributes on the fields, such as [Required], and custom validation defined in the model class’s Validate() method.
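
    For example, here is a sketch of a hypothetical model (not part of the original example) that combines attribute-based validation with custom logic in Validate(); the ValidateModel helper above collects the errors coming from both sources.

    // Hypothetical model: [Required] is checked via attributes,
    // while the cross-field rule below comes from IValidatableObject.Validate().
    // Requires: using System.ComponentModel.DataAnnotations;
    public class Booking : IValidatableObject
    {
        [Required]
        public DateTime? CheckIn { get; set; }
    
        [Required]
        public DateTime? CheckOut { get; set; }
    
        public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
        {
            if (CheckOut <= CheckIn)
                yield return new ValidationResult(
                    "CheckOut must be after CheckIn",
                    new[] { nameof(CheckIn), nameof(CheckOut) });
        }
    }
    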

    Now, we can use this method to verify whether the validation passes and, in case it fails, which errors are returned:

    [Test]
    public void User_ShouldPassValidation_WhenModelIsValid()
    {
        var model = new User { FirstName = "Davide", LastName = "Bellone", Age = 32 };
        var validationResult = ModelValidationHelper.ValidateModel(model);
        Assert.That(validationResult, Is.Empty);
    }
    
    [Test]
    public void User_ShouldNotPassValidation_WhenLastNameIsEmpty()
    {
        var model = new User { FirstName = "Davide", LastName = null, Age = 32 };
        var validationResult = ModelValidationHelper.ValidateModel(model);
        Assert.That(validationResult, Is.Not.Empty);
    }
    
    
    [Test]
    public void User_ShouldNotPassValidation_WhenAgeIsLessThan18()
    {
        var model = new User { FirstName = "Davide", LastName = "Bellone", Age = 10 };
        var validationResult = ModelValidationHelper.ValidateModel(model);
        Assert.That(validationResult, Is.Not.Empty);
    }
    

    Further readings

    Model Validation allows you to create more robust APIs. To improve robustness, you can follow Postel’s law:

    🔗 Postel’s law for API Robustness | Code4IT

    This article first appeared on Code4IT 🐧

    Model validation, in my opinion, is one of the cases where Unit Tests are way better than Integration Tests. This is a perfect example of Testing Diamond, the best (in most cases) way to structure a test suite:

    🔗 Testing Pyramid vs Testing Diamond (and how they affect Code Coverage) | Code4IT

    If you still prefer writing Integration Tests for this kind of operation, you can rely on the WebApplicationFactory class and use it in your NUnit tests:

    🔗 Advanced Integration Tests for .NET 7 API with WebApplicationFactory and NUnit | Code4IT

    Wrapping up

    Model validation is crucial. Testing the correctness of model validation can make or break your application. Please don’t skip it!

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧





    Source link

  • OhMyPosh and CascadiaCode not working on PowerShell? How to fix it in Windows 10 and Windows 11 Integrated Terminal | Code4IT

    OhMyPosh and CascadiaCode not working on PowerShell? How to fix it in Windows 10 and Windows 11 Integrated Terminal | Code4IT


    Learn how to integrate Oh My Posh, a cross-platform tool that lets you create beautiful and informative prompts for PowerShell.


    The content of the blog you are reading right now is stored in a Git repository. Every time I create an article, I create a new Git Branch to isolate the changes.

    To generate the skeleton of the articles, I use the command line (well, I generally use PowerShell); in particular, given that I’m using both Windows 10 and Windows 11 – depending on the laptop I’m working on – I use the Integrated Terminal, which allows you to define the style, the fonts, and so on of every terminal configured in the settings.

    Windows terminal with default style

    The default profile is pretty basic: no info is shown except for the current path – I want to customize the appearance.

    I want to show the status of the Git repository, including:

    • repository name
    • branch name
    • outgoing commits

    There are lots of articles that teach how to use OhMyPosh with Cascadia Code. Unfortunately, I couldn’t make them work.

    In this article, I’ll show you how I fixed it: it’s the step-by-step guide I wrote while installing everything on my local machine. I hope it works for you as well!

    Step 1: Create the $PROFILE file if it does not exist

    In PowerShell, you can customize the current execution by updating the $PROFILE file.

    Clearly, you first have to check if the profile file exists.

    Open the PowerShell and type:

    $PROFILE # You can also use $profile lowercase - it's the same!
    

    This command shows you the expected path of this file. The file, if it exists, is stored in that location.

    The Profile file is expected to be under a specific folder whose path can be found using the $PROFILE command

    In this case, the $Profile file should be available under the folder C:\Users\d.bellone\Documents\WindowsPowerShell. In my case, it does not exist, though!

    The Profile file is expected to be under a specific path, but it may not exist

    Therefore, you must create it manually: head to that folder and create a file named Microsoft.PowerShell_profile.ps1.

    Note: it might happen that not even the WindowsPowerShell folder exists. If it’s missing, well, create it!

    Step 2: Install OhMyPosh using Winget, Scoop, or PowerShell

    To use OhMyPosh, we have to – of course – install it.

    As explained in the official documentation, we have three ways to install OhMyPosh, depending on the tool you prefer.

    If you use Winget, just run:

    winget install JanDeDobbeleer.OhMyPosh -s winget
    

    If you prefer Scoop, the command is:

    scoop install https://github.com/JanDeDobbeleer/oh-my-posh/releases/latest/download/oh-my-posh.json
    

    And, if you like working with PowerShell, execute:

    Set-ExecutionPolicy Bypass -Scope Process -Force; Invoke-Expression ((New-Object System.Net.WebClient).DownloadString('https://ohmyposh.dev/install.ps1'))
    

    I used Winget, and you can see the installation process here:

    Install OhMyPosh with Winget

    Now, to apply these changes, you have to restart the PowerShell.

    Step 3: Add OhMyPosh to the PowerShell profile

    Open the Microsoft.PowerShell_profile.ps1 file and add the following line:

    oh-my-posh init pwsh | Invoke-Expression
    

    This command is executed every time you open the PowerShell with the default profile, and it initializes OhMyPosh to have it available during the current session.

    Now, you can save and close the file.

    Hint: you can open the profile file with Notepad by running notepad $PROFILE.

    Step 4: Set the Execution Policy to RemoteSigned

    Restart the terminal. In all probability, you will see an error like this:

    “The file .ps1 is not digitally signed” error

    The error message

    The file <path>\Microsoft.PowerShell_profile.ps1 is
    not digitally signed. You cannot run this script on the current system

    means that PowerShell does not trust the script it’s trying to load.

    To see which Execution Policy is currently active, run:
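
    Get-ExecutionPolicy
    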

    You’ll probably see that the value is AllSigned.

    To enable the execution of scripts created on your local machine, you have to set the Execution Policy value to RemoteSigned, using this command by running the PowerShell in administrator mode:

    Set-ExecutionPolicy RemoteSigned
    

    Let’s see the definition of the RemoteSigned Execution policy as per SQLShack’s article:

    This is also a safe PowerShell Execution policy to set in an enterprise environment. This policy dictates that any script that was not created on the system that the script is running on, should be signed. Therefore, this will allow you to write your own script and execute it.

    So, yeah, feel free to proceed and set the new Execution policy to have your PowerShell profile loaded correctly every time you open a new PowerShell instance.

    Now, OhMyPosh can run in the current profile.

    Head to a Git repository and notice that… It’s not working!🤬 Or, well, we have the Git information, but we are missing some icons and glyphs.

    Oh My Posh is loaded correctly, but some icons are missing due to the wrong font

    Step 5: Use CaskaydiaCove, not Cascadia Code, as a font

    We still have to install the correct font with the missing icons.

    We will install it using Chocolatey, a package manager for Windows.

    To check if you have it installed, run:
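
    choco --version
    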

    Now, to install the correct font family, open a PowerShell with administration privileges and run:

    choco install cascadia-code-nerd-font
    

    Once the installation is complete, you must tell Integrated Terminal to use the correct font by following these steps:

    1. open the Settings page (by hitting CTRL + ,)
    2. select the profile you want to update (in my case, I’ll update the default profile)
    3. open the Appearance section
    4. under Font face select CaskaydiaCove Nerd Font

    PowerShell profile settings - Font Face should be CaskaydiaCove Nerd Font

    Now close the Integrated Terminal to apply the changes.

    Open it again, navigate to a Git repository, and admire the result.

    OhMyPosh with icons and fonts loaded correctly

    Further readings

    The first time I read about OhMyPosh, it was on Scott Hanselman’s blog. I couldn’t make his solution work – and that’s the reason I wrote this article. However, in his article, he shows how he customized his own Terminal with more glyphs and icons, so you should give it a read.

    🔗 My Ultimate PowerShell prompt with Oh My Posh and the Windows Terminal | Scott Hanselman’s blog

    We customized our PowerShell profile with just one simple configuration. However, you can do a lot more. You can read Ruud’s in-depth article about PowerShell profiles.

    🔗 How to Create a PowerShell Profile – Step-by-Step | Lazyadmin

    One of the core parts of this article is that we have to use CaskaydiaCove as a font instead of the (in)famous Cascadia Code. But why?

    🔗 Why CaskaydiaCove and not Cascadia Code? | GitHub

    Finally, as I said at the beginning of this article, I use Git and Git Branches to handle the creation and management of my blog articles. That’s just the tip of the iceberg! 🏔️

    If you want to steal my (previous) workflow, have a look at the behind-the-scenes of my blogging process (note: in the meanwhile, a lot of things have changed, but these steps can still be helpful for you)

    🔗 From idea to publishing, and beyond: how I automated my blogging workflow with GitHub, PowerShell, and Azure | Code4IT

    This article first appeared on Code4IT 🐧

    Wrapping up

    In this article, we learned how to install OhMyPosh in PowerShell and overcome all the errors you (well, I) don’t see described in other articles.

    I wrote this step-by-step article alongside installing these tools on my local machine, so I’m confident the solution will work.

    Did this solution work for you? Let me know! 📨

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧





    Source link