Blog

  • How an XDR Platform Transforms Your SOC Operations



    XDR solutions are revolutionizing how security teams handle threats by dramatically reducing false positives and streamlining operations. In fact, modern XDR platforms generate significantly fewer false positives than traditional SIEM threat analytics, allowing security teams to focus on genuine threats rather than chasing shadows. We’ve seen firsthand how security operations centers (SOCs) struggle with alert fatigue, fragmented visibility, and resource constraints. However, an XDR platform addresses these challenges by unifying information from multiple sources and providing a holistic view of threats. This integration enables organizations to operate advanced threat detection and response with fewer SOC resources, making it a cost-effective approach to modern security operations.

    An XDR platform consolidates security data into a single system, ensuring that SOC teams and surrounding departments can operate from the same information base. Consequently, this unified approach not only streamlines operations but also minimizes breach risks, making it an essential component of contemporary cybersecurity strategies.

    In this article, we’ll explore how XDR transforms SOC operations, why traditional tools fall short, and the practical benefits of implementing this technology in your security framework.

    The SOC Challenge: Why Traditional Tools Fall Short

    Security Operations Centers (SOCs) today face unprecedented challenges with their traditional security tools. While security teams strive to protect organizations, they’re increasingly finding themselves overwhelmed by fundamental limitations in their security infrastructure.

    Alert overload and analyst fatigue

    Modern SOC teams are drowning in alerts. As per Vectra AI, an overwhelming 71% of SOC practitioners worry they’ll miss real attacks buried in alert floods, while 51% believe they simply cannot keep pace with mounting security threats. These numbers paint a troubling picture.

    Siloed tools and fragmented visibility

    The tool sprawl in security operations creates massive blind spots. According to Vectra AI findings, 73% of SOCs have more than 10 security tools in place, while 45% juggle more than 20 different tools. Despite this arsenal, 47% of practitioners don’t trust their tools to work as needed.

    Many organizations struggle with siloed security data across disparate systems. Each department stores logs, alerts, and operational details in separate repositories that rarely communicate with one another. This fragmentation means threat hunting becomes guesswork because critical artifacts sit in systems that no single team can access.

    Slow response times and manual processes

    Traditional SOCs rely heavily on manual processes, significantly extending detection and response times. When investigating incidents, analysts must manually piece together information from different silos, losing precious time during active cyber incidents.

    According to research by Palo Alto Networks, automation can reduce SOC response times by up to 50%, significantly limiting breach impacts. Unfortunately, most traditional SOCs lack this capability. The workflow in traditional environments is characterized by manual processes that exacerbate alert fatigue while dealing with massive threat alert volumes.

    The complexity of investigations further slows response. When an incident occurs, analysts must combine data from various sources to understand the full scope of an attack, a time-consuming process that allows threats to linger in systems longer than necessary.

    What is an XDR Platform and How Does It Work?

    Extended Detection and Response (XDR) platforms represent the evolution of cybersecurity technology, breaking down traditional barriers between security tools. Unlike siloed point products, XDR solutions provide a holistic approach to threat management through unified visibility and coordinated response.

    Unified data collection across endpoints, network, and cloud

    At its core, an XDR platform aggregates and correlates data from multiple security layers into a centralized repository. This comprehensive data collection encompasses:

    • Endpoints (computers, servers, mobile devices)
    • Network infrastructure and traffic
    • Cloud environments and workloads
    • Email systems and applications
    • Identity and access management

    This integration eliminates blind spots that typically plague security operations. By collecting telemetry from across the entire attack surface, XDR platforms provide security teams with complete visibility into potential threats. The system automatically ingests, cleans, and standardizes this data, ensuring consistent, high-quality information for analysis.

    Real-time threat detection using AI and ML

    XDR platforms leverage advanced analytics, artificial intelligence, and machine learning to identify suspicious patterns and anomalies that human analysts might miss. These capabilities enable:

    • Automatic correlation of seemingly unrelated events across different security layers
    • Identification of sophisticated multi-vector attacks through pattern recognition
    • Real-time monitoring and analysis of data streams for immediate threat identification
    • Reduction in false positives through contextual understanding of alerts

    The AI-powered capabilities enable XDR platforms to detect threats at a scale and speed impossible for human analysts alone. Moreover, these systems continuously learn and adapt to evolving threats through machine learning models.

    Automated response and orchestration capabilities

    Once threats are detected, XDR platforms can initiate automated responses without requiring manual intervention. This automation includes:

    • Isolation of compromised devices to contain threats
    • Blocking of malicious IP addresses and domains
    • Execution of predefined response playbooks for consistent remediation
    • Prioritization of incidents based on severity for efficient resource allocation

    Key Benefits of XDR for SOC Operations

    Implementing an XDR platform delivers immediate, measurable advantages to security operations centers struggling with traditional tools and fragmented systems. SOC teams gain specific capabilities that fundamentally transform their effectiveness against modern threats.

    Faster threat detection and reduced false positives

    The strategic advantage of XDR solutions begins with their ability to dramatically reduce alert volume. XDR tools automatically group related alerts into unified incidents, representing entire attack sequences rather than isolated events. This correlation across different security layers identifies complex attack patterns that traditional solutions might miss.

    Improved analyst productivity through automation

    As per the Tines report, 64% of analysts spend over half their time on tedious manual work, with 66% believing that half of their tasks could be automated. XDR platforms address this challenge through built-in orchestration and automation that offload repetitive tasks. Specifically, XDR can automate threat detection through machine learning, streamline incident response processes, and generate AI-powered incident reports. This automation allows SOC teams to detect sophisticated attacks with fewer resources while reducing response time.

    Centralized visibility and simplified workflows

    XDR provides a single pane view that eliminates “swivel chair integration,” where analysts manually interface across multiple security systems. This unified approach aggregates data from endpoints, networks, applications, and cloud environments into a consolidated platform. As a result, analysts gain swift investigation capabilities with instant access to all forensic artifacts, events, and threat intelligence in one location. This centralization particularly benefits teams during complex investigations, enabling them to quickly understand the complete attack story.

    Better alignment with compliance and audit needs

    XDR strengthens regulatory compliance through detailed documentation and monitoring capabilities. The platform generates comprehensive logs and audit trails of security events, user activities, and system changes, helping organizations demonstrate compliance to regulators. Additionally, XDR’s continuous monitoring adapts to new threats and regulatory changes, ensuring consistent compliance over time. Through centralized visibility and data aggregation, XDR effectively monitors data flows and access patterns, preventing unauthorized access to sensitive information.

    Conclusion

    XDR platforms clearly represent a significant advancement in cybersecurity technology. At Seqrite, we offer a comprehensive XDR platform designed to help organizations simplify their SOC operations, improve detection accuracy, and automate responses. If you are looking to strengthen your cybersecurity posture with an effective and scalable XDR solution, Seqrite XDR is built to help you stay ahead of evolving threats.

     




  • How to integrate Feature Flags stored on Azure App Configuration in an ASP.NET Core Application | Code4IT



    Learn how to use Feature Flags in ASP.NET Core apps and read values from Azure App Configuration. Understand how to use filters, like the Percentage filter, to control feature activation, and learn how to take full control of the cache expiration of the values.


    Feature Flags let you remotely control the activation of features without code changes. They help you to test, release, and manage features safely and quickly by driving changes using centralized configurations.

    In a previous article, we learned how to integrate Feature Flags in ASP.NET Core applications. Also, a while ago, we learned how to integrate Azure App Configuration in an ASP.NET Core application.

    In this article, we are going to join the two streams in a single article: in fact, we will learn how to manage Feature Flags using Azure App Configuration to centralize our configurations.

    It’s a sort of evolution from the previous article. Instead of changing the static configurations and redeploying the whole application, we are going to move the Feature Flags to Azure so that you can enable or disable those flags in just one click.

    A recap of Feature Flags read from the appsettings file

    Let’s reuse the example shown in the previous article.

    We have an ASP.NET Core application (in that case, we were building a Razor application, but it’s not important for the sake of this article), with some configurations defined in the appsettings file under the FeatureManagement key:

    {
      "FeatureManagement": {
        "Header": true,
        "Footer": true,
        "PrivacyPage": false,
        "ShowPicture": {
          "EnabledFor": [
            {
              "Name": "Percentage",
              "Parameters": { "Value": 60 }
            }
          ]
        }
      }
    }
    

    We have already taken a deep dive into Feature Flags in an ASP.NET Core application in the previous article. However, let me summarize it.

    First of all, you have to define your flags in the appsettings.json file using the structure we saw before.

    To use Feature Flags in ASP.NET Core you have to install the Microsoft.FeatureManagement.AspNetCore NuGet package.

    Then, you have to tell ASP.NET to use Feature Flags by calling:

    builder.Services.AddFeatureManagement();
    

    Finally, you are able to consume those flags in three ways (a minimal sketch follows the list):

    • inject the IFeatureManager interface and call IsEnabled or IsEnabledAsync;
    • use the FeatureGate attribute on a Controller class or a Razor model;
    • use the <feature> tag in a Razor page to show or hide a portion of HTML
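
    To give a rough idea, here is a minimal sketch of the first two approaches in C# (the service and controller names are just examples, while the flag names come from the appsettings file above); the third one lives in Razor markup, so it is shown as a comment:

    using Microsoft.AspNetCore.Mvc;
    using Microsoft.FeatureManagement;
    using Microsoft.FeatureManagement.Mvc;

    // 1) Inject IFeatureManager and check a flag programmatically
    public class HeaderService
    {
        private readonly IFeatureManager _featureManager;

        public HeaderService(IFeatureManager featureManager)
            => _featureManager = featureManager;

        public Task<bool> ShouldShowHeaderAsync()
            => _featureManager.IsEnabledAsync("Header");
    }

    // 2) Use the FeatureGate attribute: the endpoint returns a 404 when the flag is off
    [FeatureGate("PrivacyPage")]
    public class PrivacyController : Controller
    {
        public IActionResult Index() => View();
    }

    // 3) In a Razor page, wrap the optional markup in the <feature> tag:
    //    <feature name="Footer"> ...footer markup... </feature>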

    How to create Feature Flags on Azure App Configuration

    We are ready to move our Feature Flags to Azure App Configuration. Needless to say, you need an Azure subscription 😉

    Log in to the Azure Portal, head to “Create a resource”, and create a new App Configuration:

    Azure App configuration in the Marketplace

    I’m going to reuse the same instance I created in the previous article – you can see the full details in the How to create an Azure App Configuration instance section.

    Now we have to configure the same keys defined in the appsettings file: Header, Footer, and PrivacyPage.

    Open the App Configuration instance and locate the “Feature Manager” menu item in the left panel. This is the central place for creating, removing, and managing your Feature Flags. Here, you can see that I have already added the Header and Footer, and you can see their current state: “Footer” is enabled, while “Header” is not.

    Feature Flags manager dashboard

    How can I add the PrivacyPage flag? It’s elementary: click the “Create” button and fill in the fields.

    You have to define a Name and a Key (they can also be different), and if you want, you can add a Label and a Description. You can also define whether the flag should be active by checking the “Enable feature flag” checkbox.

    Feature Flag definition form

    Read Feature Flags from Azure App Configuration in an ASP.NET Core application

    It’s time to integrate Azure App Configuration with our ASP.NET Core application.

    Before moving to the code, we have to locate the connection string and store it somewhere.

    Head back to the App Configuration resource and locate the “Access keys” menu item under the “Settings” section.

    Access Keys page with connection strings

    From here, copy the connection string (I suggest that you use the Read-only Keys) and store it somewhere.
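
    Where exactly you keep that value is up to you (user secrets, an environment variable, a key vault, and so on). As a minimal sketch, assuming a hypothetical environment variable name, you could read it like this instead of hardcoding it:

    // "APP_CONFIG_CONNECTION_STRING" is just an example name: use whatever you configured.
    string? connectionString =
        Environment.GetEnvironmentVariable("APP_CONFIG_CONNECTION_STRING");

    if (string.IsNullOrWhiteSpace(connectionString))
    {
        throw new InvalidOperationException("The Azure App Configuration connection string is missing.");
    }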

    Before proceeding, you have to install the Microsoft.Azure.AppConfiguration.AspNetCore NuGet package.

    Now, we can add Azure App Configuration as a source for our configurations by connecting to the connection string and by declaring that we are going to use Feature Flags:

    builder.Configuration.AddAzureAppConfiguration(options =>
        options.Connect(connectionString).UseFeatureFlags()
    );
    

    That’s not enough. We also need to tell ASP.NET that we are going to consume these configurations by registering the related services on the Services collection:

    builder.Services.AddAzureAppConfiguration();
    
    builder.Services.AddFeatureManagement();
    

    Finally, once we have built our application with the usual builder.Build(), we have to add the Azure App Configuration middleware:

    app.UseAzureAppConfiguration();
    

    To try it out, run the application and validate that the flags are being applied. You can enable or disable those flags on Azure, restart the application, and check that the changes have been picked up. Alternatively, you can wait 30 seconds for the flag values to be refreshed and see the changes applied to your application.

    Using the Percentage filter on Azure App Configuration

    Suppose you want to enable a functionality only to a percentage of sessions (sessions, not users!). In that case, you can use the Percentage filter.

    The previous article had a specific section dedicated to the PercentageFilter, so you might want to check it out.

    As a recap, we defined the flag as:

    {
      "ShowPicture": {
        "EnabledFor": [
          {
            "Name": "Percentage",
            "Parameters": {
              "Value": 60
            }
          }
        ]
      }
    }
    

    And added the PercentageFilter filter to ASP.NET with:

    builder.Services.AddFeatureManagement()
        .AddFeatureFilter<PercentageFilter>();
    

    Clearly, we can define such flags on Azure as well.

    Head back to the Azure Portal and create a new Feature Flag (or edit an existing one). This time, you also have to add a Feature Filter to the flag. Even though the PercentageFilter comes out of the box with the FeatureManagement NuGet package, it is not available as a predefined filter on the Azure Portal.

    You have to define the filter with the following values:

    • Filter Type must be “Custom”;
    • Custom filter name must be “Percentage”
    • You must add a new key, “Value”, and set its value to “60”.

    Custom filter used to create Percentage Filter

    The configuration we just added reflects the JSON value we previously had in the appsettings file: 60% of the requests will activate the flag, while the remaining 40% will not.
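
    If you want to eyeball that split without deploying anything, here is a hypothetical, self-contained console sketch: it recreates the same FeatureManagement section in memory (so it never calls Azure), evaluates the flag 1000 times, and counts how often it comes back enabled. The names and the iteration count are arbitrary:

    using Microsoft.Extensions.Configuration;
    using Microsoft.Extensions.DependencyInjection;
    using Microsoft.FeatureManagement;
    using Microsoft.FeatureManagement.FeatureFilters;

    // Same structure as the FeatureManagement section shown above, flattened into configuration keys.
    var configuration = new ConfigurationBuilder()
        .AddInMemoryCollection(new Dictionary<string, string?>
        {
            ["FeatureManagement:ShowPicture:EnabledFor:0:Name"] = "Percentage",
            ["FeatureManagement:ShowPicture:EnabledFor:0:Parameters:Value"] = "60"
        })
        .Build();

    var services = new ServiceCollection();
    services.AddSingleton<IConfiguration>(configuration);
    services.AddFeatureManagement()
        .AddFeatureFilter<PercentageFilter>();

    using var provider = services.BuildServiceProvider();
    var featureManager = provider.GetRequiredService<IFeatureManager>();

    int enabledCount = 0;
    for (int i = 0; i < 1000; i++)
    {
        if (await featureManager.IsEnabledAsync("ShowPicture"))
        {
            enabledCount++;
        }
    }

    // Expect a value around 600: the filter enables the flag for roughly 60% of the evaluations.
    Console.WriteLine($"Enabled {enabledCount} times out of 1000");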

    Define the cache expiration interval for Feature Flags

    By default, Feature Flags are stored in an internal cache for 30 seconds.

    Sometimes, it’s not the best choice for your project; you may prefer a longer duration to avoid additional calls to the App Configuration platform; other times, you’d like to have the changes immediately available.

    You can then define the cache expiration interval you need by configuring the options for the Feature Flags:

    builder.Configuration.AddAzureAppConfiguration(options =>
        options.Connect(connectionString).UseFeatureFlags(featureFlagOptions =>
        {
            featureFlagOptions.CacheExpirationInterval = TimeSpan.FromSeconds(10);
        })
    );
    

    This way, Feature Flag values are stored in the internal cache for 10 seconds. Then, when you reload the page, the configurations are reread from Azure App Configuration and the flags are applied with the new values.

    Further readings

    This is the final article of a path I built during these months to explore how to use configurations in ASP.NET Core.

    We started by learning how to set configuration values in an ASP.NET Core application, as explained here:

    🔗 3 (and more) ways to set configuration values in ASP.NET Core

    Then, we learned how to read and use them with the IOptions family:

    🔗 Understanding IOptions, IOptionsMonitor, and IOptionsSnapshot in ASP.NET Core

    From here, we learned how to read the same configurations from Azure App Configuration, to centralize our settings:

    🔗 Azure App Configuration and ASP.NET Core API: a smart and secure way to manage configurations | Code4IT

    Then, we configured our applications to automatically refresh the configurations using a Sentinel value:

    🔗 How to automatically refresh configurations with Azure App Configuration in ASP.NET Core

    Finally, we introduced Feature Flags in our apps:

    🔗 Feature Flags 101: A Guide for ASP.NET Core Developers | Code4IT

    And then we got to this article!

    This article first appeared on Code4IT 🐧

    Wrapping up

    In this article, we have configured an ASP.NET Core application to read the Feature Flags stored on Azure App Configuration.

    Here’s the minimal code you need to add Feature Flags for ASP.NET Core API Controllers:

    var builder = WebApplication.CreateBuilder(args);
    
    string connectionString = "my connection string";
    
    builder.Services.AddControllers();
    
    builder.Configuration.AddAzureAppConfiguration(options =>
        options.Connect(connectionString)
        .UseFeatureFlags(featureFlagOptions =>
            {
                featureFlagOptions.CacheExpirationInterval = TimeSpan.FromSeconds(10);
            }
        )
    );
    
    builder.Services.AddAzureAppConfiguration();
    
    builder.Services.AddFeatureManagement()
        .AddFeatureFilter<PercentageFilter>();
    
    var app = builder.Build();
    
    app.UseRouting();
    app.UseAzureAppConfiguration();
    app.MapControllers();
    app.Run();
    

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧






  • Pre-commit hooks with Husky.NET – build, format, and test your .NET application before a Git commit | Code4IT



    A Git commit represents the status of a system. Learn how to validate that your code builds, is well-formatted, and all the tests pass by adding a Git hook!


    If you need to run operations before completing a Git commit, you can rely on Git Hooks.

    Git hooks are scripts that run automatically whenever a particular event occurs in a Git repository. They let you customize Git’s internal behaviour and trigger customizable actions at key points in the development life cycle.

    Extending Git hooks allows you to plug in custom functionalities to the regular Git flow, such as Git message validation, code formatting, etc.

    I’ve already described how to use Husky with NPM, but here I’m gonna use Husky.NET, the version of Husky created for .NET-based applications.

    Git hooks: a way to extend Git operations

    As we said, Git hooks are actions that run during specific phases of Git operations.

    Git hooks fall into 4 categories:

    • client-side hooks related to the committing workflow: they execute when you run git commit on your local repository;
    • client-side hooks related to the email workflow: they are executed when running git am, which is a command that allows you to integrate mails and Git repositories (I’ve never used it. If you are interested in this functionality, here’s the official documentation);
    • client-side hooks related to other operations: these hooks run on your local repository when performing operations like git rebase;
    • server-side hooks: they run after a commit is received on the remote repository, and they can reject a git push operation.

    Let’s focus on the client-side hooks that run when you commit changes using git commit.

    • pre-commit: the first hook invoked by git commit (if you don’t use the -m flag, it is invoked before asking you to insert a commit message); it can be used to inspect the snapshot that is about to be committed.
    • prepare-commit-msg: invoked by git commit; it can be used to edit the default commit message when it is generated by an automated tool.
    • commit-msg: invoked by git commit; it can be used to validate or modify the commit message after it is entered by the user.
    • post-commit: invoked after the git commit execution has run correctly; it is generally used to fire notifications.

    How to install Husky.NET and its dependencies in a .NET Application

    Husky.NET must be installed in the root folder of the solution.

    You first have to create a tool-manifest file in the root folder by running:
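
    dotnet new tool-manifest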

    This command creates a file named dotnet-tools.json under the .config folder: here you can see the list of external tools used by dotnet.

    After running the command, you will see that the dotnet-tools.json file contains this element:

    {
      "version": 1,
      "isRoot": true,
      "tools": {}
    }
    

    Now you can add Husky as a dotnet tool by running:

    dotnet tool install Husky
    

    After running the command, the file will contain something like this:

    {
      "version": 1,
      "isRoot": true,
      "tools": {
        "husky": {
          "version": "0.6.2",
          "commands": ["husky"]
        }
      }
    }
    

    Now that we have added it to our dependencies, we can add Husky to an existing .NET application by running:
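
    dotnet husky install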

    If you open the root folder, you should be able to see these 3 folders:

    • .git, which contains the info about the Git repository;
    • .config that contains the description of the tools, such as dotnet-tools;
    • .husky that contains the files we are going to use to define our Git hooks.

    Finally, you can add a new hook by running, for example,

    dotnet husky add pre-commit -c "echo 'Hello world!'"
    git add .husky/pre-commit
    

    This command creates a new file, pre-commit (without file extension), under the .husky folder. By default, it appears like this:

    #!/bin/sh
    . "$(dirname "$0")/_/husky.sh"
    
    ## husky task runner examples -------------------
    ## Note : for local installation use 'dotnet' prefix. e.g. 'dotnet husky'
    
    ## run all tasks
    #husky run
    
    ### run all tasks with group: 'group-name'
    #husky run --group group-name
    
    ## run task with name: 'task-name'
    #husky run --name task-name
    
    ## pass hook arguments to task
    #husky run --args "$1" "$2"
    
    ## or put your custom commands -------------------
    #echo 'Husky.Net is awesome!'
    
    echo 'Hello world!'
    

    The default content is pretty useless; it’s time to customize that hook.

    Notice that the latest command has also generated a task-runner.json file; we will use it later.

    Your first pre-commit hook

    To customize the script, open the file located at .husky/pre-commit.

    Here, you can add whatever you want.

    In the example below, I run commands that compile the code, format the text (using dotnet format with the rules defined in the .editorconfig file), and then run all the tests.

    #!/bin/sh
    . "$(dirname "$0")/_/husky.sh"
    
    echo 'Building code'
    dotnet build
    
    echo 'Formatting code'
    dotnet format
    
    echo 'Running tests'
    dotnet test
    

    Then, add it to Git, and you are ready to go. 🚀 But wait…

    3 ways to manage dotnet format with Husky.NET

    There is a problem with the approach in the example above.

    Let’s simulate a usage flow:

    1. you modify a C# class;
    2. you run git commit -m "message";
    3. the pre-commit hook runs dotnet build;
    4. the pre-commit hook runs dotnet format;
    5. the pre-commit hook runs dotnet test;
    6. after the hooks, the commit is created.

    What is the final result?

    Since dotnet format modifies the source files, and given that the snapshot has already been created before executing the hook, all the modified files will not be part of the final commit!

    Also, dotnet format executes linting on every file in the solution, not only those that are part of the current snapshot. The operation might then take a lot of time, depending on the size of the repository, and most of the time, it will not update any file (because you’ve already formatted everything in a previous run).

    We have to work out a way to fix this issue. I’ll suggest three approaches.

    Include all the changes using Git add

    The first approach is quite simple: run git add . after dotnet format.

    So, the flow becomes:

    1. you modify a C# class;
    2. you run git commit -m "message";
    3. the pre-commit hook runs dotnet build;
    4. the pre-commit hook runs dotnet format;
    5. the pre-commit hook runs git add .;
    6. the pre-commit hook runs dotnet test;
    7. Git creates the commit.

    This is the most straightforward approach, but it has some downsides:

    • dotnet format is executed on every file in the solution. The more your project grows, the slower your commits become;
    • git add . adds to the current snapshot all the files modified, even those you did not add to this commit on purpose (maybe because you have updated many files and want to create two distinct commits).

    So, it works, but we can do better.

    Execute a dry run of dotnet-format

    You can add the --verify-no-changes flag to the dotnet format command: this flag returns an error if at least one file needs to be updated because of a formatting rule.

    Let’s see how the flow changes if one file needs to be formatted.

    1. you modify a C# class;
    2. you run git commit -m "message";
    3. the pre-commit hook runs dotnet build;
    4. the pre-commit hook runs dotnet format --verify-no-changes;
    5. the pre-commit hook returns an error and aborts the operation;
    6. you run dotnet format on the whole solution to fix all the formatting issues;
    7. you run git add .;
    8. you run git commit -m "message";
    9. the pre-commit hook runs dotnet build;
    10. the pre-commit hook runs dotnet format --verify-no-changes. Now, there is nothing to format, and we can proceed;
    11. the pre-commit hook runs dotnet test;
    12. Git creates the commit.

    Notice that, this way, if there is something to format, the whole commit is aborted. You will then have to run dotnet format on the entire solution, fix the errors, add the changes to the snapshot, and restart the flow.

    It’s a longer process, but it allows you to have complete control over the formatted files.

    Also, you won’t risk including in the snapshot the files you want to keep staged in order to add them to a subsequent commit.

    Run dotnet-format only on the staged files using Husky.NET Task Runner

    The third approach is the most complex but with the best result.

    If you recall, during the initialization, Husky added two files in the .husky folder: pre-commit and task-runner.json.

    The key to this solution is the task-runner.json file. This file allows you to create custom scripts with a name, a group, the command to be executed, and its related parameters.

    By default, you will see this content:

    {
      "tasks": [
        {
          "name": "welcome-message-example",
          "command": "bash",
          "args": ["-c", "echo Husky.Net is awesome!"],
          "windows": {
            "command": "cmd",
            "args": ["/c", "echo Husky.Net is awesome!"]
          }
        }
      ]
    }
    

    To make sure that dotnet format runs only on the staged files, you must create a new task like this:

    {
      "name": "dotnet-format-staged-files",
      "group": "pre-commit-operations",
      "command": "dotnet",
      "args": ["format", "--include", "${staged}"],
      "include": ["**/*.cs"]
    }
    

    Here, we have specified a name, dotnet-format-staged-files, the command to run, dotnet, with some parameters listed in the args array. Notice that we can filter the list of files to be formatted by using the ${staged} parameter, which is populated by Husky.NET.

    We have also added this task to a group named pre-commit-operations that we can use to reference a list of tasks to be executed together.

    If you want to run a specific task, you can use dotnet husky run --name taskname. In our example, the command would be dotnet husky run --name dotnet-format-staged-files.

    If you want to run a set of tasks belonging to the same group, you can run dotnet husky run --group groupname. In our example, the command would be dotnet husky run --group pre-commit-operations.

    The last step is to call these tasks from within our pre-commit file. So, replace the old dotnet format command with one of the above commands.

    Final result and optimizations of the pre-commit hook

    Now that everything is in place, we can improve the script to make it faster.

    Let’s see which parts we can optimize.

    The first step is the build phase. For sure, we have to run dotnet build to see if the project builds correctly. You can consider adding the --no-restore flag to skip the restore step before building.

    Then we have the format phase: we can avoid formatting every file using one of the steps defined before. I’ll replace the plain dotnet format with the execution of the script defined in the Task Runner (it’s the third approach we saw).

    Then, we have the test phase. We can add both the --no-restore and the --no-build flag to the command since we have already built everything before. But wait! The format phase updated the content of our files, so we still have to build the whole solution. Unless we swap the build and the format phases.

    So, here we have the final pre-commit file:

    #!/bin/sh
    . "$(dirname "$0")/_/husky.sh"
    
    echo 'Ready to commit changes!'
    
    echo 'Format'
    
    dotnet husky run --name dotnet-format-staged-files
    
    echo 'Build'
    
    dotnet build --no-restore
    
    echo 'Test'
    
    dotnet test --no-restore
    
    echo 'Completed pre-commit changes'
    

    Yes, I know that when you run the dotnet test command, you also build the solution, but I prefer having two separate steps just for clarity!

    Ah, and don’t remove the #!/bin/sh at the beginning of the script!

    How to skip Git hooks

    To trigger the hook, just run git commit -m "message". Before completing the commit, the hook will run all the commands. If one of them fails, the whole commit operation is aborted.

    There are cases when you have to skip the validation. For example, if you have integration tests that rely on an external source currently offline. In that case, some tests will fail, and you will be able to commit your code only once the external system gets working again.

    You can skip the commit validation by adding the --no-verify flag:

    git commit -m "my message" --no-verify
    

    Further readings

    Husky.NET is a port of the Husky tool we already used in a previous article, where we consumed it as an NPM dependency. In that article, we also learned how to customize Conventional Commits using Git hooks.

    🔗 How to customize Conventional Commits in a .NET application using GitHooks | Code4IT

    As we learned, there are many more Git hooks that we can use. You can see the complete list on the Git documentation:

    🔗 Customizing Git – Git Hooks | Git docs

    This article first appeared on Code4IT 🐧

    Of course, if you want to get the best out of Husky.NET, I suggest you have a look at the official documentation:

    🔗 Husky.Net documentation

    One last thing: we installed Husky.NET using dotnet tools. If you want to learn more about this topic, I found an excellent article online that you might want to read:

    🔗 Using dotnet tools | Gustav Ehrenborg

    Wrapping up

    In this article, we learned how to create a pre-commit Git hook and validate all our changes before committing them to our Git repository.

    We also focused on the formatting of our code: how can we format only the files we have changed without impacting the whole solution?

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧






  • How to create Unit Tests for Model Validation | Code4IT



    Model validation is fundamental to any project: it brings security and robustness, acting as a first shield against an invalid state.

    You should then add Unit Tests focused on model validation. In fact, when defining the input model, you should always consider both the valid and, even more, the invalid models, making sure that all the invalid models are rejected.

    BDD is a good approach for this scenario, and you can use TDD to implement it gradually.

    Okay, but how can you validate that the models and model attributes you defined are correct?

    Let’s define a simple model:

    public class User
    {
        [Required]
        [MinLength(3)]
        public string FirstName { get; set; }
    
        [Required]
        [MinLength(3)]
        public string LastName { get; set; }
    
        [Range(18, 100)]
        public int Age { get; set; }
    }
    

    Have we defined our model correctly? Are we covering all the edge cases? A well-written Unit Test suite is our best friend here!

    We have two choices: we can write Integration Tests to send requests to our system, which is running an in-memory server, and check the response we receive. Or we can use the internal Validator class, the one used by ASP.NET to validate input models, to create slim and fast Unit Tests. Let’s use the second approach.

    Here’s a utility method we can use in our tests:

    public static IList<ValidationResult> ValidateModel(object model)
    {
        var results = new List<ValidationResult>();
    
        var validationContext = new ValidationContext(model, null, null);
    
        Validator.TryValidateObject(model, validationContext, results, true);
    
        if (model is IValidatableObject validatableModel)
           results.AddRange(validatableModel.Validate(validationContext));
    
        return results;
    }
    

    In short, we create a validation context without any external dependency, focused only on the input model: new ValidationContext(model, null, null).

    Next, we validate each field by calling TryValidateObject and store the validation results in a list, results.

    Finally, if the Model implements the IValidatableObject interface, which exposes the Validate method, we call that Validate() method and store the returned validation errors in the final result list created before.

    As you can see, we can handle both validation coming from attributes on the fields, such as [Required], and custom validation defined in the model class’s Validate() method.
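
    To make that second case concrete, here is a hypothetical variant of the User model (not part of the original example) that also implements IValidatableObject; with it, the helper above returns both the attribute-based errors and the custom cross-field error:

    using System.ComponentModel.DataAnnotations;

    public class User : IValidatableObject
    {
        [Required]
        [MinLength(3)]
        public string FirstName { get; set; }

        [Required]
        [MinLength(3)]
        public string LastName { get; set; }

        [Range(18, 100)]
        public int Age { get; set; }

        // Custom rule, picked up by ValidateModel through the IValidatableObject branch:
        // first and last name must not be identical.
        public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
        {
            if (string.Equals(FirstName, LastName, StringComparison.OrdinalIgnoreCase))
            {
                yield return new ValidationResult(
                    "FirstName and LastName cannot be the same value",
                    new[] { nameof(FirstName), nameof(LastName) });
            }
        }
    }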

    Now, we can use this method to verify whether the validation passes and, in case it fails, which errors are returned:

    [Test]
    public void User_ShouldPassValidation_WhenModelIsValid()
    {
        var model = new User { FirstName = "Davide", LastName = "Bellone", Age = 32 };
        var validationResult = ModelValidationHelper.ValidateModel(model);
        Assert.That(validationResult, Is.Empty);
    }
    
    [Test]
    public void User_ShouldNotPassValidation_WhenLastNameIsEmpty()
    {
        var model = new User { FirstName = "Davide", LastName = null, Age = 32 };
        var validationResult = ModelValidationHelper.ValidateModel(model);
        Assert.That(validationResult, Is.Not.Empty);
    }
    
    
    [Test]
    public void User_ShouldNotPassValidation_WhenAgeIsLessThan18()
    {
        var model = new User { FirstName = "Davide", LastName = "Bellone", Age = 10 };
        var validationResult = ModelValidationHelper.ValidateModel(model);
        Assert.That(validationResult, Is.Not.Empty);
    }
    

    Further readings

    Model Validation allows you to create more robust APIs. To improve robustness, you can follow Postel’s law:

    🔗 Postel’s law for API Robustness | Code4IT

    This article first appeared on Code4IT 🐧

    Model validation, in my opinion, is one of the cases where Unit Tests are way better than Integration Tests. This is a perfect example of Testing Diamond, the best (in most cases) way to structure a test suite:

    🔗 Testing Pyramid vs Testing Diamond (and how they affect Code Coverage) | Code4IT

    If you still prefer writing Integration Tests for this kind of operation, you can rely on the WebApplicationFactory class and use it in your NUnit tests:

    🔗 Advanced Integration Tests for .NET 7 API with WebApplicationFactory and NUnit | Code4IT

    Wrapping up

    Model validation is crucial. Testing the correctness of model validation can make or break your application. Please don’t skip it!

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧






  • OhMyPosh and CascadiaCode not working on PowerShell? How to fix it in Windows 10 and Windows 11 Integrated Terminal | Code4IT



    Learn how to integrate Oh My Posh, a cross-platform tool that lets you create beautiful and informative prompts for PowerShell.


    The content of the blog you are reading right now is stored in a Git repository. Every time I create an article, I create a new Git Branch to isolate the changes.

    To generate the skeleton of the articles, I use the command line (well, I generally use PowerShell); in particular, given that I’m using both Windows 10 and Windows 11 – depending on the laptop I’m working on – I use the Integrated Terminal, which allows you to define the style, the fonts, and so on of every terminal configured in the settings.

    Windows terminal with default style

    The default profile is pretty basic: no info is shown except for the current path – I want to customize the appearance.

    I want to show the status of the Git repository, including:

    • repository name
    • branch name
    • outgoing commits

    There are lots of articles that teach how to use OhMyPosh with Cascadia Code. Unfortunately, I couldn’t make them work.

    In this article, I teach you how I fixed it on my local machine. It’s a step-by-step guide I wrote while installing it on my local machine. I hope it works for you as well!

    Step 1: Create the $PROFILE file if it does not exist

    In PowerShell, you can customize the current execution by updating the $PROFILE file.

    Clearly, you first have to check if the profile file exists.

    Open the PowerShell and type:

    $PROFILE # You can also use $profile lowercase - it's the same!
    

    This command shows you the expected path of this file. The file, if it exists, is stored in that location.

    The Profile file is expected to be under a specific folder whose path can be found using the $PROFILE command

    In this case, the $Profile file should be available under the folder C:\Users\d.bellone\Documents\WindowsPowerShell. In my case, it does not exist, though!

    The Profile file is expected to be under a specific path, but it may not exist

    Therefore, you must create it manually: head to that folder and create a file named Microsoft.PowerShell_profile.ps1.

    Note: it might happen that not even the WindowsPowerShell folder exists. If it’s missing, well, create it!

    Step 2: Install OhMyPosh using Winget, Scoop, or PowerShell

    To use OhMyPosh, we have to – of course – install it.

    As explained in the official documentation, we have three ways to install OhMyPosh, depending on the tool you prefer.

    If you use Winget, just run:

    winget install JanDeDobbeleer.OhMyPosh -s winget
    

    If you prefer Scoop, the command is:

    scoop install https://github.com/JanDeDobbeleer/oh-my-posh/releases/latest/download/oh-my-posh.json
    

    And, if you like working with PowerShell, execute:

    Set-ExecutionPolicy Bypass -Scope Process -Force; Invoke-Expression ((New-Object System.Net.WebClient).DownloadString('https://ohmyposh.dev/install.ps1'))
    

    I used Winget, and you can see the installation process here:

    Install OhMyPosh with Winget

    Now, to apply these changes, you have to restart the PowerShell.

    Step 3: Add OhMyPosh to the PowerShell profile

    Open the Microsoft.PowerShell_profile.ps1 file and add the following line:

    oh-my-posh init pwsh | Invoke-Expression
    

    This command is executed every time you open the PowerShell with the default profile, and it initializes OhMyPosh to have it available during the current session.

    Now, you can save and close the file.

    Hint: you can open the profile file with Notepad by running notepad $PROFILE.

    Step 4: Set the Execution Policy to RemoteSigned

    Restart the terminal. In all probability, you will see an error like this:

    “The file .ps1 is not digitally signed” error

    The error message

    The file <path>\Microsoft.PowerShell_profile.ps1 is
    not digitally signed. You cannot run this script on the current system

    means that PowerShell does not trust the script it’s trying to load.

    To see which Execution Policy is currently active, run:
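
    Get-ExecutionPolicy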

    You’ll probably see that the value is AllSigned.

    To enable the execution of scripts created on your local machine, you have to set the Execution Policy value to RemoteSigned by running this command in a PowerShell opened in administrator mode:

    Set-ExecutionPolicy RemoteSigned
    

    Let’s see the definition of the RemoteSigned Execution policy as per SQLShack’s article:

    This is also a safe PowerShell Execution policy to set in an enterprise environment. This policy dictates that any script that was not created on the system that the script is running on, should be signed. Therefore, this will allow you to write your own script and execute it.

    So, yeah, feel free to proceed and set the new Execution policy to have your PowerShell profile loaded correctly every time you open a new PowerShell instance.

    Now, OhMyPosh can run in the current profile.

    Head to a Git repository and notice that… It’s not working!🤬 Or, well, we have the Git information, but we are missing some icons and glyphs.

    Oh My Posh is loaded correctly, but some icons are missing due to the wrong font

    Step 5: Use CaskaydiaCove, not Cascadia Code, as a font

    We still have to install the correct font with the missing icons.

    We will install it using Chocolatey, a package manager for Windows.

    To check if you have it installed, run:
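
    choco --version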

    Now, to install the correct font family, open a PowerShell with administration privileges and run:

    choco install cascadia-code-nerd-font
    

    Once the installation is complete, you must tell Integrated Terminal to use the correct font by following these steps:

    1. open the Settings page (by hitting CTRL + ,)
    2. select the profile you want to update (in my case, I’ll update the default profile)
    3. open the Appearance section
    4. under Font face select CaskaydiaCove Nerd Font

    PowerShell profile settings - Font Face should be CaskaydiaCove Nerd Font

    Now close the Integrated Terminal to apply the changes.

    Open it again, navigate to a Git repository, and admire the result.

    OhMyPosh with icons and fonts loaded correctly

    Further readings

    The first time I read about OhMyPosh, it was on Scott Hanselman’s blog. I couldn’t make his solution work – and that’s the reason I wrote this article. However, in his article, he shows how he customized his own Terminal with more glyphs and icons, so you should give it a read.

    🔗 My Ultimate PowerShell prompt with Oh My Posh and the Windows Terminal | Scott Hanselman’s blog

    We customized our PowerShell profile with just one simple configuration. However, you can do a lot more. You can read Ruud’s in-depth article about PowerShell profiles.

    🔗 How to Create a PowerShell Profile – Step-by-Step | Lazyadmin

    One of the core parts of this article is that we have to use CaskaydiaCove as a font instead of the (in)famous Cascadia Code. But why?

    🔗 Why CaskaydiaCove and not Cascadia Code? | GitHub

    Finally, as I said at the beginning of this article, I use Git and Git Branches to handle the creation and management of my blog articles. That’s just the tip of the iceberg! 🏔️

    If you want to steal my (previous) workflow, have a look at the behind-the-scenes of my blogging process (note: in the meanwhile, a lot of things have changed, but these steps can still be helpful for you)

    🔗 From idea to publishing, and beyond: how I automated my blogging workflow with GitHub, PowerShell, and Azure | Code4IT

    This article first appeared on Code4IT 🐧

    Wrapping up

    In this article, we learned how to install OhMyPosh in PowerShell and overcome all the errors you (well, I) don’t see described in other articles.

    I wrote this step-by-step article alongside installing these tools on my local machine, so I’m confident the solution will work.

    Did this solution work for you? Let me know! 📨

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧






  • Designer Spotlight: Ivan Ermakov | Codrops



    Hi, I’m Ivan—a Dubai-based designer focused on fintech products and branding. I run Moonsight, where we craft thoughtful digital experiences and sharp visual identities for financial companies around the world.

    Background

    My path into design wasn’t a childhood calling—I wasn’t drawing wireframes at age ten or dreaming of Helvetica (can you imagine XD). I just knew I didn’t want the typical office life. I wanted freedom, movement, and a way to create things that felt useful. Design turned out to be the sweet spot between independence and impact.

    So I studied design at university by day, and took on agency work by night—what you might call the full-stack student hustle. That rhythm—study, work, repeat—taught me discipline. I also kept learning on the side, exploring tools, trends, and techniques to sharpen my craft.

    Eventually, I found myself gravitating toward fintech.

    Why fintech? Because it’s real. It’s personal. Everyone interacts with money. And when you build something that helps them feel more in control of it—you’re not just improving UX, you’re improving lives.

    You’re designing trust. That’s a responsibility I take seriously.

    From there, I explored both sides of the industry: in-house roles at product companies, and fast-paced agency work. Later, I shifted into consultancy—partnering with fintechs across Europe, the Gulf, and Asia. That chapter taught me a lot—not just about design, but about people, culture, and how different teams think about trust and money.

    All of that led me to start Moonsight—a space where I could bring all those experiences together. Today, we partner with fintechs and financial companies to create sharp, useful, brand-led digital experiences. And while I still stay hands-on, I’m also building a team that’s just as obsessed with clarity, thoughtfulness, and execution as I am.

    Featured Work

    Monetto

    A game-changer in the world of freelancing. Designed to simplify and elevate the financial journey for freelancers, Monetto is more than just an app – it’s a holistic solution that empowers creatives like me to manage their finances with confidence.

    BlastUp

    Blastup’s mission is simple—help users grow their social media presence, fast. We crafted a bold, dynamic identity that reflects Blastup’s energetic and friendly personality, as well as their website.

    Alinma Bank

    This project for Alinma Bank involved a comprehensive redesign across all brand touchpoints: the logo, physical cards, website, and mobile app. The goal was to modernize and streamline the visual identity while maintaining the bank’s core values.

    Coinly

    Coinly is more than just a banking app — it’s a full-fledged financial literacy ecosystem for kids, designed to empower the next generation with money skills that grow with them. Built around an engaging coin mascot and a colorful 3D world, Coinly blends gamification, interactive storytelling, and real financial tools.

    Design Philosophy

    Design should be highly functional and intuitive, solving both business and user problems while delivering an engaging experience that users want to return to.

    Design is clarity. And clarity builds trust.

    Especially in fintech—where most of my projects happen—you don’t have the luxury of vague. Your design has to work, first and foremost. It has to feel smart, trustworthy, smooth. When people trust your interface, they trust your product. And when they trust your product, they’re more likely to use it again. That’s where design really proves its value.

    My job is to make things useful first, beautiful second. But ideally, both at once.

    The way I approach projects is structured but adaptable.

    I start with full immersion—understanding the business, the audience, and the problem we’re solving. From there, I look for a unique angle, something that gives the product or brand a distinct voice. Then I push that idea as far as I can—visually, functionally, and emotionally.

    And no, I don’t believe in reinventing everything 🙂

    Use the patterns that work. But when something feels off or underwhelming, be bold enough to rethink it. That’s where the real creative work lives—not in chaos, but in considered evolution.

    I don’t want to be known for a style. I want to be known for range.

    For every project, I try to find a distinct visual language. That means experimenting—pulling in 3D, motion, illustration—whatever it takes to bring the concept to life.

    And I rarely do it alone.

    I collaborate closely with animators, developers, motion designers, illustrators—the kind of people who not only support the vision, but expand it. When everyone brings their strengths to the table, the result is always richer, sharper, more memorable.

    What matters most is that the end result has presence. That it feels alive, intentional, and built with care.

    And I care deeply about how work is presented. Every project—client or personal—is framed with context, rationale, and craft. Because good design solves problems, but great design tells a story.

    Process In Bits

    My process is structured, but not rigid. Usually, it looks something like this:

    Understand the business
    What’s broken? What’s needed? What are we really solving?

    Understand the user
    What do they expect? What’s familiar to them? What do they fear?

    Explore the visual angle
    Moodboards, motion cues, layout patterns, unexpected directions

    Build and iterate
    Fast feedback loops with clients and the team

    Polish and present
    Clear storytelling. Clean handoff. Confident rationale.

    One benchmark I use: if I don’t understand what I designed, how can I expect a user to?

    For me, good design starts with intention. Every screen, every button, every microinteraction—there should be a reason it exists. So when a feature’s built, I walk through it in my head as if I’ve never seen it before. What would I click? What would I expect next? Can I explain what each part does without second-guessing?

    After working on financial interfaces for so long, you start to internalize these flows—you almost know them by muscle memory. But that doesn’t mean you skip the test. You still go through each stage. You still assume nothing.

    Sometimes, the best insights come from a teammate asking, “Wait, what does this do?” That’s your cue to look closer.

    And when it comes to working with clients?

    I walk clients through every stage—from moodboards to microinteractions—so there are no surprises and no last-minute pivots.

    It’s about mutual trust: they trust my process, and I trust their vision.

    This structure helps me manage expectations, prevent scope drift, and deliver thoughtful work—on time, without the drama.

    What keeps me inspired? Looking outside the bubble.

    I don’t have a list of designers I religiously follow. What inspires me is great work—wherever it lives. Sometimes it’s a slick piece of web design, sometimes a brutalist poster on the street, art style from a video game, or the typography on a jazz record sleeve.

    Music plays a huge role in my creative life—I sing a bit, and I think that kind of rhythm and structure naturally finds its way into how I build interfaces.

    I’m also a huge gamer, and I’m fascinated by how game mechanics influence user behavior. There’s a lot designers can learn from how games guide, reward, and surprise users.

    Sometimes I’ll see a cool effect, a character design, or even just a motion detail and immediately think:

    That could be the anchor for a whole experience

    Not necessarily for the project I’m working on in the moment, but something I’d love to build around later. So I sort, I collect, I sketch.

    I’m often looking for inspiration for one project, but bookmarking ideas for two or three others. It’s not just moodboarding—it’s pattern recognition, and planting seeds for future concepts.

    Inspiration can come from anywhere—but only if you keep your eyes open.

    What’s Next

    Right now, I’m fully focused on building Moonsight into a studio known for bold, strategic fintech design—especially across the MENA region.

    On my personal radar:

    • Master 3D
    • Launch my own product
    • Speak at more design events
    • Make Moonsight’s design conference in Dubai happen
    • Join the Awwwards jury panel
    • Do more meaningful work
    • Mostly? Just grow. As a designer, a founder, and a creative

    Parting Thoughts

    If I could give one piece of advice to younger designers, it would be this:

    Find what excites you. Stay obsessed with it. And don’t waste time comparing yourself to others.

    We’re overexposed to each other’s work these days. It’s easy to feel behind.

    But your only competition is yourself a year ago. That’s where growth lives.

    This industry moves fast. But if you move with intent, your work will always find its place.



    Source link

  • 6.61 Million Google Clicks! 💸

    6.61 Million Google Clicks! 💸


    Yesterday Online PNG Tools smashed through 6.60M Google clicks and today it’s smashed through 6.61M Google clicks! That’s 10,000 new clicks in a single day – the smash train keeps on rollin’!

    What Are Online PNG Tools?

    Online PNG Tools offers a collection of easy-to-use web apps that help you work with PNG images right in your browser. It’s like a Swiss Army Knife for anything PNG-related. On this site, you can create transparent PNGs, edit icons, clean up logos, crop stamps, change colors of signatures, and customize stickers – there’s a tool for it all. The best part is that you don’t need to install anything or be a graphic designer. All tools are made for regular people who just want to get stuff done with their images. No sign-ups, no downloads – just quick and easy PNG editing tools.

    Who Created Online PNG Tools?

    Online PNG Tools was created by me and my team at Browserling. We build simple, browser-based tools that anyone can use without needing to download or install anything. Along with PNG tools, we also work on cross-browser testing to help developers make sure their websites work great on all web browsers. Our mission is to make online tools that are fast, easy to use, and helpful for everyday tasks like editing icons, logos, and signatures.

    Who Uses Online PNG Tools?

    Online PNG Tools and Browserling are used by everyone – from casual users to professionals and even Fortune 100 companies. Casual users often use them to make memes, edit profile pictures, or remove backgrounds. Professionals use them to clean up logos, design icons, or prepare images for websites and apps.

    Smash too and see you tomorrow at 6.62M clicks! 📈

    PS. Use coupon code SMASHLING for a 30% discount on these tools at onlinePNGtools.com/pricing. 💸



    Source link

  • Three.js Instances: Rendering Multiple Objects Simultaneously

    Three.js Instances: Rendering Multiple Objects Simultaneously


    When building the basement studio site, we wanted to add 3D characters without compromising performance. We used instancing to render all the characters simultaneously. This post introduces instances and how to use them with React Three Fiber.

    Introduction

    Instancing is a performance optimization that lets you render many objects that share the same geometry and material simultaneously. If you have to render a forest, you’d need tons of trees, rocks, and grass. If they share the same base mesh and material, you can render all of them in a single draw call.

    A draw call is a command from the CPU to the GPU to draw something, like a mesh. Each unique geometry or material usually needs its own call. Too many draw calls hurt performance. Instancing reduces that by batching many copies into one.

    Basic instancing

    As an example, let’s start by rendering a thousand boxes the traditional way, looping over an array and generating some random boxes:

    const boxCount = 1000
    
    function Scene() {
      return (
        <>
          {Array.from({ length: boxCount }).map((_, index) => (
            <mesh
              key={index}
              position={getRandomPosition()}
              scale={getRandomScale()}
            >
              <boxGeometry />
              <meshBasicMaterial color={getRandomColor()} />
            </mesh>
          ))}
        </>
      )
    }
    Live | Source
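
    The getRandomPosition, getRandomScale, and getRandomColor helpers aren’t shown in the article. Here is one possible sketch of them (purely hypothetical, just to make the snippets self-contained):

    import * as THREE from "three"
    
    // Hypothetical helpers; any random values will do
    const getRandomPosition = () =>
      [
        (Math.random() - 0.5) * 20,
        (Math.random() - 0.5) * 20,
        (Math.random() - 0.5) * 20,
      ] as [number, number, number]
    
    const getRandomScale = () => Math.random() * 0.8 + 0.2
    
    const getRandomColor = () => new THREE.Color().setHSL(Math.random(), 0.6, 0.6)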

    If we add a performance monitor to it, we’ll notice that the number of “calls” matches our boxCount.
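
    If you’d rather check the number yourself instead of using a monitor component, one quick way (a small sketch using the renderer that React Three Fiber exposes) is to log renderer.info from inside the canvas:

    import { useFrame } from "@react-three/fiber"
    
    // Drop this anywhere inside <Canvas> to log the draw calls of the last rendered frame
    function DrawCallLogger() {
      useFrame(({ gl }) => {
        // gl is the underlying THREE.WebGLRenderer; info.render resets every frame
        console.log("draw calls:", gl.info.render.calls)
      })
      return null
    }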

    A quick way to implement instancing in our project is to use the Instances helper from drei.

    The Instances component acts as a provider; it takes a geometry and a material as children, which will be reused every time we add an instance to our scene.

    The Instance component places one of those instances at a particular position/rotation/scale. Every Instance will be rendered simultaneously, using the geometry and material configured on the provider.

    import { Instance, Instances } from "@react-three/drei"
    
    const boxCount = 1000
    
    function Scene() {
      return (
        <Instances limit={boxCount}>
          <boxGeometry />
          <meshBasicMaterial />
          {Array.from({ length: boxCount }).map((_, index) => (
            <Instance
              key={index}
              position={getRandomPosition()}
              scale={getRandomScale()}
              color={getRandomColor()}
            />
          ))}
        </Instances>
      )
    }

    Notice how “calls” is now reduced to 1, even though we are showing a thousand boxes.

    Live | Source

    What is happening here? We are sending the geometry and material of our box to the GPU just once and telling it to reuse that same data a thousand times, so all the boxes are drawn together in a single call.

    Notice that we can have a different color per instance even though they all use the same material; Three.js supports this through a per-instance instanceColor attribute. However, other properties, like the map, have to be the same, because all instances share the exact same material.

    We’ll see how we can hack Three.js to support multiple maps later in the article.

    Having multiple sets of instances

    If we are rendering a forest, we may need several sets of instances: one for trees, another for rocks, and one for grass. However, the provider from the previous example only supports a single geometry and material. How can we handle that?

    The createInstances() function from drei allows us to create multiple sets of instances. It returns two React components: the first is a provider that sets up our instanced mesh, and the second is a component we can use to place one instance in our scene.

    Let’s see how we can set up a provider first:

    import { createInstances } from "@react-three/drei"
    
    const boxCount = 1000
    const sphereCount = 1000
    
    const [CubeInstances, Cube] = createInstances()
    const [SphereInstances, Sphere] = createInstances()
    
    function InstancesProvider({ children }: { children: React.ReactNode }) {
      return (
        <CubeInstances limit={boxCount}>
          <boxGeometry />
          <meshBasicMaterial />
          <SphereInstances limit={sphereCount}>
            <sphereGeometry />
            <meshBasicMaterial />
            {children}
          </SphereInstances>
        </CubeInstances>
      )
    }

    Once we have our instance provider, we can add lots of Cubes and Spheres to our scene:

    function Scene() {
      return (
        <InstancesProvider>
          {Array.from({ length: boxCount }).map((_, index) => (
            <Cube
              key={index}
              position={getRandomPosition()}
              color={getRandomColor()}
              scale={getRandomScale()}
            />
          ))}
    
          {Array.from({ length: sphereCount }).map((_, index) => (
            <Sphere
              key={index}
              position={getRandomPosition()}
              color={getRandomColor()}
              scale={getRandomScale()}
            />
          ))}
        </InstancesProvider>
      )
    }

    Notice how even though we are rendering two thousand objects, we are just running two draw calls on our GPU.

    Live | Source

    Instances with custom shaders

    Until now, all the examples have used Three.js’ built-in materials to add our meshes to the scene, but sometimes we need to create our own materials. How can we add support for instances to our shaders?

    Let’s first set up a very basic shader material:

    import * as THREE from "three"
    
    const baseMaterial = new THREE.RawShaderMaterial({
      vertexShader: /*glsl*/ `
        attribute vec3 position;
        attribute vec3 instanceColor;
        attribute vec3 normal;
        attribute vec2 uv;
        uniform mat4 modelMatrix;
        uniform mat4 viewMatrix;
        uniform mat4 projectionMatrix;
    
        void main() {
          vec4 modelPosition = modelMatrix * vec4(position, 1.0);
          vec4 viewPosition = viewMatrix * modelPosition;
          vec4 projectionPosition = projectionMatrix * viewPosition;
          gl_Position = projectionPosition;
        }
      `,
      fragmentShader: /*glsl*/ `
        // RawShaderMaterial doesn't prepend a default precision, so declare one
        precision highp float;

        void main() {
          gl_FragColor = vec4(1, 0, 0, 1);
        }
      `
    })
    
    export function Scene() {
      return (
        <mesh material={baseMaterial}>
          <sphereGeometry />
        </mesh>
      )
    }

    Now that we have our testing object in place, let’s add some movement to the vertices:

    We’ll add some movement on the X axis using a time and amplitude uniform and use it to create a blob shape:

    import { useFrame } from "@react-three/fiber"
    import * as THREE from "three"
    
    const baseMaterial = new THREE.RawShaderMaterial({
      // some uniforms to animate the vertices
      uniforms: {
        uTime: { value: 0 },
        uAmplitude: { value: 1 },
      },
      vertexShader: /*glsl*/ `
        attribute vec3 position;
        attribute vec3 instanceColor;
        attribute vec3 normal;
        attribute vec2 uv;
        uniform mat4 modelMatrix;
        uniform mat4 viewMatrix;
        uniform mat4 projectionMatrix;
    
        // Added this code to shift the vertices
        uniform float uTime;
        uniform float uAmplitude;
        vec3 movement(vec3 position) {
          vec3 pos = position;
          pos.x += sin(position.y + uTime) * uAmplitude;
          return pos;
        }
    
        void main() {
          vec3 blobShift = movement(position);
          vec4 modelPosition = modelMatrix * vec4(blobShift, 1.0);
          vec4 viewPosition = viewMatrix * modelPosition;
          vec4 projectionPosition = projectionMatrix * viewPosition;
          gl_Position = projectionPosition;
        }
      `,
      fragmentShader: /*glsl*/ `
        // RawShaderMaterial doesn't prepend a default precision, so declare one
        precision highp float;

        void main() {
          gl_FragColor = vec4(1, 0, 0, 1);
        }
      `,
    });
    
    export function Scene() {
      useFrame((state) => {
        // update the time uniform
        baseMaterial.uniforms.uTime.value = state.clock.elapsedTime;
      });
    
      return (
        <mesh material={baseMaterial}>
          <sphereGeometry args={[1, 32, 32]} />
        </mesh>
      );
    }
    

    Now, we can see the sphere moving around like a blob:

    Live | Source

    Now, let’s render a thousand blobs using instancing. First, we need to add the instance provider to our scene:

    import { createInstances } from '@react-three/drei';
    
    const [BlobInstances, Blob] = createInstances();
    
    function Scene() {
      useFrame((state) => {
        baseMaterial.uniforms.uTime.value = state.clock.elapsedTime;
      });
    
      return (
        <BlobInstances material={baseMaterial} limit={sphereCount}>
          <sphereGeometry args={[1, 32, 32]} />
          {Array.from({ length: sphereCount }).map((_, index) => (
            <Blob key={index} position={getRandomPosition()} />
          ))}
        </BlobInstances>
      );
    }
    

    The code runs successfully, but all spheres are in the same place, even though we added different positions.

    This is happening because the vertexShader computes each vertex position from values (position, modelMatrix, and so on) that are identical for every instance, so all the spheres end up in the same spot:

    vec3 blobShift = movement(position);
    vec4 modelPosition = modelMatrix * vec4(blobShift, 1.0);
    vec4 viewPosition = viewMatrix * modelPosition;
    vec4 projectionPosition = projectionMatrix * viewPosition;
    gl_Position = projectionPosition;

    To solve this issue, we need to use a new attribute called instanceMatrix. This attribute will be different for each instance that we are rendering.

      attribute vec3 position;
      attribute vec3 instanceColor;
      attribute vec3 normal;
      attribute vec2 uv;
      uniform mat4 modelMatrix;
      uniform mat4 viewMatrix;
      uniform mat4 projectionMatrix;
      // this attribute will change for each instance
      attribute mat4 instanceMatrix;
    
      uniform float uTime;
      uniform float uAmplitude;
    
      vec3 movement(vec3 position) {
        vec3 pos = position;
        pos.x += sin(position.y + uTime) * uAmplitude;
        return pos;
      }
    
      void main() {
        vec3 blobShift = movement(position);
        // apply the per-instance transform first (in object space), then the model matrix
        vec4 modelPosition = modelMatrix * instanceMatrix * vec4(blobShift, 1.0);
        vec4 viewPosition = viewMatrix * modelPosition;
        vec4 projectionPosition = projectionMatrix * viewPosition;
        gl_Position = projectionPosition;
      }

    Now that we have used the instanceMatrix attribute, each blob is in its corresponding position, rotation, and scale.

    Live | Source

    Changing attributes per instance

    We managed to render all the blobs in different positions, but since the uniforms are shared across all instances, they all end up having the same animation.

    To solve this issue, we need a way to provide custom information for each instance. We actually did this before, when we used the instanceMatrix to move each instance to its corresponding location. Let’s unpack the magic behind instanceMatrix so we can learn how to create our own instanced attributes.

    Taking a look at the implementation of instanceMatrix, we can see that it is backed by something called InstancedBufferAttribute:

    https://github.com/mrdoob/three.js/blob/master/src/objects/InstancedMesh.js#L57

    InstancedBufferAttribute allows us to create variables that will change for each instance. Let’s use it to vary the animation of our blobs.
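
    To make the mechanism concrete, here is a minimal sketch (not from the article) of how a per-instance attribute could be set up with plain Three.js, using the same timeShift idea we’ll wire up with drei below:

    import * as THREE from "three"
    
    const count = 1000
    const geometry = new THREE.SphereGeometry(1, 32, 32)
    
    // One float per instance; the vertex shader reads it as `attribute float timeShift;`
    const timeShifts = new Float32Array(count)
    for (let i = 0; i < count; i++) timeShifts[i] = Math.random() * 10
    
    geometry.setAttribute("timeShift", new THREE.InstancedBufferAttribute(timeShifts, 1))
    
    // baseMaterial is the RawShaderMaterial from earlier in the article
    const blobs = new THREE.InstancedMesh(geometry, baseMaterial, count)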

    Drei has a component called InstancedAttribute that simplifies this, letting us define custom per-instance attributes easily.

    // Tell typescript about our custom attribute
    const [BlobInstances, Blob] = createInstances<{ timeShift: number }>()
    
    function Scene() {
      useFrame((state) => {
        baseMaterial.uniforms.uTime.value = state.clock.elapsedTime
      })
    
      return (
        <BlobInstances material={baseMaterial} limit={sphereCount}>
          {/* Declare an instanced attribute with a default value */}
          <InstancedAttribute name="timeShift" defaultValue={0} />
          
          <sphereGeometry args={[1, 32, 32]} />
          {Array.from({ length: sphereCount }).map((_, index) => (
            <Blob
              key={index}
              position={getRandomPosition()}
              
              // Set the instanced attribute value for this instance
              timeShift={Math.random() * 10}
              
            />
          ))}
        </BlobInstances>
      )
    }

    We’ll use this time shift attribute in our shader material to change the blob animation:

    uniform float uTime;
    uniform float uAmplitude;
    // custom instanced attribute
    attribute float timeShift;
    
    vec3 movement(vec3 position) {
      vec3 pos = position;
      pos.x += sin(position.y + uTime + timeShift) * uAmplitude;
      return pos;
    }

    Now, each blob has its own animation:

    Live | Source

    Creating a forest

    Let’s create a forest using instanced meshes. I’m going to use a 3D model from Sketchfab: Stylized Pine Tree Tree by Batuhan13.

    import { useGLTF } from "@react-three/drei"
    import * as THREE from "three"
    import { GLTF } from "three/examples/jsm/Addons.js"
    
    // I always like to type the models so that they are safer to work with
    interface TreeGltf extends GLTF {
      nodes: {
        tree_low001_StylizedTree_0: THREE.Mesh<
          THREE.BufferGeometry,
          THREE.MeshStandardMaterial
        >
      }
    }
    
    function Scene() {
    
      // Load the model
      const { nodes } = useGLTF(
        "/stylized_pine_tree_tree.glb"
      ) as unknown as TreeGltf
    
      return (
        <group>
          {/* add one tree to our scene */ }
          <mesh
            scale={0.02}
            geometry={nodes.tree_low001_StylizedTree_0.geometry}
            material={nodes.tree_low001_StylizedTree_0.material}
          />
        </group>
      )
    }
    

    (I added lights and a ground in a separate file.)
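
    That separate file isn’t shown in the article; as a rough idea, it could look something like this (a hypothetical setup, not the author’s actual code):

    // Hypothetical environment: a bit of light and a large ground plane
    function Environment() {
      return (
        <>
          <ambientLight intensity={0.4} />
          <directionalLight position={[50, 100, 50]} intensity={1.2} />
          <mesh rotation-x={-Math.PI / 2}>
            <planeGeometry args={[10000, 10000]} />
            <meshStandardMaterial color="#3f5e3f" />
          </mesh>
        </>
      )
    }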

    Now that we have one tree, let’s apply instancing.

    const getRandomPosition = () => {
      return [
        (Math.random() - 0.5) * 10000,
        0,
        (Math.random() - 0.5) * 10000
      ] as const
    }
    
    const [TreeInstances, Tree] = createInstances()
    const treeCount = 1000
    
    function Scene() {
      const { scene, nodes } = useGLTF(
        "/stylized_pine_tree_tree.glb"
      ) as unknown as TreeGltf
    
      return (
        <group>
          <TreeInstances
            limit={treeCount}
            scale={0.02}
            geometry={nodes.tree_low001_StylizedTree_0.geometry}
            material={nodes.tree_low001_StylizedTree_0.material}
          >
            {Array.from({ length: treeCount }).map((_, index) => (
              <Tree key={index} position={getRandomPosition()} />
            ))}
          </TreeInstances>
        </group>
      )
    }

    Our entire forest is being rendered in only three draw calls: one for the skybox, another one for the ground plane, and a third one with all the trees.

    To make things more interesting, we can vary the height and rotation of each tree:

    const getRandomPosition = () => {
      return [
        (Math.random() - 0.5) * 10000,
        0,
        (Math.random() - 0.5) * 10000
      ] as const
    }
    
    function getRandomScale() {
      return Math.random() * 0.7 + 0.5
    }
    
    // ...
    <Tree
      key={index}
      position={getRandomPosition()}
      scale={getRandomScale()}
      rotation-y={Math.random() * Math.PI * 2}
    />
    // ...

    Live | Source

    Further reading

    There are some topics that I didn’t cover in this article, but I think they are worth mentioning:

    • Batched Meshes: Instancing lets us render one geometry many times, but a batched mesh lets you render different geometries at the same time while still sharing one material. That way you are not limited to a single tree geometry; you can vary the shape of each one (see the sketch after this list).
    • Skeletons: They are not currently supported with instancing. To build the latest basement.studio site we hacked together our own implementation, and I invite you to read about it there.
    • Morphing with batched meshes: Morphing is supported with instances but not with batched meshes. If you want to implement it yourself, I’d suggest reading these notes.
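
    As a rough illustration of the batched-mesh idea, here is a minimal sketch (not from the article, and assuming a recent Three.js release where BatchedMesh exposes addGeometry, addInstance, and setMatrixAt):

    import * as THREE from "three"
    
    // Capacity figures are hypothetical; size them to your own geometries
    const material = new THREE.MeshStandardMaterial()
    const batched = new THREE.BatchedMesh(2000, 100_000, 300_000, material)
    
    // Register each unique geometry once...
    const treeId = batched.addGeometry(new THREE.ConeGeometry(1, 4, 8))
    const rockId = batched.addGeometry(new THREE.IcosahedronGeometry(0.5))
    
    // ...then add as many instances of each as needed, each with its own matrix
    const matrix = new THREE.Matrix4()
    for (let i = 0; i < 1000; i++) {
      const id = batched.addInstance(i % 2 === 0 ? treeId : rockId)
      matrix.setPosition((Math.random() - 0.5) * 100, 0, (Math.random() - 0.5) * 100)
      batched.setMatrixAt(id, matrix)
    }
    
    scene.add(batched) // assuming an existing THREE.Scene called `scene`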



    Source link

  • 6.62 Million Google Clicks! 💸

    6.62 Million Google Clicks! 💸


    Yesterday Online PNG Tools smashed through 6.61M Google clicks and today it’s smashed through 6.62M Google clicks! That’s 10,000 new clicks in a single day – the smash train keeps on rollin’!

    What Are Online PNG Tools?

    Online PNG Tools offers a collection of easy-to-use web apps that help you work with PNG images right in your browser. It’s like a Swiss Army Knife for anything PNG-related. On this site, you can create transparent PNGs, edit icons, clean up logos, crop stamps, change colors of signatures, and customize stickers – there’s a tool for it all. The best part is that you don’t need to install anything or be a graphic designer. All tools are made for regular people who just want to get stuff done with their images. No sign-ups, no downloads – just quick and easy PNG editing tools.

    Who Created Online PNG Tools?

    Online PNG Tools was created by me and my team at Browserling. We build simple, browser-based tools that anyone can use without needing to download or install anything. Along with PNG tools, we also work on cross-browser testing to help developers make sure their websites work great on all web browsers. Our mission is to make online tools that are fast, easy to use, and helpful for everyday tasks like editing icons, logos, and signatures.

    Who Uses Online PNG Tools?

    Online PNG Tools and Browserling are used by everyone – from casual users to professionals and even Fortune 100 companies. Casual users often use them to make memes, edit profile pictures, or remove backgrounds. Professionals use them to clean up logos, design icons, or prepare images for websites and apps.

    Smash too and see you tomorrow at 6.63M clicks! 📈

    PS. Use coupon code SMASHLING for a 30% discount on these tools at onlinePNGtools.com/pricing. 💸



    Source link