Tag: How

  • How to deploy .NET APIs on Azure using GitHub actions | Code4IT

    How to deploy .NET APIs on Azure using GitHub actions | Code4IT


    Building APIs with .NET is easy. Deploying them on Azure is easy too, with GitHub Actions!


    With Continuous Delivery (CD), you can deploy your code in a fast-paced and stable way.

    To deploy applications, you’ll need workflows that run and automate the process. In that way, you don’t have to perform repetitive tasks and the whole process becomes less error-prone.

    In this article, we will learn how to implement CD pipelines using GitHub Actions. In particular, we will focus on the case of a .NET API application that will be deployed on Azure.

    Create a .NET API project

    Since the focus of this article is on the deployment part, we won’t create complex APIs. Just a simple Hello World is enough.

    To do that, we’re gonna use dotnet Minimal API – a way to create APIs without scaffolding lots of files and configurations.

    Our API, the BooksAPI, has a single endpoint: /, the root, which simply returns “Hello World!”.

    All our code is stored in the Program file:

    var builder = WebApplication.CreateBuilder(args);
    
    var app = builder.Build();
    
    app.UseHttpsRedirection();
    
    app.MapGet("/", () => "Hello World!");
    
    app.Run();
    

    Nothing fancy: run the application locally, and navigate to the root. You will see the Hello World message.

    Lastly, put your code on GitHub: initialize a repository and publish it on GitHub – it can either be a public or a private repository.

    Create an App Service on Azure

    Now, to deploy an application, we need to define its destination. We’re going to deploy it on Azure, so you need an Azure account before moving on.

    Open the Azure Portal, navigate to the App Service section, and create a new one.

    Configure it as you wish, and then proceed until you have it up and running.

    Once everything is done, you should have something like this:

    Azure App Service overview

    The application is now ready to be used: we just need to deploy our code to it.

    Generate the GitHub Action YAML file for deploying .NET APIs on Azure

    It’s time to create our Continuous Delivery pipeline.

    Luckily, GitHub already provides lots of templates for GitHub Actions. We will need one specific for our .NET APIs.

    On GitHub, navigate to your repository, head to the Actions menu, and select New workflow.

    New Workflow button on GitHub

    You will see several predefined actions that allow you to do stuff with your repository. We are now interested in the one called “Deploy a .NET Core app to an Azure Web App”:

    Template for deploying the .NET Application on Azure

    By clicking on “Configure”, you will see a template. Read the instructions carefully, as they will guide you to the correct configuration of the GitHub action.

    In particular, you will have to update the environment variables specified in this section:

    env:
      AZURE_WEBAPP_NAME: your-app-name # set this to the name of your Azure Web App
      AZURE_WEBAPP_PACKAGE_PATH: "." # set this to the path to your web app project, defaults to the repository root
      DOTNET_VERSION: "5" # set this to the .NET Core version to use
    

    Clearly, AZURE_WEBAPP_NAME must match the name you’ve defined on Azure, while DOTNET_VERSION must match the version you’re using to create your dotnet APIs.

    For my specific project, I’ve replaced that section with

    env:
      AZURE_WEBAPP_NAME: BooksAPI<myName> # set this to the name of your Azure Web App
      AZURE_WEBAPP_PACKAGE_PATH: "." # set this to the path to your web app project, defaults to the repository root
      DOTNET_VERSION: "6.0" # set this to the .NET Core version to use
    

    🟧 DOTNET_VERSION also requires the minor version of dotnet. Setting 6 will not work: you need to specify 6.0. 🟧

    Now you can save your YAML file in your repository: it will be saved under ./.github/workflows.

    So, as a reference, here’s the full YAML file I’m using to deploy my APIs:

    name: Build and deploy ASP.Net Core app to an Azure Web App
    
    env:
      AZURE_WEBAPP_NAME: BooksAPI<myName>
      AZURE_WEBAPP_PACKAGE_PATH: "."
      DOTNET_VERSION: "6.0"
    
    on:
      push:
        branches: ["master"]
      workflow_dispatch:
    
    permissions:
      contents: read
    
    jobs:
      build:
        runs-on: ubuntu-latest
    
        steps:
          - uses: actions/checkout@v3
    
          - name: Set up .NET Core
            uses: actions/setup-dotnet@v2
            with:
              dotnet-version: ${{ env.DOTNET_VERSION }}
    
          - name: Set up dependency caching for faster builds
            uses: actions/cache@v3
            with:
              path: ~/.nuget/packages
              key: ${{ runner.os }}-nuget-${{ hashFiles('**/packages.lock.json') }}
              restore-keys: |
                            ${{ runner.os }}-nuget-
    
          - name: Build with dotnet
            run: dotnet build --configuration Release
    
          - name: dotnet publish
            run: dotnet publish -c Release -o ${{env.DOTNET_ROOT}}/myapp
    
          - name: Upload artifact for deployment job
            uses: actions/upload-artifact@v3
            with:
              name: .net-app
              path: ${{env.DOTNET_ROOT}}/myapp
    
      deploy:
        permissions:
          contents: none
        runs-on: ubuntu-latest
        needs: build
        environment:
          name: "Development"
          url: ${{ steps.deploy-to-webapp.outputs.webapp-url }}
    
        steps:
          - name: Download artifact from build job
            uses: actions/download-artifact@v3
            with:
              name: .net-app
    
          - name: Deploy to Azure Web App
            id: deploy-to-webapp
            uses: azure/webapps-deploy@v2
            with:
              app-name: ${{ env.AZURE_WEBAPP_NAME }}
              publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
              package: ${{ env.AZURE_WEBAPP_PACKAGE_PATH }}
    

    As you can see, we have 2 distinct jobs: build and deploy.

    In the build phase, we check out our code, restore the NuGet dependencies, build the project, pack it and store the final result as an artifact.

    In the deploy step, we retrieve the newly created artifact and publish it on Azure.

    Store the Publish profile as GitHub Secret

    As you can see in the instructions of the workflow file, you have to

    Create a secret in your repository named AZURE_WEBAPP_PUBLISH_PROFILE, paste the publish profile contents as the value of the secret.

    That Create a secret in your repository named AZURE_WEBAPP_PUBLISH_PROFILE statement was not clear to me: I thought you had to create that key within your .NET project. Turns out you can create secrets related to repositories on GitHub (so, it’s language-agnostic).

    A Publish profile is a file that contains information and settings used to deploy applications to Azure. It’s nothing but an XML file that lists the possible ways to deploy your application, such as FTP, Web Deploy, Zip Deploy, and so on.

    We have to get our publish profile and save it into GitHub secrets.

    To retrieve the Publish profile, head to the Azure App Service page and click Get publish profile to download the file.

    Get Publish Profile button on Azure Portal

    Now, get back to GitHub and head to Settings > Security > Secrets > Actions.

    Here you can create a new secret related to your repository.

    Create a new one, name it AZURE_WEBAPP_PUBLISH_PROFILE, and paste the content of the Publish profile file you’ve just downloaded.

    You will then see something like this:

    GitHub secret for Publish profile

    Notice that the secret name must be AZURE_WEBAPP_PUBLISH_PROFILE. That constraint is set because we are accessing the Publish profile by key:

    - name: Deploy to Azure Web App
      id: deploy-to-webapp
      uses: azure/webapps-deploy@v2
      with:
        app-name: ${{ env.AZURE_WEBAPP_NAME }}
        publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
        package: ${{ env.AZURE_WEBAPP_PACKAGE_PATH }}
    

    In particular, notice the publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }} part.

    Clearly, the two names must match: nothing stops you from changing the name of the secret in both the YAML file and the GitHub Secret page.

    Final result

    It’s time to see the final result.

    Update the application code (I’ve slightly modified the Hello world message), and push your changes to GitHub.

    Under the Actions tab, you will see your CD pipeline run.

    CD workflow run

    Once it’s completed, you can head to your application root and see the final result.

    Final result of the API

    Further readings

    Automating repetitive tasks allows you to perform more actions with fewer errors. Generally speaking, the more stuff you can automate, the better.

    My own blog heavily relies on automation: scaffolding content, tracking ideas, and publishing online…

    If you want to peek at what I do, here are my little secrets:

    🔗 From idea to publishing, and beyond: how I automated my blogging workflow with GitHub, PowerShell, and Azure | Code4IT

    In this article, we’ve only built and deployed our application. We can do more: run tests and keep track of code coverage. If you want to learn how you can do it using Azure DevOps, here we go:

    🔗 Cobertura, YAML, and Code Coverage Protector: how to view Code Coverage report on Azure DevOps | Code4IT

    This article first appeared on Code4IT 🐧

    Wrapping up

    I have to admit that I struggled a lot in setting up the CD pipeline. I was using the one proposed by default on Visual Studio – but it didn’t work.

    Using the template found on GitHub worked almost instantly – I just had to figure out what they meant by repository secrets.

    Now we have everything in place. Since the workflow is stored in a text file within my repository, if I have to create and deploy a new API project I can simply do that by copying that file and fixing the references.

    Nice and easy, right? 😉

    Happy coding!

    🐧



    Source link

  • How to create an API Gateway using Azure API Management | Code4IT

    How to create an API Gateway using Azure API Management | Code4IT


    In a microservices architecture, an API Gateway hides your real endpoints. We will create one using Azure API Management


    If you’re building an application that exposes several services you might not want to expose them on different hosts. Consumers will have a hard time configuring their application with all the different hostnames, and you will be forced to maintain the same URLs even if you need to move to other platforms or, for instance, you want to transform a REST endpoint into an Azure Function.

    In this case, you should mask the real endpoints beneath a facade: maybe… an API Gateway? 🙂

    In this article, we will learn how to configure Azure API Management (from now on: APIM) service to create an API Gateway and “hide” our real services.

    Demo: publish .NET API services and locate the OpenAPI definition

    For the sake of this article, we will work with 2 API services: BooksService and VideosService.

    They are both .NET 6 APIs, deployed on Azure using GitHub Actions (using the steps I described in a previous article).

    Both services expose their Swagger pages and a bunch of endpoints that we are going to hide behind Azure APIM.

    Swagger pages

    How to create Azure API Management (APIM) Service from Azure Portal

    Now, we want to hide their real endpoints. The clients will then only know about the existence of the API Gateway, and not of the two separate API services:

    An API Gateway hides origin endpoints to clients

    It’s time to create our APIM resource.👷‍♂️

    Head to the Azure Portal, and create a new API Management instance. I suggest reading the short overview of the functionalities provided by Azure API Management services as listed in the screenshot below.

    API Management description on Azure Portal

    The wizard will ask you for some info, such as the resource name, the region, and an email used to send communications (honestly speaking, I still haven’t figured out why they’re asking for your email).

    Fill in all the fields, pick your preferred pricing tier (mine is Developer: it doesn’t have an SLA and is quite cheap), and then proceed with the service creation.

    After several minutes (it took 50 minutes – fifty!💢 – to scaffold my instance), you will have your instance ready to be used.

    API management dashboard

    We are now ready to add our APIs and expose them to our clients.

    How to add APIs to Azure API Management using Swagger definition (OpenAPI)

    As we’ve seen in a previous article, Swagger creates a JSON file that describes the operations available in your APIs, as well as the object structures accepted as input and returned as output.

    Let me use as an example the Books API: once that API project is deployed on the cloud (it’s not mandatory to use Azure: it will work the same using other cloud vendors), you will see the Swagger UI and the related JSON definition.

    Swagger UI for BooksAPI

    We have 3 endpoints, /, /echo, and /books; those endpoints are described in the swagger.json file linked in the Swagger page; put that link aside: we will use it soon.
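
    Just as a reference, here’s a minimal sketch of what such a service could look like – the routes match the endpoints listed above, while the payloads are placeholders I made up (the article doesn’t show the BooksAPI code):

    var builder = WebApplication.CreateBuilder(args);

    // Swagger generation, so that APIM can later import the OpenAPI definition
    builder.Services.AddEndpointsApiExplorer();
    builder.Services.AddSwaggerGen();

    var app = builder.Build();

    app.UseSwagger();
    app.UseSwaggerUI();

    // The three endpoints mentioned above; the return values are placeholders
    app.MapGet("/", () => "BooksAPI is up and running");
    app.MapGet("/echo", (string? message) => message ?? "echo");
    app.MapGet("/books", () => new[] { new { Id = 1, Title = "Some book" } });

    app.Run();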

    Finally, we can add our Books APIs to our Azure API Management service! Head to the resource on Azure, locate the APIs menu item on the left panel, and create a new API definition using OpenAPI (which is the standard used by Swagger to create its UI).

    Import API from OpenAPI specification

    You will see a form that allows you to create new resources from OpenAPI specifications.

    Paste here the link to the swagger.json file you located before, populate the required fields and, if you want, add a prefix to identify these endpoints: I chose MyBooks.

    Wizard to import APIs from OpenAPI

    You will then see your APIs appear in the panel shown below. It is composed of different parts:

    • The list of services exposed. In the screenshot below, BooksAPI, Echo API, and VideosAPI;
    • The list of endpoints exposed for each service: here, BooksAPI exposes endpoints at /, /echo, and /books;
    • A list of policies that are applied to the inbound requests before hitting the real endpoint;
    • The real endpoint used when calling the facade exposed by APIM;
    • A list of policies applied to the outbound requests after the origin has processed the requests.

    API detail panel

    For now, we will ignore both Inbound and Outbound processing, as they will be the topic of a future article.

    Consuming APIs exposed on the API Gateway

    We’re ready to go! Head back to the Azure API Management service dashboard and locate the URL of the API Gateway under Custom domains > Gateway URL.

    Where to find the Gateway URL

    This will be the root URL that our clients will use.

    We can then access Books API and Videos API both on the Origin and the Gateway (we’re doing it just for demonstrating that things are working; clients will only use the APIs exposed by the API Gateway).

    The Videos API maintains the exact same structure, mapping the endpoints as they are defined in Origin.

    Videos API on Origin and on API Gateway

    On the contrary, to access the Books APIs we have to access the /mybooks path (because we defined it a few steps ago when we imported the BooksAPI from OpenAPI definition: it’s the API Url Suffix field), as shown below:

    Books API on Origin and on API Gateway

    Further readings

    As usual, a bunch of interesting readings 📚

    In this article, we’ve only scratched the surface of Azure API Management. There’s way more – and you can read about it on the Microsoft Docs website:

    🔗 What is Azure API Management? | Microsoft docs

    To integrate Azure APIM, we used two simple dotNET 6 Web APIs deployed on Azure. If you wanna know how to set up GitHub Actions to build and deploy dotNET APIs, I recently published an article on that topic.

    🔗 How to deploy .NET APIs on Azure using GitHub actions | Code4IT

    Lastly, since we’ve talked about Swagger, here’s an article where I dissected how you can integrate Swagger in dotNET Core applications:

    🔗 Understanding Swagger integration in .NET Core | Code4IT

    This article first appeared on Code4IT 🐧

    Wrapping up

    This can be just the beginning of a long journey; APIM allows you to highly customize your API Gateway by defining API access by user role, creating API documentation using custom templates and themes, and a lot of different stuff.

    We will come back to this topic soon.

    Happy coding!

    🐧



    Source link

  • How to customize Swagger UI with custom CSS in .NET 7 | Code4IT

    How to customize Swagger UI with custom CSS in .NET 7 | Code4IT


    Exposing Swagger UI is a good way to help developers consume your APIs. But don’t be boring: customize your UI with some fancy CSS


    Brace yourself, Christmas is coming! 🎅

    If you want to add a more festive look to your Swagger UI, it’s just a matter of creating a CSS file and injecting it.

    You should create a custom CSS for your Swagger endpoints, especially if you are exposing them outside your company: if your company has a recognizable color palette, using it in your Swagger pages can make your brand stand out.

    In this article, we will learn how to inject a CSS file in the Swagger UI generated using .NET Minimal APIs.

    How to add Swagger in your .NET Minimal APIs

    There are plenty of tutorials about how to add Swagger to your APIs. I wrote some too, where I explained how every configuration impacts what you see in the UI.

    That article was targeting older dotNET versions without Minimal APIs. Now everything’s easier.

    When you create your API project, Visual Studio asks you if you want to add OpenAPI support (aka Swagger). By adding it, you will have everything in place to get started with Swagger.

    Your Minimal APIs will look like this:

    public static void Main(string[] args)
    {
        var builder = WebApplication.CreateBuilder(args);
    
        builder.Services.AddEndpointsApiExplorer();
        builder.Services.AddSwaggerGen();
    
        var app = builder.Build();
    
        app.UseSwagger();
        app.UseSwaggerUI();
    
    
        app.MapGet("/weatherforecast", (HttpContext httpContext) =>
        {
            // return something
        })
        .WithName("GetWeatherForecast")
        .WithOpenApi();
    
        app.Run();
    }
    

    The key parts are builder.Services.AddEndpointsApiExplorer(), builder.Services.AddSwaggerGen(), app.UseSwagger(), app.UseSwaggerUI() and WithOpenApi(). Do you know what those methods do? If so, drop a comment below! 📩

    Now, if we run our application, we will see a UI similar to the one below.

    Basic Swagger UI

    That’s a basic UI. Quite boring, uh? Let’s add some style.

    Create the CSS file for Swagger theming

    All the static assets must be stored within the wwwroot folder. It does not exist by default, so you have to create it manually. Click on the API project, add a new folder, and name it “wwwroot”. Since it’s a special folder, by default Visual Studio will show it with a special icon (it’s a sort of blue world, similar to 🌐).

    Now you can add all the folders and static resources needed.

    I’ve created a single CSS file under /wwwroot/assets/css/xmas-style.css. Of course, name it as you wish – as long as it is within the wwwroot folder, it’s fine.

    My CSS file is quite minimal:

    body {
      background-image: url("../images/snowflakes.webp");
    }
    
    div.topbar {
      background-color: #34a65f !important;
    }
    
    h2,
    h3 {
      color: #f5624d !important;
    }
    
    .opblock-summary-get > button > span.opblock-summary-method {
      background-color: #235e6f !important;
    }
    
    .opblock-summary-post > button > span.opblock-summary-method {
      background-color: #0f8a5f !important;
    }
    
    .opblock-summary-delete > button > span.opblock-summary-method {
      background-color: #cc231e !important;
    }
    

    There are 3 main things to notice:

    1. the element selectors are taken directly from the Swagger UI – you’ll need a bit of reverse-engineering skills: just open the Browser Console and find the elements you want to update;
    2. if the element already has a value for the rule you want to apply, you have to add !important. Otherwise, your code won’t affect the UI;
    3. you can add assets from other folders: I’ve added background-image: url("../images/snowflakes.webp"); to the body style. That image is, as you can imagine, under the wwwroot folder we created before.

    Just as a recap, here’s my project structure:

    Static assets files

    Of course, that’s not enough: we have to tell Swagger to take that file into consideration.

    How to inject a CSS file in Swagger UI

    This part is quite simple: you have to update the UseSwaggerUI command within the Main method:

    app.UseSwaggerUI(c =>
    +   c.InjectStylesheet("/assets/css/xmas-style.css")
    );
    

    Notice how that path begins: no wwwroot, no ~, no leading dot – it starts directly with /assets.

    One last step: we have to tell dotNET to consider static files when building and running the application.

    You just have to add UseStaticFiles() after builder.Build():

    var app = builder.Build();
    + app.UseStaticFiles();
    
    app.UseSwagger();
    app.UseSwaggerUI(c =>
        c.InjectStylesheet("/assets/css/xmas-style.css")
    );
    

    Now we can run our APIs and admire our wonderful Xmas-style UI 🎅

    XMAS-style Swagger UI

    Further readings

    This article is part of 2022 .NET Advent, created by Dustin Moris 🐤:

    🔗 dotNET Advent Calendar 2022

    CSS is not the only part you can customize – there’s way more. Here’s an article I wrote about Swagger integration in .NET Core 3 APIs; it’s still relevant (I hope! 😁)

    🔗 Understanding Swagger integration in .NET Core | Code4IT

    This article first appeared on Code4IT

    Wrapping up

    Theming is often not considered an important part of API development. That’s generally correct: why should I bother adding some fancy colors to APIs that are not expected to have a UI?

    That reasoning makes sense if you’re working on private APIs. For public-facing APIs, however, theming is often useful to improve brand recognition.

    You should also consider using theming when deploying APIs to different environments: maybe Blue for Development, Yellow for Staging, and Green for Production. That way, your developers can easily understand which environment they’re exploring.
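
    If you want to try that, here’s one possible sketch: the per-environment CSS file names are made up, while InjectStylesheet and UseStaticFiles are the same calls we used above.

    app.UseStaticFiles();

    // Pick a different stylesheet depending on the current environment
    string cssPath = app.Environment.EnvironmentName switch
    {
        "Development" => "/assets/css/dev-blue.css",
        "Staging" => "/assets/css/staging-yellow.css",
        _ => "/assets/css/prod-green.css"
    };

    app.UseSwagger();
    app.UseSwaggerUI(c => c.InjectStylesheet(cssPath));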

    Happy coding!
    🐧





    Source link

  • Build your own Static Code Analysis tool in .NET by knowing how Assembly, Type, MethodInfo, ParameterInfo work. | Code4IT

    Build your own Static Code Analysis tool in .NET by knowing how Assembly, Type, MethodInfo, ParameterInfo work. | Code4IT


    Why buy a whole tool when you can build your own? Learn how the Type system works in .NET, and create your own minimal type analyser.


    Analysing your code is helpful to get an idea of the overall quality. At the same time, having an automatic tool that identifies specific characteristics or performs some analysis for you can be useful.

    Sure, there are many fantastic tools available, but sometimes a utility class that you can build as needed and run without setting up a complex infrastructure is enough.

    In this article, we are going to see how to navigate assemblies, classes, methods and parameters to perform some custom analysis.

    For this article, my code is structured into 3 Assemblies:

    • CommonClasses, a Class Library that contains some utility classes;
    • NetCoreScripts, a Class Library that contains the code we are going to execute;
    • ScriptsRunner, a Console Application that runs the scripts defined in the NetCoreScripts library.

    The dependencies between the modules are shown below: ScriptsRunner depends on NetCoreScripts, and NetCoreScripts depends on CommonClasses.

    Class library dependencies

    In this article, we are going to write the examples in the NetCoreScripts class library, in a class named AssemblyAnalysis.

    How to load an Assembly in C#, with different methods

    The starting point to analyse an Assembly is, well, to have an Assembly.

    So, in the Scripts Class Library (the middle one), I wrote:

    var assembly = DefineAssembly();
    

    In the DefineAssembly method we can choose the Assembly we are going to analyse.

    Load the Assembly containing a specific class

    The easiest way is to do something like this:

    private static Assembly DefineAssembly()
        => typeof(AssemblyAnalysis).Assembly;
    

    Where AssemblyAnalysis is the class that contains our scripts.

    Similarly, we can get the Assembly info for a class belonging to another Assembly, like this:

    private static Assembly DefineAssembly()
        => typeof(CommonClasses.BaseExecutable).Assembly;
    

    In short, you can access the Assembly info of whichever class you know – if you can reference it directly, of course!

    Load the current, the calling, and the executing Assembly

    The Assembly class provides you with some methods that may look similar, but give you totally different info depending on how your code is structured.

    Remember the ScriptsRunner –> NetCoreScripts –> CommonClasses sequence? To better explain how things work, let’s run the following examples in a method in the CommonClasses class library (the last one in the dependency chain).

    var executing = System.Reflection.Assembly.GetExecutingAssembly();
    var calling = System.Reflection.Assembly.GetCallingAssembly();
    var entry = System.Reflection.Assembly.GetEntryAssembly();
    

    Assembly.GetExecutingAssembly returns the Assembly that contains the actual code instructions (so, in short, the Assembly that actually contains the code). In this case, it’s the CommonClasses Assembly.

    Assembly.GetCallingAssembly returns the caller Assembly, so the one that references the Executing Assembly. In this case, given that the CommonClasses library is referenced only by the NetCoreScripts library, well, we are getting info about the NetCoreScripts class library.

    Assembly.GetEntryAssembly returns the info of the Assembly that is executing the whole application – so, the entry point. In our case, it’s the ScriptsRunner Console Application.

    Deciding which one to choose is crucial, especially when you are going to distribute your libraries, for example, as NuGet packages. For sure, you’ll know the Executing Assembly. Most probably, depending on how the project is structured, you’ll also know the Calling Assembly. But almost certainly you won’t know the Entry Assembly.

    • GetExecutingAssembly – the current Assembly. In this example: CommonClasses;
    • GetCallingAssembly – the caller Assembly. In this example: NetCoreScripts;
    • GetEntryAssembly – the top-level executor. In this example: ScriptsRunner.
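
    To see the difference with your own eyes, here’s a small sketch you could drop into the CommonClasses library (the class name is made up; the assembly names are the ones used in this article):

    using System.Reflection;

    public static class AssemblyInspector
    {
        public static void PrintAssemblies()
        {
            // With the ScriptsRunner -> NetCoreScripts -> CommonClasses chain,
            // this prints CommonClasses, NetCoreScripts, and ScriptsRunner, respectively
            Console.WriteLine($"Executing: {Assembly.GetExecutingAssembly().GetName().Name}");
            Console.WriteLine($"Calling:   {Assembly.GetCallingAssembly().GetName().Name}");
            Console.WriteLine($"Entry:     {Assembly.GetEntryAssembly()?.GetName().Name}");
        }
    }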

    How to retrieve classes of a given .NET Assembly

    Now you have an Assembly to analyse. It’s time to load the classes belonging to your Assembly.

    You can start with assembly.GetTypes(): this method returns all the types (in the form of a Type array) belonging to the Assembly.

    For each Type you can access several properties, such as IsClass, IsPublic, IsAbstract, IsGenericType, IsEnum and so on. The full list of properties of a Type is available 🔗here.

    You may want to analyse public classes: therefore, you can do something like:

    private static List<Type> GetAllPublicTypes(Assembly assembly) => assembly
                .GetTypes()
                .Where(t => t.IsClass && t.IsPublic)
                .ToList();
    

    How to list the Methods belonging to a C# Type

    Given a Type, you can extract the info about all the available methods.

    The Type type contains several methods that can help you find useful information, such as GetConstructors.

    In our case, we are only interested in public methods, declared in that class (and not inherited from a base class):

    private static MethodInfo[] GetPublicMethods(Type type) =>
        type.GetMethods(BindingFlags.Instance | BindingFlags.Static | BindingFlags.Public | BindingFlags.DeclaredOnly);
    

    The BindingFlags enum is a 🔗Flagged Enum: it’s an enum with special values that allow you to perform an OR operation on the values.

    • Public – includes public members. Example: public void Print();
    • NonPublic – includes non-public members (private, protected, etc.). Example: private void Calculate();
    • Instance – includes instance (non-static) members. Example: public void Save();
    • Static – includes static members. Example: public static void Log(string msg);
    • FlattenHierarchy – includes static members up the inheritance chain. Example: public static void Helper() (this method exists in the base class);
    • DeclaredOnly – only members declared in the given type, not inherited. Example: public void MyTypeSpecific() (this method does not exist in the base class).
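
    Since BindingFlags is a flagged enum, you can combine its values with the bitwise OR operator and check them with HasFlag – a quick sketch:

    using System.Reflection;

    BindingFlags flags = BindingFlags.Instance | BindingFlags.Public | BindingFlags.DeclaredOnly;

    // Both checks are true: the combined value "contains" each single flag
    Console.WriteLine(flags.HasFlag(BindingFlags.Public));        // True
    Console.WriteLine((flags & BindingFlags.DeclaredOnly) != 0);  // True

    // The methods returned here are public, instance-only, and declared on string itself
    MethodInfo[] methods = typeof(string).GetMethods(flags);
    Console.WriteLine(methods.Length);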

    How to get the parameters of a MethodInfo object

    The final step is to retrieve the list of parameters from a MethodInfo object.

    This step is pretty easy: just call the GetParameters() method:

    public ParameterInfo[] GetParameters(MethodInfo method) => method.GetParameters();
    

    A ParameterInfo object contains several pieces of information, such as the name, the type and the default value of the parameter.

    Let’s consider this silly method:

    public static void RandomCity(string[] cities, string fallback = "Rome")
    { }
    

    If we have a look at its parameters, we will find the following values:

    Properties of a ParameterInfo object
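
    If you prefer code over screenshots, here’s a sketch of how you could read those values (I’m assuming RandomCity lives in a class called CityUtils – the article doesn’t name it):

    using System.Reflection;

    MethodInfo method = typeof(CityUtils).GetMethod(nameof(CityUtils.RandomCity))!;

    foreach (ParameterInfo parameter in method.GetParameters())
    {
        // For the "fallback" parameter this prints: fallback | System.String | HasDefaultValue: True | Default: Rome
        Console.WriteLine(
            $"{parameter.Name} | {parameter.ParameterType} | " +
            $"HasDefaultValue: {parameter.HasDefaultValue} | Default: {parameter.DefaultValue}");
    }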

    Bonus tip: Auto-properties act as Methods

    Let’s focus a bit more on the properties of a class.

    Consider this class:

    public class User
    {
      public string Name { get; set; }
    }
    

    There are no methods; only one public property.

    But hey! It turns out that properties, under the hood, are treated as methods. In fact, you can find two methods, named get_Name and set_Name, that act as an access point to the Name property.

    Automatic Getter and Setter of the Name property in C#
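
    You can verify it yourself with a couple of lines of Reflection (a sketch based on the User class above):

    using System.Reflection;

    MethodInfo[] methods = typeof(User).GetMethods(
        BindingFlags.Instance | BindingFlags.Public | BindingFlags.DeclaredOnly);

    // Prints get_Name and set_Name: the compiler-generated accessors of the Name property
    foreach (MethodInfo method in methods)
    {
        Console.WriteLine($"{method.Name} - IsSpecialName: {method.IsSpecialName}");
    }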

    Further readings

    Do you remember that exceptions are, in the end, Types?

    And that, in the catch block, you can filter for exceptions of a specific type or with a specific condition?

    If not, check this article out!

    🔗 Exception handling with WHEN clause | Code4IT

    This article first appeared on Code4IT 🐧

    Wrapping up (plus the full example)

    From here, you can use all this info to build whatever you want. Personally, I used it to analyse my current project, checking how many methods accept more than N parameters as input, and which classes have the highest number of public methods.

    In short, an example of a simple code analyser can be this one:

    public void Execute()
    {
        var assembly = DefineAssembly();
        var paramsInfo = AnalyzeAssembly(assembly);
    
        AnalyzeParameters(paramsInfo);
    }
    
    private static Assembly DefineAssembly()
        => Assembly.GetExecutingAssembly();
    
    public static List<ParamsMethodInfo> AnalyzeAssembly(Assembly assembly)
    {
        List<ParamsMethodInfo> all = new List<ParamsMethodInfo>();
        var types = GetAllPublicTypes(assembly);
    
        foreach (var type in types)
        {
            var publicMethods = GetPublicMethods(type);
    
            foreach (var method in publicMethods)
            {
                var parameters = method.GetParameters();
                if (parameters.Length > 0)
                {
                    var f = parameters.First();
                }
    
                all.Add(new ParamsMethodInfo(
                    assembly.GetName().Name,
                    type.Name,
                    method
                    ));
            }
        }
        return all;
    }
    
    private static MethodInfo[] GetPublicMethods(Type type) =>
        type.GetMethods(BindingFlags.Instance | BindingFlags.Static | BindingFlags.Public | BindingFlags.DeclaredOnly);
    
    private static List<Type> GetAllPublicTypes(Assembly assembly) => assembly.GetTypes()
                .Where(t => t.IsClass && t.IsPublic)
                .ToList();
    
    public class ParamsMethodInfo(string AssemblyName, string ClassName, MethodInfo Method)
    {
        public string MethodName => Method.Name;
        public ParameterInfo[] Parameters => Method.GetParameters();
    }
    

    And then, in the AnalyzeParameters, you can add your own logic.
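
    For example, a possible AnalyzeParameters – just a sketch of the kind of check mentioned above (counting methods that accept more than N parameters), meant to sit in the same class as the code above and relying on the usual System.Linq using – could look like this:

    private static void AnalyzeParameters(List<ParamsMethodInfo> paramsInfo, int maxParameters = 3)
    {
        // List the methods that accept more than maxParameters parameters, worst offenders first
        var methodsWithTooManyParams = paramsInfo
            .Where(m => m.Parameters.Length > maxParameters)
            .OrderByDescending(m => m.Parameters.Length);

        foreach (var method in methodsWithTooManyParams)
        {
            Console.WriteLine($"{method.MethodName} accepts {method.Parameters.Length} parameters");
        }
    }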

    As you can see, you don’t need to adopt complex tools to perform operations like this: just knowing that you can access the static details of each class and method can be enough (of course, it depends on the use!).

    I hope you enjoyed this article! Let’s keep in touch on LinkedIn, Twitter or BlueSky! 🤜🤛

    Happy coding!

    🐧





    Source link

  • How to customize Conventional Commits in a .NET application using GitHooks | Code4IT

    How to customize Conventional Commits in a .NET application using GitHooks | Code4IT


    Using Conventional Commits you can define a set of rules useful for writing meaningful commit messages. Using NPM. Yes, in a dotNET application!


    Setting team conventions is a crucial step to have the project prepared to live long and prosper 🖖

    A good way to set some clarity is by enforcing rules on GIT commit messages: you can require devs to specify the reason behind some code changes so that you can understand the history and the reason for each of those commits. Also, if you have well-crafted commit messages, Pull Requests become easier to understand, leading to better code.

    Conventional Commits help you set such rules, and help you level up your commit history. In this article, we will learn how to add Conventional Commits in a .NET application.

    Conventional Commits

    Conventional Commits are a set of rules that help you write commit messages using a format that has multiple purposes:

    • they help developers understand the history of a git branch;
    • they help PR reviewers focus on the Pull Request by understanding the changes proposed by the developer;
    • using automated tools, they help versioning the application – this is useful when using Semantic Versioning;
    • they allow you to create automated Changelog files.

    So, what does an average Conventional Commit look like?

    There’s not just one way to specify such formats.

    For example, you can specify that you’ve added a new feature (feat) to your APIs and describe it shortly:

    feat(api): send an email to the customer
    

    Or you can explain that you’ve fixed a bug (using fix) and add a full description of the scope of the commit.

    fix: prevent racing condition
    
    Introduce a request id and a reference to latest request. Dismiss
    incoming responses other than from latest request.
    

    There are several types of commits that you can support, such as:

    • feat, used when you add a new feature to the application;
    • fix, when you fix a bug;
    • docs, used to add or improve documentation to the project;
    • refactor, used – well – after some refactoring;
    • test, used when adding tests or fixing broken ones.

    All of this prevents developers from writing commit messages such as “something”, “fixed bug”, “some stuff”.

    Some Changes

    So, now, it’s time to include Conventional Commits in our .NET applications.

    What is our goal?

    For the sake of this article, I’m going to add Conventional Commits in a .NET 7 API project. The same approach works for all the other types of .NET projects: as long as you have a Solution to work with, I’ve got you covered.

    Well, actually, the following approach can be used by every project, not only those based on .NET: the reason I wrote this article is that many dotnet developers are not confident in using and configuring NPM packages, so my personal goal with this article is to give you the basics of such tools and configurations.

    For the sake of this article, I’m going to explain how to add Conventional Commits with a custom format.

    Say that you want to associate each commit to a Jira task. As you may know, Jira tasks have an ID composed of a project prefix and a numeric Id. So, for a project named FOO, you can have a task with Id FOO-123.

    The goal of this article is, then, to force developers to create Commit messages such as

    feat/FOO-123: commit short description
    

    or, if you want to add a full description of the commit,

    feat/FOO-123: commit short description
    
    Here we can have the full description of the task.
    And it can also be on multiple lines.
    

    We are going to work at Solution level; you don’t even need an IDE: just Notepad and a Terminal are fine. Before continuing, open your solution folder and a Console pointing to the same folder.

    Install NPM in your folder

    Yes, even if the main application is built with .NET, we are gonna need some NPM packages to set up our Conventional Commits.

    First things first: head to the Command Line and run npm init.

    After specifying some configurations (Package name? Licence? Author?), you will have a brand new package.json file.

    Now we can move on and add a GIT Hook.

    Husky: integrate GIT Hooks to improve commit messages

    To use conventional commits we have to “intercept” our GIT actions: we will need to run a specific tool right after having written a commit message; we have to validate it and, in case it does not follow the rules we’ve set, abort the operations.

    We will use Husky 🔗: it’s a facility package that allows us to do stuff with our commit messages and, in general, integrate work with Git Hooks.

    Head to the terminal, and install Husky by running

    npm install husky --save-dev
    

    This command will add a dependency to Husky, as you can see from the new item listed in the package.json file:

    "devDependencies": {
        "husky": "^8.0.3"
    }
    

    Finally, to enable Git Hooks, we have to run

    npm pkg set scripts.prepare="husky install"
    

    and notice the new section in the package.json.

    "scripts": {
        "prepare": "husky install"
    },
    

    Even with just these simple steps, we can see a first result: if you run git commit you will see a text editor open. Here you can write your commit message.

    Git commit message editor

    Save and close the file. The commit message has been applied, as you can see by running git log --oneline.

    CommitLint: a package to validate Commit messages

    We need to install and configure CommitLint, the NPM package that does the dirty job.

    On the same terminal as before, run

    npm install --save-dev @commitlint/config-conventional @commitlint/cli
    

    to install both commitlint/config-conventional, which adds the generic functionality, and commitlint/cli, which allows us to run the scripts via CLI.

    You will see both packages listed in your package.json file:

    "devDependencies": {
        "@commitlint/cli": "^17.4.2",
        "@commitlint/config-conventional": "^17.4.2",
        "husky": "^8.0.3"
    }
    

    Next step: scaffold the file that handles the configurations on how we want our Commit Messages to be structured.

    On the root, create a brand new file, commitlint.config.js, and paste this snippet:

    module.exports = {
      extends: ["@commitlint/config-conventional"],
    }
    

    This snippet tells Commitlint to use the default conventions, such as feat(api): send an email.

    To test the default rules without issuing any real commit, we have to install the previous packages globally, so that they can be accessed outside the scope of the git hooks:

    npm install -g @commitlint/cli @commitlint/config-conventional
    

    and, in a console, we can run

    echo 'foo: a message with wrong format' | commitlint
    

    and see the error messages

    Testing commitlint with errors

    At this point, we still don’t have CommitLint ready to validate our commit messages. In fact, if you try to commit your changes with an invalid message, you will see that the message passes the checks (because there are no checks!), and your changes get committed.

    We need to do some more steps.

    First of all, we have to create a folder named .husky that will be used by Husky to understand which commands are supported.

    Notice: you have to keep the dot at the beginning of the folder name: it’s .husky, not husky.

    Then we need to add a new file within that folder to tell Husky that it needs to run CommitLint.

    npx husky add .husky/commit-msg  'npx --no -- commitlint --edit ${1}'
    

    We’re almost ready: everything is set, but we need to activate the functionality. So you just have to run npx husky install to see it working:

    CommitLint correctly validates the commit message

    Commitlint.config.js: defining explicit rules on Git Messages

    Now, remember that we want to enforce certain rules on the commit message.

    We don’t want them to be like

    feat(api): send an email to the customer when a product is shipped
    

    but rather like

    feat/FOO-123: commit short description
    
    Here we can have the full description of the task.
    And it can also be on multiple lines.
    

    This means that we have to configure the commitlint.config.js file to override default values.

    Let’s have a look at a valid Commitlint file:

    module.exports = {
      extends: ["./node_modules/@commitlint/config-conventional"],
      parserPreset: {
        parserOpts: {
          headerPattern: /^(\w*)\/FOO-(\w*): (.*)$/,
          headerCorrespondence: ["type", "scope", "subject"],
        },
      },
      rules: {
        "type-enum": [2, "always", ["feat", "fix", "hot", "chore"]],
        "header-min-length": [2, "always", 10],
        "header-max-length": [2, "always", 50],
        "body-max-line-length": [2, "always", 72],
        "subject-case": [
          2,
          "never",
          ["sentence-case", "start-case", "pascal-case", "upper-case"],
        ],
      },
    }
    

    Time to deep dive into those sections:

    The ParserOpts section: define how CommitLint should parse text

    The first part tells the parser how to parse the header message:

    parserOpts: {
        headerPattern: /^(\w*)\/FOO-(\w*): (.*)$/,
        headerCorrespondence: ["type", "scope", "subject"],
    },
    

    It’s a regular expression, where every matching part has its correspondence in the headerCorrespondence array:

    So, in the message hello/FOO-123: my tiny message, we will have type=hello, scope=123, subject=my tiny message.
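
    If you want to double-check that pattern without running commitlint, the same regular expression behaves the same way in C# – a quick, throwaway sketch (not part of the commitlint setup):

    using System.Text.RegularExpressions;

    var headerPattern = new Regex(@"^(\w*)\/FOO-(\w*): (.*)$");

    Match match = headerPattern.Match("hello/FOO-123: my tiny message");

    // The groups follow the same order as headerCorrespondence: type, scope, subject
    Console.WriteLine(match.Groups[1].Value); // hello
    Console.WriteLine(match.Groups[2].Value); // 123
    Console.WriteLine(match.Groups[3].Value); // my tiny message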

    Rules: define specific rules for each message section

    The rules section defines the rules to be applied to each part of the message structure.

    rules:
    {
        "type-enum": [2, "always", ["feat", "fix", "hot", "chore"]],
        "header-min-length": [2, "always", 10],
        "header-max-length": [2, "always", 50],
        "body-max-line-length": [2, "always", 72],
        "subject-case": [
            2,
            "never",
            ["sentence-case", "start-case", "pascal-case", "upper-case"],
        ],
    },
    

    The first value is a number that expresses the severity of the rule:

    • 0: the rule is disabled;
    • 1: show a warning;
    • 2: it’s an error.

    The second value defines if the rule must be applied (using always), or if it must be reversed (using never).

    The third value provides generic arguments for the related rule. For example, "header-max-length": [2, "always", 50] tells us that the header must always be at most 50 characters long.

    You can read more about each and every configuration on the official documentation 🔗.

    Setting the commit structure using .gitmessage

    Now that everything is set, we can test it.

    But not before helping devs with a simple trick! As you remember, when you run git commit without specifying the message, an editor appears with some hints about the structure of the commit message.

    Default commit editor

    You can set your own text with hints about the structure of the messages.

    You just need to create a file named .gitmessage and put some text in it, such as:

    # <type>/FOO-<jira-ticket-id>: <title>
    # YOU CAN WRITE WHATEVER YOU WANT HERE
    # allowed types: feat | fix | hot | chore
    # Example:
    #
    # feat/FOO-01: first commit
    #
    # No more than 50 chars. #### 50 chars is here:  #
    
    # Remember blank line between title and body.
    
    # Body: Explain *what* and *why* (not *how*)
    # Wrap at 72 chars. ################################## which is here:  #
    #
    

    Now, we have to tell Git to use that file as a template:

    git config commit.template ./.gitmessage
    

    and.. TA-DAH! Here’s your message template!

    Customized message template

    Putting all together

    Finally, we have everything in place: git hooks, commit template, and template hints.

    If we run git commit, we will see an IDE open and the message we’ve defined before. Now, type A message with wrong format, save, close the editor, and you’ll see that the commit is aborted.

    Commit message with wrong format gets rejected

    Now, if you run git commit again, you’ll see the editor again; type feat/FOO-123: a valid message, and you’ll see it working.

    Further readings

    Conventional Commits is a project that lists a set of specifications for writing such good messages. You can read more here:

    🔗 Conventional Commits

    As we saw before, there are a lot of configurations that you can set for your commits. You can see the full list here:

    🔗 CommitLint rules

    This article first appeared on Code4IT 🐧

    This new kind of commit message works well with Semantic Versioning, which can be useful to publish package versions with a meaningful version number, such as 2.0.1:
    🔗 Semantic Versioning

    And, to close the loop, Semantic Versioning can be easily integrated with CI pipelines. If you use .NET APIs and want to deploy your APIs to Azure using GitHub Actions, you can start from this article and add SemVer:
    🔗 How to deploy .NET APIs on Azure using GitHub actions

    Wrapping up

    In this article, we’ve learned what Conventional Commits are, how to add them using Husky and NPM, and how to configure our folder to use such tools.

    The steps we’ve seen before work for every type of application, even not related to dotnet.

    So, to recap everything, we have to:

    1. Install NPM: npm init;
    2. Install Husky: npm install husky --save-dev;
    3. Enable Husky: npm pkg set scripts.prepare="husky install";
    4. Install CommitLint: npm install --save-dev @commitlint/config-conventional @commitlint/cli;
    5. Create the commitlint.config.js file: module.exports = { extends: ["@commitlint/config-conventional"] };
    6. Create the Husky folder: mkdir .husky;
    7. Link Husky and CommitLint: npx husky add .husky/commit-msg 'npx --no -- commitlint --edit ${1}';
    8. Activate the whole functionality: npx husky install;

    Then, you can customize the commitlint.config.js file and, if you want, create a better .gitmessage file.

    I hope you enjoyed this article! Let’s keep in touch on Twitter or on LinkedIn, if you want! 🤜🤛

    Happy coding!

    🐧





    Source link

  • How to download an online file and store it on file system with C# | Code4IT

    How to download an online file and store it on file system with C# | Code4IT


    Downloading a file from a remote resource seems an easy task: download the byte stream and copy it to a local file. Beware of edge cases!


    Downloading files from an online source and saving them on the local machine seems an easy task.

    And guess what? It is!

    In this article, we will learn how to download an online file, perform some operations on it – such as checking its file extension – and store it in a local folder. We will also learn how to deal with edge cases: what if the file does not exist? Can we overwrite existing files?

    How to download a file stream from an online resource using HttpClient

    Ok, this is easy. If you have the file URL, it’s easy to just download it using HttpClient.

    HttpClient httpClient = new HttpClient();
    Stream fileStream = await httpClient.GetStreamAsync(fileUrl);
    

    Creating a new HttpClient instance for every request can cause some trouble, especially under heavy load. As a matter of fact, using HttpClientFactory is preferred, as we’ve already explained in a previous article.
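
    Just as a reference, here’s a minimal sketch of the factory-based approach (the class name is made up; the snippets below keep using new HttpClient() for brevity):

    // Requires builder.Services.AddHttpClient() to be registered at startup
    public class FileDownloader
    {
        private readonly IHttpClientFactory _httpClientFactory;

        public FileDownloader(IHttpClientFactory httpClientFactory)
            => _httpClientFactory = httpClientFactory;

        public async Task<Stream> GetFileStreamAsync(string fileUrl)
        {
            // The factory reuses the underlying handlers instead of creating new ones every time
            HttpClient httpClient = _httpClientFactory.CreateClient();
            return await httpClient.GetStreamAsync(fileUrl);
        }
    }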

    But, ok, it looks easy – way too easy! There are two more parts to consider.

    How to handle errors while downloading a stream of data

    You know, shit happens!

    There are at least 2 cases that stop you from downloading a file: the file does not exist or the file requires authentication to be accessed.

    In both cases, an HttpRequestException exception is thrown, with the following stack trace:

    at System.Net.Http.HttpResponseMessage.EnsureSuccessStatusCode()
    at System.Net.Http.HttpClient.GetStreamAsyncCore(HttpRequestMessage request, CancellationToken cancellationToken)
    

    As you can see, we are implicitly calling EnsureSuccessStatusCode while getting the stream of data.

    You can tell the consumer that we were not able to download the content in two ways: throw a custom exception or return Stream.Null. We will use Stream.Null for the sake of this article.

    Note: always throw custom exceptions and add context to them: this way, you’ll add more useful info to consumers and logs, and you can hide implementation details.
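
    If you go down the custom-exception route instead, a sketch could look like this (the exception type and message are made up for illustration):

    public class FileDownloadException : Exception
    {
        public string FileUrl { get; }

        public FileDownloadException(string fileUrl, Exception innerException)
            : base($"Unable to download the file at '{fileUrl}'", innerException)
        {
            FileUrl = fileUrl;
        }
    }

    // ...and, in the download method, wrap the original error to add context:
    // catch (HttpRequestException ex) { throw new FileDownloadException(fileUrl, ex); }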

    So, let me refactor the part that downloads the file stream and put it in a standalone method:

    async Task<Stream> GetFileStream(string fileUrl)
    {
        HttpClient httpClient = new HttpClient();
        try
        {
            Stream fileStream = await httpClient.GetStreamAsync(fileUrl);
            return fileStream;
        }
        catch (Exception ex)
        {
            return Stream.Null;
        }
    }
    

    so that we can use Stream.Null to check for the existence of the stream.

    How to store a file in your local machine

    Now we have our stream of data. We need to store it somewhere.

    We will need to copy our input stream to a FileStream object, placed within a using block.

    using (FileStream outputFileStream = new FileStream(path, FileMode.Create))
    {
        await fileStream.CopyToAsync(outputFileStream);
    }
    

    Possible errors and considerations

    When creating the FileStream instance, we have to pass the constructor both the full path of the file (including the file name) and FileMode.Create, which tells the stream which types of operations should be supported.

    FileMode is an enum coming from the System.IO namespace, and has different values. Each value fits best for some use cases.

    public enum FileMode
    {
        CreateNew = 1,
        Create,
        Open,
        OpenOrCreate,
        Truncate,
        Append
    }
    

    Again, there are some edge cases that we have to consider:

    • the destination folder does not exist: you will get a DirectoryNotFoundException exception. You can easily fix it by calling Directory.CreateDirectory to generate all the hierarchy of folders defined in the path;
    • the destination file already exists: depending on the value of FileMode, you will see a different behavior. FileMode.Create overwrites the file, while FileMode.CreateNew throws an IOException if the file already exists.

    Full Example

    It’s time to put the pieces together:

    async Task DownloadAndSave(string sourceFile, string destinationFolder, string destinationFileName)
    {
        Stream fileStream = await GetFileStream(sourceFile);
    
        if (fileStream != Stream.Null)
        {
            await SaveStream(fileStream, destinationFolder, destinationFileName);
        }
    }
    
    async Task<Stream> GetFileStream(string fileUrl)
    {
        HttpClient httpClient = new HttpClient();
        try
        {
            Stream fileStream = await httpClient.GetStreamAsync(fileUrl);
            return fileStream;
        }
        catch (Exception ex)
        {
            return Stream.Null;
        }
    }
    
    async Task SaveStream(Stream fileStream, string destinationFolder, string destinationFileName)
    {
        if (!Directory.Exists(destinationFolder))
            Directory.CreateDirectory(destinationFolder);
    
        string path = Path.Combine(destinationFolder, destinationFileName);
    
        using (FileStream outputFileStream = new FileStream(path, FileMode.CreateNew))
        {
            await fileStream.CopyToAsync(outputFileStream);
        }
    }
    

    Bonus tips: how to deal with file names and extensions

    You have the file URL, and you want to get its extension and its plain file name.

    You can use some methods from the Path class:

    string image = "https://website.com/csharptips/format-interpolated-strings/featuredimage.png";
    Path.GetExtension(image); // .png
    Path.GetFileNameWithoutExtension(image); // featuredimage
    

    But not every image has a file extension. For example, Twitter cover images have this format: https://pbs.twimg.com/profile_banners/1164441929679065088/1668758793/1080x360

    string image = "https://pbs.twimg.com/profile_banners/1164441929679065088/1668758793/1080x360";
    Path.GetExtension(image); // [empty string]
    Path.GetFileNameWithoutExtension(image); // 1080x360
    
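
    In those cases, you may want to fall back to a default extension. A possible helper could look like this (a hypothetical sketch, with ".png" as an arbitrary default):

    // Falls back to a default extension when the URL does not contain one
    static string GetExtensionOrDefault(string fileUrl, string defaultExtension = ".png")
    {
        string extension = Path.GetExtension(fileUrl);
        return string.IsNullOrEmpty(extension) ? defaultExtension : extension;
    }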

    Further readings

    As I said, you should not instantiate a new HttpClient() every time; you should use IHttpClientFactory instead.

    If you want to know more details, I’ve got an article for you:

    🔗 C# Tip: use IHttpClientFactory to generate HttpClient instances | Code4IT

    This article first appeared on Code4IT 🐧

    Wrapping up

    This was a quick article, quite easy to understand – I hope!

    My main point here is that not everything is always as easy as it seems – you should always consider edge cases!

    I hope you enjoyed this article! Let’s keep in touch on Twitter or on LinkedIn, if you want! 🤜🤛

    Happy coding!

    🐧





    Source link

  • Beyond the Corporate Mold: How 21 TSI Sets the Future of Sports in Motion

    Beyond the Corporate Mold: How 21 TSI Sets the Future of Sports in Motion



    21 TSI isn’t your typical sports holding company. Overseeing a portfolio of brands in the sports equipment space, the team set out to break from the mold of the standard corporate website. Instead, they envisioned a digital experience that would reflect their DNA—where innovation, design, and technology converge into a rich, immersive journey.

    The result is a site that goes beyond static content, inviting users to explore through motion, interactivity, and meticulously crafted visuals. Developed through a close collaboration between type8 Studio and DEPARTMENT Maison de Création, the project pushes creative and technical boundaries to deliver a seamless, engaging experience.

    Concept & Art Direction

    The creative direction led by Paul Barbin played a crucial role in shaping the website’s identity. The design embraces a minimalist yet bold aesthetic—strictly monochromatic, anchored by a precise and structured typographic system. The layout is intentionally clean, but the experience stays dynamic thanks to well-orchestrated WebGL animations and subtle interactions.

    Grid & Style

    The definition of the grid played a fundamental role in structuring and clarifying the brand’s message. More than just a layout tool, the grid became a strategic framework—guiding content organization, enhancing readability, and ensuring visual consistency across all touchpoints.

    We chose an approach inspired by the Swiss style, also known as the International Typographic Style, celebrated for its clarity, precision, and restraint. This choice reflects our commitment to clear, direct, and functional communication, with a strong focus on user experience. The grid allows each message to be delivered with intention, striking a subtle balance between aesthetics and efficiency.

    A unique aspect of the project was the integration of AI-generated imagery. These visuals were thoughtfully curated and refined to align with the brand’s futuristic and enigmatic identity, further reinforcing the immersive nature of the website.

    Interaction & Motion Design

    The experience of 21 TSI is deeply rooted in movement. The site feels alive—constantly shifting and morphing in response to user interactions. Every detail works together to evoke a sense of fluidity:

    • WebGL animations add depth and dimension, making the site feel tactile and immersive.
    • Morphing transitions enable smooth navigation between sections, avoiding abrupt visual breaks.
    • Cursor distortion effects introduce a subtle layer of interactivity, letting users influence their journey through motion.
    • Scroll-based animations strike a careful balance between engagement and clarity, ensuring motion enhances the experience without overwhelming it.

    This dynamic approach creates a browsing experience that feels both organic and responsive—keeping users engaged without ever overwhelming them.

    Technical Implementation & Motion Design

    For this project, we chose a technology stack designed to deliver high performance and smooth interactions, all while maintaining the flexibility needed for creative exploration:

    • OGL: A lightweight alternative to Three.js, used for WebGL-powered animations and visual effects.
    • Anime.js: Handles motion design elements and precise animation timing.
    • Locomotive Scroll: Enables smooth, controlled scroll behavior throughout the site.
    • Eleventy (11ty): A static site generator that ensures fast load times and efficient content management.
    • Netlify: Provides seamless deployment and version control, keeping the development workflow agile.

    One of the key technical challenges was optimizing performance across devices while preserving the same fluid motion experience. Carefully balancing GPU-intensive WebGL elements with lightweight animations made seamless performance possible.

    Challenges & Solutions

    One of the primary challenges was ensuring that the high level of interactivity didn’t compromise usability. The team worked extensively to refine transitions so they felt natural, while keeping navigation intuitive. Balancing visual complexity with performance was equally critical—avoiding unnecessary elements while preserving a rich, engaging experience.

    Another challenge was the use of AI-generated visuals. While they introduced unique artistic possibilities, these assets required careful curation and refinement to align with the creative vision. Ensuring coherence between the AI-generated content and the designed elements was a meticulous process.

    Conclusion

    The 21 TSI website is a deep exploration of digital storytelling through design and interactivity. It captures the intersection of technology and aesthetics, offering an experience that goes well beyond a traditional corporate presence.

    The project was recognized with multiple awards, including Website of the Day on CSS Design Awards, FWA of the Day, and Awwwards, reinforcing its impact in the digital design space.

    This collaboration between type8 Studio and Paul Barbin of DEPARTMENT Maison de Création showcases how thoughtful design, innovative technology, and a strong artistic vision can come together to craft a truly immersive web experience.



    Source link

  • How to add Dependency Injection, Configurations, and Logging in a .NET 7 Console Application | Code4IT

    How to add Dependency Injection, Configurations, and Logging in a .NET 7 Console Application | Code4IT


    By default, you cannot use Dependency Injection, custom logging, and configurations from settings in a Console Application. Unless you create a custom Host!


    Sometimes, you just want to create a console application to run a complex script. Just because it is a “simple” console application, it doesn’t mean that you should not use best practices, such as using Dependency Injection.

    Also, you might want to test the code: Dependency Injection allows you to test the behavior of a class without having a strict dependency on the referenced concrete classes: you can use stubs and mocks, instead.

    In this article, we’re going to learn how to add Dependency Injection in a .NET 7 console application. The same approach can be used for other versions of .NET. We will also add logging, using Serilog, and configurations coming from an appsettings.json file.

    We’re going to start small, with the basic parts, and gradually move on to more complex scenarios. We’re gonna create a simple, silly console application: we will inject a bunch of services, and print a message on the console.

    We have a root class:

    public class NumberWorker
    {
        private readonly INumberService _service;
    
        public NumberWorker(INumberService service) => _service = service;
    
        public void PrintNumber()
        {
            var number = _service.GetPositiveNumber();
            Console.WriteLine($"My wonderful number is {number}");
        }
    }
    

    that injects an INumberService, implemented by NumberService:

    public interface INumberService
    {
        int GetPositiveNumber();
    }
    
    public class NumberService : INumberService
    {
        private readonly INumberRepository _repo;
    
        public NumberService(INumberRepository repo) => _repo = repo;
    
        public int GetPositiveNumber()
        {
            int number = _repo.GetNumber();
            return Math.Abs(number);
        }
    }
    

    which, in turn, uses an INumberRepository implemented by NumberRepository:

    public interface INumberRepository
    {
        int GetNumber();
    }
    
    public class NumberRepository : INumberRepository
    {
        public int GetNumber()
        {
            return -42;
        }
    }
    

    The console application will create a new instance of NumberWorker and call the PrintNumber method.

    Now, we have to build the dependency tree and inject such services.

    How to create an IHost to use a host for a Console Application

    The first step to take is to install some NuGet packages that will allow us to add a custom IHost container, so that we can add Dependency Injection and all the customization we usually add in projects that have a Startup (or a Program) class, such as .NET APIs.

    We need to install two NuGet packages: Microsoft.Extensions.Hosting.Abstractions and Microsoft.Extensions.Hosting. They will be used to create a new IHost, which in turn will build the dependency tree.

    By opening your csproj file, you should see something like this:

    <ItemGroup>
        <PackageReference Include="Microsoft.Extensions.Hosting" Version="7.0.1" />
        <PackageReference Include="Microsoft.Extensions.Hosting.Abstractions" Version="7.0.0" />
    </ItemGroup>
    

    Now we are ready to go! First, add the following using statements:

    using Microsoft.Extensions.DependencyInjection;
    using Microsoft.Extensions.Hosting;
    

    and then, within the Program class, add this method:

    private static IHost CreateHost() =>
      Host.CreateDefaultBuilder()
          .ConfigureServices((context, services) =>
          {
              services.AddSingleton<INumberRepository, NumberRepository>();
              services.AddSingleton<INumberService, NumberService>();
          })
          .Build();
    

    Host.CreateDefaultBuilder() creates the default IHostBuilder – similar to the IWebHostBuilder, but without any reference to web components.

    Then we add all the dependencies, using services.AddSingleton<T, K>. Notice that it's not necessary to add services.AddSingleton<NumberWorker>: when we resolve the concrete instance, the whole dependency tree is resolved automatically, without the need to register the root class itself.
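
    If you prefer to register the root class too, a possible alternative (just a sketch, not the approach used in this article) is:

    // Alternative: register the worker as well...
    services.AddSingleton<NumberWorker>();

    // ...and then resolve it directly from the container (e.g. in Main):
    NumberWorker worker = host.Services.GetRequiredService<NumberWorker>();
    worker.PrintNumber();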

    Finally, once we have everything in place, we call Build() to create a new instance of IHost.

    Now, we just have to run it!

    In the Main method, create the IHost instance by calling CreateHost(). Then, by using the ActivatorUtilities class (coming from the Microsoft.Extensions.DependencyInjection namespace), create a new instance of NumberWorker, so that you can call PrintNumber():

    private static void Main(string[] args)
    {
      IHost host = CreateHost();
      NumberWorker worker = ActivatorUtilities.CreateInstance<NumberWorker>(host.Services);
      worker.PrintNumber();
    }
    

    Now you are ready to run the application, and see the message on the console:

    Basic result on Console

    Read configurations from appsettings.json for a Console Application

    We want to make our system configurable and place our configurations in an appsettings.json file.

    As we saw in a recent article 🔗, we can use IOptions<T> to inject configurations in the constructor. For the sake of this article, I’m gonna use a POCO class, NumberConfig, that is mapped to a configuration section and injected into the classes.

    public class NumberConfig
    {
        public int DefaultNumber { get; set; }
    }
    

    Now we need to manually create an appsettings.json file within the project folder, and add a new section that will hold the values of the configuration:

    {
      "Number": {
        "DefaultNumber": -899
      }
    }
    

    and now we can add the configuration binding in our CreateHost() method, within the ConfigureServices section:

    services.Configure<NumberConfig>(context.Configuration.GetSection("Number"));
    

    Finally, we can update the NumberRepository to accept the configurations in input and use them to return the value:

    public class NumberRepository : INumberRepository
    {
        private readonly NumberConfig _config;
    
        public NumberRepository(IOptions<NumberConfig> options) => _config = options.Value;
    
        public int GetNumber() => _config.DefaultNumber;
    }
    

    Run the project to admire the result, and… BOOM! It will not work! You should see the message “My wonderful number is 0”, even though the number we set on the config file is -899.

    This happens because we must include the appsettings.json file in the result of the compilation. Right-click on that file, select the Properties menu, and set the “Copy to Output Directory” to “Copy always”:

    Copy always the appsettings file to the Output Directory
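
    If you prefer to edit the project file directly, the equivalent csproj entry should look roughly like this (a sketch; the exact item name may differ depending on how the file is included):

    <ItemGroup>
        <None Update="appsettings.json">
            <CopyToOutputDirectory>Always</CopyToOutputDirectory>
        </None>
    </ItemGroup>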

    Now, build and run the project, and you’ll see the correct message: “My wonderful number is 899”.

    Clearly, the same values can also be accessed via IConfiguration.
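
    For example, an alternative version of NumberRepository could receive an IConfiguration in its constructor and read the raw value directly (a minimal sketch, assuming using Microsoft.Extensions.Configuration; is in place):

    public class NumberRepository : INumberRepository
    {
        private readonly IConfiguration _configuration;

        public NumberRepository(IConfiguration configuration) => _configuration = configuration;

        // Reads the value from the "Number" section without going through IOptions<T>
        public int GetNumber() => _configuration.GetValue<int>("Number:DefaultNumber");
    }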

    Add Serilog logging to log on Console and File

    Finally, we can add Serilog logs to our console applications – as well as define Sinks.

    To add Serilog, you first have to install these NuGet packages:

    • Serilog.Extensions.Hosting and Serilog.Formatting.Compact to add the basics of Serilog;
    • Serilog.Settings.Configuration to read logging configurations from settings (if needed);
    • Serilog.Sinks.Console and Serilog.Sinks.File to add the Console and the File System as Sinks.

    Let’s get back to the CreateHost() method, and add a new section right after ConfigureServices:

    // Requires: using Serilog; (for UseSerilog) and using Serilog.Events; (for LogEventLevel)
    .UseSerilog((context, services, configuration) => configuration
        .ReadFrom.Configuration(context.Configuration)
        .ReadFrom.Services(services)
        .Enrich.FromLogContext()
        .WriteTo.Console()
        .WriteTo.File($"report-{DateTimeOffset.UtcNow.ToString("yyyy-MM-dd-HH-mm-ss")}.txt", restrictedToMinimumLevel: LogEventLevel.Warning)
        )
    

    Here we’re telling Serilog to read its configuration from the application settings, enrich log entries with the log context, and write both to the Console and to a File (the file only receives messages whose level is Warning or higher).

    Then, add an ILogger here and there, and admire the final result:

    Serilog Logging is visible on the Console
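
    The article doesn’t show the updated classes, but as a minimal sketch (an assumption on my side), injecting an ILogger<NumberWorker> into the worker could look like this (it needs using Microsoft.Extensions.Logging;):

    public class NumberWorker
    {
        private readonly INumberService _service;
        private readonly ILogger<NumberWorker> _logger;

        public NumberWorker(INumberService service, ILogger<NumberWorker> logger)
            => (_service, _logger) = (service, logger);

        public void PrintNumber()
        {
            var number = _service.GetPositiveNumber();

            // Warning level, so this message reaches both the Console and the File sink
            _logger.LogWarning("My wonderful number is {Number}", number);
        }
    }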

    Final result

    To wrap up, here’s the final implementation of the Program class and the
    CreateHost method:

    private static void Main(string[] args)
    {
        IHost host = CreateHost();
        NumberWorker worker = ActivatorUtilities.CreateInstance<NumberWorker>(host.Services);
        worker.PrintNumber();
    }
    
    private static IHost CreateHost() =>
      Host
      .CreateDefaultBuilder()
      .ConfigureServices((context, services) =>
      {
          services.Configure<NumberConfig>(context.Configuration.GetSection("Number"));
    
          services.AddSingleton<INumberRepository, NumberRepository>();
          services.AddSingleton<INumberService, NumberService>();
      })
      .UseSerilog((context, services, configuration) => configuration
          .ReadFrom.Configuration(context.Configuration)
          .ReadFrom.Services(services)
          .Enrich.FromLogContext()
          .WriteTo.Console()
          .WriteTo.File($"report-{DateTimeOffset.UtcNow.ToString("yyyy-MM-dd-HH-mm-ss")}.txt", restrictedToMinimumLevel: LogEventLevel.Warning)
          )
      .Build();
    

    Further readings

    As always, a few resources to learn more about the topics discussed in this article.

    First and foremost, have a look at this article with a full explanation of Generic Hosts in a .NET Core application:

    🔗 .NET Generic Host in ASP.NET Core | Microsoft docs

    Then, if you recall, we’ve already learned how to print Serilog logs to the Console:

    🔗 How to log to Console with .NET Core and Serilog | Code4IT

    This article first appeared on Code4IT 🐧

    Lastly, we accessed configurations using IOptions<NumberConfig>. Did you know that there are other ways to access config?

    🔗 Understanding IOptions, IOptionsMonitor, and IOptionsSnapshot in .NET 7 | Code4IT

    as well as defining configurations for your project?

    🔗 3 (and more) ways to set configuration values in .NET | Code4IT

    Wrapping up

    In this article, we’ve learned how we can customize a .NET Console application to use dependency injection, external configurations, and Serilog logging.

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧





    Source link

  • How to automatically refresh configurations with Azure App Configuration in ASP.NET Core | Code4IT

    How to automatically refresh configurations with Azure App Configuration in ASP.NET Core | Code4IT


    ASP.NET allows you to poll Azure App Configuration to always get the most updated values without restarting your applications. It’s simple, but you have to think thoroughly.


    In a previous article, we learned how to centralize configurations using Azure App Configuration, a service provided by Azure to share configurations in a secure way. Using Azure App Configuration, you’ll be able to store the most critical configurations in a single place and apply them to one or more environments or projects.

    We used a very simple example with a limitation: you have to restart your applications to make changes effective. In fact, ASP.NET connects to Azure App Config, loads the configurations in memory, and serves these configs until the next application restart.

    In this article, we’re gonna learn how to make configurations dynamic: by the end of this article, we will be able to see the changes to our configs reflected in our applications without restarting them.

    Since this one is a kind of improvement of the previous article, you should read it first.

    Let me summarize here the code showcased in the previous article. We have an ASP.NET Core API application whose only purpose is to return the configurations stored in an object, whose shape is this one:

    {
      "MyNiceConfig": {
        "PageSize": 6,
        "Host": {
          "BaseUrl": "https://www.mydummysite.com",
          "Password": "123-go"
        }
      }
    }
    

    In the constructor of the API controller, I injected an IOptions<MyConfig> instance that holds the current data stored in the application.

    public ConfigDemoController(IOptions<MyConfig> config)
        => _config = config;
    

    The only HTTP Endpoint is a GET: it just accesses that value and returns it to the client.

    [HttpGet()]
    public IActionResult Get()
    {
        return Ok(_config.Value);
    }
    

    Finally, I created a new instance of Azure App Configuration, and I used a connection string to integrate Azure App Configuration with the existing configurations by calling:

    builder.Configuration.AddAzureAppConfiguration(ConnectionString);
    

    Now we can move on and make configurations dynamic.

    Sentinel values: a guard value to monitor changes in the configurations

    On Azure App Configuration, you have to update the configurations manually, one by one. Unfortunately, there is no way to update them in a single batch: you can import them in a batch, but you still have to update them individually.

    Imagine that you have a service that accesses an external API whose BaseUrl and API Key are stored on Azure App Configuration. We now need to move to another API, so we have to update both the BaseUrl and the API Key while the application is running. If the application reloaded its configurations every time a single value changed on Azure App Configuration, it could end up in an invalid state – for example, with the new BaseUrl and the old API Key.

    Therefore, we have to define a configuration value that acts as a sort of versioning key for the whole list of configurations. In Azure App Configuration’s jargon, it’s called Sentinel.

    A Sentinel is nothing but a version key: it’s a string value used by the application to understand whether it needs to reload the whole list of configurations. Since it’s just a string, you can set any value, as long as it changes over time. My suggestion is to use the UTC date of the moment you updated the values, such as 202306051522. This way, in case of errors, you can tell when any of these values last changed (but not which ones) and, depending on the pricing tier you are using, you can compare the current values with the previous ones.

    So, head back to the Configuration Explorer page and add a new value: I called it Sentinel.

    Sentinel value on Azure App Configuration

    As I said, you can use any value. For the sake of this article, I’m gonna use a simple number (just for simplicity).

    Define how to refresh configurations using ASP.NET Core app startup

    We can finally move to the code!

    If you recall, in the previous article we added a NuGet package, Microsoft.Azure.AppConfiguration.AspNetCore, and then we added Azure App Configuration as a configurations source by calling

    builder.Configuration.AddAzureAppConfiguration(ConnectionString);
    

    That instruction is used to load all the configurations, without managing polling and updates. Therefore, we must remove it.

    Instead of that instruction, add this other one:

    builder.Configuration.AddAzureAppConfiguration(options =>
    {
        options
        .Connect(ConnectionString)
        .Select(KeyFilter.Any, LabelFilter.Null)
        // Configure to reload configuration if the registered sentinel key is modified
        .ConfigureRefresh(refreshOptions =>
                  refreshOptions.Register("Sentinel", label: LabelFilter.Null, refreshAll: true)
            .SetCacheExpiration(TimeSpan.FromSeconds(3))
          );
    });
    

    Let’s deep dive into each part:

    options.Connect(ConnectionString) just tells ASP.NET that the configurations must be loaded from that specific connection string.

    .Select(KeyFilter.Any, LabelFilter.Null) loads all keys that have no Label;

    and, finally, the most important part:

    .ConfigureRefresh(refreshOptions =>
                refreshOptions.Register(key: "Sentinel", label: LabelFilter.Null, refreshAll: true)
          .SetCacheExpiration(TimeSpan.FromSeconds(3))
        );
    

    Here we are specifying that all values must be refreshed (refreshAll: true) when the key named “Sentinel” (key: "Sentinel") is updated. Then, those values are cached for 3 seconds (SetCacheExpiration(TimeSpan.FromSeconds(3))).

    Here I used 3 seconds as a refresh time. This means that, if the application is used continuously, the application will poll Azure App Configuration every 3 seconds – it’s clearly a bad idea! So, pick the correct value depending on the change expectations. The default value for cache expiration is 30 seconds.

    Notice that the previous instruction adds Azure App Configuration to the Configuration object, and not as a service used by .NET. In fact, the method is builder.Configuration.AddAzureAppConfiguration. We need two more steps.

    First of all, add Azure App Configuration to the IServiceCollection object:

    builder.Services.AddAzureAppConfiguration();
    

    Finally, we have to add it to our existing middlewares by calling

    app.UseAzureAppConfiguration();
    

    The final result is this:

    public static void Main(string[] args)
    {
        var builder = WebApplication.CreateBuilder(args);
    
        const string ConnectionString = "......";
    
        // Load configuration from Azure App Configuration
        builder.Configuration.AddAzureAppConfiguration(options =>
        {
            options.Connect(ConnectionString)
                    .Select(KeyFilter.Any, LabelFilter.Null)
                    // Configure to reload configuration if the registered sentinel key is modified
                    .ConfigureRefresh(refreshOptions =>
                        refreshOptions.Register(key: "Sentinel", label: LabelFilter.Null, refreshAll: true)
                        .SetCacheExpiration(TimeSpan.FromSeconds(3)));
        });
    
        // Add the service to IServiceCollection
        builder.Services.AddAzureAppConfiguration();
    
        builder.Services.AddControllers();
        builder.Services.Configure<MyConfig>(builder.Configuration.GetSection("MyNiceConfig"));
    
        var app = builder.Build();
    
        // Add the middleware
        app.UseAzureAppConfiguration();
    
        app.UseHttpsRedirection();
    
        app.MapControllers();
    
        app.Run();
    }
    

    IOptionsMonitor: accessing and monitoring configuration values

    It’s time to run the project and look at the result: some of the values are coming from Azure App Configuration.

    Default config coming from Azure App Configuration

    Now we can update them: without restarting the application, update the PageSize value, and don’t forget to update the Sentinel too. Call again the endpoint, and… nothing happens! 😯

    This is because in our controller we are using IOptions<T> instead of IOptionsMonitor<T>. As we’ve learned in a previous article, IOptionsMonitor<T> is a singleton instance that always gets the most updated config values. It also emits an event when the configurations have been refreshed.

    So, head back to the ConfigDemoController, and replace the way we retrieve the config:

    [ApiController]
    [Route("[controller]")]
    public class ConfigDemoController : ControllerBase
    {
        private readonly IOptionsMonitor<MyConfig> _config;
    
        public ConfigDemoController(IOptionsMonitor<MyConfig> config)
        {
            _config = config;
            _config.OnChange(Update);
        }
    
        [HttpGet()]
        public IActionResult Get()
        {
            return Ok(_config.CurrentValue);
        }
    
        private void Update(MyConfig arg1, string? arg2)
        {
          Console.WriteLine($"Configs have been updated! PageSize is {arg1.PageSize}, " +
                    $" Password is {arg1.Host.Password}");
        }
    }
    

    When using IOptionsMonitor<T>, you can retrieve the current values of the configuration object by accessing the CurrentValue property. You can also attach an event listener to the OnChange event.

    We can finally run the application and update the values on Azure App Configuration.

    Again, update one of the values, update the sentinel, and wait. After 3 seconds, you’ll see a message popping up in the console: it’s the text defined in the Update method.

    Then, call again the application (again, without restarting it), and admire the updated values!

    You can see a live demo here:

    Demo of configurations refreshed dynamically

    As you can see, the first time after updating the Sentinel value, the values are still the old ones. But, in the meantime, the values have been updated, and the cache has expired, so that the next time the values will be retrieved from Azure.

    My 2 cents on timing

    As we’ve learned, the config values are stored in a memory cache, with an expiration time. Every time the cache expires, we need to retrieve again the configurations from Azure App Configuration (in particular, by checking if the Sentinel value has been updated in the meanwhile). Don’t underestimate the cache value, as there are pros and cons of each kind of value:

    • a short timespan keeps the values always up-to-date, making your application more reactive to changes. But it also means that you are polling the Azure App Configuration endpoints too often, making your application busier and risking hitting the request-count limits;
    • a long timespan keeps your application more performant because there are fewer requests to the Configuration endpoints, but it also forces you to have the configurations updated after a while from the update applied on Azure.

    There is also another issue with long timespans: if the same configurations are used by different services, you might end up in a dirty state. Say that you have UserService and PaymentService, and both use some configurations stored on Azure whose caching expiration is 10 minutes. Now, the following actions happen:

    1. UserService starts
    2. PaymentService starts
    3. Someone updates the values on Azure
    4. UserService restarts, while PaymentService doesn’t.

    We will end up in a situation where UserService has the most updated values, while PaymentService doesn’t. There will be a time window (in our example, up to 10 minutes) in which the configurations are misaligned.

    Also, take costs and limitations into consideration: with the Free tier you have 1,000 requests per day, while with the Standard tier you have 30,000 per hour per replica. Using the default cache expiration (30 seconds) in an application with a continuous flow of users means roughly 2,880 calls per day (2 requests per minute × 1,440 minutes per day) – way more than the Free tier allows.

    So, think thoroughly before choosing an expiration time!

    Further readings

    This article is a continuation of a previous one, and I suggest you read the other one to understand how to set up Azure App Configuration and how to integrate it in an ASP.NET Core API application in case you don’t want to use dynamic configuration.

    🔗 Azure App Configuration and ASP.NET Core API: a smart and secure way to manage configurations | Code4IT

    This article first appeared on Code4IT 🐧

    Also, we learned that using IOptions we are not getting the most updated values: in fact, we need to use IOptionsMonitor. Check out this article to understand the other differences in the IOptions family.

    🔗 Understanding IOptions, IOptionsMonitor, and IOptionsSnapshot in ASP.NET Core | Code4IT

    Finally, I briefly talked about pricing. As of July 2023, there are just 2 pricing tiers, with different limitations.

    🔗 App Configuration pricing | Microsoft Learn

    Wrapping up

    In my opinion, smart configuration handling is essential for the hard times when you have to understand why an error is happening only in a specific environment.

    Centralizing configurations is a good idea, as it allows developers to simulate a whole environment by just changing a few values on the application.

    Making configurations live without restarting your applications manually can be a good idea, but you have to analyze it thoroughly.

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧





    Source link

  • How an XDR Platform Transforms Your SOC Operations

    How an XDR Platform Transforms Your SOC Operations


    XDR solutions are revolutionizing how security teams handle threats by dramatically reducing false positives and streamlining operations. In fact, modern XDR platforms generate significantly fewer false positives than traditional SIEM threat analytics, allowing security teams to focus on genuine threats rather than chasing shadows. We’ve seen firsthand how security operations centers (SOCs) struggle with alert fatigue, fragmented visibility, and resource constraints. However, an XDR platform addresses these challenges by unifying information from multiple sources and providing a holistic view of threats. This integration enables organizations to operate advanced threat detection and response with fewer SOC resources, making it a cost-effective approach to modern security operations.

    An XDR platform consolidates security data into a single system, ensuring that SOC teams and surrounding departments can operate from the same information base. Consequently, this unified approach not only streamlines operations but also minimizes breach risks, making it an essential component of contemporary cybersecurity strategies.

    In this article, we’ll explore how XDR transforms SOC operations, why traditional tools fall short, and the practical benefits of implementing this technology in your security framework.

    The SOC Challenge: Why Traditional Tools Fall Short

    Security Operations Centers (SOCs) today face unprecedented challenges with their traditional security tools. While security teams strive to protect organizations, they’re increasingly finding themselves overwhelmed by fundamental limitations in their security infrastructure.

    Alert overload and analyst fatigue

    Modern SOC teams are drowning in alerts. As per Vectra AI, an overwhelming 71% of SOC practitioners worry they’ll miss real attacks buried in alert floods, while 51% believe they simply cannot keep pace with mounting security threats. These numbers paint a troubling picture.

    Siloed tools and fragmented visibility

    The tool sprawl in security operations creates massive blind spots. According to Vectra AI findings, 73% of SOCs have more than 10 security tools in place, while 45% juggle more than 20 different tools. Despite this arsenal, 47% of practitioners don’t trust their tools to work as needed.

    Many organizations struggle with siloed security data across disparate systems. Each department stores logs, alerts, and operational details in separate repositories that rarely communicate with one another. This fragmentation means threat hunting becomes guesswork because critical artifacts sit in systems that no single team can access.

    Slow response times and manual processes

    Traditional SOCs rely heavily on manual processes, significantly extending detection and response times. When investigating incidents, analysts must manually piece together information from different silos, losing precious time during active cyber incidents.

    According to research by Palo Alto Networks, automation can reduce SOC response times by up to 50%, significantly limiting breach impacts. Unfortunately, most traditional SOCs lack this capability. The workflow in traditional environments is characterized by manual processes that exacerbate alert fatigue while dealing with massive threat alert volumes.

    The complexity of investigations further slows response. When an incident occurs, analysts must combine data from various sources to understand the full scope of an attack, a time-consuming process that allows threats to linger in systems longer than necessary.

    What is an XDR Platform and How Does It Work?

    Extended Detection and Response (XDR) platforms represent the evolution of cybersecurity technology, breaking down traditional barriers between security tools. Unlike siloed solutions, XDR solutions provide a holistic approach to threat management through unified visibility and coordinated response.

    Unified data collection across endpoints, network, and cloud

    At its core, an XDR platform aggregates and correlates data from multiple security layers into a centralized repository. This comprehensive data collection encompasses:

    • Endpoints (computers, servers, mobile devices)
    • Network infrastructure and traffic
    • Cloud environments and workloads
    • Email systems and applications
    • Identity and access management

    This integration eliminates blind spots that typically plague security operations. By collecting telemetry from across the entire attack surface, XDR platforms provide security teams with complete visibility into potential threats. The system automatically ingests, cleans, and standardizes this data, ensuring consistent, high-quality information for analysis.

    Real-time threat detection using AI and ML

    XDR platforms leverage advanced analytics, artificial intelligence, and machine learning to identify suspicious patterns and anomalies that human analysts might miss. These capabilities enable:

    • Automatic correlation of seemingly unrelated events across different security layers
    • Identification of sophisticated multi-vector attacks through pattern recognition
    • Real-time monitoring and analysis of data streams for immediate threat identification
    • Reduction in false positives through contextual understanding of alerts

    The AI-powered capabilities enable XDR platforms to detect threats at a scale and speed impossible for human analysts alone. Moreover, these systems continuously learn and adapt to evolving threats through machine learning models.

    Automated response and orchestration capabilities

    Once threats are detected, XDR platforms can initiate automated responses without requiring manual intervention. This automation includes:

    • Isolation of compromised devices to contain threats
    • Blocking of malicious IP addresses and domains
    • Execution of predefined response playbooks for consistent remediation
    • Prioritization of incidents based on severity for efficient resource allocation

    Key Benefits of XDR for SOC Operations

    Implementing an XDR platform delivers immediate, measurable advantages to security operations centers struggling with traditional tools and fragmented systems. SOC teams gain specific capabilities that fundamentally transform their effectiveness against modern threats.

    Faster threat detection and reduced false positives

    The strategic advantage of XDR solutions begins with their ability to dramatically reduce alert volume. XDR tools automatically group related alerts into unified incidents, representing entire attack sequences rather than isolated events. This correlation across different security layers identifies complex attack patterns that traditional solutions might miss.

    Improved analyst productivity through automation

    As per the Tines report, 64% of analysts spend over half their time on tedious manual work, with 66% believing that half of their tasks could be automated. XDR platforms address this challenge through built-in orchestration and automation that offload repetitive tasks. Specifically, XDR can automate threat detection through machine learning, streamline incident response processes, and generate AI-powered incident reports. This automation allows SOC teams to detect sophisticated attacks with fewer resources while reducing response time.

    Centralized visibility and simplified workflows

    XDR provides a single pane view that eliminates “swivel chair integration,” where analysts manually interface across multiple security systems. This unified approach aggregates data from endpoints, networks, applications, and cloud environments into a consolidated platform. As a result, analysts gain swift investigation capabilities with instant access to all forensic artifacts, events, and threat intelligence in one location. This centralization particularly benefits teams during complex investigations, enabling them to quickly understand the complete attack story.

    Better alignment with compliance and audit needs

    XDR strengthens regulatory compliance through detailed documentation and monitoring capabilities. The platform generates comprehensive logs and audit trails of security events, user activities, and system changes, helping organizations demonstrate compliance to regulators. Additionally, XDR’s continuous monitoring adapts to new threats and regulatory changes, ensuring consistent compliance over time. Through centralized visibility and data aggregation, XDR effectively monitors data flows and access patterns, preventing unauthorized access to sensitive information.

    Conclusion

    XDR platforms clearly represent a significant advancement in cybersecurity technology. At Seqrite, we offer a comprehensive XDR platform designed to help organizations simplify their SOC operations, improve detection accuracy, and automate responses. If you are looking to strengthen your cybersecurity posture with an effective and scalable XDR solution, Seqrite XDR is built to help you stay ahead of evolving threats.




    Source link