Tag: How

  • How to integrate Feature Flags stored on Azure App Configuration in an ASP.NET Core Application | Code4IT



    Learn how to use Feature Flags in ASP.NET Core apps and read values from Azure App Configuration. Understand how to use filters, like the Percentage filter, to control feature activation, and learn how to take full control of the cache expiration of the values.


    Feature Flags let you remotely control the activation of features without code changes. They help you to test, release, and manage features safely and quickly by driving changes using centralized configurations.

    In a previous article, we learned how to integrate Feature Flags in ASP.NET Core applications. Also, a while ago, we learned how to integrate Azure App Configuration in an ASP.NET Core application.

    In this article, we are going to join the two streams in a single article: in fact, we will learn how to manage Feature Flags using Azure App Configuration to centralize our configurations.

    It’s a sort of evolution from the previous article. Instead of changing the static configurations and redeploying the whole application, we are going to move the Feature Flags to Azure so that you can enable or disable those flags in just one click.

    A recap of Feature Flags read from the appsettings file

    Let’s reuse the example shown in the previous article.

    We have an ASP.NET Core application (in that case, we were building a Razor application, but it’s not important for the sake of this article), with some configurations defined in the appsettings file under the FeatureManagement key:

    {
      "FeatureManagement": {
        "Header": true,
        "Footer": true,
        "PrivacyPage": false,
        "ShowPicture": {
          "EnabledFor": [
            {
              "Name": "Percentage",
              "Parameters": { "Value": 60 }
            }
          ]
        }
      }
    }
    

    We already dove deep into Feature Flags in an ASP.NET Core application in the previous article. However, let me summarize it.

    First of all, you have to define your flags in the appsettings.json file using the structure we saw before.

    To use Feature Flags in ASP.NET Core you have to install the Microsoft.FeatureManagement.AspNetCore NuGet package.
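
    For example, from the .NET CLI:

    dotnet add package Microsoft.FeatureManagement.AspNetCore
    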

    Then, you have to tell ASP.NET to use Feature Flags by calling:

    builder.Services.AddFeatureManagement();
    

    Finally, you can consume those flags in three ways (a minimal sketch follows right after this list):

    • inject the IFeatureManager interface and call IsEnabledAsync;
    • use the FeatureGate attribute on a Controller class or a Razor model;
    • use the <feature> tag in a Razor page to show or hide a portion of HTML
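
    For reference, here’s a minimal sketch of the first two approaches (the controller is just an example; the feature names match the appsettings file above):

    using Microsoft.AspNetCore.Mvc;
    using Microsoft.FeatureManagement;
    using Microsoft.FeatureManagement.Mvc;
    
    public class HomeController : Controller
    {
        private readonly IFeatureManager _featureManager;
    
        public HomeController(IFeatureManager featureManager)
        {
            _featureManager = featureManager;
        }
    
        public async Task<IActionResult> Index()
        {
            // 1. Ask the feature manager directly
            ViewData["ShowHeader"] = await _featureManager.IsEnabledAsync("Header");
            return View();
        }
    
        // 2. Gate a whole action (or Controller) behind a flag
        [FeatureGate("PrivacyPage")]
        public IActionResult Privacy() => View();
    }
    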

    How to create Feature Flags on Azure App Configuration

    We are ready to move our Feature Flags to Azure App Configuration. Needless to say, you need an Azure subscription 😉

    Log in to the Azure Portal, head to “Create a resource”, and create a new App Configuration:

    Azure App configuration in the Marketplace

    I’m going to reuse the same instance I created in the previous article – you can see the full details in the How to create an Azure App Configuration instance section.

    Now we have to configure the same keys defined in the appsettings file: Header, Footer, and PrivacyPage.

    Open the App Configuration instance and locate the “Feature Manager” menu item in the left panel. This is the central place for creating, removing, and managing your Feature Flags. Here, you can see that I have already added the Header and Footer, and you can see their current state: “Footer” is enabled, while “Header” is not.

    Feature Flags manager dashboard

    How can I add the PrivacyPage flag? It’s elementary: click the “Create” button and fill in the fields.

    You have to define a Name and a Key (they can also be different), and if you want, you can add a Label and a Description. You can also define whether the flag should be active by checking the “Enable feature flag” checkbox.

    Feature Flag definition form

    Read Feature Flags from Azure App Configuration in an ASP.NET Core application

    It’s time to integrate Azure App Configuration with our ASP.NET Core application.

    Before moving to the code, we have to locate the connection string and store it somewhere.

    Head back to the App Configuration resource and locate the “Access keys” menu item under the “Settings” section.

    Access Keys page with connection strings

    From here, copy the connection string (I suggest that you use the Read-only Keys) and store it somewhere.
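
    A common option is to keep it out of source control – for example, in user secrets or an environment variable – and read it at startup. Here’s a hypothetical setup that assumes the value is stored under the ConnectionStrings:AppConfig key:

    // e.g. dotnet user-secrets set "ConnectionStrings:AppConfig" "<your-connection-string>"
    string connectionString = builder.Configuration.GetConnectionString("AppConfig");
    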

    Before proceeding, you have to install the Microsoft.Azure.AppConfiguration.AspNetCore NuGet package.

    Now, we can add Azure App Configuration as a source for our configurations by connecting to the connection string and by declaring that we are going to use Feature Flags:

    builder.Configuration.AddAzureAppConfiguration(options =>
        options.Connect(connectionString).UseFeatureFlags()
    );
    

    That’s not enough. We need to tell ASP.NET that we are going to consume these configurations by registering the related services:

    builder.Services.AddAzureAppConfiguration();
    
    builder.Services.AddFeatureManagement();
    

    Finally, once we have built our application with the usual builder.Build(), we have to add the Azure App Configuration middleware:

    app.UseAzureAppConfiguration();
    

    To try it out, run the application and validate that the flags are being applied. Then, enable or disable those flags on Azure, restart the application, and check that the changes have taken effect. Alternatively, you can wait 30 seconds for the cached flag values to be refreshed and see the changes reflected in your application.

    Using the Percentage filter on Azure App Configuration

    Suppose you want to enable a functionality only for a percentage of sessions (sessions, not users!). In that case, you can use the Percentage filter.

    The previous article had a specific section dedicated to the PercentageFilter, so you might want to check it out.

    As a recap, we defined the flag as:

    {
      "ShowPicture": {
        "EnabledFor": [
          {
            "Name": "Percentage",
            "Parameters": {
              "Value": 60
            }
          }
        ]
      }
    }
    

    And added the PercentageFilter filter to ASP.NET with:

    builder.Services.AddFeatureManagement()
        .AddFeatureFilter<PercentageFilter>();
    

    Clearly, we can define such flags on Azure as well.

    Head back to the Azure Portal. This time, instead of just toggling a flag, you have to add a Feature Filter to a new or existing Feature Flag. Even though the PercentageFilter ships out-of-the-box with the FeatureManagement NuGet package, it is not listed among the built-in filters on the Azure Portal.

    You have to define the filter with the following values:

    • Filter Type must be “Custom”;
    • Custom filter name must be “Percentage”
    • You must add a new key, “Value”, and set its value to “60”.

    Custom filter used to create Percentage Filter

    The configuration we just added reflects the JSON value we previously had in the appsettings file: 60% of the requests will activate the flag, while the remaining 40% will not.

    Define the cache expiration interval for Feature Flags

    By default, Feature Flags are stored in an internal cache for 30 seconds.

    Sometimes, it’s not the best choice for your project; you may prefer a longer duration to avoid additional calls to the App Configuration platform; other times, you’d like to have the changes immediately available.

    You can then define the cache expiration interval you need by configuring the options for the Feature Flags:

    builder.Configuration.AddAzureAppConfiguration(options =>
        options.Connect(connectionString).UseFeatureFlags(featureFlagOptions =>
        {
            featureFlagOptions.CacheExpirationInterval = TimeSpan.FromSeconds(10);
        })
    );
    

    This way, Feature Flag values are stored in the internal cache for 10 seconds. Then, when you reload the page, the configurations are reread from Azure App Configuration and the flags are applied with the new values.

    Further readings

    This is the final article of a path I built during these months to explore how to use configurations in ASP.NET Core.

    We started by learning how to set configuration values in an ASP.NET Core application, as explained here:

    🔗 3 (and more) ways to set configuration values in ASP.NET Core

    Then, we learned how to read and use them with the IOptions family:

    🔗 Understanding IOptions, IOptionsMonitor, and IOptionsSnapshot in ASP.NET Core

    From here, we learned how to read the same configurations from Azure App Configuration, to centralize our settings:

    🔗 Azure App Configuration and ASP.NET Core API: a smart and secure way to manage configurations | Code4IT

    Then, we configured our applications to automatically refresh the configurations using a Sentinel value:

    🔗 How to automatically refresh configurations with Azure App Configuration in ASP.NET Core

    Finally, we introduced Feature Flags in our apps:

    🔗 Feature Flags 101: A Guide for ASP.NET Core Developers | Code4IT

    And then we got to this article!

    This article first appeared on Code4IT 🐧

    Wrapping up

    In this article, we have configured an ASP.NET Core application to read the Feature Flags stored on Azure App Configuration.

    Here’s the minimal code you need to add Feature Flags for ASP.NET Core API Controllers:

    var builder = WebApplication.CreateBuilder(args);
    
    string connectionString = "my connection string";
    
    builder.Services.AddControllers();
    
    builder.Configuration.AddAzureAppConfiguration(options =>
        options.Connect(connectionString)
        .UseFeatureFlags(featureFlagOptions =>
            {
                featureFlagOptions.CacheExpirationInterval = TimeSpan.FromSeconds(10);
            }
        )
    );
    
    builder.Services.AddAzureAppConfiguration();
    
    builder.Services.AddFeatureManagement()
        .AddFeatureFilter<PercentageFilter>();
    
    var app = builder.Build();
    
    app.UseRouting();
    app.UseAzureAppConfiguration();
    app.MapControllers();
    app.Run();
    

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧






  • How to create Unit Tests for Model Validation | Code4IT



    Model validation is fundamental to any project: it brings security and robustness, acting as a first shield against an invalid state.

    You should then add Unit Tests focused on model validation. In fact, when defining the input model, you should always consider both the valid and, even more, the invalid models, making sure that all the invalid models are rejected.

    BDD is a good approach for this scenario, and you can use TDD to implement it gradually.

    Okay, but how can you validate that the models and model attributes you defined are correct?

    Let’s define a simple model:

    public class User
    {
        [Required]
        [MinLength(3)]
        public string FirstName { get; set; }
    
        [Required]
        [MinLength(3)]
        public string LastName { get; set; }
    
        [Range(18, 100)]
        public int Age { get; set; }
    }
    

    Have we defined our model correctly? Are we covering all the edge cases? A well-written Unit Test suite is our best friend here!

    We have two choices: we can write Integration Tests to send requests to our system, which is running an in-memory server, and check the response we receive. Or we can use the internal Validator class, the one used by ASP.NET to validate input models, to create slim and fast Unit Tests. Let’s use the second approach.

    Here’s a utility method we can use in our tests:

    public static IList<ValidationResult> ValidateModel(object model)
    {
        var results = new List<ValidationResult>();
    
        var validationContext = new ValidationContext(model, null, null);
    
        Validator.TryValidateObject(model, validationContext, results, true);
    
        if (model is IValidatableObject validatableModel)
           results.AddRange(validatableModel.Validate(validationContext));
    
        return results;
    }
    

    In short, we create a validation context without any external dependency, focused only on the input model: new ValidationContext(model, null, null).

    Next, we validate each field by calling TryValidateObject and store the validation results in a list, results.

    Finally, if the model implements the IValidatableObject interface, which exposes the Validate method, we call that Validate() method and append the returned validation errors to the same results list.

    As you can see, we can handle both validation coming from attributes on the fields, such as [Required], and custom validation defined in the model class’s Validate() method.
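
    For example, a hypothetical model that combines validation attributes with a custom cross-field rule via IValidatableObject (not part of the original example) could look like this:

    // requires System.ComponentModel.DataAnnotations, like the User model above
    public class BookingRequest : IValidatableObject
    {
        [Required]
        public DateTime CheckIn { get; set; }
    
        [Required]
        public DateTime CheckOut { get; set; }
    
        public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
        {
            // A rule that attributes alone cannot express
            if (CheckOut <= CheckIn)
                yield return new ValidationResult(
                    "CheckOut must be after CheckIn",
                    new[] { nameof(CheckOut) });
        }
    }
    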

    Now, we can use this method to verify whether the validation passes and, in case it fails, which errors are returned:

    [Test]
    public void User_ShouldPassValidation_WhenModelIsValid()
    {
        var model = new User { FirstName = "Davide", LastName = "Bellone", Age = 32 };
        var validationResult = ModelValidationHelper.ValidateModel(model);
        Assert.That(validationResult, Is.Empty);
    }
    
    [Test]
    public void User_ShouldNotPassValidation_WhenLastNameIsEmpty()
    {
        var model = new User { FirstName = "Davide", LastName = null, Age = 32 };
        var validationResult = ModelValidationHelper.ValidateModel(model);
        Assert.That(validationResult, Is.Not.Empty);
    }
    
    
    [Test]
    public void User_ShouldNotPassValidation_WhenAgeIsLessThan18()
    {
        var model = new User { FirstName = "Davide", LastName = "Bellone", Age = 10 };
        var validationResult = ModelValidationHelper.ValidateModel(model);
        Assert.That(validationResult, Is.Not.Empty);
    }
    

    Further readings

    Model Validation allows you to create more robust APIs. To improve robustness, you can follow Postel’s law:

    🔗 Postel’s law for API Robustness | Code4IT

    This article first appeared on Code4IT 🐧

    Model validation, in my opinion, is one of the cases where Unit Tests are way better than Integration Tests. This is a perfect example of Testing Diamond, the best (in most cases) way to structure a test suite:

    🔗 Testing Pyramid vs Testing Diamond (and how they affect Code Coverage) | Code4IT

    If you still prefer writing Integration Tests for this kind of operation, you can rely on the WebApplicationFactory class and use it in your NUnit tests:

    🔗 Advanced Integration Tests for .NET 7 API with WebApplicationFactory and NUnit | Code4IT

    Wrapping up

    Model validation is crucial. Testing the correctness of model validation can make or break your application. Please don’t skip it!

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧






  • OhMyPosh and CascadiaCode not working on PowerShell? How to fix it in Windows 10 and Windows 11 Integrated Terminal | Code4IT



    Learn how to integrate Oh My Posh, a cross-platform tool that lets you create beautiful and informative prompts for PowerShell.


    The content of the blog you are reading right now is stored in a Git repository. Every time I create an article, I create a new Git Branch to isolate the changes.

    To generate the skeleton of the articles, I use the command line (well, I generally use PowerShell); in particular, given that I’m using both Windows 10 and Windows 11 – depending on the laptop I’m working on – I use the Integrated Terminal, which allows you to define the style, the fonts, and so on of every terminal configured in the settings.

    Windows terminal with default style

    The default profile is pretty basic: no info is shown except for the current path – I want to customize the appearance.

    I want to show the status of the Git repository, including:

    • repository name
    • branch name
    • outgoing commits

    There are lots of articles that teach how to use OhMyPosh with Cascadia Code. Unfortunately, I couldn’t make them work.

    In this article, I teach you how I fixed it on my local machine. It’s a step-by-step guide I wrote while installing it on my local machine. I hope it works for you as well!

    Step 1: Create the $PROFILE file if it does not exist

    In PowerShell, you can customize the current execution by updating the $PROFILE file.

    Clearly, you first have to check if the profile file exists.

    Open the PowerShell and type:

    $PROFILE # You can also use $profile lowercase - it's the same!
    

    This command shows you the expected path of this file. The file, if it exists, is stored in that location.

    The Profile file is expected to be under a specific folder whose path can be found using the $PROFILE command

    In this case, the $Profile file should be available under the folder C:\Users\d.bellone\Documents\WindowsPowerShell. In my case, it does not exist, though!

    The Profile file is expected to be under a specific path, but it may not exist

    Therefore, you must create it manually: head to that folder and create a file named Microsoft.PowerShell_profile.ps1.

    Note: it might happen that not even the WindowsPowerShell folder exists. If it’s missing, well, create it!
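
    If you prefer staying in the shell, a single command can create both the missing folder and the (empty) profile file:

    New-Item -ItemType File -Path $PROFILE -Force # -Force also creates the missing WindowsPowerShell folder
    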

    Step 2: Install OhMyPosh using Winget, Scoop, or PowerShell

    To use OhMyPosh, we have to – of course – install it.

    As explained in the official documentation, we have three ways to install OhMyPosh, depending on the tool you prefer.

    If you use Winget, just run:

    winget install JanDeDobbeleer.OhMyPosh -s winget
    

    If you prefer Scoop, the command is:

    scoop install https://github.com/JanDeDobbeleer/oh-my-posh/releases/latest/download/oh-my-posh.json
    

    And, if you like working with PowerShell, execute:

    Set-ExecutionPolicy Bypass -Scope Process -Force; Invoke-Expression ((New-Object System.Net.WebClient).DownloadString('https://ohmyposh.dev/install.ps1'))
    

    I used Winget, and you can see the installation process here:

    Install OhMyPosh with Winget

    Now, to apply these changes, you have to restart the PowerShell.

    Step 3: Add OhMyPosh to the PowerShell profile

    Open the Microsoft.PowerShell_profile.ps1 file and add the following line:

    oh-my-posh init pwsh | Invoke-Expression
    

    This command is executed every time you open the PowerShell with the default profile, and it initializes OhMyPosh to have it available during the current session.

    Now, you can save and close the file.

    Hint: you can open the profile file with Notepad by running notepad $PROFILE.

    Step 4: Set the Execution Policy to RemoteSigned

    Restart the terminal. In all probability, you will see an error like this:

    “The file .ps1 is not digitally signed” error

    The error message

    The file <path>\Microsoft.PowerShell_profile.ps1 is
    not digitally signed. You cannot run this script on the current system

    means that PowerShell does not trust the script it’s trying to load.

    To see which Execution Policy is currently active, run:
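
    Get-ExecutionPolicy
    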

    You’ll probably see that the value is AllSigned.

    To enable the execution of scripts created on your local machine, you have to set the Execution Policy value to RemoteSigned by running this command in a PowerShell instance opened in administrator mode:

    Set-ExecutionPolicy RemoteSigned
    

    Let’s see the definition of the RemoteSigned Execution policy as per SQLShack’s article:

    This is also a safe PowerShell Execution policy to set in an enterprise environment. This policy dictates that any script that was not created on the system that the script is running on, should be signed. Therefore, this will allow you to write your own script and execute it.

    So, yeah, feel free to proceed and set the new Execution policy to have your PowerShell profile loaded correctly every time you open a new PowerShell instance.

    Now, OhMyPosh can run in the current profile.

    Head to a Git repository and notice that… It’s not working!🤬 Or, well, we have the Git information, but we are missing some icons and glyphs.

    Oh My Posh is loaded correctly, but some icons are missing due to the wrong font

    Step 5: Use CaskaydiaCove, not Cascadia Code, as a font

    We still have to install the correct font with the missing icons.

    We will install it using Chocolatey, a package manager for Windows.

    To check if you have it installed, run:
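
    choco --version
    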

    Now, to install the correct font family, open a PowerShell with administration privileges and run:

    choco install cascadia-code-nerd-font
    

    Once the installation is complete, you must tell Integrated Terminal to use the correct font by following these steps:

    1. open the Settings page (by hitting CTRL + ,)
    2. select the profile you want to update (in my case, I’ll update the default profile)
    3. open the Appearance section
    4. under Font face, select CaskaydiaCove Nerd Font

    PowerShell profile settings - Font Face should be CaskaydiaCove Nerd Font

    Now close the Integrated Terminal to apply the changes.

    Open it again, navigate to a Git repository, and admire the result.

    OhMyPosh with icons and fonts loaded correctly

    Further readings

    The first time I read about OhMyPosh, it was on Scott Hanselman’s blog. I couldn’t make his solution work – and that’s the reason I wrote this article. However, in his article, he shows how he customized his own Terminal with more glyphs and icons, so you should give it a read.

    🔗 My Ultimate PowerShell prompt with Oh My Posh and the Windows Terminal | Scott Hanselman’s blog

    We customized our PowerShell profile with just one simple configuration. However, you can do a lot more. You can read Ruud’s in-depth article about PowerShell profiles.

    🔗 How to Create a PowerShell Profile – Step-by-Step | Lazyadmin

    One of the core parts of this article is that we have to use CaskaydiaCove as a font instead of the (in)famous Cascadia Code. But why?

    🔗 Why CaskaydiaCove and not Cascadia Code? | GitHub

    Finally, as I said at the beginning of this article, I use Git and Git Branches to handle the creation and management of my blog articles. That’s just the tip of the iceberg! 🏔️

    If you want to steal my (previous) workflow, have a look at the behind-the-scenes of my blogging process (note: in the meantime, a lot of things have changed, but these steps can still be helpful for you):

    🔗 From idea to publishing, and beyond: how I automated my blogging workflow with GitHub, PowerShell, and Azure | Code4IT

    This article first appeared on Code4IT 🐧

    Wrapping up

    In this article, we learned how to install OhMyPosh in PowerShell and overcome all the errors you (well, I) don’t see described in other articles.

    I wrote this step-by-step article alongside installing these tools on my local machine, so I’m confident the solution will work.

    Did this solution work for you? Let me know! 📨

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧






  • How To Create Kinetic Image Animations with React-Three-Fiber

    How To Create Kinetic Image Animations with React-Three-Fiber



    For the past few months, I’ve been exploring different kinetic motion designs with text and images. The style looks very intriguing, so I decided to create some really cool organic animations using images and React Three Fiber.

    In this article, we’ll learn how to create the following animation using Canvas2D and React Three Fiber.

    Setting Up the View & Camera

    The camera’s field of view (FOV) plays a huge role in this project. Let’s keep it very low so it looks like an orthographic camera. You can experiment with different perspectives later. I prefer using a perspective camera over an orthographic one because we can always try different FOVs. For a more detailed implementation, check the source code.

    <PerspectiveCamera makeDefault fov={7} position={[0, 0, 70]} near={0.01} far={100000} />

    Setting Up Our 3D Shapes

    First, let’s create and position 3D objects that will display our images. For this example, we need to make 2 components:

    Billboard.tsx – This is a cylinder that will show our stack of images

    'use client';
    
    import { useRef } from 'react';
    import * as THREE from 'three';
    
    function Billboard({ radius = 5, ...props }) {
        const ref = useRef(null);
    
        return (
            <mesh ref={ref} {...props}>
                <cylinderGeometry args={[radius, radius, 2, 100, 1, true]} />
                <meshBasicMaterial color="red" side={THREE.DoubleSide} />
            </mesh>
        );
    }
    
    export default Billboard;

    Banner.tsx – This is another cylinder that will work like a moving banner

    'use client';
    
    import * as THREE from 'three';
    import { useRef } from 'react';
    
    function Banner({ radius = 1.6, ...props }) {
        const ref = useRef(null);
    
        return (
            <mesh ref={ref} {...props}>
                <cylinderGeometry
                args={[radius, radius, radius * 0.07, radius * 80, radius * 10, true]}
                />
                <meshBasicMaterial
                color="blue"
                side={THREE.DoubleSide}
                />
            </mesh>
        );
    }
    
    export default Banner;

    Once we have our components ready, we can use them on our page.

    Now let’s build the whole shape:

    1. Create a wrapper group – We’ll make a group that wraps all our components. This will help us rotate everything together later.

    page.jsx

    'use client';
    
    import styles from './page.module.scss';
    import Billboard from '@/components/webgl/Billboard/Billboard';
    import Banner from '@/components/webgl/Banner/Banner';
    import { View } from '@/webgl/View';
    import { PerspectiveCamera } from '@react-three/drei';
    
    export default function Home() {
        return (
            <div className={styles.page}>
                <View className={styles.view} orbit={false}>
                <PerspectiveCamera makeDefault fov={7} position={[0, 0, 70]} near={0.01} far={100000} /> 
                    <group>
    
                    </group>
                </View>
            </div>
        );
    }

    2. Render Billboard and Banner components in the loop – Inside our group, we’ll create a loop to render our Billboards and Banners multiple times.

    page.jsx

    'use client';
    
    import styles from './page.module.scss';
    import Billboard from '@/components/webgl/Billboard/Billboard';
    import Banner from '@/components/webgl/Banner/Banner';
    import { View } from '@/webgl/View';
    import { PerspectiveCamera } from '@react-three/drei';
    
    export default function Home() {
        return (
            <div className={styles.page}>
                <View className={styles.view} orbit={false}>
                <PerspectiveCamera makeDefault fov={7} position={[0, 0, 70]} near={0.01} far={100000} />
                    <group>
                        {Array.from({ length: COUNT }).map((_, index) => [
                            <Billboard
                            key={`billboard-${index}`}
                            radius={5}
                            />,
                            <Banner
                            key={`banner-${index}`}
                            radius={5}
                            />,
                        ])}
                    </group>
                </View>
            </div>
        );
    }

    3. Stack them up – We’ll use the index from our loop and the y position to stack our items on top of each other. Here’s how it looks so far:

    page.jsx

    'use client';
    
    import styles from './page.module.scss';
    import Billboard from '@/components/webgl/Billboard/Billboard';
    import Banner from '@/components/webgl/Banner/Banner';
    import { View } from '@/webgl/View';
    import { PerspectiveCamera } from '@react-three/drei';
    
    const COUNT = 10;
    const GAP = 3.2;
    
    export default function Home() {
        return (
            <div className={styles.page}>
                <View className={styles.view} orbit={false}>
                <PerspectiveCamera makeDefault fov={7} position={[0, 0, 70]} near={0.01} far={100000} />
                    <group>
                        {Array.from({ length: COUNT }).map((_, index) => [
                            <Billboard
                            key={`billboard-${index}`}
                            radius={5}
                            position={[0, (index - (Math.ceil(COUNT / 2) - 1)) * GAP, 0]}
                            />,
                            <Banner
                            key={`banner-${index}`}
                            radius={5}
                            position={[0, (index - (Math.ceil(COUNT / 2) - 1)) * GAP - GAP * 0.5, 0]}
                            />,
                        ])}
                    </group>
                </View>
            </div>
        );
    }

    4. Add some rotation – Let’s rotate things a bit! First, I’ll hard-code the rotation of our banners to make them more curved and fit nicely with the Billboard component. We’ll also make the radius a bit bigger.

    page.jsx

    'use client';
    
    import styles from './page.module.scss';
    import Billboard from '@/components/webgl/Billboard/Billboard';
    import Banner from '@/components/webgl/Banner/Banner';
    import { View } from '@/webgl/View';
    import { PerspectiveCamera } from '@react-three/drei';
    
    const COUNT = 10;
    const GAP = 3.2;
    
    export default function Home() {
        return (
            <div className={styles.page}>
                <View className={styles.view} orbit={false}>
                <PerspectiveCamera makeDefault fov={7} position={[0, 0, 70]} near={0.01} far={100000} />
                    <group>
                        {Array.from({ length: COUNT }).map((_, index) => [
                            <Billboard
                            key={`billboard-${index}`}
                            radius={5}
                            position={[0, (index - (Math.ceil(COUNT / 2) - 1)) * GAP, 0]}
                            rotation={[0, index * Math.PI * 0.5, 0]} // <-- rotation of the billboard
                            />,
                            <Banner
                            key={`banner-${index}`}
                            radius={5}
                            rotation={[0, 0, 0.085]} // <-- rotation of the banner
                            position={[0, (index - (Math.ceil(COUNT / 2) - 1)) * GAP - GAP * 0.5, 0]}
                            />,
                        ])}
                    </group>
                </View>
            </div>
        );
    }

    5. Tilt the whole thing – Now let’s rotate our entire group to make it look like the Leaning Tower of Pisa.

    page.jsx

    'use client';
    
    import styles from './page.module.scss';
    import Billboard from '@/components/webgl/Billboard/Billboard';
    import Banner from '@/components/webgl/Banner/Banner';
    import { View } from '@/webgl/View';
    import { PerspectiveCamera } from '@react-three/drei';
    
    const COUNT = 10;
    const GAP = 3.2;
    
    export default function Home() {
        return (
            <div className={styles.page}>
                <View className={styles.view} orbit={false}>
                <PerspectiveCamera makeDefault fov={7} position={[0, 0, 70]} near={0.01} far={100000} />
                    <group rotation={[-0.15, 0, -0.2]}> // <-- rotate the group
                        {Array.from({ length: COUNT }).map((_, index) => [
                            <Billboard
                            key={`billboard-${index}`}
                            radius={5}
                            position={[0, (index - (Math.ceil(COUNT / 2) - 1)) * GAP, 0]}
                            rotation={[0, index * Math.PI * 0.5, 0]}
                            />,
                            <Banner
                            key={`banner-${index}`}
                            radius={5}
                            rotation={[0, 0, 0.085]}
                            position={[0, (index - (Math.ceil(COUNT / 2) - 1)) * GAP - GAP * 0.5, 0]}
                            />,
                        ])}
                    </group>
                </View>
            </div>
        );
    }

    6. Perfect! – Our 3D shapes are all set up. Now we can add our images to them.

    Creating a Texture from Our Images Using Canvas

    Here’s the cool part: we’ll put all our images onto a canvas, then use that canvas as a texture on our Billboard shape.

    To make this easier, I created some helper functions that simplify the whole process.

    getCanvasTexture.js

    import * as THREE from 'three';
    
    /**
    * Preloads an image and calculates its dimensions
    */
    async function preloadImage(imageUrl, axis, canvasHeight, canvasWidth) {
        const img = new Image();
    
        img.crossOrigin = 'anonymous';
    
        await new Promise((resolve, reject) => {
            img.onload = () => resolve();
            img.onerror = () => reject(new Error(`Failed to load image: ${imageUrl}`));
            img.src = imageUrl;
        });
    
        const aspectRatio = img.naturalWidth / img.naturalHeight;
    
        let calculatedWidth;
        let calculatedHeight;
    
        if (axis === 'x') {
            // Horizontal layout: scale to fit canvasHeight
            calculatedHeight = canvasHeight;
            calculatedWidth = canvasHeight * aspectRatio;
            } else {
            // Vertical layout: scale to fit canvasWidth
            calculatedWidth = canvasWidth;
            calculatedHeight = canvasWidth / aspectRatio;
        }
    
        return { img, width: calculatedWidth, height: calculatedHeight };
    }
    
    function calculateCanvasDimensions(imageData, axis, gap, canvasHeight, canvasWidth) {
        if (axis === 'x') {
            const totalWidth = imageData.reduce(
            (sum, data, index) => sum + data.width + (index > 0 ? gap : 0), 0);
    
            return { totalWidth, totalHeight: canvasHeight };
        } else {
            const totalHeight = imageData.reduce(
            (sum, data, index) => sum + data.height + (index > 0 ? gap : 0), 0);
    
            return { totalWidth: canvasWidth, totalHeight };
        }
    }
    
    function setupCanvas(canvasElement, context, dimensions) {
        const { totalWidth, totalHeight } = dimensions;
        const devicePixelRatio = Math.min(window.devicePixelRatio || 1, 2);
    
        canvasElement.width = totalWidth * devicePixelRatio;
        canvasElement.height = totalHeight * devicePixelRatio;
    
        if (devicePixelRatio !== 1) context.scale(devicePixelRatio, devicePixelRatio);
    
        context.fillStyle = '#ffffff';
        context.fillRect(0, 0, totalWidth, totalHeight);
    }
    
    function drawImages(context, imageData, axis, gap) {
        let currentX = 0;
        let currentY = 0;
    
        context.save();
    
        for (const data of imageData) {
            context.drawImage(data.img, currentX, currentY, data.width, data.height);
    
            if (axis === 'x') currentX += data.width + gap;
            else currentY += data.height + gap;
        }
    
        context.restore();
    }
    
    function createTextureResult(canvasElement, dimensions) {
        const texture = new THREE.CanvasTexture(canvasElement);
        texture.needsUpdate = true;
        texture.wrapS = THREE.RepeatWrapping;
        texture.wrapT = THREE.ClampToEdgeWrapping;
        texture.generateMipmaps = false;
        texture.minFilter = THREE.LinearFilter;
        texture.magFilter = THREE.LinearFilter;
    
        return {
            texture,
            dimensions: {
                width: dimensions.totalWidth,
                height: dimensions.totalHeight,
                aspectRatio: dimensions.totalWidth / dimensions.totalHeight,
            },
        };
    }
    
    export async function getCanvasTexture({
        images,
        gap = 10,
        canvasHeight = 512,
        canvasWidth = 512,
        canvas,
        ctx,
        axis = 'x',
    }) {
        if (!images.length) throw new Error('No images');
    
        // Create canvas and context if not provided
        const canvasElement = canvas || document.createElement('canvas');
        const context = ctx || canvasElement.getContext('2d');
    
        if (!context) throw new Error('No context');
    
        // Preload all images in parallel
        const imageData = await Promise.all(
            images.map((image) => preloadImage(image.url, axis, canvasHeight, canvasWidth))
        );
    
        // Calculate total canvas dimensions
        const dimensions = calculateCanvasDimensions(imageData, axis, gap, canvasHeight, canvasWidth);
    
        // Setup canvas
        setupCanvas(canvasElement, context, dimensions);
    
        // Draw all images
        drawImages(context, imageData, axis, gap);
    
        // Create and return texture result
        return createTextureResult(canvasElement, dimensions)
    }

    Then we can also create a useCollageTexture hook that we can easily use in our components.

    useCollageTexture.jsx

    import { useState, useEffect, useCallback } from 'react';
    import { getCanvasTexture } from '@/webgl/helpers/getCanvasTexture';
    
    export function useCollageTexture(images, options = {}) {
        const [textureResults, setTextureResults] = useState(null);
        const [isLoading, setIsLoading] = useState(true);
        const [error, setError] = useState(null);
    
        const { gap = 0, canvasHeight = 512, canvasWidth = 512, axis = 'x' } = options;
    
        const createTexture = useCallback(async () => {
            try {
                setIsLoading(true);
                setError(null);
    
                const result = await getCanvasTexture({
                    images,
                    gap,
                    canvasHeight,
                    canvasWidth,
                    axis,
                });
    
                setTextureResults(result);
            } catch (err) {
                setError(err instanceof Error ? err : new Error('Failed to create texture'));
            } finally {
                setIsLoading(false);
            }
        }, [images, gap, canvasHeight, canvasWidth, axis]);
    
        useEffect(() => {
            if (images.length > 0) createTexture();
        }, [images.length, createTexture]);
    
        return {
            texture: textureResults?.texture || null,
            dimensions: textureResults?.dimensions || null,
            isLoading,
            error,
        };
    }
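
    Note that getCanvasTexture only relies on each entry exposing a url field, so the images module used in the next snippet can be as simple as this (file name and paths are placeholders):

    // data/images.js
    const images = [
        { url: '/images/photo-01.jpg' },
        { url: '/images/photo-02.jpg' },
        { url: '/images/photo-03.jpg' },
    ];
    
    export default images;
    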

    Adding the Canvas to Our Billboard

    Now let’s use our useCollageTexture hook on our page. We’ll create some simple loading logic. It takes a second to fetch all the images and put them onto the canvas. Then we’ll pass our texture and the canvas dimensions into the Billboard component.

    page.jsx

    'use client';
    
    import styles from './page.module.scss';
    import Billboard from '@/components/webgl/Billboard/Billboard';
    import Banner from '@/components/webgl/Banner/Banner';
    import Loader from '@/components/ui/modules/Loader/Loader';
    import images from '@/data/images';
    import { View } from '@/webgl/View';
    import { PerspectiveCamera } from '@react-three/drei';
    import { useCollageTexture } from '@/hooks/useCollageTexture';
    
    const COUNT = 10;
    const GAP = 3.2;
    
    export default function Home() {
        const { texture, dimensions, isLoading } = useCollageTexture(images); // <-- getting the texture and dimensions from the useCollageTexture hook
    
        if (isLoading) return <Loader />; // <-- showing the loader when the texture is loading
    
        return (
            <div className={styles.page}>
                <View className={styles.view} orbit={false}>
                    <PerspectiveCamera makeDefault fov={7} position={[0, 0, 100]} near={0.01} far={100000} />
                    <group rotation={[-0.15, 0, -0.2]}>
                        {Array.from({ length: COUNT }).map((_, index) => [
                            <Billboard
                                key={`billboard-${index}`}
                                radius={5}
                                rotation={[0, index * Math.PI * 0.5, 0]}
                                position={[0, (index - (Math.ceil(COUNT / 2) - 1)) * GAP, 0]}
                                texture={texture} // <--passing the texture to the billboard
                                dimensions={dimensions} // <--passing the dimensions to the billboard
                            />,
                            <Banner
                                key={`banner-${index}`}
                                radius={5.035}
                                rotation={[0, 0, 0.085]}
                                position={[
                                    0,
                                    (index - (Math.ceil(COUNT / 2) - 1)) * GAP - GAP * 0.5,
                                    0,
                                ]}
                            />,
                        ])}
                    </group>
                </View>
            </div>
        );
    }

    Inside the Billboard component, we need to properly map this texture to make sure everything fits correctly. The width of our canvas will match the circumference of the cylinder, and we’ll center the y position of the texture. This way, all the images keep their resolution and don’t get squished or stretched.

    Billboard.jsx

    'use client';
    
    import * as THREE from 'three';
    import { useRef } from 'react';  
    
    function setupCylinderTextureMapping(texture, dimensions, radius, height) {
        const cylinderCircumference = 2 * Math.PI * radius;
        const cylinderHeight = height;
        const cylinderAspectRatio = cylinderCircumference / cylinderHeight;
    
        if (dimensions.aspectRatio > cylinderAspectRatio) {
            // Canvas is wider than cylinder proportionally
            texture.repeat.x = cylinderAspectRatio / dimensions.aspectRatio;
            texture.repeat.y = 1;
            texture.offset.x = (1 - texture.repeat.x) / 2;
        } else {
            // Canvas is taller than cylinder proportionally
            texture.repeat.x = 1;
            texture.repeat.y = dimensions.aspectRatio / cylinderAspectRatio;
        }
    
        // Center the texture
        texture.offset.y = (1 - texture.repeat.y) / 2;
    }
    
    function Billboard({ texture, dimensions, radius = 5, ...props }) {
        const ref = useRef(null);
    
        setupCylinderTextureMapping(texture, dimensions, radius, 2);
    
        return (
            <mesh ref={ref} {...props}>
                <cylinderGeometry args={[radius, radius, 2, 100, 1, true]} />
                <meshBasicMaterial map={texture} side={THREE.DoubleSide} />
            </mesh>
        );
    }
    
    export default Billboard;

    Now let’s animate them using the useFrame hook. The trick to animating these images is to just move the X offset of the texture. This gives us the effect of a rotating mesh, when really we’re just moving the texture offset.

    Billboard.jsx

    'use client';
    
    import * as THREE from 'three';
    import { useRef } from 'react';
    import { useFrame } from '@react-three/fiber';  
    
    function setupCylinderTextureMapping(texture, dimensions, radius, height) {
        const cylinderCircumference = 2 * Math.PI * radius;
        const cylinderHeight = height;
        const cylinderAspectRatio = cylinderCircumference / cylinderHeight;
    
        if (dimensions.aspectRatio > cylinderAspectRatio) {
            // Canvas is wider than cylinder proportionally
            texture.repeat.x = cylinderAspectRatio / dimensions.aspectRatio;
            texture.repeat.y = 1;
            texture.offset.x = (1 - texture.repeat.x) / 2;
        } else {
            // Canvas is taller than cylinder proportionally
            texture.repeat.x = 1;
            texture.repeat.y = dimensions.aspectRatio / cylinderAspectRatio;
        }
    
        // Center the texture
        texture.offset.y = (1 - texture.repeat.y) / 2;
    }
    
    function Billboard({ texture, dimensions, radius = 5, ...props }) {
        const ref = useRef(null);
    
        setupCylinderTextureMapping(texture, dimensions, radius, 2);
    
        useFrame((state, delta) => {
            if (texture) texture.offset.x += delta * 0.001;
        });
    
        return (
            <mesh ref={ref} {...props}>
                <cylinderGeometry args={[radius, radius, 2, 100, 1, true]} />
                <meshBasicMaterial map={texture} side={THREE.DoubleSide} />
            </mesh>
        );
    }
    
    export default Billboard;

    I think it would look even better if we made the back of the images a little darker. To do this, I created MeshImageMaterial – it’s just an extension of MeshBasicMaterial that makes our backface a bit darker.

    MeshImageMaterial.js

    import * as THREE from 'three';
    import { extend } from '@react-three/fiber';
    
    export class MeshImageMaterial extends THREE.MeshBasicMaterial {
        constructor(parameters = {}) {
            super(parameters);
            this.setValues(parameters);
        }
    
        onBeforeCompile = (shader) => {
            shader.fragmentShader = shader.fragmentShader.replace(
                '#include <color_fragment>',
                /* glsl */ `#include <color_fragment>
                if (!gl_FrontFacing) {
                vec3 blackCol = vec3(0.0);
                diffuseColor.rgb = mix(diffuseColor.rgb, blackCol, 0.7);
                }
                `
            );
        };
    }
    
    extend({ MeshImageMaterial });

    Billboard.jsx

    'use client';
    
    import * as THREE from 'three';
    import { useRef } from 'react';
    import { useFrame } from '@react-three/fiber';
    import '@/webgl/materials/MeshImageMaterial';
    
    function setupCylinderTextureMapping(texture, dimensions, radius, height) {
        const cylinderCircumference = 2 * Math.PI * radius;
        const cylinderHeight = height;
        const cylinderAspectRatio = cylinderCircumference / cylinderHeight;
    
        if (dimensions.aspectRatio > cylinderAspectRatio) {
            // Canvas is wider than cylinder proportionally
            texture.repeat.x = cylinderAspectRatio / dimensions.aspectRatio;
            texture.repeat.y = 1;
            texture.offset.x = (1 - texture.repeat.x) / 2;
        } else {
            // Canvas is taller than cylinder proportionally
            texture.repeat.x = 1;
            texture.repeat.y = dimensions.aspectRatio / cylinderAspectRatio;
        }
    
        // Center the texture
        texture.offset.y = (1 - texture.repeat.y) / 2;
    }
    
    function Billboard({ texture, dimensions, radius = 5, ...props }) {
        const ref = useRef(null);
    
        setupCylinderTextureMapping(texture, dimensions, radius, 2);
    
        useFrame((state, delta) => {
            if (texture) texture.offset.x += delta * 0.001;
        });
    
        return (
            <mesh ref={ref} {...props}>
                <cylinderGeometry args={[radius, radius, 2, 100, 1, true]} />
                <meshImageMaterial map={texture} side={THREE.DoubleSide} toneMapped={false} />
            </mesh>
        );
    }
    
    export default Billboard;

    And now we have our images moving around cylinders. Next, we’ll focus on banners (or marquees, whatever you prefer).

    Adding Texture to the Banner

    The last thing we need to fix is our Banner component. I wrapped it with this texture. Feel free to take it and edit it however you want, but remember to keep the proper dimensions of the texture.

    We simply import our texture using the useTexture hook, map it onto our material, and animate the texture offset just like we did in our Billboard component.

    Banner.jsx

    'use client';
    
    import * as THREE from 'three';
    import bannerTexture from '@/assets/images/banner.jpg';
    import { useTexture } from '@react-three/drei';
    import { useFrame } from '@react-three/fiber';
    import { useRef } from 'react';
    
    function Banner({ radius = 1.6, ...props }) {
        const ref = useRef(null);
    
        const texture = useTexture(bannerTexture.src);
        texture.wrapS = texture.wrapT = THREE.RepeatWrapping;
    
        useFrame((state, delta) => {
            if (!ref.current) return;
            const material = ref.current.material;
            if (material.map) material.map.offset.x += delta / 30;
        });
    
        return (
            <mesh ref={ref} {...props}>
                <cylinderGeometry
                    args={[radius, radius, radius * 0.07, radius * 80, radius * 10, true]}
                />
                <meshBasicMaterial
                    map={texture}
                    map-anisotropy={16}
                    map-repeat={[15, 1]}
                    side={THREE.DoubleSide}
                    toneMapped={false}
                    backfaceRepeatX={3}
                />
            </mesh>
        );
    }
    
    export default Banner;

    Nice! Now we have something cool, but I think it would look even cooler if we replaced the backface with something different. Maybe a gradient? For this, I created another extension of MeshBasicMaterial called MeshBannerMaterial. As you probably guessed, we just put a gradient on the backface. That’s it! Let’s use it in our Banner component.

    We replace the MeshBasicMaterial with MeshBannerMaterial and now it looks like this!

    MeshBannerMaterial.js

    import * as THREE from 'three';
    import { extend } from '@react-three/fiber';
    
    export class MeshBannerMaterial extends THREE.MeshBasicMaterial {
        constructor(parameters = {}) {
            super(parameters);
            this.setValues(parameters);
    
            this.backfaceRepeatX = 1.0;
    
            if (parameters.backfaceRepeatX !== undefined) {
                this.backfaceRepeatX = parameters.backfaceRepeatX;
            }
        }
    
        onBeforeCompile = (shader) => {
            shader.uniforms.repeatX = { value: this.backfaceRepeatX * 0.1 };
            shader.fragmentShader = shader.fragmentShader
            .replace(
                '#include <common>',
                /* glsl */ `#include <common>
                uniform float repeatX;
    
                vec3 pal( in float t, in vec3 a, in vec3 b, in vec3 c, in vec3 d ) {
                    return a + b*cos( 6.28318*(c*t+d) );
                }
                `
            )
            .replace(
                '#include <color_fragment>',
                /* glsl */ `#include <color_fragment>
                if (!gl_FrontFacing) {
                diffuseColor.rgb = pal(vMapUv.x * repeatX, vec3(0.5,0.5,0.5),vec3(0.5,0.5,0.5),vec3(1.0,1.0,1.0),vec3(0.0,0.10,0.20) );
                }
                `
            );
        };
    }
    
    extend({ MeshBannerMaterial });

    Banner.jsx

    'use client';
    
    import * as THREE from 'three';
    import bannerTexture from '@/assets/images/banner.jpg';
    import { useTexture } from '@react-three/drei';
    import { useFrame } from '@react-three/fiber';
    import { useRef } from 'react';
    import '@/webgl/materials/MeshBannerMaterial';
    
    function Banner({ radius = 1.6, ...props }) {
        const ref = useRef(null);
    
        const texture = useTexture(bannerTexture.src);
        texture.wrapS = texture.wrapT = THREE.RepeatWrapping;
    
        useFrame((state, delta) => {
            if (!ref.current) return;
    
            const material = ref.current.material;
            if (material.map) material.map.offset.x += delta / 30;
        });
    
        return (
            <mesh ref={ref} {...props}>
                <cylinderGeometry
                    args={[radius, radius, radius * 0.07, radius * 80, radius * 10, true]}
                />
                <meshBannerMaterial
                    map={texture}
                    map-anisotropy={16}
                    map-repeat={[15, 1]}
                    side={THREE.DoubleSide}
                    toneMapped={false}
                    backfaceRepeatX={3}
                />
            </mesh>
        );
    }
    
    export default Banner;

    And now we have it ✨

    Check out the demo

    You can experiment with this method in lots of ways. For example, I created 2 more examples with shapes I made in Blender, and mapped canvas textures on them. You can check them out here:

    Final Words

    Check out the final versions of all demos:

    I hope you enjoyed this tutorial and learned something new!

    Feel free to check out the source code for more details!




  • Top 10 Cloud Security Challenges in 2025 And How to Solve Them with Seqrite



    In today’s world, organizations are rapidly embracing cloud security to safeguard their data and operations. However, as cloud adoption grows, so do the risks. In this post, we highlight the top cloud security challenges and show how Seqrite can help you tackle them with ease.

    1.    Misconfigurations

    One of the simplest yet most dangerous mistakes is misconfiguring cloud workloads: storage buckets left public, weak IAM settings, or missing encryption. Cybercriminals actively scan for these mistakes. A small misconfiguration can lead to significant data leakage or, in the worst case, ransomware deployment. Seqrite Endpoint Protection Cloud ensures your cloud environment adheres to best-practice security settings before threats even strike.

    2.    Shared Responsibility Confusion

    The cloud model operates on shared responsibility: providers secure infrastructure, you manage your data and configurations. Too many teams skip this second part. Inadequate control over access, authentication, and setup drives serious risks. With Seqrite’s unified dashboard for access control, IAM, and policy enforcement, you stay firmly in control without getting overwhelmed.

    3.    Expanded Attack Surface

    More cloud services, more code, more APIs, more opportunities for attacks. Whether it’s serverless functions or public API endpoints, the number of access points grows quickly. Seqrite tackles this with integrated API scanning, vulnerability assessment, and real-time threat detection. Every service, even ephemeral ones, is continuously monitored.

    4.    Unauthorized Access & Account Hijacking

    Attackers often gain entry via stolen credentials, especially in shared or multi-cloud environments. Once inside, they move laterally and hijack more resources. Seqrite’s multi-factor authentication, adaptive risk scoring, and real-time anomaly detection lock out illicit access and alert you instantly.

    5.    Insufficient Data Encryption

Unencrypted data, whether at rest or in transit, is a gold mine for attackers. Industries with sensitive or regulated information, like healthcare or finance, simply can’t afford this. Seqrite ensures enterprise-grade encryption everywhere you store or transmit data and handles key management so that it’s secure and hassle-free.

    6.    Poor Visibility and Monitoring

    Without centralized visibility, security teams rely on manual cloud consoles and piecemeal logs. That slows response and leaves gaps. Seqrite solves this with a unified monitoring layer that aggregates logs and events across all your cloud environments. You get complete oversight and lightning-fast detection.

    7.     Regulatory Compliance Pressures

    Compliance with GDPR, HIPAA, PCI-DSS, DPDPA and other regulations is mandatory—but complex in multi-cloud environments. Seqrite Data Privacy simplifies compliance with continuous audits, policy enforcement, and detailed reports, helping you reduce audit stress and regulatory risk.

    8.    Staffing & Skills Gap

    Hiring cloud-native, security-savvy experts is tough. Many teams lack the expertise to monitor and secure dynamic cloud environments. Seqrite’s intuitive interface, automation, and policy templates remove much of the manual work, allowing lean IT teams to punch above their weight.

    9.    Multi-cloud Management Challenges

    Working across AWS, Azure, Google Cloud and maybe even private clouds? Each has its own models and configurations. This fragmentation creates blind spots and policy drift. Seqrite consolidates everything into one seamless dashboard, ensuring consistent cloud security policies across all environments.

    10.  Compliance in Hybrid & Multi-cloud Setups

Hybrid cloud setups introduce additional risks: cross-environment data flows, networking complexities, and inconsistent controls. Seqrite supports consistent security policy application across on-premises, private clouds, and public clouds, no matter where a workload lives.

Bring in Seqrite to keep your cloud journey safe, compliant, and hassle-free.

     



    Source link

  • Understanding void(0) in JavaScript: What It Is, Why It’s Used, and How to Fix It



    Understanding void(0) in JavaScript: What It Is, Why It’s Used, and How to Fix It



    Source link

• How to extract, create, and navigate Zip Files in C# | Code4IT

    How to extract, create, and navigate Zip Files in C# | Code4IT


    Learn how to zip and unzip compressed files with C#. Beware: it’s not as obvious as it might seem!

    Table of Contents

    Just a second! 🫷
    If you are here, it means that you are a software developer.
    So, you know that storage, networking, and domain management have a cost .

    If you want to support this blog, please ensure that you have disabled the adblocker for this site.
    I configured Google AdSense to show as few ADS as possible – I don’t want to bother you with lots of ads, but I still need to add some to pay for the resources for my site.

    Thank you for your understanding.
    Davide

    When working with local files, you might need to open, create, or update Zip files.

    In this article, we will learn how to work with Zip files in C#. We will learn how to perform basic operations such as opening, extracting, and creating a Zip file.

    The main class we will use is named ZipFile, and comes from the System.IO.Compression namespace. It’s been present in C# since .NET Framework 4.5, so we can say it’s pretty stable 😉 Nevertheless, there are some tricky points that you need to know before using this class. Let’s learn!

    Using C# to list all items in a Zip file

    Once you have a Zip file, you can access the internal items without extracting the whole Zip.

    You can use the ZipFile.Open method.

    using ZipArchive archive = ZipFile.Open(zipFilePath, ZipArchiveMode.Read);
    System.Collections.ObjectModel.ReadOnlyCollection<ZipArchiveEntry> entries = archive.Entries;
    

    Notice that I specified the ZipArchiveMode. This is an Enum whose values are Read, Create, and Update.
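For instance, the Update mode lets you add new entries to an existing archive. Here is a minimal sketch (the paths are just placeholders; CreateEntryFromFile is an extension method from the System.IO.Compression namespace):

using System.IO.Compression;

string zipFilePath = @"C:\Users\d.bellone\Desktop\TestZip.zip";

// Open the existing archive in Update mode and add a new entry to it
using (ZipArchive archive = ZipFile.Open(zipFilePath, ZipArchiveMode.Update))
{
    archive.CreateEntryFromFile(
        sourceFileName: @"C:\Users\d.bellone\Desktop\notes.txt",
        entryName: "notes.txt");
}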

    Using the Entries property of the ZipArchive, you can access the whole list of files stored within the Zip folder, each represented by a ZipArchiveEntry instance.

    All entries in the current Zip file

    The ZipArchiveEntry object contains several fields, like the file’s name and the full path from the root archive.

    Details of a single ZipEntry item

There are a few key points to remember about the entries listed in the Entries collection.

    1. It is a ReadOnlyCollection<ZipArchiveEntry>: it means that even if you find a way to add or update the items in memory, the changes are not applied to the actual files;
    2. It lists all files and folders, not only those at the root level. As you can see from the image above, it lists both the files at the root level, like File.txt, and those in inner folders, such as TestZip/InnerFolder/presentation.pptx;
    3. Each file is characterized by two similar but different properties: Name is the actual file name (like presentation.pptx), while FullName contains the path from the root of the archive (e.g. TestZip/InnerFolder/presentation.pptx);
    4. It lists folders as if they were files: in the image above, you can see TestZip/InnerFolder. You can recognize them because their Name property is empty and their Length is 0;

    Folders are treated like files, but with no Size or Name

    Lastly, remember that ZipFile.Open returns an IDisposable, so you should place the operations within a using statement.
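Putting it all together, here is a minimal sketch (the path is a placeholder) that lists every entry and distinguishes folders from files:

using System;
using System.IO.Compression;

string zipFilePath = @"C:\Users\d.bellone\Desktop\TestZip.zip";

// Open the archive in read-only mode; the using statement disposes it for us
using (ZipArchive archive = ZipFile.Open(zipFilePath, ZipArchiveMode.Read))
{
    foreach (ZipArchiveEntry entry in archive.Entries)
    {
        // Folder entries have an empty Name and a Length of 0
        bool isFolder = string.IsNullOrEmpty(entry.Name);

        Console.WriteLine($"{entry.FullName} (Name: '{entry.Name}', Length: {entry.Length}, IsFolder: {isFolder})");
    }
}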

    ❓❓A question for you! Why do we see an item for the TestZip/InnerFolder folder, but there is no reference to the TestZip folder? Drop a comment below 📩

    Extracting a Zip folder is easy but not obvious.

    We have only one way to do that: by calling the ZipFile.ExtractToDirectory method.

    It accepts as mandatory parameters the path of the Zip file to be extracted and the path to the destination:

    var zipPath = @"C:\Users\d.bellone\Desktop\TestZip.zip";
    var destinationPath = @"C:\Users\d.bellone\Desktop\MyDestination";
    ZipFile.ExtractToDirectory(zipPath, destinationPath);
    

    Once you run it, you will see the content of the Zip copied and extracted to the MyDestination folder.

    Note that this method creates the destination folder if it does not exist.

This method accepts two more optional parameters (see the example after the list):

    • entryNameEncoding, by which you can specify the encoding. The default value is UTF-8.
    • overwriteFiles allows you to specify whether it must overwrite existing files. The default value is false. If set to false and the destination files already exist, this method throws a System.IO.IOException saying that the file already exists.
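Here is a minimal sketch of both options (depending on the target framework, the encoding and the overwrite flag may be exposed as separate overloads):

using System.IO.Compression;
using System.Text;

var zipPath = @"C:\Users\d.bellone\Desktop\TestZip.zip";
var destinationPath = @"C:\Users\d.bellone\Desktop\MyDestination";

// Overwrite files that already exist in the destination folder
ZipFile.ExtractToDirectory(zipPath, destinationPath, overwriteFiles: true);

// Specify the encoding used for the entry names
ZipFile.ExtractToDirectory(zipPath, destinationPath, Encoding.UTF8);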

    Using C# to create a Zip from a folder

    The key method here is ZipFile.CreateFromDirectory, which allows you to create Zip files in a flexible way.

    The first mandatory value is, of course, the source directory path.

    The second mandatory parameter is the destination of the resulting Zip file.

    It can be the local path to the file:

    string sourceFolderPath = @"\Desktop\myFolder";
    string destinationZipPath = @"\Desktop\destinationFile.zip";
    
    ZipFile.CreateFromDirectory(sourceFolderPath, destinationZipPath);
    

    Or it can be a Stream that you can use later for other operations:

    using (MemoryStream memStream = new MemoryStream())
    {
        string sourceFolderPath = @"\Desktop\myFolder";
        ZipFile.CreateFromDirectory(sourceFolderPath, memStream);
    
    var length = memStream.Length; // here the Stream is populated
    }
    

    You can finally add some optional parameters:

    • compressionLevel, whose values are Optimal, Fastest, NoCompression, SmallestSize.
• includeBaseDirectory: a flag that defines whether the archive should contain only the folder’s contents or also include the source folder itself as the root entry.

    A quick comparison of the four Compression Levels

    As we just saw, we have four compression levels: Optimal, Fastest, NoCompression, and SmallestSize.

    What happens if I use the different values to zip all the photos and videos of my latest trip?

    The source folder’s size is 16.2 GB.

    Let me zip it with the four compression levels:

     private long CreateAndTrack(string sourcePath, string destinationPath, CompressionLevel compression)
     {
         Stopwatch stopwatch = Stopwatch.StartNew();
    
         ZipFile.CreateFromDirectory(
             sourceDirectoryName: sourcePath,
             destinationArchiveFileName: destinationPath,
             compressionLevel: compression,
             includeBaseDirectory: true
             );
         stopwatch.Stop();
    
         return stopwatch.ElapsedMilliseconds;
     }
    
    // in Main...
    
    var smallestTime = CreateAndTrack(sourceFolderPath,
        Path.Combine(rootFolder, "Smallest.zip"),
        CompressionLevel.SmallestSize);
    
    var noCompressionTime = CreateAndTrack(sourceFolderPath,
        Path.Combine(rootFolder, "NoCompression.zip"),
        CompressionLevel.NoCompression);
    
    var fastestTime = CreateAndTrack(sourceFolderPath,
        Path.Combine(rootFolder, "Fastest.zip"),
        CompressionLevel.Fastest);
    
    var optimalTime = CreateAndTrack(sourceFolderPath,
        Path.Combine(rootFolder, "Optimal.zip"),
        CompressionLevel.Optimal);
    

    By executing this operation, we have this table:

Compression Type    Execution time (ms)    Execution time (s)    Size (bytes)        Size on disk (bytes)
Optimal             483,481                483                   17,340,065,594      17,340,067,840
Fastest             661,674                661                   16,935,519,764      17,004,888,064
Smallest            344,756                344                   17,339,881,242      17,339,883,520
No Compression      42,521                 42                    17,497,652,162      17,497,653,248

    We can see a bunch of weird things:

    • Fastest compression generates a smaller file than Smallest compression.
    • Fastest compression is way slower than Smallest compression.
    • Optimal lies in the middle.

    This is to say: don’t trust the names; remember to benchmark the parts where you need performance, even with a test as simple as this.

    Wrapping up

    This was a quick article about one specific class in the .NET ecosystem.

    As we saw, even though the class is simple and it’s all about three methods, there are some things you should keep in mind before using this class in your code.

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧





    Source link

  • How to Create Interactive, Droplet-like Metaballs with Three.js and GLSL

    How to Create Interactive, Droplet-like Metaballs with Three.js and GLSL


    Fragment shaders allow us to create smooth, organic visuals that are difficult to achieve with standard polygon-based rendering in WebGL. One powerful example is the metaball effect, where multiple objects blend and deform seamlessly. This can be implemented using a technique called ray marching, directly within a fragment shader.

    In this tutorial, we’ll walk you through how to create droplet-like, bubble spheres using Three.js and GLSL—an effect that responds interactively to your mouse movements. But first, take a look at the demo video below to see the final result in action.

    Overview

    Let’s take a look at the overall structure of the demo and review the steps we’ll follow to build it.

    1. Setting Up the Fullscreen Plane

    We create a fullscreen plane that covers the entire viewport.

    2. Rendering Spheres with Ray Marching

    We’ll render spheres using ray marching in the fragment shader.

    3. From Spheres to Metaballs

    We blend multiple spheres smoothly to create a metaball effect.

    4. Adding Noise for a Droplet-like Appearance

    By adding noise to the surface, we create a realistic droplet-like texture.

    5. Simulating Stretchy Droplets with Mouse Movement

    We arrange spheres along the mouse trail to create a stretchy, elastic motion.

    Let’s get started!

    1. Setup

    We render a single fullscreen plane that covers the entire viewport.

    // Output.ts
    
    const planeGeometry = new THREE.PlaneGeometry(2.0, 2.0);
    const planeMaterial = new THREE.RawShaderMaterial({
        vertexShader: base_vert,
        fragmentShader: output_frag,
        uniforms: this.uniforms,
    });
    const plane = new THREE.Mesh(planeGeometry, planeMaterial);
    this.scene.add(plane);

    We define a uniform variable named uResolution to pass the canvas size to the shader, where Common.width and Common.height represent the width and height of the canvas in pixels. This uniform will be used to normalize coordinates based on the screen resolution.

    // Output.ts
    
    this.uniforms = {
        uResolution: {
            value: new THREE.Vector2(Common.width, Common.height),
        },
    };

    When using RawShaderMaterial, you need to provide your own shaders. Therefore, we prepare both a vertex shader and a fragment shader.

    // base.vert
    
    attribute vec3 position;
    varying vec2 vTexCoord;
    
    void main() {
        vTexCoord = position.xy * 0.5 + 0.5;
        gl_Position = vec4(position, 1.0);
    }

    The vertex shader receives the position attribute.

    Since the xy components of position originally range from -1 to 1, we convert them to a range from 0 to 1 and output them as a texture coordinate called vTexCoord. This is passed to the fragment shader and used to calculate colors or effects based on the position on the screen.

    // output.frag
    
    precision mediump float;
    
    uniform vec2 uResolution;
    varying vec2 vTexCoord;
    
    void main() {
        gl_FragColor = vec4(vTexCoord, 1.0, 1.0);
    }

    The fragment shader receives the interpolated texture coordinate vTexCoord and the uniform variable uResolution representing the canvas size. Here, we temporarily use vTexCoord to output color for testing.

    Now we’re all set to start drawing in the fragment shader!
    Next, let’s move on to actually rendering the spheres.

    2. Ray Marching

    2.1. What is Ray Marching?

    As mentioned at the beginning, we will use a method called ray marching to render spheres. Ray marching proceeds in the following steps:

    1. Define the scene
    2. Set the camera (viewing) direction
    3. Cast rays
    4. Evaluate the distance from the current ray position to the nearest object in the scene.
    5. Move the ray forward by that distance
    6. Check for a hit

    For example, let’s consider a scene with three spheres. These spheres are expressed using SDFs (Signed Distance Functions), which will be explained in detail later.

    First, we determine the camera direction. Once the direction is set, we cast a ray in that direction.

    Next, we evaluate the distance to all objects from the current ray position, and take the minimum of these distances.

    After obtaining this distance, we move the ray forward by that amount.

    We repeat this process until either the ray gets close enough to an object—closer than a small threshold—or the maximum number of steps is reached.
    If the distance is below the threshold, we consider it a “hit” and shade the corresponding pixel.

    For example, in the figure above, a hit is detected on the 8th ray marching step.

    If the maximum number of steps were set to 7, the 7th step would not have hit anything yet. But since the limit is reached, the loop ends and no hit is detected.

    Therefore, nothing would be rendered at that position. If parts of an object appear to be missing in the final image, it may be due to an insufficient number of steps. However, be aware that increasing the step count will also increase the computational load.

    To better understand this process, try running this demo to see how it works in practice.

    2.2. Signed Distance Function

    In the previous section, we briefly mentioned the SDF (Signed Distance Function).
    Let’s take a moment to understand what it is.

    An SDF is a function that returns the distance from a point to a particular shape. The key characteristic is that it returns a positive or negative value depending on whether the point is outside or inside the shape.

    For example, here is the distance function for a sphere:

    float sdSphere(vec3 p, float s)
    {
        return length(p) - s;
    }

    Here, p is a vector representing the position relative to the origin, and s is the radius of the sphere.

    This function calculates how far the point p is from the surface of a sphere centered at the origin with radius s.

    • If the result is positive, the point is outside the sphere.
    • If negative, it is inside the sphere.
    • If the result is zero, the point is on the surface—this is considered a hit point (in practice, we detect a hit when the distance is less than a small threshold).

    In this demo, we use a sphere’s distance function, but many other shapes have their own distance functions as well.

    If you’re interested, here’s a great article on distance functions.

    2.3. Rendering Spheres

    Let’s try rendering spheres.
    In this demo, we’ll render two slightly overlapping spheres.

    // output.frag
    
    precision mediump float;
    
    const float EPS = 1e-4;
    const int ITR = 16;
    
    uniform vec2 uResolution;
    
    varying vec2 vTexCoord;
    
    // Camera Params
    vec3 origin = vec3(0.0, 0.0, 1.0);
    vec3 lookAt = vec3(0.0, 0.0, 0.0);
    vec3 cDir = normalize(lookAt - origin);
    vec3 cUp = vec3(0.0, 1.0, 0.0);
    vec3 cSide = cross(cDir, cUp);
    
    vec3 translate(vec3 p, vec3 t) {
        return p - t;
    }
    
    float sdSphere(vec3 p, float s)
    {
        return length(p) - s;
    }
    
    float map(vec3 p) {
        float radius = 0.5;
        float d = 1e5;
    
        float sphere0 = sdSphere(translate(p, vec3(0.4, 0.0, 0.0)), radius);
        float sphere1 = sdSphere(translate(p, vec3(-0.4, 0.0, 0.0)), radius);
        d = min(sphere0, sphere1);
    
        return d;
    }
    
    void main() {
        vec2 p = (gl_FragCoord.xy * 2.0 - uResolution) / min(uResolution.x, uResolution.y);
    
        // Orthographic Camera
        vec3 ray = origin + cSide * p.x + cUp * p.y;
        vec3 rayDirection = cDir;
    
        float dist = 0.0;
    
        for (int i = 0; i < ITR; ++i) {
            dist = map(ray);
            ray += rayDirection * dist;
            if (dist < EPS) break;
        }
    
        vec3 color = vec3(0.0);
    
        if (dist < EPS) {
            color = vec3(1.0, 1.0, 1.0);
        }
    
        gl_FragColor = vec4(color, 1.0);
    }

    First, we normalize the screen coordinates:

    vec2 p = (gl_FragCoord.xy * 2.0 - uResolution) / min(uResolution.x, uResolution.y);

    Next, we set up the camera. This demo uses an orthographic camera (parallel projection):

    // Camera Params
    vec3 origin = vec3(0.0, 0.0, 1.0);
    vec3 lookAt = vec3(0.0, 0.0, 0.0);
    vec3 cDir = normalize(lookAt - origin);
    vec3 cUp = vec3(0.0, 1.0, 0.0);
    vec3 cSide = cross(cDir, cUp);
    
    // Orthographic Camera
    vec3 ray = origin + cSide * p.x + cUp * p.y;
    vec3 rayDirection = cDir;

    After that, inside the map function, two spheres are defined and their distances calculated using sdSphere. The variable d is initially set to a large value and updated with the min function to keep track of the shortest distance to the surface.

    float map(vec3 p) {
        float radius = 0.5;
        float d = 1e5;
    
        float sphere0 = sdSphere(translate(p, vec3(0.4, 0.0, 0.0)), radius);
        float sphere1 = sdSphere(translate(p, vec3(-0.4, 0.0, 0.0)), radius);
        d = min(sphere0, sphere1);
    
        return d;
    }

    Then we run a ray marching loop, which updates the ray position by computing the distance to the nearest object at each step. The loop ends either after a fixed number of iterations or when the distance becomes smaller than a threshold (dist < EPS):

    for ( int i = 0; i < ITR; ++ i ) {
    	dist = map(ray);
    	ray += rayDirection * dist;
    	if ( dist < EPS ) break ;
    }

    Finally, we determine the output color. We use black as the default color (background), and render a white pixel only if a hit is detected:

    vec3 color = vec3(0.0);
    
    if ( dist < EPS ) {
    	color = vec3(1.0);
    }

    We’ve successfully rendered two overlapping spheres using ray marching!

    2.4. Normals

    Although we successfully rendered spheres in the previous section, the scene still looks flat and lacks depth. This is because we haven’t applied any shading or visual effects that respond to surface orientation.

    While we won’t implement full shading in this demo, we’ll still compute surface normals, as they’re essential for adding surface detail and other visual effects.

    Let’s look at the code first:

    vec3 generateNormal(vec3 p) {
        return normalize(vec3(
                map(p + vec3(EPS, 0.0, 0.0)) - map(p + vec3(-EPS, 0.0, 0.0)),
                map(p + vec3(0.0, EPS, 0.0)) - map(p + vec3(0.0, -EPS, 0.0)),
                map(p + vec3(0.0, 0.0, EPS)) - map(p + vec3(0.0, 0.0, -EPS))
            ));
    }

    At first glance, this may seem hard to understand. Put simply, this computes the gradient of the distance function, which corresponds to the normal vector.

    If you’ve studied vector calculus, this might be easy to understand. For many others, though, it may seem a bit difficult.

    That’s totally fine—a full understanding of the details isn’t necessary to use the result. If you just want to move on, feel free to skip ahead to the section where we debug normals by visualizing them with color.

    However, for those who are interested in how it works, we’ll now walk through the explanation in more detail.

    The gradient of a scalar function 𝑓(𝑥,𝑦,𝑧) is simply a vector composed of its partial derivatives. It points in the direction of the greatest rate of increase of the function:
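$$\nabla f(x, y, z) = \left( \frac{\partial f}{\partial x},\ \frac{\partial f}{\partial y},\ \frac{\partial f}{\partial z} \right)$$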

    To compute this gradient numerically, we can use the central difference method. For example:
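$$\frac{\partial f}{\partial x} \approx \frac{f(x + \varepsilon,\, y,\, z) - f(x - \varepsilon,\, y,\, z)}{2\varepsilon}$$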

    We apply the same idea for the 𝑦 and 𝑧 components.
    Note: The factor 2𝜀 is omitted in the code since we normalize the result using normalize().

    Next, let us consider a signed distance function 𝑓(𝑥,𝑦,𝑧), which returns the shortest distance from any point in space to the surface of an object. By definition, 𝑓(𝑥,𝑦,𝑧)=0 on the surface of the object.

    Assume that 𝑓 is smooth (i.e., differentiable) in the region of interest. When the point (𝑥,𝑦,𝑧) undergoes a small displacement Δ𝒓=(Δ𝑥,Δ𝑦,Δ𝑧), the change in the function value Δ𝑓 can be approximated using the first-order Taylor expansion:
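$$\Delta f \approx \nabla f \cdot \Delta \boldsymbol{r}$$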

Here, ∇𝑓 is the gradient vector of 𝑓, and Δ𝒓 is an arbitrary small displacement vector.

    Now, since 𝑓=0 on the surface and remains constant as we move along the surface (i.e., tangentially), the function value does not change, so Δ𝑓=0. Therefore:
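$$\nabla f \cdot \Delta \boldsymbol{r} = 0$$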

    This means that the gradient vector is perpendicular to any tangent vector Δ𝒓 on the surface. In other words, the gradient vector ∇𝑓 points in the direction of the surface normal.

    Thus, the gradient of a signed distance function gives the surface normal direction at any point on the surface.

    2.5. Visualizing Normals with Color

    To verify that the surface normals are being calculated correctly, we can visualize them using color.

    if ( dist < EPS ) {
    	vec3 normal = generateNormal(ray);
    	color = normal;
    }

    Note that within the if block, ray refers to a point on the surface of the object. So by passing ray to generateNormal, we can obtain the surface normal at the point of intersection.

    When we render the scene, you’ll notice that the surface of the sphere is shaded in red, green, and blue based on the orientation of the normal vectors. This is because we’re mapping the 𝑥, 𝑦, and 𝑧 components of the normal vector to the RGB color channels respectively.

    This is a common and intuitive way to debug normal vectors visually, helping us ensure they are computed correctly.

3. From Spheres to Metaballs

When combining two spheres with the standard min() function, a hard edge forms where the shapes intersect, resulting in an unnatural boundary.
    To avoid this, we can use a blending function called smoothMin, which softens the transition by merging the distance values smoothly.

    // added
    float smoothMin(float d1, float d2, float k) {
        float h = exp(-k * d1) + exp(-k * d2);
        return -log(h) / k;
    }
    
    float map(vec3 p) {
        float radius = 0.5;
        float k = 7.; // added: smoothing factor for metaball effect
        float d = 1e5;
    
        float sphere0 = sdSphere(translate(p, vec3(.4, 0.0, 0.0)), radius);
        float sphere1 = sdSphere(translate(p, vec3(-.4, 0.0, 0.0)), radius);
        d = smoothMin(d, sphere0, k); // modified: blend with smoothing
        d = smoothMin(d, sphere1, k); // modified
    
        return d;
    }

    This function creates a smooth, continuous connection between shapes—producing a metaball-like effect where the forms appear to merge organically.

    The parameter k controls the smoothness of the blend. A higher k value results in a sharper transition (closer to min()), while a lower k produces smoother, more gradual merging.

    For more details, please refer to the following two articles:

1. wgld.org | GLSL: interpolating and merging objects (article in Japanese)
    2. Inigo Quilez :: computer graphics, mathematics, shaders, fractals, demoscene and more

    4. Adding Noise for a Droplet-like Appearance

    So far, we’ve covered how to calculate normals and how to smoothly blend objects.

    Next, let’s tune the surface appearance to make things feel more realistic.

    In this demo, we’re aiming to create droplet-like metaballs. So how can we achieve that kind of look? The key idea here is to use noise to distort the surface.

    Let’s jump right into the code:

    // output.frag
    
    uniform float uTime;
    
    // ...
    
    float rnd3D(vec3 p) {
        return fract(sin(dot(p, vec3(12.9898, 78.233, 37.719))) * 43758.5453123);
    }
    
    float noise3D(vec3 p) {
        vec3 i = floor(p);
        vec3 f = fract(p);
    
        float a000 = rnd3D(i); // (0,0,0)
        float a100 = rnd3D(i + vec3(1.0, 0.0, 0.0)); // (1,0,0)
        float a010 = rnd3D(i + vec3(0.0, 1.0, 0.0)); // (0,1,0)
        float a110 = rnd3D(i + vec3(1.0, 1.0, 0.0)); // (1,1,0)
        float a001 = rnd3D(i + vec3(0.0, 0.0, 1.0)); // (0,0,1)
        float a101 = rnd3D(i + vec3(1.0, 0.0, 1.0)); // (1,0,1)
        float a011 = rnd3D(i + vec3(0.0, 1.0, 1.0)); // (0,1,1)
        float a111 = rnd3D(i + vec3(1.0, 1.0, 1.0)); // (1,1,1)
    
        vec3 u = f * f * (3.0 - 2.0 * f);
        // vec3 u = f*f*f*(f*(f*6.0-15.0)+10.0);
    
        float k0 = a000;
        float k1 = a100 - a000;
        float k2 = a010 - a000;
        float k3 = a001 - a000;
        float k4 = a000 - a100 - a010 + a110;
        float k5 = a000 - a010 - a001 + a011;
        float k6 = a000 - a100 - a001 + a101;
        float k7 = -a000 + a100 + a010 - a110 + a001 - a101 - a011 + a111;
    
        return k0 + k1 * u.x + k2 * u.y + k3 *u.z + k4 * u.x * u.y + k5 * u.y * u.z + k6 * u.z * u.x + k7 * u.x * u.y * u.z;
    }
    
    vec3 dropletColor(vec3 normal, vec3 rayDir) {
        vec3 reflectDir = reflect(rayDir, normal);
    
        float noisePosTime = noise3D(reflectDir * 2.0 + uTime);
        float noiseNegTime = noise3D(reflectDir * 2.0 - uTime);
    
        vec3 _color0 = vec3(0.1765, 0.1255, 0.2275) * noisePosTime;
        vec3 _color1 = vec3(0.4118, 0.4118, 0.4157) * noiseNegTime;
    
        float intensity = 2.3;
        vec3 color = (_color0 + _color1) * intensity;
    
        return color;
    }
    
    // ...
    
    void main() {
    	// ...
    
    	if ( dist < EPS ) {
    		vec3 normal = generateNormal(ray);
    		color = dropletColor(normal, rayDirection);
    	}
    	
    	 gl_FragColor = vec4(color, 1.0);
    }

    To create the droplet-like texture, we’re using value noise. If you’re unfamiliar with these noise techniques, the following articles provide helpful explanations:

    3D value noise is generated by interpolating random values placed at the eight vertices of a cube. The process involves three stages of linear interpolation:

    1. Bottom face interpolation: First, we interpolate between the four corner values on the bottom face of the cube
    2. Top face interpolation: Similarly, we interpolate between the four corner values on the top face
    3. Final z-axis interpolation: Finally, we interpolate between the results from the bottom and top faces along the z-axis

    This triple interpolation process is called trilinear interpolation.

    The following code demonstrates the trilinear interpolation process for 3D value noise:

    float n = mix(
    	mix( mix( a000, a100, u.x ), mix( a010, a110, u.x ), u.y ),
    	mix( mix( a001, a101, u.x ), mix( a011, a111, u.x ), u.y ),
    	u.z
    );

    The nested mix() functions above can be converted into an explicit polynomial form for better performance:

    float k0 = a000;
    float k1 = a100 - a000;
    float k2 = a010 - a000;
    float k3 = a001 - a000;
    float k4 = a000 - a100 - a010 + a110;
    float k5 = a000 - a010 - a001 + a011;
    float k6 = a000 - a100 - a001 + a101;
    float k7 = -a000 + a100 + a010 - a110 + a001 - a101 - a011 + a111;
    
    float n = k0 + k1 * u.x + k2 * u.y + k3 *u.z + k4 * u.x * u.y + k5 * u.y * u.z + k6 * u.z * u.x + k7 * u.x * u.y * u.z;

    By sampling this noise using the reflection vector as coordinates, we can create a realistic water droplet-like texture. Note that we are using the surface normal obtained earlier to compute this reflection vector. To add time-based variation, we generate noise at positions offset by uTime:

    vec3 reflectDir = reflect(rayDir, normal);
    
    float noisePosTime = noise3D(reflectDir * 2.0 + uTime);
    float noiseNegTime = noise3D(reflectDir * 2.0 - uTime);

    Finally, we blend two noise-influenced colors and scale the result:

    vec3 _color0 = vec3(0.1765, 0.1255, 0.2275) * noisePosTime;
    vec3 _color1 = vec3(0.4118, 0.4118, 0.4157) * noiseNegTime;
    
    float intensity = 2.3;
    vec3 color = (_color0 + _color1) * intensity;

    It’s starting to look quite like a water droplet! However, it still appears a bit murky.
    To improve this, let’s add the following post-processing step:

    // output.frag
    
    if ( dist < EPS ) {
    	vec3 normal = generateNormal(ray);
    	color = dropletColor(normal, rayDirection);
    }
    
    vec3 finalColor = pow(color, vec3(7.0)); // added
    
    gl_FragColor = vec4(finalColor, 1.0); // modified

    Using pow(), darker regions are suppressed, allowing the highlights to pop and creating a more glass-like, translucent surface.

    5. Simulating Stretchy Droplets with Mouse Movement

    Finally, let’s make the droplet stretch and follow the mouse movement, giving it a soft and elastic feel.

    We’ll achieve this by placing multiple spheres along the mouse trail.

    // Output.ts
    
    constructor() {
    	// ...
    	this.trailLength = 15;
    	this.pointerTrail = Array.from({ length: this.trailLength }, () => new THREE.Vector2(0, 0));
    	
    	this.uniforms = {
    	    uTime: { value: Common.time },
    	    uResolution: {
    	        value: new THREE.Vector2(Common.width, Common.height),
    	    },
    	    uPointerTrail: { value: this.pointerTrail },
    	};
    }
    
    // ...
    
    /**
     * # rAF update
     */
    update() {
      this.updatePointerTrail();
      this.render();
    }
    
    /**
     * # Update the pointer trail
     */
    updatePointerTrail() {
      for (let i = this.trailLength - 1; i > 0; i--) {
         this.pointerTrail[i].copy(this.pointerTrail[i - 1]);
      }
      this.pointerTrail[0].copy(Pointer.coords);
    }
    // output.frag
    
    const int TRAIL_LENGTH = 15; // added
    uniform vec2 uPointerTrail[TRAIL_LENGTH]; // added
    
    // ...
    
    // modified
    float map(vec3 p) {
        float baseRadius = 8e-3;
        float radius = baseRadius * float(TRAIL_LENGTH);
        float k = 7.;
        float d = 1e5;
    
        for (int i = 0; i < TRAIL_LENGTH; i++) {
            float fi = float(i);
            vec2 pointerTrail = uPointerTrail[i] * uResolution / min(uResolution.x, uResolution.y);
    
            float sphere = sdSphere(
                    translate(p, vec3(pointerTrail, .0)),
                    radius - baseRadius * fi
                );
    
            d = smoothMin(d, sphere, k);
        }
    
        float sphere = sdSphere(translate(p, vec3(1.0, -0.25, 0.0)), 0.55);
        d = smoothMin(d, sphere, k);
    
        return d;
    }

    Conclusion

    In this tutorial, we explored how to create a dynamic, droplet-like effect using ray marching and shading techniques. Here’s what we covered:

    1. Used ray marching to render spheres in 3D space.
    2. Applied smoothMin to blend the spheres into seamless metaballs.
    3. Added surface noise to give the spheres a more organic appearance.
    4. Simulated stretchy motion by arranging spheres along the mouse trail.

    By combining these techniques, we achieved a soft, fluid visual that responds to user interaction.

    Thanks for following along—I hope you find these techniques useful in your own projects!



    Source link

  • How Seqrite Endpoint Protection Blocks Bots, Scripts, and Malware


    In today’s hyper-connected digital world, the cybersecurity landscape is shifting dramatically. Gone are the days when cyberattacks primarily relied on human intervention. We’re now facing a new breed of silent, swift adversaries: non-human threats. These automated entities—bots, malicious scripts, and sophisticated malware—are designed to operate at machine speed, exploiting vulnerabilities, bypassing traditional defenses, and often remaining undetected until significant damage has occurred. So, how do you defend against something you can’t see, something that moves faster than human reaction? The answer lies in intelligent, automated endpoint security. Enter Seqrite Endpoint Protection (EPP), your robust shield against these invisible invaders. Available for both cloud-based and on-premise deployments, Seqrite EPP is engineered with cutting-edge technologies specifically designed to identify and neutralize these stealthy, non-human threats.

    Understanding the Enigma: What Exactly Are Non-Human Cyber Threats?

    When we talk about “non-human cyber threats,” we’re referring to automated programs and code snippets that launch attacks without requiring direct human interaction. These include:

    • Bots: Automated programs designed to perform repetitive tasks at scale. Think credential stuffing attacks where bots try thousands of username/password combinations, or Distributed Denial of Service (DDoS) attacks that flood a server with traffic.
    • Malicious Scripts: These are pieces of automated code, often hidden within legitimate-looking files or web pages, designed to exploit system weaknesses, exfiltrate sensitive data, or spread malware across your network.
    • Exploit Kits: These are sophisticated toolkits that automatically scan systems for unpatched vulnerabilities and then deploy exploits to gain unauthorized access or deliver payloads like ransomware.

    The key characteristic of these threats is their autonomy and speed. They operate under the radar, making traditional, reactive security measures largely ineffective. This is precisely why proactive, automated detection and prevention mechanisms are absolutely critical for modern businesses.

    Seqrite Endpoint Protection: Your Multi-Layered Defense Against Automation

    Seqrite’s EPP doesn’t just offer a single line of defense; it deploys a comprehensive, multi-layered security framework. This framework is specifically engineered to detect and block automation-driven threats using a powerful combination of intelligent rule-based systems, behavioral analysis, and advanced AI-powered capabilities.

    Let’s dive into the key features that make Seqrite EPP a formidable opponent against non-human threats:

    1. Advanced Device Control: Many non-human threats, especially scripts and certain types of malware, are delivered via external devices like USB drives. Seqrite’s Advanced Device Control enforces strict usage policies, allowing you to define what devices can connect to your endpoints and how they can be used. By controlling storage, network, and wireless interfaces, you effectively close off a major entry point for automated attacks.
    2. Application Control with Zero Trust: Imagine only allowing approved applications and scripts to run on your systems. That’s the power of Seqrite’s Application Control. By implementing a Zero Trust model, it blocks unknown or unapproved applications and scripts from executing. Through meticulous allowlisting and blocklisting, only trusted applications can operate, making it incredibly effective against stealthy automation tools that attempt to execute malicious code.
    3. Behavior-Based Detection (GoDeep.AI): This is where Seqrite truly shines. Leveraging cutting-edge AI and machine learning, GoDeep.AI continuously monitors endpoint activity to identify abnormal and suspicious behaviors that indicate a non-human threat. This includes detecting:
      • Repetitive access patterns: A hallmark of bots attempting to brute-force accounts or scan for vulnerabilities.
      • Scripted encryption behavior: Instantly flags the tell-tale signs of ransomware encrypting files.
      • Silent data exfiltration attempts: Catches automated processes trying to siphon off sensitive information. The system doesn’t just detect; it actively stops suspicious activity in its tracks before it can cause any harm.
    4. Intrusion Detection & Prevention System (IDS/IPS): Seqrite’s integrated IDS/IPS actively monitors network traffic for known exploit patterns and anomalous behavior. This robust system is crucial for blocking automation-based threats that attempt to infiltrate your network through known vulnerabilities or launch network-based attacks like port scanning.
    5. File Sandboxing: When a suspicious file or script enters your environment, Seqrite doesn’t let it run directly on your system. Instead, it’s whisked away to a secure, isolated virtual sandbox environment for deep analysis. Here, the file is allowed to execute and its behavior is meticulously observed. If it exhibits any malicious traits—like attempting to mimic user behavior, access restricted resources, or encrypt files—it’s immediately flagged and stopped, preventing any potential damage to your actual endpoints.
    6. Web Protection & Phishing Control: Many non-human threats, particularly bots and sophisticated malware, rely on communication with remote command-and-control (C2) servers. Seqrite’s Web Protection proactively blocks:
      • Access to known malicious domains.
      • Phishing sites designed to steal credentials.
      • Unauthorized web access that could lead to malware downloads.
      • Crucially, it cuts off botnet callbacks, effectively severing the communication lines between bots and their command centers, rendering them inert.

    Enhancing Your Defense: Essential Supporting Features

    Beyond its core capabilities, Seqrite Endpoint Protection is bolstered by a suite of supporting features that further strengthen your organization’s resilience against non-human threats and beyond:

• Patch Management: Automatically identifies and fixes software vulnerabilities that bots and scripts often exploit to gain entry. Proactive patching is key to prevention.
• Firewall: Provides a critical layer of defense by filtering unauthorized network traffic and blocking communication with known botnet IP addresses.
• Data Loss Prevention (DLP): Prevents automated data theft by monitoring and controlling data in transit, ensuring sensitive information doesn’t leave your network without authorization.
• Centralized Log Management: Offers a unified view of security events, allowing for rapid detection and auditing of unusual or suspicious behaviors across all endpoints.
• Disk Encryption Management: Safeguards your data by encrypting entire disks, stopping automated decryption attempts even if data is stolen, and protecting against ransomware.

     

    The Future of Endpoint Security: Why Non-Human Threat Detection is Non-Negotiable

    As we move deeper into 2025 and beyond, cyber threats are becoming increasingly automated, sophisticated, and often, AI-driven. Relying on traditional, signature-based security solutions is no longer enough to match the speed, stealth, and evolving tactics of automation-based attacks.

    Seqrite Endpoint Protection is built for this future. It leverages intelligent automation to effectively combat automation—blocking bots, malicious scripts, advanced ransomware, and other non-human threats before they can execute and wreak havoc on your systems and data.

    Final Takeaway: Don’t Let Invisible Threats Compromise Your Business

    In a world where cyberattacks are increasingly executed by machines, your defense must be equally advanced. With its comprehensive suite of features—including cutting-edge device and application control, AI-driven behavioral detection (GoDeep.AI), robust network-level protection, and secure sandboxing—Seqrite Endpoint Protection ensures your endpoints remain locked down and secure.

    Whether your organization operates with a cloud-first strategy or relies on a traditional on-premise infrastructure, Seqrite provides the adaptable and powerful security solutions you need.

    Ready to Fortify Your Defenses?

    It’s time to upgrade your endpoint security and protect your organization from both human-initiated and the ever-growing wave of non-human cyber threats.

    Explore how Seqrite can secure your business today. Request a Free Trial or Schedule a Demo.

     



    Source link