
  • Interactive Text Destruction with Three.js, WebGPU, and TSL




    When Flash was taken from us all those years ago, it felt like losing a creative home — suddenly, there were no tools left for building truly interactive experiences on the web. In its place, the web flattened into a static world of HTML and CSS.

    But those days are finally behind us. We’re picking up where we left off nearly two decades ago, and the web is alive again with rich, immersive experiences — thanks in large part to powerful tools like Three.js.

    I’ve been working with images, video, and interactive projects for 15 years, using things like Processing, p5.js, OpenFrameworks, and TouchDesigner. Last year, I added Three.js to the mix as a creative tool, and I’ve been loving the learning process. That ongoing exploration leads to little experiments like the one I’m sharing in this tutorial.

    Project Structure

    The structure of our script is going to be simple: one function to preload assets, and another one to build the scene.

    Since we’ll be working with 3D text, the first thing we need to do is load a font in .json format — the kind that works with Three.js.

    To convert a .ttf font into that format, you can use the Facetype.js tool, which generates a .typeface.json file.

    const Resources = {
    	font: null
    };
    
    function preload() {
    
    	const _font_loader = new FontLoader();
    	_font_loader.load( "../static/font/Times New Roman_Regular.json", ( font ) => {
    
    		Resources.font = font;
    		init();
    
    	} );
    
    }
    
    function init() {
    
    }
    
    window.onload = preload;

    Scene setup & Environment

    A classic Three.js scene — the only thing to keep in mind is that we’re working with the Three.js Shading Language (TSL), which means our renderer needs to be a WebGPURenderer.

    const scene = new THREE.Scene();
    const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
    const renderer = new THREE.WebGPURenderer({ antialias: true });
    
    document.body.appendChild(renderer.domElement);
    
    renderer.setSize(window.innerWidth, window.innerHeight);
    camera.position.z = 5;
    
    scene.add(camera);

    Next, we’ll set up the scene environment to get some lighting going.

    To keep things simple and avoid loading more assets, we’ll use the default RoomEnvironment that “comes” with Three.js. We’ll also add a DirectionalLight to the scene.

    const environment = new RoomEnvironment();
    const pmremGenerator = new THREE.PMREMGenerator(renderer);
    scene.environment = pmremGenerator.fromSceneAsync(environment).texture;
    
    scene.environmentIntensity = 0.8;
    
    const   light = new THREE.DirectionalLight("#e7e2ca",5);
    light.position.x = 0.0;
    light.position.y = 1.2;
    light.position.z = 3.86;
    
    scene.add(light);

    TextGeometry

    We’ll use TextGeometry, which lets us create 3D text in Three.js.

    It uses a JSON font file (which we loaded earlier with FontLoader) and is configured with parameters like size, depth, and letter spacing.

    const text_geo = new TextGeometry("NUEVOS",{
        font:Resources.font,
        size:1.0,
        depth:0.2,
        bevelEnabled: true,
        bevelThickness: 0.1,
        bevelSize: 0.01,
        bevelOffset: 0,
        bevelSegments: 1
    }); 
    
    const mesh = new THREE.Mesh(
        text_geo,
        new THREE.MeshStandardMaterial({ 
            color: "#656565",
            metalness: 0.4, 
            roughness: 0.3
        })
    );
    
    scene.add(mesh);

    By default, the origin of the text sits at (0, 0), but we want it centered.
    To do that, we need to compute its BoundingBox and manually apply a translation to the geometry:

    text_geo.computeBoundingBox();
    const centerOffset = - 0.5 * ( text_geo.boundingBox.max.x - text_geo.boundingBox.min.x );
    const centerOffsety = - 0.5 * ( text_geo.boundingBox.max.y - text_geo.boundingBox.min.y );
    text_geo.translate( centerOffset, centerOffsety, 0 );

    Now that we have the mesh and material ready, we can move on to the function that lets us blow everything up 💥

    Three.js Shading Language

    I really love TSL — it’s closed the gap between ideas and execution, in a context that’s not always the friendliest… shaders.

    The effect we’re going to implement deforms the geometry’s vertices based on the pointer’s position, and uses spring physics to animate those deformations in a dynamic way.

    But before we get to that, let’s grab a few attributes we’ll need to make everything work properly:

    //  Number of vertices in the geometry
    const count = text_geo.attributes.position.count;

    //  Original position of each vertex — we’ll use it as a reference
    //  so unaffected vertices can "return" to their original spot
    const initial_position = storage( text_geo.attributes.position, "vec3", count );

    //  Normal of each vertex — we’ll use this to know which direction to "push" in
    const normal_at = storage( text_geo.attributes.normal, "vec3", count );

    Next, we’ll create a storage buffer to hold the simulation data — and we’ll also write a function.
    But not a regular JavaScript function — this one’s a compute function, written in the context of TSL.

    It runs on the GPU and we’ll use it to set up the initial values for our buffers, getting everything ready for the simulation.

    // In this buffer we’ll store the modified positions of each vertex —
    // in other words, their current state in the simulation.
    const   position_storage_at = storage(new THREE.StorageBufferAttribute(count,3),"vec3",count);   
    
    const compute_init = Fn( ()=>{
    
    	position_storage_at.element( instanceIndex ).assign( initial_position.element( instanceIndex ) );
    
    } )().compute( count );
    
    // Run the function on the GPU. This runs compute_init once per vertex.
    renderer.computeAsync( compute_init );

    Now we’re going to create another one of these functions — but unlike the previous one, this one will run inside the animation loop, since it’s responsible for updating the simulation on every frame.

    This function runs on the GPU and needs to receive values from the outside — like the pointer position, for example.

    To send that kind of data to the GPU, we use what’s called uniforms. They work like bridges between our “regular” code and the code that runs inside the GPU shader.

    They’re defined like this:

    const u_input_pos = uniform(new THREE.Vector3(0,0,0));
    const u_input_pos_press = uniform(0.0);
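
    These uniforms aren’t fed automatically, and the excerpt doesn’t show how the pointer data reaches them. A minimal sketch, assuming we unproject the pointer onto the z = 0 plane (roughly where the text sits) and use pointer down/up events to drive u_input_pos_press:

    const ndc = new THREE.Vector2();

    window.addEventListener( "pointermove", ( e ) => {

    	// Convert pixel coordinates to normalized device coordinates
    	ndc.x = ( e.clientX / window.innerWidth ) * 2 - 1;
    	ndc.y = - ( e.clientY / window.innerHeight ) * 2 + 1;

    	// Unproject and intersect the pointer ray with the z = 0 plane
    	const point = new THREE.Vector3( ndc.x, ndc.y, 0.5 ).unproject( camera );
    	const dir = point.sub( camera.position ).normalize();
    	const t = - camera.position.z / dir.z;

    	u_input_pos.value.copy( camera.position.clone().add( dir.multiplyScalar( t ) ) );

    } );

    window.addEventListener( "pointerdown", () => u_input_pos_press.value = 1.0 );
    window.addEventListener( "pointerup", () => u_input_pos_press.value = 0.0 );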

    With this, we can calculate the distance between the pointer position and each vertex of the geometry.

    Then we clamp that value so the deformation only affects vertices within a certain radius.
    To do that, we use the step function — it acts like a threshold, and lets us apply the effect only when the distance is below a defined value.

    Finally, we use the vertex normal as a direction to push it outward.

    const compute_update = Fn(() => {
    
        // Original position of the vertex — also its resting position
        const base_position = initial_position.element(instanceIndex);
    
        // The vertex normal tells us which direction to push
        const normal = normal_at.element(instanceIndex);
    
        // Current position of the vertex — we’ll update this every frame
        const current_position = position_storage_at.element(instanceIndex);
    
        // Calculate distance between the pointer and the base position of the vertex
        const distance = length(u_input_pos.sub(base_position));
    
        // Limit the effect's range: it only applies if distance is less than 0.5
        const pointer_influence = step(distance, 0.5).mul(1.0);
    
        // Compute the new displaced position along the normal.
        // Where pointer_influence is 0, there’ll be no deformation.
        const disorted_pos = base_position.add(normal.mul(pointer_influence));
    
        // Assign the new position to update the vertex
        current_position.assign(disorted_pos);
    
    })().compute(count);
    

    To make this work, we’re missing two key steps: we need to assign the buffer with the modified positions to the material, and we need to make sure the renderer runs the compute function on every frame inside the animation loop.

    // Assign the buffer with the modified positions to the material
    mesh.material.positionNode = position_storage_at.toAttribute();
    
    // Animation loop
    function animate() {
    	// Run the compute function
    	renderer.computeAsync(compute_update);
    
    	// Render the scene
    	renderer.renderAsync(scene, camera);
    }
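
    The snippet above defines animate but doesn’t show how it gets scheduled. A minimal way to wire it up, assuming we let the renderer drive the loop (as the Three.js WebGPU examples usually do):

    // Let the renderer drive the loop: animate is called once per frame
    renderer.setAnimationLoop( animate );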

    Right now the function doesn’t produce anything too exciting — the geometry moves around in a kinda clunky way. We’re about to bring in springs, and things will get much better.

    // Spring — how much force we apply to reach the target value
    velocity += (target_value - current_value) * spring;
    
    // Friction controls the damping, so the movement doesn’t oscillate endlessly
    velocity *= friction;
    
    current_value += velocity;

    But before that, we need to store one more value per vertex, the velocity, so let’s create another storage buffer.

    const position_storage_at = storage(new THREE.StorageBufferAttribute(count, 3), "vec3", count);
    
    // New buffer for velocity
    const velocity_storage_at = storage(new THREE.StorageBufferAttribute(count, 3), "vec3", count);
    
    const compute_init = Fn(() => {
    
        position_storage_at.element(instanceIndex).assign(initial_position.element(instanceIndex));
        
        // We initialize it too
        velocity_storage_at.element(instanceIndex).assign(vec3(0.0, 0.0, 0.0));
    
    })().compute(count);

    We’ll also add two uniforms: spring and friction.

    const u_spring = uniform(0.05);
    const u_friction = uniform(0.9);

    Now let’s implement the springs in the update:

    const compute_update = Fn(() => {
    
        const base_position = initial_position.element(instanceIndex);
        const current_position = position_storage_at.element(instanceIndex);
    
        // Get current velocity
        const current_velocity = velocity_storage_at.element(instanceIndex);
    
        const normal = normal_at.element(instanceIndex);
    
        const   distance =  length(u_input_pos.sub(base_position));
        const   pointer_influence = step(distance,0.5).mul(1.5);
    
        const disorted_pos = base_position.add(normal.mul(pointer_influence));
        disorted_pos.assign((mix(base_position, disorted_pos, u_input_pos_press)));
      
        // Spring implementation
        // velocity += (target_value - current_value) * spring;
        current_velocity.addAssign(disorted_pos.sub(current_position).mul(u_spring));
        // velocity *= friction;
        current_velocity.assign(current_velocity.mul(u_friction));
        // value += velocity
        current_position.addAssign(current_velocity);
    
    
    })().compute(count);

    Now we’ve got everything we need — time to start fine-tuning.

    We’re going to add two things. First, we’ll use the TSL function mx_noise_vec3 to generate some noise for each vertex. That way, we can tweak the direction a bit so things don’t feel so stiff.

    We’re also going to rotate the vertices using another TSL function — surprise, it’s called rotate.
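
    One note: the updated function below also reads a u_noise_amp uniform that isn’t defined anywhere in this excerpt. A plausible definition (the amplitude value is just a guess, tweak it to taste):

    // Assumed: amplitude of the noise term used in compute_update below
    const u_noise_amp = uniform( 0.4 );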

    Here’s what our updated compute_update function looks like:

    const compute_update = Fn(() => {
    
        const base_position = initial_position.element(instanceIndex);
        const current_position = position_storage_at.element(instanceIndex);
        const current_velocity = velocity_storage_at.element(instanceIndex);
    
        const normal = normal_at.element(instanceIndex);
    
        // NEW: Add noise so the direction in which the vertices "explode" isn’t too perfectly aligned with the normal
        const noise = mx_noise_vec3(current_position.mul(0.5).add(vec3(0.0, time, 0.0)), 1.0).mul(u_noise_amp);
    
        const distance = length(u_input_pos.sub(base_position));
        const pointer_influence = step(distance, 0.5).mul(1.5);
    
        const disorted_pos = base_position.add(noise.mul(normal.mul(pointer_influence)));
    
        // NEW: Rotate the vertices to give the animation a more chaotic feel
        disorted_pos.assign(rotate(disorted_pos, vec3(normal.mul(distance)).mul(pointer_influence)));
    
        disorted_pos.assign(mix(base_position, disorted_pos, u_input_pos_press));
    
        current_velocity.addAssign(disorted_pos.sub(current_position).mul(u_spring));
        current_position.addAssign(current_velocity);
        current_velocity.assign(current_velocity.mul(u_friction));
    
    })().compute(count);
    

    Now that the motion feels right, it’s time to tweak the material colors a bit and add some post-processing to the scene.

    We’re going to work on the emissive color — meaning it won’t be affected by lights, and it’ll always look bright and explosive. Especially once we throw some bloom on top. (Yes, bloom everything.)

    We’ll start from a base color (whichever you like), passed in as a uniform. To make sure each vertex gets a slightly different color, we’ll offset its hue a bit using values from the buffers — in this case, the velocity buffer.

    The hue function takes a color and a value to shift its hue, kind of like how offsetHSL works in THREE.Color.

    // Base emissive color
    const emissive_color = color(new THREE.Color("#0000ff"));
    
    const vel_at = velocity_storage_at.toAttribute();
    const hue_rotated = vel_at.mul(Math.PI*10.0);
    
    // Multiply by the length of the velocity buffer — this means the more movement,
    // the more the vertex color will shift
    const emission_factor = length(vel_at).mul(10.0);
    
    // Assign the color to the emissive node and boost it as much as you want
    mesh.material.emissiveNode = hue(emissive_color, hue_rotated).mul(emission_factor).mul(5.0);

    Finally! Let’s change the scene background color and add some fog:

    scene.fog = new THREE.Fog(new THREE.Color("#41444c"),0.0,8.5);
    scene.background = scene.fog.color;

    Now, let’s spice up the scene with a bit of post-processing — one of those things that got way easier to implement thanks to TSL.

    We’re going to include three effects: ambient occlusion, bloom, and noise. I always like adding some noise to what I do — it helps break up the flatness of the pixels a bit.

    I won’t go too deep into this part — I grabbed the AO setup from the Three.js examples.

    const   composer = new THREE.PostProcessing(renderer);
    const   scene_pass = pass(scene,camera);
    
    scene_pass.setMRT(mrt({
        output:output,
        normal:normalView
    }));
    
    const   scene_color = scene_pass.getTextureNode("output");
    const   scene_depth = scene_pass.getTextureNode("depth");
    const   scene_normal = scene_pass.getTextureNode("normal");
    
    const ao_pass = ao( scene_depth, scene_normal, camera);
    ao_pass.resolutionScale = 1.0;
    
    const   ao_denoise = denoise(ao_pass.getTextureNode(), scene_depth, scene_normal, camera ).mul(scene_color);
    const   bloom_pass = bloom(ao_denoise,0.3,0.2,0.1);
    const   post_noise = (mx_noise_float(vec3(uv(),time.mul(0.1)).mul(sizes.width),0.03)).mul(1.0);
    
    composer.outputNode = ao_denoise.add(bloom_pass).add(post_noise);

    Alright, that’s it amigas — thanks so much for reading, and I hope it was useful!



    Source link

  • Python – Solving 7 Queen Problem with Tabu Search – Useful code



    The n-queens problem is a classic puzzle that involves placing n chess queens on an n × n chessboard in such a way that no two queens threaten each other. In other words, no two queens should share the same row, column, or diagonal. This is a constraint-satisfaction problem (CSP) that does not define an explicit objective function. Let’s suppose we are attempting to solve a 7-queens problem using tabu search. In this problem, the number of collisions in the initial random configuration shown in figure 6.8a is 4: {Q1–Q2}, {Q2–Q6}, {Q4–Q5}, and {Q6–Q7}.

    The above is part of the book Optimization Algorithms by Alaa Khamis, which I have used as a stepping stone to make a YT video explaining the core of tabu search with the algorithm. The solution of the n-queens problem is actually interesting: the idea is to swap queens’ columns, whenever such a swap is allowed, until the constraints are satisfied. The “tabu tenure” is just a type of record that forbids a certain change for a number of moves after it has been carried out. E.g., once you swap the columns of 2 queens, you are not allowed to do the same swap again for the next 3 moves. This helps you avoid cycling.
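
    To make the idea concrete, here is a minimal Python sketch of the approach described above (a permutation board, pairwise column swaps, and a short tabu tenure). It is not the code from the video or the book, just an illustration:

    import random
    from itertools import combinations

    def collisions(board):
        """Count diagonal collisions; board[c] is the row of the queen in column c."""
        return sum(1 for c1, c2 in combinations(range(len(board)), 2)
                   if abs(board[c1] - board[c2]) == abs(c1 - c2))

    def tabu_search(n=7, tenure=3, max_iters=1000):
        board = list(range(n))
        random.shuffle(board)                 # permutation => no row/column conflicts
        best, best_cost = board[:], collisions(board)
        tabu = {}                             # (i, j) -> iteration until which the swap is tabu

        for it in range(max_iters):
            if best_cost == 0:
                break
            candidates = []
            for i, j in combinations(range(n), 2):
                board[i], board[j] = board[j], board[i]
                cost = collisions(board)
                board[i], board[j] = board[j], board[i]
                # Tabu moves are skipped unless they beat the best-so-far (aspiration)
                if tabu.get((i, j), -1) < it or cost < best_cost:
                    candidates.append((cost, i, j))
            cost, i, j = min(candidates)
            board[i], board[j] = board[j], board[i]
            tabu[(i, j)] = it + tenure        # forbid repeating this swap for `tenure` moves
            if cost < best_cost:
                best, best_cost = board[:], cost
        return best, best_cost

    print(tabu_search())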

    https://www.youtube.com/watch?v=m7uAw3cNMAM

    Github code:

    Thank you and have a nice day! 🙂



    Source link

  • How to automatically refresh configurations with Azure App Configuration in ASP.NET Core | Code4IT



    ASP.NET allows you to poll Azure App Configuration to always get the most updated values without restarting your applications. It’s simple, but you have to think thoroughly.


    In a previous article, we learned how to centralize configurations using Azure App Configuration, a service provided by Azure to share configurations in a secure way. Using Azure App Configuration, you’ll be able to store the most critical configurations in a single place and apply them to one or more environments or projects.

    We used a very simple example with a limitation: you have to restart your applications to make changes effective. In fact, ASP.NET connects to Azure App Config, loads the configurations in memory, and serves these configs until the next application restart.

    In this article, we’re gonna learn how to make configurations dynamic: by the end of this article, we will be able to see the changes to our configs reflected in our applications without restarting them.

    Since this one is a kind of improvement of the previous article, you should read it first.

    Let me summarize here the code showcased in the previous article. We have an ASP.NET Core API application whose only purpose is to return the configurations stored in an object, whose shape is this one:

    {
      "MyNiceConfig": {
        "PageSize": 6,
        "Host": {
          "BaseUrl": "https://www.mydummysite.com",
          "Password": "123-go"
        }
      }
    }
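
    The MyConfig class bound to this JSON isn’t shown in this excerpt; a minimal shape matching it could look like the following (property names mirror the JSON keys, the types are assumptions):

    public class MyConfig
    {
        public int PageSize { get; set; }
        public MyHost Host { get; set; }
    }

    public class MyHost
    {
        public string BaseUrl { get; set; }
        public string Password { get; set; }
    }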
    

    In the constructor of the API controller, I injected an IOptions<MyConfig> instance that holds the current data stored in the application.

     public ConfigDemoController(IOptions<MyConfig> config)
            => _config = config;
    

    The only HTTP Endpoint is a GET: it just accesses that value and returns it to the client.

    [HttpGet()]
    public IActionResult Get()
    {
        return Ok(_config.Value);
    }
    

    Finally, I created a new instance of Azure App Configuration, and I used a connection string to integrate Azure App Configuration with the existing configurations by calling:

    builder.Configuration.AddAzureAppConfiguration(ConnectionString);
    

    Now we can move on and make configurations dynamic.

    Sentinel values: a guard value to monitor changes in the configurations

    On Azure App Configuration, you have to update the configurations manually, one by one. Unfortunately, there is no way to update them in a single batch: you can import them in a batch, but you have to update them individually.

    Imagine that you have a service that accesses an external API whose BaseUrl and API Key are stored on Az App Configuration. We now need to move to another API: we then have to update both BaseUrl and API Key. The application is running, and we want to update the info about the external API. If we updated the application configurations every time something is updated on Az App Configuration, we would end up with an invalid state – for example, we would have the new BaseUrl and the old API Key.

    Therefore, we have to define a configuration value that acts as a sort of versioning key for the whole list of configurations. In Azure App Configuration’s jargon, it’s called Sentinel.

    A Sentinel is nothing but a version key: it’s a string value that the application uses to understand whether it needs to reload the whole list of configurations. Since it’s just a string, you can set any value, as long as it changes over time. My suggestion is to use the UTC date of the moment you updated the value, such as 202306051522. This way, in case of errors, you can tell when any of these values last changed (though not which ones), and, depending on the pricing tier you are using, you can compare the current values with the previous ones.
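
    If you want to generate such a timestamp-based value from code or from a deployment script, something like this would do (just one possible convention; any ever-changing string works):

    // Produces a sentinel value in the yyyyMMddHHmm format, e.g. 202306051522
    string sentinelValue = DateTime.UtcNow.ToString("yyyyMMddHHmm");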

    So, head back to the Configuration Explorer page and add a new value: I called it Sentinel.

    Sentinel value on Azure App Configuration

    As I said, you can use any value. For the sake of this article, I’m gonna use a simple number (just for simplicity).

    Define how to refresh configurations using ASP.NET Core app startup

    We can finally move to the code!

    If you recall, in the previous article we added a NuGet package, Microsoft.Azure.AppConfiguration.AspNetCore, and then we added Azure App Configuration as a configurations source by calling

    builder.Configuration.AddAzureAppConfiguration(ConnectionString);
    

    That instruction is used to load all the configurations, without managing polling and updates. Therefore, we must remove it.

    Instead of that instruction, add this other one:

    builder.Configuration.AddAzureAppConfiguration(options =>
    {
        options
        .Connect(ConnectionString)
        .Select(KeyFilter.Any, LabelFilter.Null)
        // Configure to reload configuration if the registered sentinel key is modified
        .ConfigureRefresh(refreshOptions =>
                  refreshOptions.Register("Sentinel", label: LabelFilter.Null, refreshAll: true)
            .SetCacheExpiration(TimeSpan.FromSeconds(3))
          );
    });
    

    Let’s deep dive into each part:

    options.Connect(ConnectionString) just tells ASP.NET that the configurations must be loaded from that specific connection string.

    .Select(KeyFilter.Any, LabelFilter.Null) loads all keys that have no Label;

    and, finally, the most important part:

    .ConfigureRefresh(refreshOptions =>
                refreshOptions.Register(key: "Sentinel", label: LabelFilter.Null, refreshAll: true)
          .SetCacheExpiration(TimeSpan.FromSeconds(3))
        );
    

    Here we are specifying that all values must be refreshed (refreshAll: true) when the key named Sentinel (key: "Sentinel") is updated. Those values are then cached for 3 seconds (SetCacheExpiration(TimeSpan.FromSeconds(3))).

    Here I used 3 seconds as a refresh time. This means that, if the application is used continuously, the application will poll Azure App Configuration every 3 seconds – it’s clearly a bad idea! So, pick the correct value depending on the change expectations. The default value for cache expiration is 30 seconds.

    Notice that the previous instruction adds Azure App Configuration to the Configuration object, and not as a service used by .NET. In fact, the method is builder.Configuration.AddAzureAppConfiguration. We need two more steps.

    First of all, add Azure App Configuration to the IServiceCollection object:

    builder.Services.AddAzureAppConfiguration();
    

    Finally, we have to add it to our existing middlewares by calling

    app.UseAzureAppConfiguration();
    

    The final result is this:

    public static void Main(string[] args)
    {
        var builder = WebApplication.CreateBuilder(args);
    
        const string ConnectionString = "......";
    
        // Load configuration from Azure App Configuration
        builder.Configuration.AddAzureAppConfiguration(options =>
        {
            options.Connect(ConnectionString)
                    .Select(KeyFilter.Any, LabelFilter.Null)
                    // Configure to reload configuration if the registered sentinel key is modified
                    .ConfigureRefresh(refreshOptions =>
                        refreshOptions.Register(key: "Sentinel", label: LabelFilter.Null, refreshAll: true)
                        .SetCacheExpiration(TimeSpan.FromSeconds(3)));
        });
    
        // Add the service to IServiceCollection
        builder.Services.AddAzureAppConfiguration();
    
        builder.Services.AddControllers();
        builder.Services.Configure<MyConfig>(builder.Configuration.GetSection("MyNiceConfig"));
    
        var app = builder.Build();
    
        // Add the middleware
        app.UseAzureAppConfiguration();
    
        app.UseHttpsRedirection();
    
        app.MapControllers();
    
        app.Run();
    }
    

    IOptionsMonitor: accessing and monitoring configuration values

    It’s time to run the project and look at the result: some of the values are coming from Azure App Configuration.

    Default config coming from Azure App Configuration

    Now we can update them: without restarting the application, update the PageSize value, and don’t forget to update the Sentinel too. Call again the endpoint, and… nothing happens! 😯

    This is because in our controller we are using IOptions<T> instead of IOptionsMonitor<T>. As we’ve learned in a previous article, IOptionsMonitor<T> is a singleton instance that always gets the most updated config values. It also emits an event when the configurations have been refreshed.

    So, head back to the ConfigDemoController, and replace the way we retrieve the config:

    [ApiController]
    [Route("[controller]")]
    public class ConfigDemoController : ControllerBase
    {
        private readonly IOptionsMonitor<MyConfig> _config;
    
        public ConfigDemoController(IOptionsMonitor<MyConfig> config)
        {
            _config = config;
            _config.OnChange(Update);
        }
    
        [HttpGet()]
        public IActionResult Get()
        {
            return Ok(_config.CurrentValue);
        }
    
        private void Update(MyConfig arg1, string? arg2)
        {
          Console.WriteLine($"Configs have been updated! PageSize is {arg1.PageSize}, " +
                    $" Password is {arg1.Host.Password}");
        }
    }
    

    When using IOptionsMonitor<T>, you can retrieve the current values of the configuration object by accessing the CurrentValue property. Also, you can define an event listener to be attached to the OnChange event.

    We can finally run the application and update the values on Azure App Configuration.

    Again, update one of the values, update the sentinel, and wait. After 3 seconds, you’ll see a message popping up in the console: it’s the text defined in the Update method.

    Then, call the application again (still without restarting it), and admire the updated values!

    You can see a live demo here:

    Demo of configurations refreshed dynamically

    As you can see, the first call after updating the Sentinel value still returns the old values. But in the meantime the values have been updated and the cache has expired, so the next call retrieves the fresh values from Azure.

    My 2 cents on timing

    As we’ve learned, the config values are stored in a memory cache with an expiration time. Every time the cache expires, we need to retrieve the configurations again from Azure App Configuration (in particular, by checking whether the Sentinel value has been updated in the meantime). Don’t underestimate the cache expiration value, as both short and long durations have pros and cons:

    • a short timespan keeps the values always up-to-date, making your application more reactive to changes. But it also means polling the Azure App Configuration endpoints very often, making your application busier and risking the request-count limitations;
    • a long timespan keeps your application more performant because there are fewer requests to the Configuration endpoints, but it also means that changes applied on Azure take a while to reach your application.

    There is also another issue with long timespans: if the same configurations are used by different services, you might end up in a dirty state. Say that you have UserService and PaymentService, and both use some configurations stored on Azure whose caching expiration is 10 minutes. Now, the following actions happen:

    1. UserService starts
    2. PaymentService starts
    3. Someone updates the values on Azure
    4. UserService restarts, while PaymentService doesn’t.

    We will end up in a situation where UserService has the most updated values, while PaymentService doesn’t. There will be a time window (in our example, up to 10 minutes) in which the configurations are misaligned.

    Also, take costs and limitations into consideration: with the Free tier you have 1,000 requests per day, while with the Standard tier you have 30,000 per hour per replica. Using the default cache expiration (30 seconds) in an application with a continuous flow of users means that you are gonna call the endpoint 2,880 times per day (2 times a minute × 1,440 minutes per day) – way more than what the Free tier allows.

    So, think thoroughly before choosing an expiration time!

    Further readings

    This article is a continuation of a previous one, and I suggest you read the other one to understand how to set up Azure App Configuration and how to integrate it in an ASP.NET Core API application in case you don’t want to use dynamic configuration.

    🔗 Azure App Configuration and ASP.NET Core API: a smart and secure way to manage configurations | Code4IT

    This article first appeared on Code4IT 🐧

    Also, we learned that using IOptions we are not getting the most updated values: in fact, we need to use IOptionsMonitor. Check out this article to understand the other differences in the IOptions family.

    🔗 Understanding IOptions, IOptionsMonitor, and IOptionsSnapshot in ASP.NET Core | Code4IT

    Finally, I briefly talked about pricing. As of July 2023, there are just 2 pricing tiers, with different limitations.

    🔗 App Configuration pricing | Microsoft Learn

    Wrapping up

    In my opinion, smart configuration handling is essential for the hard times when you have to understand why an error is happening only in a specific environment.

    Centralizing configurations is a good idea, as it allows developers to simulate a whole environment by just changing a few values on the application.

    Making configurations live without restarting your applications manually can be a good idea, but you have to analyze it thoroughly.

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧





    Source link

  • Advanced Integration Tests for .NET 7 API with WebApplicationFactory and NUnit | Code4IT



    Integration Tests are incredibly useful: a few Integration Tests are often more useful than lots of Unit Tests. Let’s learn some advanced capabilities of WebApplicationFactory.


    In a previous article, we learned a quick way to create Integration Tests for ASP.NET API by using WebApplicationFactory. That was a nice introductory article. But now we will delve into more complex topics and examples.

    In my opinion, a few Integration Tests and just the necessary number of Unit tests are better than hundreds of Unit Tests and no Integration Tests at all. In general, the Testing Diamond should be preferred over the Testing Pyramid (well, in most cases).

    In this article, we are going to create advanced Integration Tests by defining custom application settings, customizing dependencies to be used only during tests, defining custom logging, and performing complex operations in our tests.

    For the sake of this article, I created a sample API application that exposes one single endpoint whose purpose is to retrieve some info about the URL passed in the query string. For example,

    GET /SocialPostLink?uri=https%3A%2F%2Ftwitter.com%2FBelloneDavide%2Fstatus%2F1682305491785973760
    

    will return

    {
      "instanceName": "Real",
      "info": {
        "socialNetworkName": "Twitter",
        "sourceUrl": "https://twitter.com/BelloneDavide/status/1682305491785973760",
        "username": "BelloneDavide",
        "id": "1682305491785973760"
      }
    }
    

    For completeness, instanceName is a value coming from the appsettings.json file, while info is an object that holds some info about the social post URL passed as input.

    Internally, the code is using the Chain of Responsibility pattern: there is a handler that “knows” if it can handle a specific URL; if so, it just elaborates the input; otherwise, it calls the next handler.

    There is also a Factory that builds the chain, and finally, a Service that instantiates the Factory and then resolves the dependencies.

    As you can see, this solution can become complex. We could run lots of Unit Tests to validate that the Chain of Responsibility works as expected. We could even write a Unit Test suite for the Factory.

    Class Diagram

    But, at the end of the day, we don’t really care about the internal structure of the project: as long as it works as expected, we could even use a huge switch block (clearly, with all the consequences of this choice). So, let’s write some Integration Tests.

    How to create a custom WebApplicationFactory in .NET

    When creating Integration Tests for .NET APIs you have to instantiate a new instance of WebApplicationFactory, a class coming from the Microsoft.AspNetCore.Mvc.Testing NuGet Package.

    Since we are going to define it once and reuse it across all the tests, let’s create a new class that extends WebApplicationFactory, and add some custom behavior to it.

    public class IntegrationTestWebApplicationFactory : WebApplicationFactory<Program>
    {
    
    }
    

    Let’s focus on the Program class: as you can see, the WebApplicationFactory class requires an entry point. Generally speaking, it’s the Program class of our application.

    If you hover on WebApplicationFactory<Program> and hit CTRL+. on Visual Studio, the autocomplete proposes two alternatives: one is the Program class defined in your APIs, while the other one is the Program class defined in Microsoft.VisualStudio.TestPlatform.TestHost. Choose the one for your API application! The WebApplicationFactory class will then instantiate your API following the instructions defined in your Program class, thus resolving all the dependencies and configurations as if you were running your application locally.

    What to do if you don’t have the Program class? If you use top-level statements, you don’t have the Program class, because it’s “implicit”, so you cannot reference it directly. The solution is to create a new partial class named Program and leave it empty: this way, you have a class name that can be used to reference the API definition:

    public partial class Program { }
    

    Here you can override some definitions of the WebHost to be created by calling ConfigureWebHost:

    public class IntegrationTestWebApplicationFactory : WebApplicationFactory<Program>
    {
        protected override void ConfigureWebHost(IWebHostBuilder builder)
        {
              builder.ConfigureAppConfiguration((host, configurationBuilder) => { });
        }
    }
    

    How to use WebApplicationFactory in your NUnit tests

    It’s time to start working on some real Integration Tests!

    As we said before, we have only one HTTP endpoint, defined like this:

    
    private readonly ISocialLinkParser _parser;
    private readonly ILogger<SocialPostLinkController> _logger;
    private readonly IConfiguration _config;
    
    public SocialPostLinkController(ISocialLinkParser parser, ILogger<SocialPostLinkController> logger, IConfiguration config)
    {
        _parser = parser;
        _logger = logger;
        _config = config;
    }
    
    [HttpGet]
    public IActionResult Get([FromQuery] string uri)
    {
        _logger.LogInformation("Received uri {Uri}", uri);
        if (Uri.TryCreate(uri, new UriCreationOptions {  }, out Uri _uri))
        {
            var linkInfo = _parser.GetLinkInfo(_uri);
            _logger.LogInformation("Uri {Uri} is of type {Type}", uri, linkInfo.SocialNetworkName);
    
            var instance = new Instance
            {
                InstanceName = _config.GetValue<string>("InstanceName"),
                Info = linkInfo
            };
            return Ok(instance);
        }
        else
        {
            _logger.LogWarning("Uri {Uri} is not a valid Uri", uri);
            return BadRequest();
        }
    }
    

    We have 2 flows to validate:

    • If the input URI is valid, the HTTP Status code should be 200;
    • If the input URI is invalid, the HTTP Status code should be 400;

    We could simply write Unit Tests for this purpose, but let me write Integration Tests instead.

    First of all, we have to create a test class and create a new instance of IntegrationTestWebApplicationFactory. Then, we will create a new HttpClient every time a test is run that will automatically include all the services and configurations defined in the API application.

    public class ApiIntegrationTests : IDisposable
    {
        private IntegrationTestWebApplicationFactory _factory;
        private HttpClient _client;
    
        [OneTimeSetUp]
        public void OneTimeSetup() => _factory = new IntegrationTestWebApplicationFactory();
    
        [SetUp]
        public void Setup() => _client = _factory.CreateClient();
    
        public void Dispose() => _factory?.Dispose();
    }
    

    As you can see, the test class implements IDisposable so that we can call Dispose() on the IntegrationTestWebApplicationFactory instance.

    From now on, we can use the _client instance to work with the in-memory instance of the API.

    One of the best parts of it is that, since it’s an in-memory instance, we can even debug our API application. When you create a test and put a breakpoint in the production code, you can hit it and see the actual values as if you were running the application in a browser.

    Now that we have the instance of HttpClient, we can create two tests to ensure that the two cases we defined before are valid. If the input string is a valid URI, return 200:

    [Test]
    public async Task Should_ReturnHttp200_When_UrlIsValid()
    {
        string inputUrl = "https://twitter.com/BelloneDavide/status/1682305491785973760";
    
        var result = await _client.GetAsync($"SocialPostLink?uri={inputUrl}");
    
        Assert.That(result.StatusCode, Is.EqualTo(HttpStatusCode.OK));
    }
    

    Otherwise, return Bad Request:

    [Test]
    public async Task Should_ReturnBadRequest_When_UrlIsNotValid()
    {
        string inputUrl = "invalid-url";
    
        var result = await _client.GetAsync($"/SocialPostLink?uri={inputUrl}");
    
        Assert.That(result.StatusCode, Is.EqualTo(HttpStatusCode.BadRequest));
    }
    

    How to create test-specific configurations using InMemoryCollection

    WebApplicationFactory is highly configurable thanks to the ConfigureWebHost method. For instance, you can customize the settings injected into your services.

    Usually, you want to rely on the exact same configurations defined in your appsettings.json file to ensure that the system behaves correctly with the “real” configurations.

    For example, I defined the key “InstanceName” in the appsettings.json file, whose value is “Real”, and that value is used to create the returned Instance object. We can validate that the value is being read from that source thanks to this test:

    [Test]
    public async Task Should_ReadInstanceNameFromSettings()
    {
        string inputUrl = "https://twitter.com/BelloneDavide/status/1682305491785973760";
    
        var result = await _client.GetFromJsonAsync<Instance>($"/SocialPostLink?uri={inputUrl}");
    
        Assert.That(result.InstanceName, Is.EqualTo("Real"));
    }
    

    But some other times you might want to override a specific configuration key.

    The ConfigureAppConfiguration method allows you to customize how you manage Configurations by adding or removing sources.

    If you want to add some configurations specific to the WebApplicationFactory, you can use AddInMemoryCollection, a method that allows you to add configurations in a key-value format:

    protected override void ConfigureWebHost(IWebHostBuilder builder)
    {
        builder.ConfigureAppConfiguration((host, configurationBuilder) =>
        {
            configurationBuilder.AddInMemoryCollection(
                new List<KeyValuePair<string, string?>>
                {
                    new KeyValuePair<string, string?>("InstanceName", "FromTests")
                });
        });
    }
    

    Even if you had the InstanceName configured in your appsettings.json file, the value is now overridden and set to FromTests.

    You can validate this change by simply replacing the expected value in the previous test:

    [Test]
    public async Task Should_ReadInstanceNameFromSettings()
    {
        string inputUrl = "https://twitter.com/BelloneDavide/status/1682305491785973760";
    
        var result = await _client.GetFromJsonAsync<Instance>($"/SocialPostLink?uri={inputUrl}");
    
        Assert.That(result.InstanceName, Is.EqualTo("FromTests"));
    }
    

    If you also want to discard all the other existing configuration sources, you can call configurationBuilder.Sources.Clear() before AddInMemoryCollection and remove all the other existing configurations.

    How to set up custom dependencies for your tests

    Maybe you don’t want to resolve all the existing dependencies, but just a subset of them. For example, you might not want to call external APIs with a limited number of free API calls to avoid paying for the test-related calls. You can then rely on Stub classes that simulate the dependency by giving you full control of the behavior.

    We want to replace an existing class with a Stub one: we are going to create a stub class that will be used instead of SocialLinkParser:

    public class StubSocialLinkParser : ISocialLinkParser
    {
        public LinkInfo GetLinkInfo(Uri postUri) => new LinkInfo
        {
            SocialNetworkName = "test from stub",
            Id = "test id",
            SourceUrl = postUri,
            Username = "test username"
        };
    }
    

    We can then customize Dependency Injection to use StubSocialLinkParser in place of SocialLinkParser by specifying the dependency within the ConfigureTestServices method:

    builder.ConfigureTestServices(services =>
    {
        services.AddScoped<ISocialLinkParser, StubSocialLinkParser>();
    });
    

    Finally, we can create a method to validate this change:

    [Test]
    public async Task Should_UseStubName()
    {
        string inputUrl = "https://twitter.com/BelloneDavide/status/1682305491785973760";
    
        var result = await _client.GetFromJsonAsync<Instance>($"/SocialPostLink?uri={inputUrl}");
    
        Assert.That(result.Info.SocialNetworkName, Is.EqualTo("test from stub"));
    }
    

    How to create Integration Tests on specific resolved dependencies

    Now we are going to test that the SocialLinkParser does its job, regardless of the internal implementation. Right now we have used the Chain of Responsibility pattern, and we rely on the ISocialLinksFactory interface to create the correct sequence of handlers. But we don’t know in the future how we will define the code: maybe we will replace it all with a huge if-else sequence – the most important part is that the code works, regardless of the internal implementation.

    We can proceed in two ways: writing tests on the interface or writing tests on the concrete class.

    For the sake of this article, we are going to run tests on the SocialLinkParser class. Not the interface, but the concrete class. The first step is to add the class to the DI engine in the Program class:

    builder.Services.AddScoped<SocialLinkParser>();
    

    Now we can create a test to validate that it is working:

    [Test]
    public async Task Should_ResolveDependency()
    {
        using (var _scope = _factory.Services.CreateScope())
        {
            var service = _scope.ServiceProvider.GetRequiredService<SocialLinkParser>();
            Assert.That(service, Is.Not.Null);
            Assert.That(service, Is.AssignableTo<SocialLinkParser>());
        }
    }
    

    As you can see, we are creating an IServiceScope by calling _factory.Services.CreateScope(). Since we have to discard this scope after the test run, we have to place it within a using block. Then, we can create a new instance of SocialLinkParser by calling _scope.ServiceProvider.GetRequiredService<SocialLinkParser>() and create all the tests we want on the concrete implementation of the class.

    The benefit of this approach is that you have all the internal dependencies already resolved, without relying on mocks. You can then ensure that everything, from that point on, works as you expect.

    Here I created the scope within a using block. There is another approach that I prefer: create the scope instance in the SetUp method, and call Dispose() on it in the TearDown phase:

    protected IServiceScope _scope;
    protected SocialLinkParser _sut;
    private IntegrationTestWebApplicationFactory _factory;
    
    [OneTimeSetUp]
    public void OneTimeSetup() => _factory = new IntegrationTestWebApplicationFactory();
    
    [SetUp]
    public void Setup()
    {
        _scope = _factory.Services.CreateScope();
        _sut = _scope.ServiceProvider.GetRequiredService<SocialLinkParser>();
    }
    
    [TearDown]
    public void TearDown()
    {
        _sut = null;
        _scope.Dispose();
    }
    
    public void Dispose() => _factory?.Dispose();
    

    You can see an example of the implementation here in the SocialLinkParserTests class.

    Where are my logs?

    Sometimes you just want to see the logs generated by your application to help you debug an issue (yes, you can simply debug the application!). But, unless properly configured, the application logs will not be available to you.

    But you can easily add logs to the console by adding the Console sink in your ConfigureTestServices method:

    builder.ConfigureTestServices(services =>
    {
        services.AddLogging(builder => builder.AddConsole().AddDebug());
    });
    

    Now you will be able to see all the logs you generated in the Output panel of Visual Studio by selecting the Tests source:

    Logs appear in the Output panel of Visual Studio

    Beware that you are still reading the configurations for logging from the appsettings file! If you have specified in your project to log directly to a sink (such as DataDog or SEQ), your tests will send those logs to the specified sinks. Therefore, you should get rid of all the other logging sources by calling ClearProviders():

    services.AddLogging(builder => builder.ClearProviders().AddConsole().AddDebug());
    

    Full example

    In this article, we’ve configured many parts of our WebApplicationFactory. Here’s the final result:

    public class IntegrationTestWebApplicationFactory : WebApplicationFactory<Program>
    {
        protected override void ConfigureWebHost(IWebHostBuilder builder)
        {
            builder.ConfigureAppConfiguration((host, configurationBuilder) =>
            {
                // Remove other settings sources, if necessary
                configurationBuilder.Sources.Clear();
    
                //Create custom key-value pairs to be used as settings
                configurationBuilder.AddInMemoryCollection(
                    new List<KeyValuePair<string, string?>>
                    {
                        new KeyValuePair<string, string?>("InstanceName", "FromTests")
                    });
            });
    
            builder.ConfigureTestServices(services =>
            {
                //Add stub classes
                services.AddScoped<ISocialLinkParser, StubSocialLinkParser>();
    
                //Configure logging
                services.AddLogging(builder => builder.ClearProviders().AddConsole().AddDebug());
            });
        }
    }
    

    You can find the source code used for this article on my GitHub; feel free to download it and toy with it!

    Further readings

    This is an in-depth article about Integration Tests in .NET. I already wrote an article about it with a simpler approach that you might enjoy:

    🔗 How to run Integration Tests for .NET API | Code4IT

    This article first appeared on Code4IT 🐧

    As I often say, a few Integration Tests are often more useful than a ton of Unit Tests. Focusing on Integration Tests instead of Unit Tests has the benefit of ensuring that the system behaves correctly regardless of the internal implementation.

    In this article, I used the Chain of Responsibility pattern, so Unit Tests would be tightly coupled to the Handlers. If we decided to move to another pattern, we would have to delete all the existing tests and rewrite everything from scratch.

    Therefore, in my opinion, the Testing Diamond is often more efficient than the Testing Pyramid, as I explained here:

    🔗 Testing Pyramid vs Testing Diamond (and how they affect Code Coverage) | Code4IT

    Wrapping up

    This was a huge article, I know.

    Again, feel free to download and run the example code I shared on my GitHub.

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧





    Source link

  • Making Animations Smarter with Data Binding: Creating a Dynamic Gold Calculator in Rive



    Designing visuals that respond to real-time data or user input usually means switching between multiple tools — one for animation, another for logic, and yet another for implementation. This back-and-forth can slow down iteration, make small changes cumbersome, and create a disconnect between design and behavior.

    If you’ve spent any time with Rive, you know it’s built to close that gap. It lets you design, animate, and add interaction all in one place — and with features like state machines and data binding, you can make your animations respond directly to variables and user actions.

    To demonstrate how we use data binding in Rive, we built a small interactive project — a gold calculator. The task was simple: calculate the price of 5g and 10g gold bars, from 1 to 6 bars, using external data for the current gold price per gram. The gold price can be dynamic, typically coming from market data, but in this case we used a manually set value.

    Let’s break down how the calculator is built, step by step, starting with the layout and structure of the file.

    1. File Structure

    The layout is built for mobile, using a 440×900 px artboard. It’s structured around three layout groups:

    1. Title with gold price per gram
    2. Controls for choosing gold bar amount and weight
    3. Gold bar illustration

    The title section includes a text layout made of two text runs: one holds static text like the label, while the other is dynamic and connected to external data using data binding. This allows the gold price to update in real time when the data changes.

    In the controls section, we added plus and minus buttons to set the number of gold bars. These are simple layouts with icons inside. Below them, there are two buttons to switch between 5g and 10g options. They’re styled as rounded layouts with text inside.

    In the state machine, two timelines define the tab states: one for when the 10g button is active, using a solid black background and white text, and another for 5g, with reversed styles. Switching between these two updates the active tab visually.

    The total price section also uses two text runs — one for the currency icon and one for the total value. This value changes based on the selected weight and quantity, and is driven by data binding.

    2. Gold Bar Illustration

    The illustration is built using a nested artboard with a single vector gold bar. Inside the calculator layout, we duplicated this artboard to show anywhere from 1 to 6 bars depending on the user’s selection.

    Since there are two weight options, we made the gold bar resize visually — wider for 10g and narrower for 5g. To do that, we used N-Slices so that the edges stay intact and only the middle stretches. The sliced group sits inside a fixed-size layout, and the artboard is set to Hug its contents, which lets it resize automatically.

    We created two timelines to control the bar size: one where the width is 88px for 10g, and another at 74px for 5g. The switch between them is controlled by a number variable called Size-gram gold, where 5g is represented by 0 and 10g by 1, with 1 set as the default value.

    In the state machine, we connected this variable to the two timelines (the 10g timeline set as the default) — when it’s set to 0, the layout switches to 5g; when it’s 1, it switches to 10g. This makes the size update based on user selection without any manual switching. To keep the transition smooth, a 150ms animation duration is added.

    3. Visualizing 1–6 Gold Bars

    To show different quantities of gold bars in the main calculator layout, we created a tiered structure using three stacked layout groups with a vertical gap of -137. Each tier is offset vertically to form a simple pyramid arrangement, with everything positioned in the bottom-left corner of the screen.

    The first tier contains three duplicated nested artboards of a single gold bar. Each of these is wrapped in a Hug layout, which allows them to resize correctly based on the weight. The second tier includes two gold bars and an empty layout. This empty layout is used for spacing — it creates a visual shift when we need to display exactly four bars. The top tier has just one gold bar centered.

    All three tiers are bottom-centered, which keeps the pyramid shape consistent as bars are added or removed.

    To control how many bars are visible, we created 6 timelines in Animate mode — one for each quantity from 1 to 6. To hide or show each gold bar, two techniques are used: adjusting the opacity of the nested artboard (100% to show, 0% to hide) and modifying the layout that wraps it. When a bar is hidden, the layout is set to a fixed width of 0px; when visible, it uses Hug settings to restore its size automatically.

    Each timeline has its own combination of these settings depending on which bars should appear. For example, in the timeline with 4 bars, we needed to prevent the fourth bar from jumping to the center of the row. To keep it properly spaced, we assigned a fixed width of 80px to the empty layout used for shifting. On the other timelines, that same layout is hidden by setting its width to 0px.

    This system makes it easy to switch between quantities while preserving the visual structure.

    4. State Machine and Data Binding Setup

    With the visuals and layouts ready, we moved on to setting up the logic with data binding and state transitions.

    4.1 External Gold Price

    First, we created a number variable called Gold price gram. This value can be updated externally — for example, connected to a trading database — so the calculator always shows the current market price of gold. In our case, we used a static value of 151.75, which can also be updated manually by the user.

    To display this in the UI, we bound Text Run 2 in the title layout to this variable. A converter in the Strings tab called “Convert to String Price” is then created and applied to that text run. This converter formats the number correctly for display and will be reused later.

    4.2 Gold Bar Size Control

    We already had a number variable called Size-gram gold, which controls the weight of the gold bar used in the nested artboard illustration.

    In the Listeners panel, two listeners are created. The first is set to target the 5g tab, uses a Pointer Down action, and assigns Size-gram gold = 0. The second targets the 10g tab, also with a Pointer Down action, and assigns Size-gram gold = 1.

    Next, two timelines (one for each tab state) are brought into the state machine. The 10g timeline is used as the default state, with transitions added: one from 10g to 5g when Size-gram gold = 0, and one back to 10g when Size-gram gold = 1. Each transition has a duration of 100ms to keep the switching smooth.

    4.3 Gold Bar Quantity

    Next, we added another number variable, Quantity-gold, to track the number of selected bars. The default value is set to 1. In the Converters section, under Numeric, two “Calculate” converters are created: one that adds 1 (“+1”) and one that subtracts 1 (“-1”).

    In the Listeners panel, the plus button is assigned an action: Quantity-gold = Quantity-gold, using the “+1” converter. This way, clicking the plus button increases the count by 1. The same is done for the minus button, assigning Quantity-gold = Quantity-gold and attaching the “-1” converter. Clicking the minus button decreases the count by 1.

    Inside the state machine, six timelines are connected to represent bar counts from 1 to 6. Each transition uses the Quantity-gold value to trigger the correct timeline.

    By default, the plus button would keep increasing the value endlessly, but the goal is to limit the max to six bars. On the timeline where six gold bars are active, the plus button is disabled by setting its click area scale to 0 and lowering its opacity to create a “disabled” visual state. On all other timelines, those properties are returned to their active values.

    The same logic is applied to the minus button to prevent values lower than one. On the timeline with one bar, the button is disabled, and on all others, it returns to its active state.

    Almost there!

    4.4 Total Price Logic

    For the 5g bar price, we calculated it using this formula:

    Total Price = Gold price gram * Quantity-gold * 5

    In Converters → Numeric, a Formula converter was created and named Total Price 5g Formula to calculate the total price. In the example, it looked like:

    {{View Model Price/Gold price gram}}*{{View Model Price/Quantity-gold}}*5.0

    Since we needed to display this number as text, the Total Price number variable was also converted into a string. For that, we used an existing converter called “Convert to String Price.”

    To use both converters together, a Group of converters was created and named Total Price 5g Group, which included the Total Price 5g Formula converter followed by the Convert to String Price converter.

    Then, the text for the price variable was data bound by adding the Total Price variable in the Property field and selecting Total Price 5g Group in the Convert field.

    To handle the 10g case, which is double the price, two options are explored — either creating a new converter that multiplies by 10 or multiplying the existing result by 2.

    Eventually, a second text element is added along with a new group of converters specifically for 10g. This includes a new formula:

    Total Price = Gold price gram * Quantity-gold * 10

    A formula converter and a group with both that formula and the string converter are created and named “Total Price 10g Group.”

    Using timelines where the 5g and 10g buttons are in their active states, we adjusted the transparency of the text elements. This way, the total price connected to the 5g converters group is visible when the 5g button is selected, and the price from the 10g converters group appears when the 10g button is selected.

    It works perfectly.

    After this setup, the Gold price gram variable can be connected to live external data, allowing the gold price in the calculator to reflect the current market value in real time.
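
    As a rough sketch of that last step, this is how a live price could be pushed in from JavaScript, assuming the Rive web runtime (@rive-app/canvas) and its data-binding API; the file name, canvas id, state machine name, and the autoBind/viewModelInstance calls are assumptions, not part of the original project:

    import { Rive } from '@rive-app/canvas';
    
    const riveInstance = new Rive({
        src: 'gold_calculator.riv',                      // placeholder file name
        canvas: document.getElementById('rive-canvas'),  // placeholder canvas id
        stateMachines: 'State Machine 1',                // placeholder state machine name
        autoplay: true,
        autoBind: true, // assumption: auto-bind the default view model instance
        onLoad: () => {
            // Write a market price into the bound "Gold price gram" number variable.
            riveInstance.viewModelInstance.number('Gold price gram').value = 151.75;
        },
    });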

    Wrapping Up

    This gold calculator project is a simple example, but it shows how data binding in Rive can be used to connect visual design with real-time logic — without needing to jump between separate tools or write custom code. By combining state machines, variables, and converters, you can build interfaces that are not only animated but also smart and responsive.

    Whether you’re working on a product UI, a prototype, or a standalone interactive graphic, Rive gives you a way to bring together motion and behavior in a single space. If you’re already experimenting with Rive, data binding opens up a whole new layer of possibilities to explore.



    Source link

  • Pre-commit hooks with Husky.NET – build, format, and test your .NET application before a Git commit | Code4IT

    Pre-commit hooks with Husky.NET – build, format, and test your .NET application before a Git commit | Code4IT


    A Git commit represents the status of a system. Learn how to validate that your code builds, is well-formatted, and all the tests pass by adding a Git hook!

    If you need to run operations before completing a Git commit, you can rely on Git Hooks.

    Git hooks are scripts that run automatically whenever a particular event occurs in a Git repository. They let you customize Git’s internal behaviour and trigger customizable actions at key points in the development life cycle.

    Extending Git hooks allows you to plug in custom functionalities to the regular Git flow, such as Git message validation, code formatting, etc.

    I’ve already described how to use Husky with NPM, but here I’m gonna use Husky.NET, the version of Husky created for .NET-based applications.

    Git hooks: a way to extend Git operations

    As we said, Git hooks are actions that run during specific phases of Git operations.

    Git hooks fall into 4 categories:

    • client-side hooks related to the committing workflow: they execute when you run git commit on your local repository;
    • client-side hooks related to the email workflow: they are executed when running git am, which is a command that allows you to integrate mails and Git repositories (I’ve never used it. If you are interested in this functionality, here’s the official documentation);
    • client-side hooks related to other operations: these hooks run on your local repository when performing operations like git rebase;
    • server-side hooks: they run after a commit is received on the remote repository, and they can reject a git push operation.

    Let’s focus on the client-side hooks that run when you commit changes using git commit.

    • pre-commit: the first hook invoked by git commit (if you don’t use the -m flag, it is invoked before asking you to insert a commit message); it can be used to inspect the snapshot that is about to be committed.
    • prepare-commit-msg: invoked by git commit; it can be used to edit the default commit message when it is generated by an automated tool.
    • commit-msg: invoked by git commit; it can be used to validate or modify the commit message after it is entered by the user.
    • post-commit: invoked after the git commit execution has run correctly; it is generally used to fire notifications.

    How to install Husky.NET and its dependencies in a .NET Application

    Husky.NET must be installed in the root folder of the solution.

    You first have to create a tool-manifest file in the root folder by running:
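
    dotnet new tool-manifest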

    This command creates a file named dotnet-tools.json under the .config folder: here you can see the list of external tools used by dotnet.

    After running the command, you will see that the dotnet-tools.json file contains this element:

    {
      "version": 1,
      "isRoot": true,
      "tools": {}
    }
    

    Now you can add Husky as a dotnet tool by running:

    dotnet tool install Husky
    

    After running the command, the file will contain something like this:

    {
      "version": 1,
      "isRoot": true,
      "tools": {
        "husky": {
          "version": "0.6.2",
          "commands": ["husky"]
        }
      }
    }
    

    Now that we have added it to our dependencies, we can add Husky to an existing .NET application by running:
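
    dotnet husky install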

    If you open the root folder, you should be able to see these 3 folders:

    • .git, which contains the info about the Git repository;
    • .config that contains the description of the tools, such as dotnet-tools;
    • .husky that contains the files we are going to use to define our Git hooks.

    Finally, you can add a new hook by running, for example,

    dotnet husky add pre-commit -c "echo 'Hello world!'"
    git add .husky/pre-commit
    

    This command creates a new file, pre-commit (without file extension), under the .husky folder. By default, it appears like this:

    #!/bin/sh
    . "$(dirname "$0")/_/husky.sh"
    
    ## husky task runner examples -------------------
    ## Note : for local installation use 'dotnet' prefix. e.g. 'dotnet husky'
    
    ## run all tasks
    #husky run
    
    ### run all tasks with group: 'group-name'
    #husky run --group group-name
    
    ## run task with name: 'task-name'
    #husky run --name task-name
    
    ## pass hook arguments to task
    #husky run --args "$1" "$2"
    
    ## or put your custom commands -------------------
    #echo 'Husky.Net is awesome!'
    
    echo 'Hello world!'
    

    The default content is pretty useless; it’s time to customize that hook.

    Notice that the latest command has also generated a task-runner.json file; we will use it later.

    Your first pre-commit hook

    To customize the script, open the file located at .husky/pre-commit.

    Here, you can add whatever you want.

    In the example below, I run commands that compile the code, format the text (using dotnet format with the rules defined in the .editorconfig file), and then run all the tests.

    #!/bin/sh
    . "$(dirname "$0")/_/husky.sh"
    
    echo 'Building code'
    dotnet build
    
    echo 'Formatting code'
    dotnet format
    
    echo 'Running tests'
    dotnet test
    

    Then, add it to Git, and you are ready to go. 🚀 But wait…

    3 ways to manage dotnet format with Husky.NET

    There is a problem with the approach in the example above.

    Let’s simulate a usage flow:

    1. you modify a C# class;
    2. you run git commit -m "message";
    3. the pre-commit hook runs dotnet build;
    4. the pre-commit hook runs dotnet format;
    5. the pre-commit hook runs dotnet test;
    6. after the hooks, the commit is created.

    What is the final result?

    Since dotnet format modifies the source files, and given that the snapshot has already been created before executing the hook, all the modified files will not be part of the final commit!

    Also, dotnet format executes linting on every file in the solution, not only those that are part of the current snapshot. The operation might then take a lot of time, depending on the size of the repository, and most of the time, it will not update any file (because you’ve already formatted everything in a previous run).

    We have to work out a way to fix this issue. I’ll suggest three approaches.

    Include all the changes using Git add

    The first approach is quite simple: run git add . after dotnet format.
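
    In the hook script, that is just one extra line after the format step (a sketch based on the hook shown earlier):

    echo 'Formatting code'
    dotnet format
    
    echo 'Staging the formatted files'
    git add .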

    So, the flow becomes:

    1. you modify a C# class;
    2. you run git commit -m "message";
    3. the pre-commit hook runs dotnet build;
    4. the pre-commit hook runs dotnet format;
    5. the pre-commit hook runs git add .;
    6. the pre-commit hook runs dotnet test;
    7. Git creates the commit.

    This is the most straightforward approach, but it has some downsides:

    • dotnet format is executed on every file in the solution. The more your project grows, the slower your commits become;
    • git add . adds to the current snapshot all the files modified, even those you did not add to this commit on purpose (maybe because you have updated many files and want to create two distinct commits).

    So, it works, but we can do better.

    Execute a dry run of dotnet-format

    You can add the --verify-no-changes flag to the dotnet format command: it returns an error if at least one file needs to be updated because of a formatting rule.
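
    As a minimal sketch, the pre-commit file for this approach only swaps the format line for its dry-run variant:

    #!/bin/sh
    . "$(dirname "$0")/_/husky.sh"
    
    echo 'Building code'
    dotnet build
    
    echo 'Checking formatting'
    dotnet format --verify-no-changes
    
    echo 'Running tests'
    dotnet test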

    Let’s see how the flow changes if one file needs to be formatted.

    1. you modify a C# class;
    2. you run git commit -m "message";
    3. the pre-commit hook runs dotnet build;
    4. the pre-commit hook runs dotnet format --verify-no-changes;
    5. the pre-commit hook returns an error and aborts the operation;
    6. you run dotnet format on the whole solution to fix all the formatting issues;
    7. you run git add .;
    8. you run git commit -m "message";
    9. the pre-commit hook runs dotnet build;
    10. the pre-commit hook runs dotnet format --verify-no-changes. Now, there is nothing to format, and we can proceed;
    11. the pre-commit hook runs dotnet test;
    12. Git creates the commit.

    Notice that, this way, if there is something to format, the whole commit is aborted. You will then have to run dotnet format on the entire solution, fix the errors, add the changes to the snapshot, and restart the flow.

    It’s a longer process, but it allows you to have complete control over the formatted files.

    Also, you won’t risk including in the snapshot the files you want to keep staged in order to add them to a subsequent commit.

    Run dotnet-format only on the staged files using Husky.NET Task Runner

    The third approach is the most complex but with the best result.

    If you recall, during the initialization, Husky added two files in the .husky folder: pre-commit and task-runner.json.

    The key to this solution is the task-runner.json file. This file allows you to create custom scripts with a name, a group, the command to be executed, and its related parameters.

    By default, you will see this content:

    {
      "tasks": [
        {
          "name": "welcome-message-example",
          "command": "bash",
          "args": ["-c", "echo Husky.Net is awesome!"],
          "windows": {
            "command": "cmd",
            "args": ["/c", "echo Husky.Net is awesome!"]
          }
        }
      ]
    }
    

    To make sure that dotnet format runs only on the staged files, you must create a new task like this:

    {
      "name": "dotnet-format-staged-files",
      "group": "pre-commit-operations",
      "command": "dotnet",
      "args": ["format", "--include", "${staged}"],
      "include": ["**/*.cs"]
    }
    

    Here, we have specified a name, dotnet-format-staged-files, the command to run, dotnet, with some parameters listed in the args array. Notice that we can filter the list of files to be formatted by using the ${staged} parameter, which is populated by Husky.NET.

    We have also added this task to a group named pre-commit-operations that we can use to reference a list of tasks to be executed together.

    If you want to run a specific task, you can use dotnet husky run --name taskname. In our example, the command would be dotnet husky run --name dotnet-format-staged-files.

    If you want to run a set of tasks belonging to the same group, you can run dotnet husky run --group groupname. In our example, the command would be dotnet husky run --group pre-commit-operations.

    The last step is to call these tasks from within our pre-commit file. So, replace the old dotnet format command with one of the above commands.

    Final result and optimizations of the pre-commit hook

    Now that everything is in place, we can improve the script to make it faster.

    Let’s see which parts we can optimize.

    The first step is the build phase. For sure, we have to run dotnet build to see if the project builds correctly. You can consider adding the --no-restore flag to skip the restore step before building.

    Then we have the format phase: we can avoid formatting every file using one of the steps defined before. I’ll replace the plain dotnet format with the execution of the script defined in the Task Runner (it’s the third approach we saw).

    Then, we have the test phase. We can add both the --no-restore and the --no-build flag to the command since we have already built everything before. But wait! The format phase updated the content of our files, so we still have to build the whole solution. Unless we swap the build and the format phases.

    So, here we have the final pre-commit file:

    #!/bin/sh
    . "$(dirname "$0")/_/husky.sh"
    
    echo 'Ready to commit changes!'
    
    echo 'Format'
    
    dotnet husky run --name dotnet-format-staged-files
    
    echo 'Build'
    
    dotnet build --no-restore
    
    echo 'Test'
    
    dotnet test --no-restore
    
    echo 'Completed pre-commit changes'
    

    Yes, I know that when you run the dotnet test command, you also build the solution, but I prefer having two separate steps just for clarity!

    Ah, and don’t remove the #!/bin/sh at the beginning of the script!

    How to skip Git hooks

    To trigger the hook, just run git commit -m "message". Before completing the commit, the hook will run all the commands. If one of them fails, the whole commit operation is aborted.

    There are cases when you have to skip the validation. For example, you might have integration tests that rely on an external source that is currently offline: some tests will fail, and you will be able to commit your code only once the external system is working again.

    You can skip the commit validation by adding the --no-verify flag:

    git commit -m "my message" --no-verify
    

    Further readings

    Husky.NET is a porting of the Husky tool we already used in a previous article, using it as an NPM dependency. In that article, we also learned how to customize Conventional Commits using Git hooks.

    🔗 How to customize Conventional Commits in a .NET application using GitHooks | Code4IT

    As we learned, there are many more Git hooks that we can use. You can see the complete list on the Git documentation:

    🔗 Customizing Git – Git Hooks | Git docs

    This article first appeared on Code4IT 🐧

    Of course, if you want to get the best out of Husky.NET, I suggest you have a look at the official documentation:

    🔗 Husky.Net documentation

    One last thing: we installed Husky.NET using dotnet tools. If you want to learn more about this topic, I found an excellent article online that you might want to read:

    🔗 Using dotnet tools | Gustav Ehrenborg

    Wrapping up

    In this article, we learned how to create a pre-commit Git hook and validate all our changes before committing them to our Git repository.

    We also focused on the formatting of our code: how can we format only the files we have changed without impacting the whole solution?

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧





    Source link

  • How To Create Kinetic Image Animations with React-Three-Fiber

    How To Create Kinetic Image Animations with React-Three-Fiber



    For the past few months, I’ve been exploring different kinetic motion designs with text and images. The style looks very intriguing, so I decided to create some really cool organic animations using images and React Three Fiber.

    In this article, we’ll learn how to create the following animation using Canvas2D and React Three Fiber.

    Setting Up the View & Camera

    The camera’s field of view (FOV) plays a huge role in this project. Let’s keep it very low so it looks like an orthographic camera. You can experiment with different perspectives later. I prefer using a perspective camera over an orthographic one because we can always try different FOVs. For a more detailed implementation, check the source code.

    <PerspectiveCamera makeDefault fov={7} position={[0, 0, 70]} near={0.01} far={100000} />

    Setting Up Our 3D Shapes

    First, let’s create and position 3D objects that will display our images. For this example, we need to make 2 components:

    Billboard.tsx – This is a cylinder that will show our stack of images

    'use client';
    
    import { useRef } from 'react';
    import * as THREE from 'three';
    
    function Billboard({ radius = 5, ...props }) {
        const ref = useRef(null);
    
        return (
            <mesh ref={ref} {...props}>
                <cylinderGeometry args={[radius, radius, 2, 100, 1, true]} />
                <meshBasicMaterial color="red" side={THREE.DoubleSide} />
            </mesh>
        );
    }

    Banner.tsx – This is another cylinder that will work like a moving banner

    'use client';
    
    import * as THREE from 'three';
    import { useRef } from 'react';
    
    function Banner({ radius = 1.6, ...props }) {
        const ref = useRef(null);
    
        return (
            <mesh ref={ref} {...props}>
                <cylinderGeometry
                args={[radius, radius, radius * 0.07, radius * 80, radius * 10, true]}
                />
                <meshBasicMaterial
                color="blue"
                side={THREE.DoubleSide}
                />
            </mesh>
        );
    }
    
    export default Banner;

    Once we have our components ready, we can use them on our page.

    Now let’s build the whole shape:

    1. Create a wrapper group – We’ll make a group that wraps all our components. This will help us rotate everything together later.

    page.jsx

    'use client';
    
    import styles from './page.module.scss';
    import Billboard from '@/components/webgl/Billboard/Billboard';
    import Banner from '@/components/webgl/Banner/Banner';
    import { View } from '@/webgl/View';
    import { PerspectiveCamera } from '@react-three/drei';
    
    export default function Home() {
        return (
            <div className={styles.page}>
                <View className={styles.view} orbit={false}>
                <PerspectiveCamera makeDefault fov={7} position={[0, 0, 70]} near={0.01} far={100000} /> 
                    <group>
    
                    </group>
                </View>
            </div>
        );
    }

    2. Render Billboard and Banner components in the loop – Inside our group, we’ll create a loop to render our Billboards and Banners multiple times.

    page.jsx

    'use client';
    
    import styles from './page.module.scss';
    import Billboard from '@/components/webgl/Billboard/Billboard';
    import Banner from '@/components/webgl/Banner/Banner';
    import { View } from '@/webgl/View';
    import { PerspectiveCamera } from '@react-three/drei';
    
    export default function Home() {
        return (
            <div className={styles.page}>
                <View className={styles.view} orbit={false}>
                <PerspectiveCamera makeDefault fov={7} position={[0, 0, 70]} near={0.01} far={100000} />
                    <group>
                        {Array.from({ length: COUNT }).map((_, index) => [
                            <Billboard
                            key={`billboard-${index}`}
                            radius={5}
                            />,
                            <Banner
                            key={`banner-${index}`}
                            radius={5}
                            />,
                        ])}
                    </group>
                </View>
            </div>
        );
    }

    3. Stack them up – We’ll use the index from our loop and the y position to stack our items on top of each other. Here’s how it looks so far:

    page.jsx

    'use client';
    
    import styles from './page.module.scss';
    import Billboard from '@/components/webgl/Billboard/Billboard';
    import Banner from '@/components/webgl/Banner/Banner';
    import { View } from '@/webgl/View';
    import { PerspectiveCamera } from '@react-three/drei';
    
    const COUNT = 10;
    const GAP = 3.2;
    
    export default function Home() {
        return (
            <div className={styles.page}>
                <View className={styles.view} orbit={false}>
                <PerspectiveCamera makeDefault fov={7} position={[0, 0, 70]} near={0.01} far={100000} />
                    <group>
                        {Array.from({ length: COUNT }).map((_, index) => [
                            <Billboard
                            key={`billboard-${index}`}
                            radius={5}
                            position={[0, (index - (Math.ceil(COUNT / 2) - 1)) * GAP, 0]}
                            />,
                            <Banner
                            key={`banner-${index}`}
                            radius={5}
                            position={[0, (index - (Math.ceil(COUNT / 2) - 1)) * GAP - GAP * 0.5, 0]}
                            />,
                        ])}
                    </group>
                </View>
            </div>
        );
    }

    4. Add some rotation – Let’s rotate things a bit! First, I’ll hard-code the rotation of our banners to make them more curved and fit nicely with the Billboard component. We’ll also make the radius a bit bigger.

    page.jsx

    'use client';
    
    import styles from './page.module.scss';
    import Billboard from '@/components/webgl/Billboard/Billboard';
    import Banner from '@/components/webgl/Banner/Banner';
    import { View } from '@/webgl/View';
    import { PerspectiveCamera } from '@react-three/drei';
    
    const COUNT = 10;
    const GAP = 3.2;
    
    export default function Home() {
        return (
            <div className={styles.page}>
                <View className={styles.view} orbit={false}>
                <PerspectiveCamera makeDefault fov={7} position={[0, 0, 70]} near={0.01} far={100000} />
                    <group>
                        {Array.from({ length: COUNT }).map((_, index) => [
                            <Billboard
                            key={`billboard-${index}`}
                            radius={5}
                            position={[0, (index - (Math.ceil(COUNT / 2) - 1)) * GAP, 0]}
                            rotation={[0, index * Math.PI * 0.5, 0]} // <-- rotation of the billboard
                            />,
                            <Banner
                            key={`banner-${index}`}
                            radius={5}
                            rotation={[0, 0, 0.085]} // <-- rotation of the banner
                            position={[0, (index - (Math.ceil(COUNT / 2) - 1)) * GAP - GAP * 0.5, 0]}
                            />,
                        ])}
                    </group>
                </View>
            </div>
        );
    }

    5. Tilt the whole thing – Now let’s rotate our entire group to make it look like the Leaning Tower of Pisa.

    page.jsx

    'use client';
    
    import styles from './page.module.scss';
    import Billboard from '@/components/webgl/Billboard/Billboard';
    import Banner from '@/components/webgl/Banner/Banner';
    import { View } from '@/webgl/View';
    import { PerspectiveCamera } from '@react-three/drei';
    
    const COUNT = 10;
    const GAP = 3.2;
    
    export default function Home() {
        return (
            <div className={styles.page}>
                <View className={styles.view} orbit={false}>
                <PerspectiveCamera makeDefault fov={7} position={[0, 0, 70]} near={0.01} far={100000} />
                    <group rotation={[-0.15, 0, -0.2]}> {/* <-- rotate the group */}
                        {Array.from({ length: COUNT }).map((_, index) => [
                            <Billboard
                            key={`billboard-${index}`}
                            radius={5}
                            position={[0, (index - (Math.ceil(COUNT / 2) - 1)) * GAP, 0]}
                            rotation={[0, index * Math.PI * 0.5, 0]}
                            />,
                            <Banner
                            key={`banner-${index}`}
                            radius={5}
                            rotation={[0, 0, 0.085]}
                            position={[0, (index - (Math.ceil(COUNT / 2) - 1)) * GAP - GAP * 0.5, 0]}
                            />,
                        ])}
                    </group>
                </View>
            </div>
        );
    }

    6. Perfect! – Our 3D shapes are all set up. Now we can add our images to them.

    Creating a Texture from Our Images Using Canvas

    Here’s the cool part: we’ll put all our images onto a canvas, then use that canvas as a texture on our Billboard shape.

    To make this easier, I created some helper functions that simplify the whole process.

    getCanvasTexture.js

    import * as THREE from 'three';
    
    /**
    * Preloads an image and calculates its dimensions
    */
    async function preloadImage(imageUrl, axis, canvasHeight, canvasWidth) {
        const img = new Image();
    
        img.crossOrigin = 'anonymous';
    
        await new Promise((resolve, reject) => {
            img.onload = () => resolve();
            img.onerror = () => reject(new Error(`Failed to load image: ${imageUrl}`));
            img.src = imageUrl;
        });
    
        const aspectRatio = img.naturalWidth / img.naturalHeight;
    
        let calculatedWidth;
        let calculatedHeight;
    
        if (axis === 'x') {
            // Horizontal layout: scale to fit canvasHeight
            calculatedHeight = canvasHeight;
            calculatedWidth = canvasHeight * aspectRatio;
            } else {
            // Vertical layout: scale to fit canvasWidth
            calculatedWidth = canvasWidth;
            calculatedHeight = canvasWidth / aspectRatio;
        }
    
        return { img, width: calculatedWidth, height: calculatedHeight };
    }
    
    function calculateCanvasDimensions(imageData, axis, gap, canvasHeight, canvasWidth) {
        if (axis === 'x') {
            const totalWidth = imageData.reduce(
            (sum, data, index) => sum + data.width + (index > 0 ? gap : 0), 0);
    
            return { totalWidth, totalHeight: canvasHeight };
        } else {
            const totalHeight = imageData.reduce(
            (sum, data, index) => sum + data.height + (index > 0 ? gap : 0), 0);
    
            return { totalWidth: canvasWidth, totalHeight };
        }
    }
    
    function setupCanvas(canvasElement, context, dimensions) {
        const { totalWidth, totalHeight } = dimensions;
        const devicePixelRatio = Math.min(window.devicePixelRatio || 1, 2);
    
        canvasElement.width = totalWidth * devicePixelRatio;
        canvasElement.height = totalHeight * devicePixelRatio;
    
        if (devicePixelRatio !== 1) context.scale(devicePixelRatio, devicePixelRatio);
    
        context.fillStyle = '#ffffff';
        context.fillRect(0, 0, totalWidth, totalHeight);
    }
    
    function drawImages(context, imageData, axis, gap) {
        let currentX = 0;
        let currentY = 0;
    
        context.save();
    
        for (const data of imageData) {
            context.drawImage(data.img, currentX, currentY, data.width, data.height);
    
            if (axis === 'x') currentX += data.width + gap;
            else currentY += data.height + gap;
        }
    
        context.restore();
    }
    
    function createTextureResult(canvasElement, dimensions) {
        const texture = new THREE.CanvasTexture(canvasElement);
        texture.needsUpdate = true;
        texture.wrapS = THREE.RepeatWrapping;
        texture.wrapT = THREE.ClampToEdgeWrapping;
        texture.generateMipmaps = false;
        texture.minFilter = THREE.LinearFilter;
        texture.magFilter = THREE.LinearFilter;
    
        return {
            texture,
            dimensions: {
                width: dimensions.totalWidth,
                height: dimensions.totalHeight,
                aspectRatio: dimensions.totalWidth / dimensions.totalHeight,
            },
        };
    }
    
    export async function getCanvasTexture({
        images,
        gap = 10,
        canvasHeight = 512,
        canvasWidth = 512,
        canvas,
        ctx,
        axis = 'x',
    }) {
        if (!images.length) throw new Error('No images');
    
        // Create canvas and context if not provided
        const canvasElement = canvas || document.createElement('canvas');
        const context = ctx || canvasElement.getContext('2d');
    
        if (!context) throw new Error('No context');
    
        // Preload all images in parallel
        const imageData = await Promise.all(
            images.map((image) => preloadImage(image.url, axis, canvasHeight, canvasWidth))
        );
    
        // Calculate total canvas dimensions
        const dimensions = calculateCanvasDimensions(imageData, axis, gap, canvasHeight, canvasWidth);
    
        // Setup canvas
        setupCanvas(canvasElement, context, dimensions);
    
        // Draw all images
        drawImages(context, imageData, axis, gap);
    
        // Create and return texture result
        return createTextureResult(canvasElement, dimensions)
    }

    Then we can also create a useCollageTexture hook that we can easily use in our components.

    useCollageTexture.jsx

    import { useState, useEffect, useCallback } from 'react';
    import { getCanvasTexture } from '@/webgl/helpers/getCanvasTexture';
    
    export function useCollageTexture(images, options = {}) {
        const [textureResults, setTextureResults] = useState(null);
        const [isLoading, setIsLoading] = useState(true);
        const [error, setError] = useState(null);
    
        const { gap = 0, canvasHeight = 512, canvasWidth = 512, axis = 'x' } = options;
    
        const createTexture = useCallback(async () => {
            try {
                setIsLoading(true);
                setError(null);
    
                const result = await getCanvasTexture({
                    images,
                    gap,
                    canvasHeight,
                    canvasWidth,
                    axis,
                });
    
                setTextureResults(result);
            } catch (err) {
                setError(err instanceof Error ? err : new Error('Failed to create texture'));
            } finally {
                setIsLoading(false);
            }
        }, [images, gap, canvasHeight, canvasWidth, axis]);
    
        useEffect(() => {
            if (images.length > 0) createTexture();
        }, [images.length, createTexture]);
    
        return {
            texture: textureResults?.texture || null,
            dimensions: textureResults?.dimensions || null,
            isLoading,
            error,
        };
    }

    Adding the Canvas to Our Billboard

    Now let’s use our useCollageTexture hook on our page. We’ll create some simple loading logic. It takes a second to fetch all the images and put them onto the canvas. Then we’ll pass our texture and dimensions of canvas into the Billboard component.

    page.jsx

    'use client';
    
    import styles from './page.module.scss';
    import Billboard from '@/components/webgl/Billboard/Billboard';
    import Banner from '@/components/webgl/Banner/Banner';
    import Loader from '@/components/ui/modules/Loader/Loader';
    import images from '@/data/images';
    import { View } from '@/webgl/View';
    import { PerspectiveCamera } from '@react-three/drei';
    import { useCollageTexture } from '@/hooks/useCollageTexture';
    
    const COUNT = 10;
    const GAP = 3.2;
    
    export default function Home() {
        const { texture, dimensions, isLoading } = useCollageTexture(images); // <-- getting the texture and dimensions from the useCollageTexture hook
    
        if (isLoading) return <Loader />; // <-- showing the loader when the texture is loading
    
        return (
            <div className={styles.page}>
                <View className={styles.view} orbit={false}>
                    <PerspectiveCamera makeDefault fov={7} position={[0, 0, 100]} near={0.01} far={100000} />
                    <group rotation={[-0.15, 0, -0.2]}>
                        {Array.from({ length: COUNT }).map((_, index) => [
                            <Billboard
                                key={`billboard-${index}`}
                                radius={5}
                                rotation={[0, index * Math.PI * 0.5, 0]}
                                position={[0, (index - (Math.ceil(COUNT / 2) - 1)) * GAP, 0]}
                                texture={texture} // <--passing the texture to the billboard
                                dimensions={dimensions} // <--passing the dimensions to the billboard
                            />,
                            <Banner
                                key={`banner-${index}`}
                                radius={5.035}
                                rotation={[0, 0, 0.085]}
                                position={[
                                    0,
                                    (index - (Math.ceil(COUNT / 2) - 1)) * GAP - GAP * 0.5,
                                    0,
                                ]}
                            />,
                        ])}
                    </group>
                </View>
            </div>
        );
    }
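
    The images module isn’t shown above; based on how preloadImage reads image.url, it is assumed to export a plain array of objects with a url field, for example:

    // data/images.js (hypothetical content; any list of reachable image URLs works)
    const images = [
        { url: '/images/photo-01.jpg' },
        { url: '/images/photo-02.jpg' },
        { url: '/images/photo-03.jpg' },
    ];
    
    export default images;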

    Inside the Billboard component, we need to properly map this texture to make sure everything fits correctly. The width of our canvas will match the circumference of the cylinder, and we’ll center the y position of the texture. This way, all the images keep their resolution and don’t get squished or stretched.

    Billboard.jsx

    'use client';
    
    import * as THREE from 'three';
    import { useRef } from 'react';  
    
    function setupCylinderTextureMapping(texture, dimensions, radius, height) {
        const cylinderCircumference = 2 * Math.PI * radius;
        const cylinderHeight = height;
        const cylinderAspectRatio = cylinderCircumference / cylinderHeight;
    
        if (dimensions.aspectRatio > cylinderAspectRatio) {
            // Canvas is wider than cylinder proportionally
            texture.repeat.x = cylinderAspectRatio / dimensions.aspectRatio;
            texture.repeat.y = 1;
            texture.offset.x = (1 - texture.repeat.x) / 2;
        } else {
            // Canvas is taller than cylinder proportionally
            texture.repeat.x = 1;
            texture.repeat.y = dimensions.aspectRatio / cylinderAspectRatio;
        }
    
        // Center the texture
        texture.offset.y = (1 - texture.repeat.y) / 2;
    }
    
    function Billboard({ texture, dimensions, radius = 5, ...props }) {
        const ref = useRef(null);
    
        setupCylinderTextureMapping(texture, dimensions, radius, 2);
    
        return (
            <mesh ref={ref} {...props}>
                <cylinderGeometry args={[radius, radius, 2, 100, 1, true]} />
                <meshBasicMaterial map={texture} side={THREE.DoubleSide} />
            </mesh>
        );
    }
    
    export default Billboard;

    Now let’s animate them using the useFrame hook. The trick to animating these images is to just move the X offset of the texture. This gives us the effect of a rotating mesh, when really we’re just moving the texture offset.

    Billboard.jsx

    'use client';
    
    import * as THREE from 'three';
    import { useRef } from 'react';
    import { useFrame } from '@react-three/fiber';  
    
    function setupCylinderTextureMapping(texture, dimensions, radius, height) {
        const cylinderCircumference = 2 * Math.PI * radius;
        const cylinderHeight = height;
        const cylinderAspectRatio = cylinderCircumference / cylinderHeight;
    
        if (dimensions.aspectRatio > cylinderAspectRatio) {
            // Canvas is wider than cylinder proportionally
            texture.repeat.x = cylinderAspectRatio / dimensions.aspectRatio;
            texture.repeat.y = 1;
            texture.offset.x = (1 - texture.repeat.x) / 2;
        } else {
            // Canvas is taller than cylinder proportionally
            texture.repeat.x = 1;
            texture.repeat.y = dimensions.aspectRatio / cylinderAspectRatio;
        }
    
        // Center the texture
        texture.offset.y = (1 - texture.repeat.y) / 2;
    }
    
    function Billboard({ texture, dimensions, radius = 5, ...props }) {
        const ref = useRef(null);
    
        setupCylinderTextureMapping(texture, dimensions, radius, 2);
    
        useFrame((state, delta) => {
            if (texture) texture.offset.x += delta * 0.001;
        });
    
        return (
            <mesh ref={ref} {...props}>
                <cylinderGeometry args={[radius, radius, 2, 100, 1, true]} />
                <meshBasicMaterial map={texture} side={THREE.DoubleSide} />
            </mesh>
        );
    }
    
    export default Billboard;

    I think it would look even better if we made the back of the images a little darker. To do this, I created MeshImageMaterial – it’s just an extension of MeshBasicMaterial that makes our backface a bit darker.

    MeshImageMaterial.js

    import * as THREE from 'three';
    import { extend } from '@react-three/fiber';
    
    export class MeshImageMaterial extends THREE.MeshBasicMaterial {
        constructor(parameters = {}) {
            super(parameters);
            this.setValues(parameters);
        }
    
        onBeforeCompile = (shader) => {
            shader.fragmentShader = shader.fragmentShader.replace(
                '#include <color_fragment>',
                /* glsl */ `#include <color_fragment>
                if (!gl_FrontFacing) {
                vec3 blackCol = vec3(0.0);
                diffuseColor.rgb = mix(diffuseColor.rgb, blackCol, 0.7);
                }
                `
            );
        };
    }
    
    extend({ MeshImageMaterial });

    Billboard.jsx

    'use client';
    
    import * as THREE from 'three';
    import { useRef } from 'react';
    import { useFrame } from '@react-three/fiber';
    import '@/webgl/materials/MeshImageMaterial';
    
    function setupCylinderTextureMapping(texture, dimensions, radius, height) {
        const cylinderCircumference = 2 * Math.PI * radius;
        const cylinderHeight = height;
        const cylinderAspectRatio = cylinderCircumference / cylinderHeight;
    
        if (dimensions.aspectRatio > cylinderAspectRatio) {
            // Canvas is wider than cylinder proportionally
            texture.repeat.x = cylinderAspectRatio / dimensions.aspectRatio;
            texture.repeat.y = 1;
            texture.offset.x = (1 - texture.repeat.x) / 2;
        } else {
            // Canvas is taller than cylinder proportionally
            texture.repeat.x = 1;
            texture.repeat.y = dimensions.aspectRatio / cylinderAspectRatio;
        }
    
        // Center the texture
        texture.offset.y = (1 - texture.repeat.y) / 2;
    }
    
    function Billboard({ texture, dimensions, radius = 5, ...props }) {
        const ref = useRef(null);
    
        setupCylinderTextureMapping(texture, dimensions, radius, 2);
    
        useFrame((state, delta) => {
            if (texture) texture.offset.x += delta * 0.001;
        });
    
        return (
            <mesh ref={ref} {...props}>
                <cylinderGeometry args={[radius, radius, 2, 100, 1, true]} />
                <meshImageMaterial map={texture} side={THREE.DoubleSide} toneMapped={false} />
            </mesh>
        );
    }
    
    export default Billboard;

    And now we have our images moving around cylinders. Next, we’ll focus on banners (or marquees, whatever you prefer).

    Adding Texture to the Banner

    The last thing we need to fix is our Banner component. I wrapped it with this texture. Feel free to take it and edit it however you want, but remember to keep the proper dimensions of the texture.

    We simply import our texture using the useTexture hook, map it onto our material, and animate the texture offset just like we did in our Billboard component.

    Banner.jsx

    'use client';
    
    import * as THREE from 'three';
    import bannerTexture from '@/assets/images/banner.jpg';
    import { useTexture } from '@react-three/drei';
    import { useFrame } from '@react-three/fiber';
    import { useRef } from 'react';
    
    function Banner({ radius = 1.6, ...props }) {
        const ref = useRef(null);
    
        const texture = useTexture(bannerTexture.src);
        texture.wrapS = texture.wrapT = THREE.RepeatWrapping;
    
        useFrame((state, delta) => {
            if (!ref.current) return;
            const material = ref.current.material;
            if (material.map) material.map.offset.x += delta / 30;
        });
    
        return (
            <mesh ref={ref} {...props}>
                <cylinderGeometry
                    args={[radius, radius, radius * 0.07, radius * 80, radius * 10, true]}
                />
                <meshBasicMaterial
                    map={texture}
                    map-anisotropy={16}
                    map-repeat={[15, 1]}
                    side={THREE.DoubleSide}
                    toneMapped={false}
                />
            </mesh>
        );
    }
    
    export default Banner;

    Nice! Now we have something cool, but I think it would look even cooler if we replaced the backface with something different. Maybe a gradient? For this, I created another extension of MeshBasicMaterial called MeshBannerMaterial. As you probably guessed, we just put a gradient on the backface. That’s it! Let’s use it in our Banner component.

    We replace the MeshBasicMaterial with MeshBannerMaterial and now it looks like this!

    MeshBannerMaterial.js

    import * as THREE from 'three';
    import { extend } from '@react-three/fiber';
    
    export class MeshBannerMaterial extends THREE.MeshBasicMaterial {
        constructor(parameters = {}) {
            super(parameters);
            this.setValues(parameters);
    
            this.backfaceRepeatX = 1.0;
    
            if (parameters.backfaceRepeatX !== undefined)
                this.backfaceRepeatX = parameters.backfaceRepeatX;
        }
    
        onBeforeCompile = (shader) => {
            shader.uniforms.repeatX = { value: this.backfaceRepeatX * 0.1 };
            shader.fragmentShader = shader.fragmentShader
            .replace(
                '#include <common>',
                /* glsl */ `#include <common>
                uniform float repeatX;
    
                vec3 pal( in float t, in vec3 a, in vec3 b, in vec3 c, in vec3 d ) {
                    return a + b*cos( 6.28318*(c*t+d) );
                }
                `
            )
            .replace(
                '#include <color_fragment>',
                /* glsl */ `#include <color_fragment>
                if (!gl_FrontFacing) {
                diffuseColor.rgb = pal(vMapUv.x * repeatX, vec3(0.5,0.5,0.5),vec3(0.5,0.5,0.5),vec3(1.0,1.0,1.0),vec3(0.0,0.10,0.20) );
                }
                `
            );
        };
    }
    
    extend({ MeshBannerMaterial });

    Banner.jsx

    'use client';
    
    import * as THREE from 'three';
    import bannerTexture from '@/assets/images/banner.jpg';
    import { useTexture } from '@react-three/drei';
    import { useFrame } from '@react-three/fiber';
    import { useRef } from 'react';
    import '@/webgl/materials/MeshBannerMaterial';
    
    function Banner({ radius = 1.6, ...props }) {
        const ref = useRef(null);
    
        const texture = useTexture(bannerTexture.src);
        texture.wrapS = texture.wrapT = THREE.RepeatWrapping;
    
        useFrame((state, delta) => {
            if (!ref.current) return;
    
            const material = ref.current.material;
            if (material.map) material.map.offset.x += delta / 30;
        });
    
        return (
            <mesh ref={ref} {...props}>
                <cylinderGeometry
                    args={[radius, radius, radius * 0.07, radius * 80, radius * 10, true]}
                />
                <meshBannerMaterial
                    map={texture}
                    map-anisotropy={16}
                    map-repeat={[15, 1]}
                    side={THREE.DoubleSide}
                    toneMapped={false}
                    backfaceRepeatX={3}
                />
            </mesh>
        );
    }
    
    export default Banner;

    And now we have it ✨

    Check out the demo

    You can experiment with this method in lots of ways. For example, I created 2 more examples with shapes I made in Blender, and mapped canvas textures on them. You can check them out here:

    Final Words

    Check out the final versions of all demos:

    I hope you enjoyed this tutorial and learned something new!

    Feel free to check out the source code for more details!



    Source link

  • From Static to Dynamic: 3 Micro-Animations Every Web Developer Can Master with Rive

    From Static to Dynamic: 3 Micro-Animations Every Web Developer Can Master with Rive


    Interactive web animations have become essential for modern websites, but choosing the right implementation approach can be challenging. CSS, Video and JavaScript are the familiar methods and each certainly has its place in a developer’s toolkit. When you need your site to have unique custom interactions (while remaining light and performant, of course), that’s where Rive shines.

    Rive animations, whether vector or raster, look crisp at any size, are lightweight (often smaller than equivalent Lottie files), and can respond to user interactions and real-time data through a straightforward JavaScript API.

    This tutorial will walk you through Rive’s workflow and implementation process using three practical examples. We’ll build them step-by-step using a fictional smart plant care company called “TapRoot” as our case study, so you can see exactly how Rive fits into a real development process and decide if it’s right for your next project.

    There are countless ways to use Rive, but we’ll focus on these three patterns:

    1. Animated Hero Images create an immediate emotional connection and brand personality
    2. Interactive CTAs increase conversion rates by providing clear, satisfying feedback
    3. Flexible Layouts combine elements into an experience that works at any size

    Each pattern builds on the previous one, teaching you progressively more sophisticated Rive techniques while solving real-world UX challenges.

    Pattern 1: The Living Hero Image

    The Static Starting Point

    A static hero section for TapRoot could feature a photo of their smart plant pot with overlay text. It shows the product, but we can do better.

    Creating the Rive Animation

    Let’s create an animated version that transforms this simple scene into a revealing experience that literally shows what makes TapRoot “smarter than it looks.” The animation features:

    • Gently swaying leaves: Constant, subtle motion brings a sense of life to the page.
    • Interior-reveal effect: Hovering over the pot reveals the hidden root system and embedded sensors
    • Product Feature Callouts: Key features are highlighted with interactive callouts

    Although Rive is vector-based, you can also import JPG, PNG, and PSD files. With an embedded image, a mesh can be constructed and a series of bones can be bound to it. Animating the bones creates the subtle swaying motion of the leaves. We’ll loop it at a slow speed so the motion is noticeable, but not distracting.

    Adding Interactivity

    Next we’ll add a hover animation that reveals the inside of the pot. By clipping the image of the front of the pot to a rectangle, we can resize the shape to reveal the layers underneath. Using a joystick allows us to have an animation follow the cursor when it’s within the hit area of the pot and snap back to normal when the cursor leaves the area.

    Feature Callouts

    With a nested artboard, it is easy to build a single layout to create multiple versions of an element. In this case, a feature callout has an updated icon, title, and short description for three separate features.

    The Result

    What was once a simple product photo is now an interactive revelation of TapRoot’s hidden intelligence. The animation embodies the brand message—”smarter than it looks”—by literally revealing the sophisticated technology beneath a beautifully minimal exterior.

    Pattern 2: The Conversion-Boosting Interactive CTA

    Beyond the Basic Button

    Most CTAs are afterthoughts—a colored rectangle with text. But your CTA is often the most important element on your page. Let’s make it irresistible.

    The Static Starting Point

    <button class="cta-button">Get yours today</button>
    .cta-button {
      background: #4CAF50;
      color: white;
      padding: 16px 32px;
      border: none;
      border-radius: 8px;
      font-size: 18px;
      cursor: pointer;
      transition: background-color 0.3s;
    }
    
    .cta-button:hover {
      background: #45a049;
    }

    Looks like this:

    Gets the job done, but we can do better.

    The Rive Animation Design

    Our smart CTA tells a story in three states:

    1. Idle State: Clean, minimal button with an occasional “shine” animation
    2. Hover State: Fingerprint icon begins to follow the cursor
    3. Click State: An animated “tap” of the button
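
    On the implementation side, wiring a state-machine-driven button like this into a page takes only a few lines with the Rive JavaScript runtime. A minimal sketch, assuming the animation is exported as cta.riv with a state machine named “CTA” (the file path, canvas id, and state machine name are placeholders):

    import { Rive } from '@rive-app/canvas';
    
    const cta = new Rive({
        src: '/animations/cta.riv',                     // placeholder path to the exported file
        canvas: document.getElementById('cta-canvas'),  // placeholder canvas id
        stateMachines: 'CTA',                           // placeholder state machine name
        autoplay: true,
        // Match the drawing surface to the displayed canvas size once the file is ready.
        onLoad: () => cta.resizeDrawingSurfaceToCanvasSize(),
    });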

    Pattern 3: Flexible Layout

    Next we can combine the elements into a responsive animated layout that works on any device size. Rive’s layout features familiar row and column arrangements and lets you determine how your animated elements fit within areas as they resize.

    Check this out on the Rive Marketplace to dive into the file or remix it: https://rive.app/community/files/21264-39951-taproot-layout/

    Beyond These Three Patterns

    Once you’re comfortable with hero images, interactive CTAs, and flexible layouts, you can apply the same Rive principles to:

    • Loading states that tell stories while users wait
    • Form validation that guides users with gentle visual feedback
    • Data visualizations that reveal insights through motion
    • Onboarding flows that teach through interaction
    • Error states that maintain user confidence through friendly animation

    Your Next Steps

    1. Start Simple: Choose one existing static element on your site
    2. Design with Purpose: Every animation should solve a real user problem
    3. Test and Iterate: Measure performance and user satisfaction
    4. Explore Further: Check out the Rive Documentation and Community for inspiration

    Conclusion

    The web is becoming more interactive and alive. By understanding how to implement Rive animations—from X-ray reveals to root network interactions—you’re adding tools that create experiences users remember and share.

    The difference between a good website and a great one often comes down to these subtle details: the satisfying feedback of a button click, the smooth transition between themes, the curiosity sparked by hidden technology. These micro-interactions connect with users on an emotional level while providing genuine functional value.



    Source link

  • Coding a 3D Audio Visualizer with Three.js, GSAP & Web Audio API

    Coding a 3D Audio Visualizer with Three.js, GSAP & Web Audio API


    Sound is vibration, vision is vibration you can see. I’m always chasing the moment those waves overlap. For a recent Webflow & GSAP community challenge focusing on GSAP Draggable and Inertia Plugin, I decided to push the idea further by building a futuristic audio-reactive visualizer. The concept was to create a sci-fi “anomaly detector” interface that reacts to music in real time, blending moody visuals with sound.

    The concept began with a simple image in my mind: a glowing orange-to-white sphere sitting alone in a dark void, the core that would later pulse with the music. To solidify the idea, I ran this prompt through Midjourney: “Glowing orange and white gradient sphere, soft blurry layers, smooth distortion, dark black background, subtle film-grain, retro-analog vibe, cinematic lighting.” After a few iterations I picked the frame that felt right, gave it a quick color pass in Photoshop, and used that clean, luminous orb as the visual foundation for the entire audio-reactive build.

    Midjourney explorations

    The project was originally built as an entry for the Webflow × GSAP Community Challenge (Week 2: “Draggable & Inertia”), which encouraged the use of GSAP’s dragging and inertia capabilities. This context influenced the features: I made the on-screen control panels draggable with momentum, and even gave the 3D orb a subtle inertia-driven movement when “flung”. In this article, I’ll walk you through the entire process – from setting up the Three.js scene and analyzing audio with the Web Audio API, to creating custom shaders and adding GSAP animations and interactivity. By the end, you’ll see how code, visuals, and sound come together to create an immersive audio visualizer.

    Setting Up the Three.js Scene

    To build the 3D portion, I used Three.js to create a scene containing a dynamic sphere (the “anomaly”) and other visual elements. 

    We start with the usual Three.js setup: a scene, a camera, and a renderer. I went with a perspective camera to get a nice 3D view of our orb and placed it a bit back so the object is fully in frame. 

    An OrbitControls instance allows basic click-and-drag orbiting around the object (with some damping for smoothness). Here’s a simplified snippet of the initial setup:

    // Initialize Three.js scene, camera, renderer
    const scene = new THREE.Scene();
    const camera = new THREE.PerspectiveCamera(75, window.innerWidth/window.innerHeight, 0.1, 100);
    camera.position.set(0, 0, 10);  // camera back a bit from origin
    
    const renderer = new THREE.WebGLRenderer({ antialias: true });
    renderer.setSize(window.innerWidth, window.innerHeight);
    document.body.appendChild(renderer.domElement);
    
    // Add OrbitControls for camera rotation
    const controls = new THREE.OrbitControls(camera, renderer.domElement);
    controls.enableDamping = true;
    controls.dampingFactor = 0.1;
    controls.rotateSpeed = 0.5;
    controls.enableZoom = false; // lock zoom for a more fixed view

    Next, I created the anomaly object. This is the main feature: a spiky wireframe sphere that reacts to audio. Three.js provides shapes like SphereGeometry or IcosahedronGeometry that we can use for a sphere. I chose an icosahedron geometry because it gives an interesting multi-sided look and allows easy control of detail (via a subdivision level). The anomaly is actually composed of two overlapping parts (sketched in code after the list below):

    • Outer wireframe sphere: An IcosahedronGeometry with a custom ShaderMaterial that draws it as a glowing wireframe. This part will distort based on music (imagine it “vibrating” and morphing with the beat).
    • Inner glow sphere: A slightly larger SphereGeometry drawn with a semi-transparent, emissive shader (using the backside of the geometry) to create a halo or aura around the wireframe. This gives the orb a warm glow effect, like an energy field.
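
    Here is a minimal sketch of how those two parts might be assembled. The radius, subdivision level, and variable names are illustrative assumptions; outerMaterial and glowMaterial refer to the shader materials covered later in the article.

    const outerGeometry = new THREE.IcosahedronGeometry(2, 4);        // radius 2, subdivision level 4 (assumed values)
    const outerMesh = new THREE.Mesh(outerGeometry, outerMaterial);   // wireframe ShaderMaterial (see the shader section)
    
    const glowGeometry = new THREE.SphereGeometry(2.4, 64, 64);       // ~1.2x the outer radius for the halo
    const glowMesh = new THREE.Mesh(glowGeometry, glowMaterial);      // BackSide, additive halo material (see below)
    
    const anomaly = new THREE.Group();                                // group them so they move as one object
    anomaly.add(outerMesh, glowMesh);
    scene.add(anomaly);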

    I also added in some extra visuals: a field of tiny particles floating in the background (for a depth effect, like dust or sparks) and a subtle grid overlay in the UI (more on the UI later). The scene’s background is set to a dark color, and I layered a background image (the edited Midjourney visual) behind the canvas to create the mysterious-alien landscape horizon. This combination of 3D objects and 2D backdrop creates the illusion of a holographic display over a planetary surface.

    Integrating the Web Audio API for Music Analysis

    With the 3D scene in place, the next step was making it respond to music. This is where the Web Audio API comes in. I allowed the user to either upload an audio file or pick one of the four provided tracks. When the audio plays, we tap into the audio stream and analyze its frequencies in real-time using an AnalyserNode. The AnalyserNode gives us access to frequency data. This is a snapshot of the audio spectrum (bass, mids, treble levels, etc.) at any given moment, which we can use to drive animations.

    To set this up, I created an AudioContext and an AnalyserNode, and connected an audio source to it. If you’re using an <audio> element for playback, you can create a MediaElementSource from it and pipe that into the analyser. For example:

    // Create AudioContext and Analyser
    const audioContext = new (window.AudioContext || window.webkitAudioContext)();
    const analyser = audioContext.createAnalyser();
    analyser.fftSize = 2048;                  // Use an FFT size of 2048 for analysis
    analyser.smoothingTimeConstant = 0.8;     // Smooth out the frequencies a bit
    
    // Connect an audio element source to the analyser
    const audioElement = document.getElementById('audio-player');  // <audio> element
    const source = audioContext.createMediaElementSource(audioElement);
    source.connect(analyser);
    analyser.connect(audioContext.destination);  // connect to output so sound plays

    Here we set fftSize to 2048, which means the analyser will break the audio into 1024 frequency bins (frequencyBinCount is half of fftSize). We also set a smoothingTimeConstant to make the data less jumpy frame-to-frame. Now, as the audio plays, we can repeatedly query the analyser for data. The method analyser.getByteFrequencyData(array) fills an array with the current frequency magnitudes (0–255) across the spectrum. Similarly, getByteTimeDomainData gives waveform amplitude data. In our animation loop, I call analyser.getByteFrequencyData() on each frame to get fresh data:

    const frequencyData = new Uint8Array(analyser.frequencyBinCount);
    
    function animate() {
      requestAnimationFrame(animate);
    
      // ... update Three.js controls, etc.
      if (analyser) {
        analyser.getByteFrequencyData(frequencyData);
        // Compute an average volume level from frequency data
        let sum = 0;
        for (let i = 0; i < frequencyData.length; i++) {
          sum += frequencyData[i];
        }
        const average = sum / frequencyData.length;
        let audioLevel = average / 255;  // normalize to 0.0–1.0
        // Apply a sensitivity scaling (from a UI slider) 
        audioLevel *= (sensitivity / 5.0);
        // Now audioLevel represents the intensity of the music (0 = silence, ~1 = very loud)
      }
    
      // ... (use audioLevel to update visuals)
      renderer.render(scene, camera);
    }

    In my case, I also identified a “peak frequency” (the frequency bin with the highest amplitude at a given moment) and some other metrics just for fun, which I display on the UI (e.g. showing the dominant frequency in Hz, amplitude, etc., as “Anomaly Metrics”). But the key takeaway is the audioLevel – a value representing overall music intensity – which we’ll use to drive the 3D visual changes.
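
    For reference, that peak-frequency readout can be derived from the same frequencyData array. A minimal sketch, assuming the analyser and audioContext created earlier:

    // Find the loudest bin and convert its index to a frequency in Hz
    let peakIndex = 0;
    for (let i = 1; i < frequencyData.length; i++) {
      if (frequencyData[i] > frequencyData[peakIndex]) peakIndex = i;
    }
    // Each bin covers sampleRate / fftSize Hz of the spectrum
    const peakHz = peakIndex * (audioContext.sampleRate / analyser.fftSize);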

    Syncing Audio with Visuals: Once we have audioLevel, we can inject it into our Three.js world. I passed this value into the shaders as a uniform every frame, and also used it to tweak some high-level motion (like rotation speed). Additionally, GSAP animations were triggered by play/pause events (for example, a slight camera zoom when music starts, which we’ll cover next). The result is that the visuals move in time with the music: louder or more intense moments in the audio make the anomaly glow brighter and distort more, while quiet moments cause it to settle down.

    Creating the Audio-Reactive Shaders

    To achieve the dynamic look for the anomaly, I used custom GLSL shaders in the material. Three.js lets us write our own shaders via THREE.ShaderMaterial, which is perfect for this because it gives fine-grained control over vertex positions and fragment colors. This might sound difficult if you’re new to shaders, but conceptually we did two major things in the shader:

    1. Vertex Distortion with Noise: We displace the vertices of the sphere mesh over time to make it wobble and spike. I included a 3D noise function (Simplex noise) in the vertex shader – it produces a smooth pseudo-random value for any 3D coordinate. For each vertex, I calculate a noise value based on its position (plus a time factor to animate it). Then I move the vertex along its normal by an amount proportional to that noise. We also multiply this by our audioLevel and a user-controlled distortion factor. Essentially, when the music is intense (high audioLevel), the sphere gets spikier and more chaotic; when the music is soft or paused, the sphere is almost smooth.
    2. Fresnel Glow in Fragment Shader: To make the wireframe edges glow and fade realistically, I used a fresnel effect in the fragment shader. This effect makes surfaces more luminous at glancing angles. We calculate it by taking the dot product of the view direction and the vertex normal – it results in a value that’s small on edges (grazing angles) and larger on faces directly facing the camera. By inverting and exponentiating this, we get a nice glow on the outline of the sphere that intensifies at the edges. I modulated the fresnel intensity with the audioLevel as well, so the glow pulsates with the beat.

    Let’s look at a simplified version of the shader code for the outer wireframe sphere material:

    const outerMaterial = new THREE.ShaderMaterial({
      uniforms: {
        time:      { value: 0 },
        audioLevel:{ value: 0 },            // this will be updated each frame
        distortion:{ value: 1.0 },
        color:     { value: new THREE.Color(0xff4e42) }  // a reddish-orange base color
      },
      wireframe: true,
      transparent: true,
      vertexShader: `
        uniform float time;
        uniform float audioLevel;
        uniform float distortion;
        // (noise function omitted for brevity)
    
        void main() {
          // Start with the original position
          vec3 pos = position;
          // Calculate procedural noise value for this vertex (using its position and time)
          float noise = snoise(pos * 0.5 + vec3(0.0, 0.0, time * 0.3));
          // Displace vertex along its normal
          pos += normal * noise * distortion * (1.0 + audioLevel);
          // Standard transformation
          gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
        }
      `,
      fragmentShader: `
        uniform vec3 color;
        uniform float audioLevel;
        varying vec3 vNormal;
        varying vec3 vPosition;
        
        void main() {
          // Calculate fresnel (view-angle dependent) term
          vec3 viewDir = normalize(cameraPosition - vPosition);
          float fresnel = 1.0 - max(0.0, dot(viewDir, vNormal));
          fresnel = pow(fresnel, 2.0 + audioLevel * 2.0);
          // Make the fragment color brighter on edges (fresnel) and pulse it slightly with time
          float pulse = 0.8 + 0.2 * sin(time * 2.0);
          vec3 emissiveColor = color * fresnel * pulse * (1.0 + audioLevel * 0.8);
          // Alpha fade out a bit when audio is high (to make spikes more ethereal)
          float alpha = fresnel * (0.7 - audioLevel * 0.3);
          gl_FragColor = vec4(emissiveColor, alpha);
        }
      `
    });

    In this shader, snoise is a Simplex noise function (not shown above) producing values ~-1 to 1. The vertex shader uses it to offset each vertex (pos += normal * noise * …). We multiply the noise by (1.0 + audioLevel) so that when audioLevel rises, the displacement increases. The distortion uniform is controlled by a slider in the UI, so the user can manually dial the overall spikiness. The fragment shader calculates a fresnel factor to make the wireframe edges glow. Notice how audioLevel factors into the power and into the final color intensity – louder audio makes the fresnel exponent higher (sharper glow) and also increases brightness a bit. We also included a gentle pulsing (sin(time)) independent of audio, just to give a constant breathing motion.

    For the inner glow sphere, we used a separate ShaderMaterial: it’s basically a sphere drawn with side: THREE.BackSide (so we see the inner surface) and Additive Blending to give a blooming halo. Its fragment shader also uses a fresnel term, but with a much lower alpha so it appears as a soft haze around the orb. The inner sphere’s size is slightly larger (I used about 1.2× the radius of the outer sphere) so that the glow extends beyond the wireframe. When combined, the outer and inner shaders create the effect of a translucent, energy-filled orb whose surface ripples with music.
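
    Below is a minimal sketch of that inner glow material. The uniform names mirror the outer shader, but the constants and shader code are simplified assumptions rather than the exact production version:

    const glowMaterial = new THREE.ShaderMaterial({
      uniforms: {
        time:       { value: 0 },
        audioLevel: { value: 0 },
        color:      { value: new THREE.Color(0xff4e42) }
      },
      side: THREE.BackSide,               // draw the inside faces so the halo wraps around the orb
      blending: THREE.AdditiveBlending,   // add onto whatever is already behind it
      transparent: true,
      depthWrite: false,                  // keep the haze from occluding the wireframe
      vertexShader: `
        varying vec3 vNormal;
        varying vec3 vViewPosition;
        void main() {
          vNormal = normalize(normalMatrix * normal);
          vec4 mvPosition = modelViewMatrix * vec4(position, 1.0);
          vViewPosition = mvPosition.xyz;
          gl_Position = projectionMatrix * mvPosition;
        }
      `,
      fragmentShader: `
        uniform vec3 color;
        uniform float time;
        uniform float audioLevel;
        varying vec3 vNormal;
        varying vec3 vViewPosition;
        void main() {
          // Back faces have normals pointing away from the camera, so flip them for the fresnel term
          vec3 n = normalize(-vNormal);
          vec3 viewDir = normalize(-vViewPosition);
          float fresnel = 1.0 - max(0.0, dot(viewDir, n));   // ~1 at the silhouette, ~0 at the centre
          float pulse = 0.9 + 0.1 * sin(time * 2.0);         // gentle breathing, independent of audio
          float alpha = pow(fresnel, 3.0) * 0.25;            // low alpha = soft haze, strongest at the rim
          gl_FragColor = vec4(color * pulse * (1.0 + audioLevel), alpha);
        }
      `
    });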

    To tie it all together, every frame in the render loop I update the shader uniforms with the current time and audio level:

    // in the animation loop:
    outerMaterial.uniforms.time.value = elapsedTime;
    outerMaterial.uniforms.audioLevel.value = audioLevel;
    outerMaterial.uniforms.distortion.value = currentDistortion; 
    glowMaterial.uniforms.time.value = elapsedTime;
    glowMaterial.uniforms.audioLevel.value = audioLevel;

    The result is a 3D object that truly feels alive with the music: it oscillates, pulses, and glows in sync with whatever track is playing, even one you upload yourself.

    Animations and Interactions with GSAP

    With the visuals reacting to sound, I added GSAP to handle smooth animations and user interactions. GSAP is great for creating timeline sequences and tweening properties with easing, and it also comes with plugins that were perfect for this project: Draggable for click-and-drag UI, and InertiaPlugin for momentum. Best of all, every GSAP plugin is now completely free to use. Below are the key ways I used GSAP in the project:

    Intro Animation & Camera Movement: When the user selects a track and hits play, I trigger a brief “activation” sequence. This involves some text appearing in the “terminal” and a slight camera zoom-in toward the orb to signal that the system is online. The camera movement was done with a simple GSAP tween of the camera’s position. For example, I defined a default camera position and a slightly closer “zoomed” position. On play, I use gsap.to() to interpolate the camera position to the zoomed-in coordinates, and on pause/stop I tween it back out. GSAP makes this kind of 3D property animation straightforward:

    const defaultCameraPos = { x: 0, y: 0, z: 10 };
    const zoomedCameraPos = { x: 0, y: 0, z: 7 }; // move camera closer on zoom
    
    function zoomCameraForAudio(zoomIn) {
      const target = zoomIn ? zoomedCameraPos : defaultCameraPos;
      gsap.to(camera.position, {
        x: target.x,
        y: target.y,
        z: target.z,
        duration: 1.5,
        ease: "power2.inOut"
      });
    }
    
    // When audio starts:
    zoomCameraForAudio(true);
    // When audio ends or is stopped:
    zoomCameraForAudio(false);

    This smooth zoom adds drama when the music kicks in, drawing the viewer into the scene. The power2.inOut easing gives it a nice gentle start and stop. I also used GSAP timelines for any other scripted sequences (like fading out the “Analyzing…” overlay text after a few seconds, etc.), since GSAP’s timeline control is very handy for orchestrating multiple animations in order.

    Draggable UI Panels: The interface has a few UI components overlaying the 3D canvas – e.g. an “Anomaly Controls” panel (with sliders for rotation speed, distortion amount, etc.), an “Audio Spectrum Analyzer” panel (showing a bar graph of frequencies and track selection buttons), and a “System Terminal” readout (displaying log messages like a console). To make the experience playful, I made these panels draggable. Using GSAP’s Draggable plugin, I simply turned each .panel element into a draggable object:

    Draggable.create(".panel", {
      type: "x,y",
      bounds: "body",         // confine dragging within the viewport
      inertia: true,          // enable momentum after release
      edgeResistance: 0.65,   // a bit of resistance at the edges
      onDragStart: () => { /* bring panel to front, etc. */ },
      onDragEnd: function() {
        // Optionally, log the velocity or other info for fun
        console.log("Panel thrown with velocity:", this.getVelocity());
      }
    });

    Setting inertia: true means when the user releases a panel, it will continue moving in the direction they tossed it, gradually slowing to a stop (thanks to InertiaPlugin). This little touch makes the UI feel more tactile and real – you can flick the panels around and they slide with some “weight.” According to GSAP’s docs, Draggable will automatically handle the physics when inertia is enabled, so it was plug-and-play. I also constrained dragging within the body bounds so panels don’t get lost off-screen. Each panel has a clickable header (a drag handle area), set via the handle option, to restrict where a user can grab it. Under the hood, InertiaPlugin calculates the velocity of the drag and creates a tween that smoothly decelerates the element after you let go, mimicking friction.

    Interactive Orb Drag (Bonus): As a creative experiment, I even made the 3D anomaly orb itself draggable. This was a bit more involved since it’s not a DOM element, but I implemented it by raycasting for clicks on the 3D object and then rotating the object based on mouse movement. I applied a similar inertia effect manually: when you “throw” the orb, it keeps spinning and slowly comes to rest. This wasn’t using GSAP’s Draggable directly (since that works in screen space), but I did use the InertiaPlugin concept by capturing the drag velocity and then using an inertial decay on that velocity each frame. It added a fun way to interact with the visualizer – you can nudge the orb and see it respond physically. For example, if you drag and release quickly, the orb will continue rotating with momentum. This kind of custom 3D dragging is outside the scope of a basic tutorial, but it shows how you can combine your own logic with GSAP’s physics concepts to enrich interactions.
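
    Here is a hedged sketch of that manual fling logic, assuming the anomaly group from the earlier sketch; the velocity scaling and decay factor are illustrative values, not the exact ones used in the project.

    const raycaster = new THREE.Raycaster();
    const pointer = new THREE.Vector2();
    let dragging = false;
    let lastX = 0;
    let spinVelocity = 0;   // radians per frame around the Y axis
    
    renderer.domElement.addEventListener('pointerdown', (e) => {
      // Convert the click to normalized device coordinates and test against the orb
      pointer.set((e.clientX / window.innerWidth) * 2 - 1, -(e.clientY / window.innerHeight) * 2 + 1);
      raycaster.setFromCamera(pointer, camera);
      if (raycaster.intersectObject(anomaly, true).length > 0) {
        dragging = true;          // (in practice you would likely pause OrbitControls here)
        lastX = e.clientX;
      }
    });
    
    window.addEventListener('pointermove', (e) => {
      if (!dragging) return;
      const deltaX = e.clientX - lastX;
      lastX = e.clientX;
      spinVelocity = deltaX * 0.005;        // capture the drag velocity
      anomaly.rotation.y += spinVelocity;   // rotate while dragging
    });
    
    window.addEventListener('pointerup', () => { dragging = false; });
    
    // Called from the render loop: let the captured velocity decay, like friction
    function applyOrbInertia() {
      if (!dragging && Math.abs(spinVelocity) > 0.0001) {
        anomaly.rotation.y += spinVelocity;
        spinVelocity *= 0.95;               // inertial decay each frame
      }
    }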

    GSAP Draggable and Inertia in action

    In summary, GSAP handles all the non-audio animations: the camera moves, panel drags, and little transitions in the UI. The combination of sound-reactive shader animations (running every frame based on audio data) and event-based GSAP tweens (triggered on user actions or certain times) gives a layered result where everything feels responsive and alive.

    UI and Atmosphere

    Finally, a few words about the surrounding UI/atmosphere which glue the experience together. The visualizer’s style was inspired by sci-fi control panels, so I leaned into that:

    Control Panels and Readouts: I built the overlay UI with HTML/CSS, keeping it minimalistic (just semi-transparent dark panels with light text and a few sliders/buttons). Key controls include rotation speed (how fast the orb spins), resolution (tessellation level of the icosahedron mesh), distortion amount, audio reactivity (scaling of audio impact), and sensitivity (which adjusts how the audio’s volume is interpreted). Changing these in real-time immediately affects the Three.js scene – for example, dragging the “Resolution” slider rebuilds the icosahedron geometry with more or fewer triangles, which is a cool way to see the orb go from coarse to finely subdivided. The “Audio Spectrum Analyzer” panel displays a classic bar graph of frequencies (drawn on a canvas using the analyser data) so you have a 2D visualization accompanying the 3D one. There’s also a console-style terminal readout that logs events (like “AUDIO ANALYSIS SYSTEM INITIALIZED” or the velocity of drags in a playful GSAP log format) to reinforce the concept of a high-tech system at work.
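
    As one example of that real-time wiring, rebuilding the orb when the “Resolution” slider moves can look roughly like this (the slider id and the outerMesh variable are assumptions):

    const resolutionSlider = document.getElementById('resolution-slider');   // hypothetical slider element
    resolutionSlider.addEventListener('input', (event) => {
      const detail = parseInt(event.target.value, 10);        // subdivision level, e.g. 1–6
      outerMesh.geometry.dispose();                           // free the old geometry on the GPU
      outerMesh.geometry = new THREE.IcosahedronGeometry(2, detail);
    });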

    Design elements: To boost the sci-fi feel, I added a subtle grid overlay across the whole screen. This was done with pure CSS – a pair of repeating linear gradients forming horizontal and vertical lines (1px thin, very transparent) over a transparent background. It’s barely noticeable but gives a technical texture, especially against the glow of the orb. I also added some drifting ambient particles (tiny dots) floating slowly in the background, implemented as simple divs animated with JavaScript. They move in pseudo-random orbits.
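
    Those drifting dots can be driven by a few lines of JavaScript; this is a rough sketch with assumed counts, radii, and class names rather than the project’s exact implementation:

    const particleCount = 40;                                    // assumed count
    for (let i = 0; i < particleCount; i++) {
      const dot = document.createElement('div');
      dot.className = 'ambient-particle';                        // styled in CSS as a tiny translucent dot
      document.body.appendChild(dot);
    
      const centerX = Math.random() * window.innerWidth;         // orbit centre
      const centerY = Math.random() * window.innerHeight;
      const radius = 20 + Math.random() * 60;                    // orbit radius in px
      const speed = 0.0002 + Math.random() * 0.0005;             // radians per millisecond
      const phase = Math.random() * Math.PI * 2;
    
      const drift = (now) => {
        const angle = phase + now * speed;
        dot.style.transform =
          `translate(${centerX + Math.cos(angle) * radius}px, ${centerY + Math.sin(angle) * radius}px)`;
        requestAnimationFrame(drift);
      };
      requestAnimationFrame(drift);
    }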

    Soundtrack: I curated three atmospheric and moody tracks, along with one of my own unreleased tracks under my music alias LXSTNGHT. My track was produced in Ableton and is still unfinished. The end result is an experience where design, code, and music production collide in real time.

    Bringing all these elements together, the final result is an interactive art piece: you load a track, the “Audio ARK” system comes online with a flurry of text feedback, the ambient music starts playing, and the orb begins to pulse and mutate in sync with the sound. You can tweak controls or toss around panels (or the orb itself) to explore different visuals.

    Final result

    The combination of Three.js (for rendering and shader effects), Web Audio API (for sound analysis), and GSAP (for polished interactions) showcases how creative coding tools can merge to produce an immersive experience that engages multiple senses.

    And that’s a wrap, thanks for following along!



    Source link

  • Top 10 Cloud Security Challenges in 2025 And How to Solve Them with Seqrite

    Top 10 Cloud Security Challenges in 2025 And How to Solve Them with Seqrite


    In today’s world, organizations are rapidly embracing cloud security to safeguard their data and operations. However, as cloud adoption grows, so do the risks. In this post, we highlight the top cloud security challenges and show how Seqrite can help you tackle them with ease.

    1.    Misconfigurations

    One of the simplest yet most dangerous mistakes is misconfiguring cloud workloads: think storage buckets left public, weak IAM settings, or missing encryption. Cybercriminals actively scan for these mistakes. A small misconfiguration can lead to significant data leakage or, in the worst case, ransomware deployment. Seqrite Endpoint Protection Cloud ensures your cloud environment adheres to best-practice security settings before threats even strike.

    2.    Shared Responsibility Confusion

    The cloud model operates on shared responsibility: providers secure infrastructure, you manage your data and configurations. Too many teams skip this second part. Inadequate control over access, authentication, and setup drives serious risks. With Seqrite’s unified dashboard for access control, IAM, and policy enforcement, you stay firmly in control without getting overwhelmed.

    3.    Expanded Attack Surface

    More cloud services, more code, more APIs, more opportunities for attacks. Whether it’s serverless functions or public API endpoints, the number of access points grows quickly. Seqrite tackles this with integrated API scanning, vulnerability assessment, and real-time threat detection. Every service, even ephemeral ones, is continuously monitored.

    4.    Unauthorized Access & Account Hijacking

    Attackers often gain entry via stolen credentials, especially in shared or multi-cloud environments. Once inside, they move laterally and hijack more resources. Seqrite’s multi-factor authentication, adaptive risk scoring, and real-time anomaly detection lock out illicit access and alert you instantly.

    5.    Insufficient Data Encryption

    Unencrypted data, whether at rest or in transit, is a gold mine for attackers. Industries with sensitive or regulated information, like healthcare or finance, simply can’t afford this. Seqrite ensures enterprise-grade encryption everywhere you store or transmit data and handles key management so that it’s secure and hassle-free.

    6.    Poor Visibility and Monitoring

    Without centralized visibility, security teams rely on manual cloud consoles and piecemeal logs. That slows response and leaves gaps. Seqrite solves this with a unified monitoring layer that aggregates logs and events across all your cloud environments. You get complete oversight and lightning-fast detection.

    7.     Regulatory Compliance Pressures

    Compliance with GDPR, HIPAA, PCI-DSS, DPDPA and other regulations is mandatory—but complex in multi-cloud environments. Seqrite Data Privacy simplifies compliance with continuous audits, policy enforcement, and detailed reports, helping you reduce audit stress and regulatory risk.

    8.    Staffing & Skills Gap

    Hiring cloud-native, security-savvy experts is tough. Many teams lack the expertise to monitor and secure dynamic cloud environments. Seqrite’s intuitive interface, automation, and policy templates remove much of the manual work, allowing lean IT teams to punch above their weight.

    9.    Multi-cloud Management Challenges

    Working across AWS, Azure, Google Cloud and maybe even private clouds? Each has its own models and configurations. This fragmentation creates blind spots and policy drift. Seqrite consolidates everything into one seamless dashboard, ensuring consistent cloud security policies across all environments.

    10.  Compliance in Hybrid & Multi-cloud Setups

    Hybrid cloud setups introduce additional risks: cross-environment data flows, networking complexities, and inconsistent controls. Seqrite supports consistent security policy application across on-premises, private clouds, and public clouds, no matter where a workload lives.

    Bring in Seqrite to keep your cloud journey safe, compliant, and hassle-free.

     



    Source link