Blog

  • 10 underestimated tasks to do before your next virtual presentation | Code4IT


    When giving a talk, the audience’s experience is as important as the content. They must stay focused on what you say, not get distracted by external noise. So, here are 10 tips to rock your next virtual talk.


    More and more developers want to become tech speakers too. Every day we can see dozens of meetups, live streams, and YouTube videos by developers from all over the world. But regardless of the topic and the type of talk you’re giving, there are a few tips you should keep in mind to nail the execution.

    These tips are not about the content, but about the presentation itself. So consider re-reading this checklist about 30 minutes before your next virtual conference.

    1- Hide desktop icons

    Many of you have lots of icons on your desktop, right? Me too. I often save temporary files on my desktop (which I always forget to move or delete), along with many program icons, like Postman, Fiddler, Word, and so on.

    They are just a distraction to your audience. You should keep the desktop as clean as possible.

    You can do it in 2 ways: hide all the icons (on Windows: right-click > View > untick Show desktop icons) or just remove the ones that are not necessary.

    The second option is better if you have lots of content to show from different sources, like images, plots, demos with different tools, and so on.

    If you have everything under a single folder, you can simply hide all icons and pin that folder on Quick Access.

    2- Choose a neutral desktop background

    Again, your audience should focus on your talk, not on your desktop. So just remove funny or distracting background images.

    Even more so if you use memes or family photos as your desktop background.

    A good idea is to create a custom desktop background for the event you are participating in: a simple image with the name of the talk, your name, and your social contacts.

    A messy background is cool, but distracts the audience

    3- Mute your phone

    Avoid all possible distractions: WhatsApp notifications, calls from call centres, alarm clocks you forgot to turn off…

    So, just use Airplane mode.

    4- Remove useless bookmarks (or use a different browser)

    Just like desktop icons, bookmarks can distract your audience.

    You don’t want to show everyone which social networks you use, which projects you’re currently working on, and other private info about you.

    A good alternative is to use a different browser. But remember to do a rehearsal with that browser: some JavaScript and CSS features are not available in every browser, so don’t take anything for granted.

    5- Close background processes

    What if you get an awkward message on Skype or Slack while you’re sharing your screen?

    So, remember to close all useless background processes: all the chats (Skype, Discord, Telegram…) and all the backup platforms (OneDrive, Dropbox, and so on).

    The risk: unwanted notifications appearing while you share your screen. Even worse, all those programs consume network bandwidth, CPU, and memory: shutting them down frees up resources for the other applications and makes everything run smoother.

    6- Check font size and screen resolution

    You don’t know which device your audience will use. Some of them will watch your talk on a smartphone, others on a 60″ TV.

    So, even if you’re used to small fonts and icons, make everything bigger. Start with the screen resolution; once that is fine, increase the font size for both your slides and your IDE.

    Make sure everyone can read it. If you can, during rehearsals share your screen to both a smartphone and a big TV, and find the right balance.

    7- Disable dark mode

    Accessibility is key, even more so for virtual events, and not everyone sees things the way you do. So, switch to light mode everything that natively supports it: IDEs, websites, tools.

    8- Check mic volume

    This is simple: if your mic volume is too low, your audience won’t hear a word you say. So, instead of shouting for an hour, just move your mic closer or turn up its volume.

    9- Use ZoomIt to draw on your screen

    «Ok, now, I click on this button on the top-left corner with the Home icon».

    How many times have you heard this phrase? It’s not wrong to say so, but you can simply show it. Remember, show, don’t tell!

    For Windows, you can install a small tool, ZoomIt, that allows you to draw lines, arrows, and shapes on your screen.

    You can read more on this page by Microsoft, where you can find the download file, some shortcuts, and more info.

    So, download it, try out some shortcuts (e.g., R, G, or B to switch to a red, green, or blue pen, and hold Ctrl + Shift to draw an arrow), and use it to help your audience see what you’re pointing at with your mouse.

    With ZoomIt you can draw lines and rectangles on your screen

    10- Have a backup in case of network failures

    Your internet connection goes down during the live event. First reaction: shock. But then you remember you have everything under control: you can use your smartphone as a hotspot and carry on with your talk over that connection. So, always have a plan B.

    And what if the site you’re showing in your demos goes down? Say you’re explaining what Azure Functions are, and suddenly the Azure Dashboard becomes unavailable. How can you prevent this situation?

    You can’t. But you can have a backup plan: save screenshots and screencasts, and show them if you cannot access the original sites.

    Wrapping up

    We’ve seen that there are lots of things you can do to improve the quality of your virtual talks. If you have more tips, share them in the comment section below or in this discussion on Twitter.

    Giving your first talks is really challenging, I know. But it’s worth a try. If you want to read more about how to get ready for it, here’s a recap of what I learned after my very first public speech.






  • 14 to 2 seconds: how I improved the performance of an endpoint by 82%


    Language details may impact application performance. In this article, we’ll see some of the C# tips that helped me improve my application: Singleton creation, StringBuilder, and more!


    In this second article, I’m going to share some more tips that helped me improve the performance of an API from 14 seconds to less than 3: an improvement of 82%.

    In the previous article, we’ve seen some general, language-agnostic ways to approach this kind of problem, and what you can try (and what to avoid) to achieve a similar result.

    In this article, we’re going to see some .NET-related tips that can help improve your API’s performance.

    WarmUp your application using Postman to create Singleton dependencies

    In my application, we use (of course) dependency injection. Almost all the dependencies are marked as Singleton: this means that every dependency is created at the start-up of the application and is then shared throughout its whole lifespan.

    Psst: if you want to know the difference between Singleton, Transient, and Scoped lifetimes with real examples, check out this article!
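
    For context, this is roughly how those lifetimes are registered in ASP.NET Core – a minimal sketch with hypothetical service names, not the actual registrations of this application:

    // Hypothetical registrations (Microsoft.Extensions.DependencyInjection):
    services.AddSingleton<IMatchService, MatchService>();    // one instance for the whole application
    services.AddScoped<IRequestContext, RequestContext>();   // one instance per HTTP request
    services.AddTransient<IEmailBuilder, EmailBuilder>();    // a new instance every time it's resolved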

    It makes sense, right? But have a closer look at the timing in this picture:

    Timings with initial warmup time

    The blue line is the whole HTTP call, and the black line is the API Action.

    There are almost 2 seconds of nothing! Why?

    Well, as explained in the article “Reducing initial request latency by pre-building services in a startup task in ASP.NET Core” by Andrew Lock, singletons are created during the first request, not at the real start-up of the application. And, given that all the dependencies in this application are singletons, the first 2 seconds are being used to create those instances.

    While Andrew explains how to create a Startup task to warm up the dependencies, I opted for a quick-and-dirty option: create a Warmup endpoint and call it before any call in Postman.

    [HttpGet, Route("warmup")]
    public ActionResult<string> WarmUp()
    {
        var obj = new
        {
            status = "ready"
        };
    
        return Ok(obj);
    }
    

    It is important to expose that endpoint under a controller that uses DI: as we’ve seen before, dependencies are created during the first request in which they’re needed; so, if you create an empty controller with only the WarmUp method, you won’t build any dependency and you’ll never see any improvement. My suggestion is to place the WarmUp method under a controller that requires one of the root services: in this way, you’ll create the services and all their dependencies.
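
    To make that concrete, here’s a minimal sketch of what I mean – IMatchService is a hypothetical root service, and injecting it forces the container to build it (and its whole dependency tree) on the first call:

    using Microsoft.AspNetCore.Mvc;
    
    [ApiController]
    public class WarmUpController : ControllerBase
    {
        private readonly IMatchService _matchService; // hypothetical root service
    
        public WarmUpController(IMatchService matchService)
        {
            // Constructing this controller resolves the service and everything it depends on.
            _matchService = matchService;
        }
    
        [HttpGet, Route("warmup")]
        public ActionResult<string> WarmUp() => Ok(new { status = "ready" });
    }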

    To call the WarmUp endpoint before every request, I’ve created this simple script:

    pm.sendRequest("https://localhost:44326/api/warmup", function (err, response) {
      console.log("ok")
    })
    

    So, if you paste it into Postman’s Pre-request Script tab, it executes this call before the main HTTP call and warms up your application.

    Pre-request script on Postman

    This tip will not speed up your application, but it gives you more precise values for the timings.

    Improve language-specific details

    Understanding how C# works and what functionality it offers is crucial to building well-performing applications.

    There are plenty of articles around the Internet with nice tips and tricks to improve .NET performance; here I’ll list some of my favorites and explain why you should care about them.

    Choose the correct data type

    There’s a lot you can do, like choosing the right data type: if you are storing a player’s age, is int the right choice? Remember that int.MinValue is -2147483648 and int.MaxValue is 2147483647.

    You could use byte: its range is [0, 255], so it’s perfectly fine for storing an age.

    To have an idea of what data type to choose, here’s a short recap with the Min value, the Max value, and the number of bytes occupied by that data type:

    Data type | Min value   | Max value  | # of bytes
    ----------|-------------|------------|-----------
    byte      | 0           | 255        | 1
    short     | -32768      | 32767      | 2
    ushort    | 0           | 65535      | 2
    int       | -2147483648 | 2147483647 | 4
    uint      | 0           | 4294967295 | 4

    So, just by choosing the right data type, you’ll improve memory usage and therefore the overall performance.

    It will not bring incredible results, but it’s a good idea to think carefully about what you need and why a particular data type is the right choice.

    StringBuilder instead of string concatenation

    Strings are immutable in C#. This means that every time you concatenate two strings, you are actually creating a third one that contains the result.

    So, have a look at this snippet of code:

    string result = "<table>";
    for (int i = 0; i < 19000; i++)
    {
        result += "<tr><td>"+i+"</td><td>Number:"+i+"</td></tr>";
    }
    
    result += "</table>";
    
    Console.WriteLine(result);
    

    This loop took 2784 milliseconds.

    That’s where the StringBuilder class comes in handy: you avoid all the concatenation and store all the substrings in the StringBuilder object:

    StringBuilder result = new StringBuilder();
    
    result.Append("<table>");
    for (int i = 0; i < 19000; i++)
    {
        result.Append("<tr><td>");
        result.Append(i);
        result.Append("</td><td>Number:");
        result.Append(i);
        result.Append("</td></tr>");
    }
    
    result.Append("</table>");
    
    Console.WriteLine(result.ToString());
    

    Using StringBuilder instead of string concatenation, I got the exact same result as in the example above, but in 58 milliseconds.

    So, just by using the StringBuilder, you can speed up that part by 98%.
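
    If you want to reproduce these measurements on your machine, here’s a minimal, self-contained sketch using Stopwatch (absolute timings will of course vary with your hardware):

    using System;
    using System.Diagnostics;
    using System.Text;
    
    class StringBenchmark
    {
        static void Main()
        {
            // Naive concatenation: every += allocates a brand-new string.
            var sw = Stopwatch.StartNew();
            string concatenated = "<table>";
            for (int i = 0; i < 19000; i++)
            {
                concatenated += "<tr><td>" + i + "</td><td>Number:" + i + "</td></tr>";
            }
            concatenated += "</table>";
            sw.Stop();
            Console.WriteLine($"Concatenation: {sw.ElapsedMilliseconds} ms");
    
            // StringBuilder: appends go into an internal buffer; one final allocation.
            sw.Restart();
            var builder = new StringBuilder("<table>");
            for (int i = 0; i < 19000; i++)
            {
                builder.Append("<tr><td>").Append(i).Append("</td><td>Number:").Append(i).Append("</td></tr>");
            }
            builder.Append("</table>");
            string built = builder.ToString();
            sw.Stop();
            Console.WriteLine($"StringBuilder: {sw.ElapsedMilliseconds} ms");
        }
    }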

    Don’t return await if it’s the only operation in that method

    Every time you mark a method as async, behind the scenes .NET creates a state machine that keeps track of the execution of each method.

    So, have a look at this program, where every method returns the result of another one. Pay attention to the many return await statements:

    async Task Main()
    {
        var isAvailable = await IsArticleAvailable();
        Console.WriteLine(isAvailable);
    }
    
    async Task<bool> IsArticleAvailable()
    {
        var articlePath = "/blog/clean-code-error-handling";
        return await IsPathAvailable(articlePath);
    }
    
    async Task<bool> IsPathAvailable(string articlePath)
    {
        var baseUrl = "https://www.code4it.dev/";
        return await IsResourceAvailable(baseUrl, articlePath);
    }
    
    async Task<bool> IsResourceAvailable(string baseUrl, string articlePath)
    {
        using (HttpClient client = new HttpClient() { BaseAddress = new Uri(baseUrl) })
        {
            HttpResponseMessage response = await client.GetAsync(articlePath);
            return response.IsSuccessStatusCode;
        }
    }
    

    So, what did I mean by state machine?

    Here’s just a small part of the result of the decompilation of that code. It’s a looooong listing: don’t focus on the details, just have a look at the general structure:

    If you are interested in the full example, here you can find the gist with both the original and the decompiled file.

    internal static class <Program>$
    {
        private sealed class <<<Main>$>g__Main|0_0>d : IAsyncStateMachine
        {
            public int <>1__state;
    
            public AsyncTaskMethodBuilder <>t__builder;
    
            private bool <isAvailable>5__1;
    
            private bool <>s__2;
    
            private TaskAwaiter<bool> <>u__1;
    
            private void MoveNext()
            {
                int num = <>1__state;
                try
                {
                    TaskAwaiter<bool> awaiter;
                    if (num != 0)
                    {
                        awaiter = <<Main>$>g__IsArticleAvailable|0_1().GetAwaiter();
                        if (!awaiter.IsCompleted)
                        {
                            num = (<>1__state = 0);
                            <>u__1 = awaiter;
                            <<<Main>$>g__Main|0_0>d stateMachine = this;
                            <>t__builder.AwaitUnsafeOnCompleted(ref awaiter, ref stateMachine);
                            return;
                        }
                    }
                    else
                    {
                        awaiter = <>u__1;
                        <>u__1 = default(TaskAwaiter<bool>);
                        num = (<>1__state = -1);
                    }
                    <>s__2 = awaiter.GetResult();
                    <isAvailable>5__1 = <>s__2;
                    Console.WriteLine(<isAvailable>5__1);
                }
                catch (Exception exception)
                {
                    <>1__state = -2;
                    <>t__builder.SetException(exception);
                    return;
                }
                <>1__state = -2;
                <>t__builder.SetResult();
            }
    
            void IAsyncStateMachine.MoveNext()
            {
                //ILSpy generated this explicit interface implementation from .override directive in MoveNext
                this.MoveNext();
            }
    
            [DebuggerHidden]
            private void SetStateMachine(IAsyncStateMachine stateMachine)
            {
            }
    
            void IAsyncStateMachine.SetStateMachine(IAsyncStateMachine stateMachine)
            {
                //ILSpy generated this explicit interface implementation from .override directive in SetStateMachine
                this.SetStateMachine(stateMachine);
            }
        }
    
        private sealed class <<<Main>$>g__IsArticleAvailable|0_1>d : IAsyncStateMachine
        {
            public int <>1__state;
    
            public AsyncTaskMethodBuilder<bool> <>t__builder;
    
            private string <articlePath>5__1;
    
            private bool <>s__2;
    
            private TaskAwaiter<bool> <>u__1;
    
            private void MoveNext()
            {
                int num = <>1__state;
                bool result;
                try
                {
                    TaskAwaiter<bool> awaiter;
                    if (num != 0)
                    {
                        <articlePath>5__1 = "/blog/clean-code-error-handling";
                        awaiter = <<Main>$>g__IsPathAvailable|0_2(<articlePath>5__1).GetAwaiter();
                        if (!awaiter.IsCompleted)
                        {
                            num = (<>1__state = 0);
                            <>u__1 = awaiter;
                            <<<Main>$>g__IsArticleAvailable|0_1>d stateMachine = this;
                            <>t__builder.AwaitUnsafeOnCompleted(ref awaiter, ref stateMachine);
                            return;
                        }
                    }
                    else
                    {
                        awaiter = <>u__1;
                        <>u__1 = default(TaskAwaiter<bool>);
                        num = (<>1__state = -1);
                    }
                    <>s__2 = awaiter.GetResult();
                    result = <>s__2;
                }
                catch (Exception exception)
                {
                    <>1__state = -2;
                    <articlePath>5__1 = null;
                    <>t__builder.SetException(exception);
                    return;
                }
                <>1__state = -2;
                <articlePath>5__1 = null;
                <>t__builder.SetResult(result);
            }
    
            void IAsyncStateMachine.MoveNext()
            {
                //ILSpy generated this explicit interface implementation from .override directive in MoveNext
                this.MoveNext();
            }
    
            [DebuggerHidden]
            private void SetStateMachine(IAsyncStateMachine stateMachine)
            {
            }
    
            void IAsyncStateMachine.SetStateMachine(IAsyncStateMachine stateMachine)
            {
                //ILSpy generated this explicit interface implementation from .override directive in SetStateMachine
                this.SetStateMachine(stateMachine);
            }
        }
    
        [AsyncStateMachine(typeof(<<<Main>$>g__IsArticleAvailable|0_1>d))]
        [DebuggerStepThrough]
        internal static Task<bool> <<Main>$>g__IsArticleAvailable|0_1()
        {
            <<<Main>$>g__IsArticleAvailable|0_1>d stateMachine = new <<<Main>$>g__IsArticleAvailable|0_1>d();
            stateMachine.<>t__builder = AsyncTaskMethodBuilder<bool>.Create();
            stateMachine.<>1__state = -1;
            stateMachine.<>t__builder.Start(ref stateMachine);
            return stateMachine.<>t__builder.Task;
        }
    

    Every method marked as async “creates” a class that implements the IAsyncStateMachine interface and implements the MoveNext method.

    So, to improve performance, we have to get rid of a lot of this machinery: we can do it by simply removing the async and await keywords from methods that do nothing but return the result of a single awaited call.

    So, we can transform the previous snippet:

    async Task Main()
    {
        var isAvailable = await IsArticleAvailable();
        Console.WriteLine(isAvailable);
    }
    
    async Task<bool> IsArticleAvailable()
    {
        var articlePath = "/blog/clean-code-error-handling";
        return await IsPathAvailable(articlePath);
    }
    
    async Task<bool> IsPathAvailable(string articlePath)
    {
        var baseUrl = "https://www.code4it.dev/";
        return await IsResourceAvailable(baseUrl, articlePath);
    }
    
    async Task<bool> IsResourceAvailable(string baseUrl, string articlePath)
    {
        using (HttpClient client = new HttpClient() { BaseAddress = new Uri(baseUrl) })
        {
            HttpResponseMessage response = await client.GetAsync(articlePath);
            return response.IsSuccessStatusCode;
        }
    }
    

    into this one:

    async Task Main()
    {
        var isAvailable = await IsArticleAvailable();
        Console.WriteLine(isAvailable);
    }
    
    Task<bool> IsArticleAvailable()
    {
        var articlePath = "/blog/clean-code-error-handling";
        return IsPathAvailable(articlePath);
    }
    
    Task<bool> IsPathAvailable(string articlePath)
    {
        var baseUrl = "https://www.code4it.dev/";
        return IsResourceAvailable(baseUrl, articlePath);
    }
    
    async Task<bool> IsResourceAvailable(string baseUrl, string articlePath)
    {
        using (HttpClient client = new HttpClient() { BaseAddress = new Uri(baseUrl) })
        {
            HttpResponseMessage response = await client.GetAsync(articlePath);
            return response.IsSuccessStatusCode;
        }
    }
    

    Notice that I removed both the async and await keywords in the IsArticleAvailable and IsPathAvailable methods. IsResourceAvailable, on the other hand, must keep them: its await happens inside a using block, and eliding it there would dispose the HttpClient before the request completes.

    So, as you can see in this Gist, the only state machines are the ones for the Main method and for the IsResourceAvailable method.

    As usual, the more we improve memory usage, the better our applications will work.

    Other stuff

    There’s a lot more that you can improve. Look for articles that explain the correct usage of LINQ and why you should prefer HttpClientFactory over HttpClient.
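
    On the HttpClientFactory point, this is roughly how a named client is registered in ASP.NET Core – a sketch, with the client name and base address as placeholders (requires the Microsoft.Extensions.Http package):

    // In ConfigureServices: register the named client once. The factory pools and
    // recycles the underlying handlers, avoiding the socket exhaustion you can
    // cause by creating a new HttpClient for every call.
    services.AddHttpClient("code4it", client =>
    {
        client.BaseAddress = new Uri("https://www.code4it.dev/");
    });
    
    // Consumers then inject IHttpClientFactory and call CreateClient("code4it").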

    Run operations in parallel – but pay attention to the parallelism

    Let’s briefly recap the problem I needed to solve: I needed to get some details for a list of sports matches:

    Initial sequence diagram

    As you can see, I perform the same set of operations for every match. Working on them in parallel improved the final result a bit.

    Sequence diagram with parallel operations

    Honestly, I was expecting a bigger improvement. Parallel computation is not a silver bullet. And you should know how to implement it.

    And I still don’t know.

    After many attempts, I’ve created this class that centralizes the usage of parallel operations, so that if I find a better way to implement them, I just need to update a single class.

    Feel free to copy it or suggest improvements.

    using System;
    using System.Collections.Concurrent;
    using System.Collections.Generic;
    using System.Linq;
    using System.Threading.Tasks;
    
    public static class ParallelHelper
    {
        // Runs fn on every item, at most maxDegreeOfParallelism at a time,
        // collecting the results in a thread-safe ConcurrentBag.
        public static IEnumerable<Out> PerformInParallel<In, Out>(IEnumerable<In> items, Func<In, Out> fn, int maxDegreeOfParallelism = 10)
        {
            var options = new ParallelOptions { MaxDegreeOfParallelism = maxDegreeOfParallelism };
    
            ConcurrentBag<Out> cb = new ConcurrentBag<Out>();
    
            Parallel.ForEach(items, options, item =>
            {
                cb.Add(fn(item));
            });
            return cb.ToList();
        }
    
        // Processes the batches one after another, parallelizing the items
        // within each batch.
        public static IEnumerable<Out> PerformInParallel<In, Out>(IEnumerable<IEnumerable<In>> batches, Func<In, Out> fn, int maxDegreeOfParallelism = 10)
        {
            var options = new ParallelOptions { MaxDegreeOfParallelism = maxDegreeOfParallelism };
            ConcurrentBag<Out> cb = new ConcurrentBag<Out>();
    
            foreach (var batch in batches)
            {
                Parallel.ForEach(batch, options, item =>
                {
                    cb.Add(fn(item));
                });
            }
            return cb.ToList();
        }
    
        // Runs the batches themselves in parallel; fn maps a whole batch
        // to its results, which are then flattened into the bag.
        public static IEnumerable<Out> PerformInParallel<In, Out>(IEnumerable<IEnumerable<In>> batches, Func<IEnumerable<In>, IEnumerable<Out>> fn, int maxDegreeOfParallelism = 10)
        {
            var options = new ParallelOptions { MaxDegreeOfParallelism = maxDegreeOfParallelism };
            ConcurrentBag<Out> cb = new ConcurrentBag<Out>();
    
            Parallel.ForEach(batches, options, batch =>
            {
                var resultValues = fn(batch).ToList();
                foreach (var result in resultValues)
                {
                    cb.Add(result);
                }
            });
            return cb.ToList();
        }
    }
    

    The first method performs the operation specified by the Func on every item in the IEnumerable parameter, aggregates the results in a ConcurrentBag (a thread-safe collection), and returns the final result.

    The other methods do a similar thing, but on a list of lists: this is useful when you split the computation into batches, processing those batches either in sequence (second overload) or in parallel (third overload).

    But why MaxDegreeOfParallelism? Well, resources are not infinite: you can’t perform the same heavy operation on 200,000 items at the same time, especially if many requests arrive simultaneously. You have to cap the number of items processed in parallel.
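
    Here’s a small usage sketch of the helper above – GetMatchDetails is a hypothetical stand-in for the real, expensive per-match operation:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    
    class ParallelHelperDemo
    {
        // Hypothetical stand-in for the real per-match work.
        static string GetMatchDetails(int matchId) => $"details for match {matchId}";
    
        static void Main()
        {
            IEnumerable<int> matchIds = Enumerable.Range(1, 200);
    
            // Process at most 10 matches at a time.
            IEnumerable<string> details =
                ParallelHelper.PerformInParallel(matchIds, GetMatchDetails, maxDegreeOfParallelism: 10);
    
            Console.WriteLine(details.Count()); // 200
        }
    }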

    Parallel execution of assets

    In the picture above you can see the parallel execution of the search for assets: every call begins at the same moment, so the final timing is a lot better than if I had performed all the operations in sequence.

    Move to .NET 5

    As reported by the official documentation, there has been a huge improvement in performance in the latest version of .NET.

    Those improvements mainly concern the Garbage Collector, JIT optimizations, and the handling of strings and Regexes.

    If you are interested, here’s a good article on Microsoft’s blog.

    So, did it really improve my application?

    Well, no.

    As you already know, the main bottlenecks come from external dependencies (aka API calls), so there was nothing a framework update could meaningfully impact.

    But, just to try it, I moved my application from .NET Core 3.1 to .NET 5: the porting was incredibly easy. But, as I was expecting, I did not get any significant improvement.

    So, since the application was a dependency of a wider system, I rolled it back to .NET Core 3.1.

    Ask, discuss, communicate

    The last tip is one of the most simple yet effective ones: talk with your colleagues, keep track of what worked and what didn’t, and communicate with other developers and managers.

    Even if a question seems silly, ask it. Maybe you’ll pick up a tip that sparks the right idea.

    Have a call with your colleagues, share your code, and let them help you: even a simple trick, a tool they suggest, or an article that solves one of your problems can be the key to success.

    Don’t expect any silver bullet: you’ll improve your application with small steps.

    Wrapping up

    We’ve seen how I managed to improve the performance of an API endpoint, taking it from 14 seconds down to less than 3.

    In this article you’ve seen some .NET-related tips to improve the performance of your applications: nothing fancy, but those little steps might help you reach the desired result.

    Of course, there is more: if you want to know how compression algorithms and hosting models affect your applications, check out this article!

    If you have more tips, feel free to share them in the comments section!

    Happy coding!




  • NITEX: Building a Brand and Digital Platform for Fashion’s New Supply Chain




    NITEX is not just another fashion-tech company. Their mission is to redefine the supply chain for fashion – bringing speed, sustainability, and intelligence to a traditionally rigid process. Their platform spans the entire workflow: design, trend forecasting, material sourcing, production, and logistics. In short, they offer a seamless, end-to-end system for brands who want to move faster and smarter.

    When NITEX approached us, the challenge was clear: they needed more than a website. They needed a platform that could translate their vision into an experience that worked for multiple audiences – brands seeking services, investors looking for clarity, factories wanting partnerships, and talent exploring opportunities.

    The project took shape over several months, moving from brand definition to UX architecture, UI design, and technical development. The turning point came with the realization that a single, linear site could not balance storytelling with action. To resolve this, we developed a dual-structure model: one path for narrative and inspiration, and another for practical conversion. This idea shaped every design and technical decision moving forward.

    Crafting the Hybrid Identity

    NITEX’s identity needed to reflect a unique duality: part fashion brand, part technology company. Our approach was to build a system that could flex between editorial elegance and sharp technical clarity.

    At the heart of the identity sits the NITEX logo, an angular form created from a forward-leaning N and X. This symbol is more than a mark – it acts as a flexible frame. The hollow center creates a canvas for imagery, data, or color, visualizing collaboration and adaptability.

    This angular geometry informed much of the visual language across the site:

    • Buttons expand or tilt along the logo’s angles when hovered.
    • The progress bar in navigation and footer fills in the same diagonal form.
    • Headlines reveal themselves with angled wipes, reinforcing a consistent rhythm.

    Typography was kept bold yet minimal, with global sans-serif structures that feel equally at home in high fashion and digital environments. Imagery played an equally important role. We chose photography that conveyed motion and energy, often with candid blur or dynamic framing. To push this further, we incorporated AI-generated visuals, adding intensity and reinforcing the sense of momentum at the core of the NITEX story. The result is a brand system that feels dynamic, flexible, and scalable – capable of stretching from streetwear to luxury contexts while always staying rooted in clarity and adaptability.

    Building the Engine

    A complex brand and experience required a strong technical foundation. For this, our developers chose tools that balanced performance, flexibility, and scalability:

    • Frontend: Nuxt
    • Backend / CMS: Sanity
    • Animations & Motion: GSAP and the Web Animations API

    The heavy reliance on native CSS transitions and the Web Animations API ensured smooth performance even on low-powered devices. GSAP was used to orchestrate more complex transitions while still keeping load times and resource use efficient. A key architectural decision was to give overlays their own URLs. This meant that when users opened deep-dive layers or content modules, those states were addressable, shareable, and SEO-friendly. This approach kept the experience immersive while ensuring that content remained accessible outside the narrative scroll.

    Defining the Flow

    Several features stand out in the NITEX site for how they balance storytelling with functionality:

    • Expandable overlays: Each narrative chapter can unfold into deep-dive layers – showing case studies, workflow diagrams, or leadership perspectives without breaking the scroll.
    • Dynamic conversion flows: Forms adapt to the user’s audience type – brands, investors, talent, or factories – showing tailored fields and next steps.
    • Calendar integration: Visitors can book demos or design lab visits directly, streamlining the lead process and reinforcing immediacy.

    This mix of storytelling modules and smart conversion flows ensured that every audience had a pathway forward, whether to be inspired, informed, or engaged.

    Bringing It to Life

    NITEX’s brand identity found its fullest expression in the motion and interaction design of the site. The site opens with scroll-based storytelling, each chapter unfolding with smooth transitions. Page transitions maintain energy, using angled wipes and overlays that slide in from the side. These overlays carry their own links, allowing users to dive deep without losing orientation. The angular motion language of the logo carries through:

    • Buttons expand dynamically on hover.
    • Rectangular components tilt into angular forms.
    • The dual-image module sees the N and X frame track the viewport, dynamically revealing new perspectives.

    This creates a consistent visual rhythm, where every motion feels connected to the brand’s DNA. The imagery reinforces this, emphasizing speed and creativity through motion blur, candid composition, and AI-driven intensity. Importantly, we kept the overall experience modular and scalable. Each content block is built on a flexible grid with clear typographic hierarchy. This ensures usability while leaving room for surprise – whether it’s an animated reveal, a bold image transition, or a subtle interactive detail.

    Under the Hood

    From a structural standpoint, the site was designed to scale as NITEX grows. The codebase follows a modular approach, with reusable components that can be repurposed across sections. Sanity’s CMS allows editors to easily add new chapters, forms, or modules without breaking the system.

    The split-entry structure – narrative vs. action – was the architectural anchor. This allowed us to keep storytelling immersive without sacrificing usability for users who came with a clear transactional intent.

    Looking Back

    This project was as much about balance as it was about creativity. Balancing brand storytelling with user conversion. Balancing motion and expressiveness with speed and performance. Balancing multiple audience needs within a single coherent system.

    One of the most rewarding aspects was seeing how the dual-experience model solved what initially felt like an unsolvable challenge: how to serve users who want inspiration and those who want action without building two entirely separate sites.

    The deep-dive overlays also proved powerful, letting NITEX show rather than just tell their story. They allowed us to layer complexity while keeping the surface experience clean and intuitive.

    Looking ahead, the NITEX platform is built to evolve. Future possibilities include investor dashboards with live performance metrics, brand-specific case modules curated by industry, or interactive workflow tools aligned with NITEX’s trend-to-delivery logic. The foundation we built makes all of this possible.

    Ultimately, the NITEX project reflects the company’s own values: clarity, adaptability, and speed. For us, it was an opportunity to merge brand design, UX, UI, and development into a single seamless system – one that redefines what a fashion-tech platform can look and feel like.




  • Clean code tips – Tests | Code4IT


    Tests are as important as production code. Well, they are even more important! So writing them well brings lots of benefits to your projects.


    Clean code principles apply not only to production code but also to tests. Indeed, tests should be even cleaner, easier to understand, and more meaningful than production code.

    In fact, tests don’t only prevent bugs: they also document your application! New team members should look at tests to understand how a class, a function, or a module works.

    So, every test must have a clear meaning, must have its own raison d’être, and must be written well enough to let the readers understand it without too much fuss.

    In this last article of the Clean Code Series, we’re gonna see some tips to improve your tests.

    If you are interested in more tips about Clean Code, here are the other articles:

    1. names and function arguments
    2. comments and formatting
    3. abstraction and objects
    4. error handling
    5. tests

    Why you should keep tests clean

    As I said before, tests are also meant to document your code: given a specific input or state, they help you understand what the result will be in a deterministic way.

    But, since tests are dependent on the production code, you should adapt them when the production code changes: this means that tests must be clean and flexible enough to let you update them without big issues.

    If your test suite is a mess, even the slightest update in your code will force you to spend a lot of time updating your tests: that’s why you should organize your tests with the same care as your production code.

    Good tests also have a nice side effect: they make your code more flexible. Why? Well, if you have good test coverage and all your tests are meaningful, you will be more confident in applying changes and adding new functionality. Otherwise, when you change your code, you cannot be sure that the new code works as expected, nor that you haven’t introduced any regressions.

    So, having a clean, thorough test suite is crucial for the life of your application.

    How to keep tests clean

    We’ve seen why we should write clean tests. But how should you write them?

    Let’s write a bad test:

    [Test]
    public void CreateTableTest()
    {
        //Arrange
        string tableContent = @"<table>
            <thead>
                <tr>
                    <th>ColA</th>
                    <th>ColB</th>
                </tr>
            </thead>
            <tbody>
                <tr>
                    <td>Text1A</td>
                    <td>Text1B</td>
                </tr>
                <tr>
                    <td>Text2A</td>
                    <td>Text2B</td>
                </tr>
            </tbody>
        </table>";
    
        var tableInfo = new TableInfo(2);
    
    
        HtmlDocument doc = new HtmlDocument();
        doc.LoadHtml(tableContent);
        var node = doc.DocumentNode.ChildNodes[0];
    
        var part = new TableInfoCreator(node);
    
        var result = part.CreateTableInfo();
    
        tableInfo.SetHeaders(new string[] { "ColA", "ColB" });
        tableInfo.AddRow(new string[] { "Text1A", "Text1B" });
        tableInfo.AddRow(new string[] { "Text2A", "Text2B" });
    
        result.Should().BeEquivalentTo(tableInfo);
    }
    

    This test proves that the CreateTableInfo method of the TableInfoCreator class correctly parses the HTML passed as input and returns a TableInfo object containing info about rows and headers.

    This is kind of a mess, isn’t it? Let’s improve it.

    Use appropriate test names

    What does CreateTableTest do? How does it help the reader understand what’s going on?

    We need to state explicitly what the test wants to achieve. There are many ways to do it; one of the most widely used is the Given-When-Then pattern: every method name should express those concepts, possibly in a consistent way.

    I always like to use the same format when naming tests: {Something}_Should_{DoSomething}_When_{Condition}. This format explicitly shows what the test checks and why it exists.

    So, let’s change the name:

    [Test]
    public void CreateTableInfo_Should_CreateTableInfoWithCorrectHeadersAndRows_When_TableIsWellFormed()
    {
        //Arrange
        string tableContent = @"<table>
            <thead>
                <tr>
                    <th>ColA</th>
                    <th>ColB</th>
                </tr>
            </thead>
            <tbody>
                <tr>
                    <td>Text1A</td>
                    <td>Text1B</td>
                </tr>
                <tr>
                    <td>Text2A</td>
                    <td>Text2B</td>
                </tr>
            </tbody>
        </table>";
    
        var tableInfo = new TableInfo(2);
    
    
        HtmlDocument doc = new HtmlDocument();
        doc.LoadHtml(tableContent);
        HtmlNode node = doc.DocumentNode.ChildNodes[0];
    
        var part = new TableInfoCreator(node);
    
        var result = part.CreateTableInfo();
    
        tableInfo.SetHeaders(new string[] { "ColA", "ColB" });
        tableInfo.AddRow(new string[] { "Text1A", "Text1B" });
        tableInfo.AddRow(new string[] { "Text2A", "Text2B" });
    
        result.Should().BeEquivalentTo(tableInfo);
    }
    

    Now, just by reading the name of the test, we know what to expect.

    Initialization

    The next step is to refactor the tests to initialize all the stuff in a better way.

    The first step is to remove the creation of the HtmlNode seen in the previous example, and move it to an external function: this will reduce code duplication and help the reader understand the test without worrying about the HtmlNode creation details:

    [Test]
    public void CreateTableInfo_Should_CreateTableWithHeadersAndRows_When_TableIsWellFormed()
    {
        //Arrange
        string tableContent = @"<table>
            <thead>
                <tr>
                    <th>ColA</th>
                    <th>ColB</th>
                </tr>
            </thead>
            <tbody>
                <tr>
                    <td>Text1A</td>
                    <td>Text1B</td>
                </tr>
                <tr>
                    <td>Text2A</td>
                    <td>Text2B</td>
                </tr>
            </tbody>
        </table>";
    
        var tableInfo = new TableInfo(2);
    
     // HERE!
        HtmlNode node = CreateNodeElement(tableContent);
    
        var part = new TableInfoCreator(node);
    
        var result = part.CreateTableInfo();
    
        tableInfo.SetHeaders(new string[] { "ColA", "ColB" });
        tableInfo.AddRow(new string[] { "Text1A", "Text1B" });
        tableInfo.AddRow(new string[] { "Text2A", "Text2B" });
    
        result.Should().BeEquivalentTo(tableInfo);
    }
    
    
    private static HtmlNode CreateNodeElement(string content)
    {
        HtmlDocument doc = new HtmlDocument();
        doc.LoadHtml(content);
        return doc.DocumentNode.ChildNodes[0];
    }
    

    Then, depending on what you are testing, you could even extract input and output creation into different methods.

    If you extract them, you may end up with something like this:

    [Test]
    public void CreateTableInfo_Should_CreateTableWithHeadersAndRows_When_TableIsWellFormed()
    {
        var node = CreateWellFormedHtmlTable();
    
        var part = new TableInfoCreator(node);
    
        var result = part.CreateTableInfo();
    
        TableInfo tableInfo = CreateWellFormedTableInfo();
    
        result.Should().BeEquivalentTo(tableInfo);
    }
    
    private static TableInfo CreateWellFormedTableInfo()
    {
        var tableInfo = new TableInfo(2);
        tableInfo.SetHeaders(new string[] { "ColA", "ColB" });
        tableInfo.AddRow(new string[] { "Text1A", "Text1B" });
        tableInfo.AddRow(new string[] { "Text2A", "Text2B" });
        return tableInfo;
    }
    
    private HtmlNode CreateWellFormedHtmlTable()
    {
        var table = CreateWellFormedTable();
        return CreateNodeElement(table);
    }
    
    private static string CreateWellFormedTable()
        => @"<table>
            <thead>
                <tr>
                    <th>ColA</th>
                <th>ColB</th>
                </tr>
            </thead>
            <tbody>
                <tr>
                    <td>Text1A</td>
                    <td>Text1B</td>
                </tr>
                <tr>
                    <td>Text2A</td>
                    <td>Text2B</td>
                </tr>
            </tbody>
        </table>";
    

    So, now, the general structure of the test is definitely better. But, to understand what’s going on, readers have to jump to the details of both CreateWellFormedHtmlTable and CreateWellFormedTableInfo.

    Even worse, you have to duplicate those methods for every test case. You could do a further step by joining the input and the output into a single object:

    
    public class TableTestInfo
    {
        public HtmlNode Html { get; set; }
        public TableInfo ExpectedTableInfo { get; set; }
    }
    
    private TableTestInfo CreateTestInfoForWellFormedTable() =>
    new TableTestInfo
    {
        Html = CreateWellFormedHtmlTable(),
        ExpectedTableInfo = CreateWellFormedTableInfo()
    };
    

    and then, in the test, you simplify everything in this way:

    [Test]
    public void CreateTableInfo_Should_CreateTableWithHeadersAndRows_When_TableIsWellFormed()
    {
        var testTableInfo = CreateTestInfoForWellFormedTable();
    
        var part = new TableInfoCreator(testTableInfo.Html);
    
        var result = part.CreateTableInfo();
    
        TableInfo tableInfo = testTableInfo.ExpectedTableInfo;
    
        result.Should().BeEquivalentTo(tableInfo);
    }
    

    In this way, you have all the info in a centralized place.

    But, sometimes, this is not the best way. Or, at least, in my opinion.

    In the previous example, the most important part is the elaboration of a specific input. So, to help readers, I usually prefer to keep inputs and outputs listed directly in the test method.

    On the contrary, if I had to test for some properties of a class or method (for instance, test that the sorting of an array with repeated values works as expected), I’d extract the initializations outside the test methods.

    AAA: Arrange, Act, Assert

    A good way to write tests is to write them with a structured and consistent template. The most used way is the Arrange-Act-Assert pattern:

    That means that in the first part of the test you set up the objects and variables that will be used; then, you perform the operation under test; finally, you check whether the test passes by using assertions (like a simple Assert.IsTrue(condition)).

    I prefer to explicitly write comments to separate the 3 parts of each test, like this:

    [Test]
    public void CreateTableInfo_Should_CreateTableWithHeadersAndRows_When_TableIsWellFormed()
    {
        // Arrange
        var testTableInfo = CreateTestInfoForWellFormedTable();
        TableInfo expectedTableInfo = testTableInfo.ExpectedTableInfo;
    
        var part = new TableInfoCreator(testTableInfo.Html);
    
        // Act
        var actualResult = part.CreateTableInfo();
    
        // Assert
        actualResult.Should().BeEquivalentTo(expectedTableInfo);
    }
    

    Only one assertion per test (with some exceptions)

    Ideally, you may want to write tests with only a single assertion.

    As an example, let’s take a method that builds a User object from its input parameters:

    public class User
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public DateTime BirthDate { get; set; }
        public Address AddressInfo { get; set; }
    }
    
    public class Address
    {
        public string Country { get; set; }
        public string City { get; set; }
    }
    
    public User BuildUser(string name, string lastName, DateTime birthdate, string country, string city)
    {
        return new User
        {
            FirstName = name,
            LastName = lastName,
            BirthDate = birthdate,
            AddressInfo = new Address
            {
                Country = country,
                City = city
            }
        };
    }
    

    Nothing fancy, right?

    So, ideally, we should write tests with a single assert (ignore the test names in the next examples – I removed the When part!):

    [Test]
    public void BuildUser_Should_CreateUserWithCorrectName()
    {
        // Arrange
        var name = "Davide";
    
        // Act
        var user = BuildUser(name, null, DateTime.Now, null, null);
    
        // Assert
        user.FirstName.Should().Be(name);
    }
    
    [Test]
    public void BuildUser_Should_CreateUserWithCorrectLastName()
    {
        // Arrange
        var lastName = "Bellone";
    
        // Act
        var user = BuildUser(null, lastName, DateTime.Now, null, null);
    
        // Assert
        user.LastName.Should().Be(lastName);
    }
    

    … and so on. Imagine writing a test for each property: your test class will be full of small methods that only clutter the code.

    If you can group assertions in a logical way, you could write more asserts in a single test:

    [Test]
    public void BuildUser_Should_CreateUserWithCorrectPlainInfo()
    {
        // Arrange
        var name = "Davide";
        var lastName = "Bellone";
        var birthDay = new DateTime(1991, 1, 1);
    
        // Act
        var user = BuildUser(name, lastName, birthDay, null, null);
    
        // Assert
        user.FirstName.Should().Be(name);
        user.LastName.Should().Be(lastName);
        user.BirthDate.Should().Be(birthDay);
    }
    

    This is fine because the three properties (FirstName, LastName, and BirthDate) are logically on the same level and with the same meaning.

    One concept per test

    As stated before, what matters is not testing exactly one property per test: rather, each and every test must be focused on a single concept.

    By looking at the previous examples, you can notice that the AddressInfo property is built using the values passed as parameters on the BuildUser method. That makes it a good candidate for its own test.

    Another way of seeing this tip is to think of the properties of a method (in the mathematical sense). If you’re writing a custom sorting routine, think about which properties apply to it. For instance:

    • an empty list, when sorted, is still an empty list
    • a list with 1 item, when sorted, still has one item
    • applying the sorting to an already sorted list does not change the order

    and so on.

    So you don’t want to test every possible input but focus on the properties of your method.

    In a similar way, think of a method that gives you the number of days between today and a certain date. In this case, just a single test is not enough.

    You have to test – at least – what happens if the other date:

    • is exactly today
    • is in the future
    • is in the past
    • is next year
    • is February 29th of a leap year (to check an edge case)
    • is February 30th (to check an invalid date)

    Each of these tests is against a single value, so you might be tempted to put everything in a single test method. But here you are running tests against different concepts, so place every one of them in a separate test method.

    Of course, in this example, you must not rely on the native way to get the current date (in C#, DateTime.Now or DateTime.UtcNow). Rather, you have to mock the current date.
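
    A minimal sketch of that abstraction (the interface and class names here are mine, not from a specific library):

    using System;
    
    // Abstraction over the system clock, so tests can control "today".
    public interface IDateTimeProvider
    {
        DateTime UtcNow { get; }
    }
    
    // Production implementation: forwards to the real clock.
    public class SystemDateTimeProvider : IDateTimeProvider
    {
        public DateTime UtcNow => DateTime.UtcNow;
    }
    
    // Test double: pins the current date to a fixed, known value.
    public class FakeDateTimeProvider : IDateTimeProvider
    {
        private readonly DateTime _fixedDate;
        public FakeDateTimeProvider(DateTime fixedDate) => _fixedDate = fixedDate;
        public DateTime UtcNow => _fixedDate;
    }
    
    // The method under test asks the provider instead of calling DateTime.UtcNow directly:
    public static class DateCalculator
    {
        public static int DaysBetweenTodayAnd(DateTime other, IDateTimeProvider clock)
            => (int)(other.Date - clock.UtcNow.Date).TotalDays;
    }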

    FIRST tests: Fast, Independent, Repeatable, Self-validating, and Timely

    You’ll often read the word FIRST when talking about the properties of good tests. What does FIRST mean?

    It is simply an acronym. A test must be Fast, Independent, Repeatable, Self-validating, and Timely.

    Fast

    Tests should be fast. How fast? Fast enough not to discourage developers from running them. This property applies only to unit tests: while each unit test should run in less than a second, you may have some Integration and E2E tests that take more than 10 seconds – it depends on what you’re testing.

    Now, imagine you have to update one class (or one method) and re-run all your tests. If the whole test suite takes just a few seconds, you can run it whenever you want – some devs run all the tests every time they hit Save. But if every single test takes 1 second to run and you have 200 tests, a simple update to one class makes you lose at least 200 seconds: more than 3 minutes. Yes, I know you can run them in parallel, but that’s not the point!

    So, keep your tests short and fast.

    Independent

    Every test method must be independent of the other tests.

    This means that the result and the execution of one test must not impact the execution of another. Likewise, one test must not rely on another having run before it.

    A concrete example?

    public class MyTests
    {
        string userName = "Lenny";
    
        [Test]
        public void Test1()
        {
            Assert.AreEqual("Lenny", userName);
            userName = "Carl";
    
        }
    
        [Test]
        public void Test2()
        {
            Assert.AreEqual("Carl", userName);
        }
    
    }
    

    Those tests are perfectly valid if run in sequence. But Test1 affects the execution of Test2 by setting a global variable used by the second method. What happens if you run only Test2? It will fail. The same happens if the tests run in a different order.

    So, you can transform the previous method in this way:

    public class MyTests
    {
        string userName;
    
        [SetUp]
        public void Setup()
        {
            userName = "Boe";
        }
    
        [Test]
        public void Test1()
        {
            userName = "Lenny";
            Assert.AreEqual("Lenny", userName);
    
        }
    
        [Test]
        public void Test2()
        {
            userName = "Carl";
            Assert.AreEqual("Carl", userName);
        }
    
    }
    

    In this way, we have a default value, Boe, that each test method overrides only when needed.

    Repeatable

    Every unit test must be repeatable: this means that you must be able to run tests at any moment and on any machine (and always get the same result).

    So, avoid strong dependencies on your machine (file names, absolute paths, and so on) and on everything that is not directly under your control: the current date and time, randomly generated numbers, and GUIDs.

    To work with them, there’s only one solution: abstract them away and use a mocking mechanism.

    If you want to learn 3 ways to do this, check out my 3 ways to inject DateTime and test it. There I explained how to inject DateTime, but the same approaches work even for GUIDs and random numbers.

    Self-validating

    You must be able to see the result of a test without performing any further actions yourself.

    So, don’t write your test results to an external file or source, and don’t put breakpoints in your tests to see if they’ve passed.

    Just put meaningful assertions and let your framework (and IDE) tell you the result.

    Timely

    You must write your tests when required. Usually, when using TDD, you write your tests right before your production code.

    So, this particular property applies only to devs who use TDD.

    Wrapping up

    In this article, we’ve seen that even if many developers consider tests redundant and not worthy of attention, they are first-class citizens of our applications.

    Paying enough attention to tests brings us a lot of advantages:

    • tests document our code, thus helping onboard new developers
    • they help us deploy new versions of our product with confidence, without worrying about regressions
    • they prove that our code has no bugs (well, actually you’ll always have a few bugs; it’s just that you haven’t discovered them yet)
    • code becomes more flexible and can be extended without too many worries

    So, write meaningful tests, and write them well.

    Quality over quantity, always!

    Happy coding!




  • Generating Your Website from Scratch for Remixing and Exploration




    Codrops’ “design” has been long overdue for a refresh. I’ve had ideas for a new look floating around for ages, but actually making time to bring them to life has been tough. It’s the classic shoemaker’s shoes problem: I spend my days answering emails, editing articles and (mostly) managing Codrops and the amazing contributions from the community, while the site itself quietly gathers dust 😂

    Still, the thought of reimagining Codrops has been sitting in the back of my mind. I’d already been eyeing Anima as a tool that could make the process faster, so I reached out to their team. They were kind enough to support us with this review (thank you so much!) and it’s a true win-win: I get to finally test my idea for Codrops, and you get a good look at how the tool holds up in practice 🤜🤛

    So, Anima is a platform made to bridge the gap between design and development. It allows you to take an existing website, either one of your own projects or something live on the web, and bring it into a workspace where the layout and elements can be inspected, edited, and reworked. From there, you can export the result as clean, production-ready code in React, HTML/CSS, or Tailwind. In practice, this means you can quickly prototype new directions, remix existing layouts, or test ideas without starting completely from scratch.

    Obviously, you should not use this to copy other people’s work, but rather to prototype your own ideas and remix your projects!

    Let me take you along on a little experiment I ran with it.

    Getting started

    Screenshot of Anima Playground interface

Anima Link to Code was introduced in July this year and promises to take any design or web page and transform it into live editable code. You can generate, preview, and export production-ready code in React, TypeScript, Tailwind CSS, or plain HTML and CSS. That means you can start with a familiar environment, test an idea, and immediately see how it holds up in real code rather than staying stuck in the design stage. It also means you can poke around, break things, and try different directions without manually rebuilding the scaffolding each time. That kind of speed is what usually makes or breaks whether I stick with an experiment or abandon it halfway through.

To begin, I decided to use the Codrops homepage as my guinea pig. I have always wondered how it would feel reimagined as a bento-style grid. Normally, if I wanted to try that, I would either spend hours rewriting markup and CSS by hand or rely on an AI prompt that would often spiral into unrelated layouts and syntax errors. It would already be a great help if I could envision my idea and play with it a bit!

    After pasting in the Codrops URL, this is what came out. A React project was generated in seconds.

    Generated Codrops homepage project

The first impression was surprisingly positive. The homepage looked recognizable and the layout did not completely collapse. Yes, there was a small glitch where the Webzibition box background was not sized correctly, but overall it was close enough that I felt comfortable moving on. That is already more than I can say for many auto-generation tools, where the output is so mangled that you do not even know where to start.

    Experimenting with a bento grid

Now for the fun part. I typed a simple prompt that said, “Make a bento grid of all these items.” Almost immediately I hit an error. My usual instinct in this situation is to give up, since vibe coding often collapses the moment an error shows up, and then it becomes a spiral of debugging someone else’s half-generated mess. But instead of quitting right away, let’s give the fix a try 🙂 It worked, and I got a quirky but functioning bento grid layout:

    First attempt at bento grid

    The result was not exactly what I had in mind. Some elements felt off balance and the spacing was not ideal. Still, I had something on screen to iterate on, which is already a win compared to starting from scratch. So I pushed further. Could I bring the Creative Hub and Webzibition modules into this grid? A natural language prompt like “Place the Creative Hub box into the bento style container of the articles” felt like a good test.

    And yes, it actually worked. The Creative Hub box slipped into the grid container:

    Creative Hub moved into container

The layout was starting to look cramped, so I tried another prompt. I asked Anima to also move the Webzibition box into the same container and to make it span the full width. The generation was quick, with barely a pause, and suddenly the page turned into this:

    Webzibition added to full width

    This really showed me what it’s good at: iteration is fast. You don’t have to stop, rethink the grid, or rewrite CSS by hand. You just throw an idea in, see what comes back, and keep moving. It feels more like sketching in a notebook than carefully planning a layout. For prototyping, that rhythm is exactly what I want. Really into this type of layout for Codrops!

    Looking under the hood

    Visuals are only half the story. The bigger question is what kind of code Anima actually produces. I opened the generated React and Tailwind output, fully expecting a sea of meaningless divs and tangled class names.

    To my surprise, the code was clean. Semantic elements were present, the structure was logical, and everything was just readable. There was no obvious divitis, and the markup did not feel like something I would want to burn and rewrite from scratch. It even got me thinking about how much simpler maintaining Codrops might be if it were a lean React app with Tailwind instead of living inside the layers of WordPress 😂

There is also a Chrome extension called Web to Code, which lets you capture any page you are browsing and instantly get editable code. With it, inner pages like dashboards, login screens, or even private areas of a site you are working on can be pulled into a sandbox and played with directly.

    Anima Web to Code Chrome extension

    Pros and cons

    • Pros: Fast iteration, surprisingly clean code, easy setup, beginner-friendly, genuinely fun to experiment with.
    • Cons: Occasional glitches, exported code still needs cleanup, limited customization, not fully production-ready.

    Final thoughts

    Anima is not magic and it is not perfect. It will not replace deliberate coding, and it should not. But as a tool for quick prototyping, remixing existing designs, or exploring how a site might feel with a new structure, it is genuinely fun and surprisingly capable. The real highlight for me is the speed of iteration: you try an idea, see the result instantly, and either refine it or move on. That rhythm is addictive for creative developers who like to sketch in code rather than commit to heavy rebuilds from scratch.

    Verdict: Anima shines as a playground for experimentation and learning. If you’re a designer or developer who enjoys fast iteration, you’ll likely find it inspiring. If you need production-ready results for client work, you’ll still want to polish the output or stick with more mature frameworks. But for curiosity, prototyping, and a spark of creative joy, Anima is worth your time and you might be surprised at how much fun it is to remix the web this way.




  • how to view Code Coverage report on Azure DevOps | Code4IT


Code coverage is a good indicator of the health of your projects. We’ll see how to show Cobertura reports associated with your builds on Azure DevOps and how to display the progress on a dashboard.


Code coverage is a good indicator of the health of your project: the more your project is covered by tests, the lower the probability of easy-to-find bugs in it.

Even though 100% code coverage is a good result, it is not enough: you have to check if your tests are meaningful and bring value to the project; it really doesn’t make any sense to cover each line of your production code with tests valid only for the happy path; you also have to cover the edge cases!

But, even if it’s not enough, having an idea of the code coverage on your project is a good practice: it helps you understand where you should write more tests and, eventually, helps you remove some bugs.

    In a previous article, we’ve seen how to use Coverlet and Cobertura to view the code coverage report on Visual Studio (of course, for .NET projects).

In this article, we’re gonna see how to show that report on Azure DevOps: by using a specific command (or, even better, a set of flags) on your YAML pipeline definition, we are going to display that report for every build we run on Azure DevOps. This simple addition will help you see the status of a specific build and, if needed, update the code to add more tests.

    Then, in the second part of this article, we’re gonna see how to view the coverage history on your Azure DevOps dashboard, by using a plugin called Code Coverage Protector.

    But first, let’s start with the YAML pipelines!

    Coverlet – the NuGet package for code coverage

    As already explained in my previous article, the very first thing to do to add code coverage calculation is to install a NuGet package called Coverlet. This package must be installed in every test project in your Solution.

    So, running a simple dotnet add package coverlet.msbuild on your test projects is enough!

    Create YAML tasks to add code coverage

    Once we have Coverlet installed, it’s time to add the code coverage evaluation to the CI pipeline.

    We need to add two steps to our YAML file: one for collecting the code coverage on test projects, and one for actually publishing it.

    Run tests and collect code coverage results

    Since we are working with .NET Core applications, we need to use a DotNetCoreCLI@2 task to run dotnet test. But we need to specify some attributes: in the arguments field, add /p:CollectCoverage=true to tell the task to collect code coverage results, and /p:CoverletOutputFormat=cobertura to specify which kind of code coverage format we want to receive as output.

    The task will have this form:

    - task: DotNetCoreCLI@2
      displayName: "Run tests"
      inputs:
        command: "test"
        projects: "**/*[Tt]est*/*.csproj"
        publishTestResults: true
        arguments: "--configuration $(buildConfiguration) /p:CollectCoverage=true /p:CoverletOutputFormat=cobertura"
    

    You can see the code coverage preview directly in the log panel of the executing build. The ASCII table tells you the code coverage percentage for each module, specifying the lines, branches, and methods covered by tests for every module.

    Logging dotnet test

Another interesting thing to notice is that this task generates two files: a .trx file, which contains the test results (which tests passed, which ones failed, and other info), and a coverage.cobertura.xml file, which is the one we will use in the next step to publish the coverage results.

    dotnet test generated files

    Publish code coverage results

    Now that we have the coverage.cobertura.xml file, the last thing to do is to publish it.

    Create a task of type PublishCodeCoverageResults@1, specify that the result format is Cobertura, and then specify the location of the file to be published.

    - task: PublishCodeCoverageResults@1
      displayName: "Publish code coverage results"
      inputs:
        codeCoverageTool: "Cobertura"
        summaryFileLocation: "**/*coverage.cobertura.xml"
    

    Final result

Now that we know which tasks to add, we can write the most basic version of a build pipeline:

    trigger:
      - master
    
    pool:
      vmImage: "windows-latest"
    
    variables:
      solution: "**/*.sln"
      buildPlatform: "Any CPU"
      buildConfiguration: "Release"
    
    steps:
      - task: DotNetCoreCLI@2
        displayName: "Build"
        inputs:
          command: "build"
      - task: DotNetCoreCLI@2
        displayName: "Run tests"
        inputs:
          command: "test"
          projects: "**/*[Tt]est*/*.csproj"
          publishTestResults: true
          arguments: "--configuration $(buildConfiguration) /p:CollectCoverage=true /p:CoverletOutputFormat=cobertura"
      - task: PublishCodeCoverageResults@1
        displayName: "Publish code coverage results"
        inputs:
          codeCoverageTool: "Cobertura"
          summaryFileLocation: "**/*coverage.cobertura.xml"
    

    So, here, we simply build the solution, run the tests and publish both test and code coverage results.

    Where can we see the results?

    If we go to the build execution details, we can see the tests and coverage results under the Tests and coverage section.

    Build summary panel

    By clicking on the Code Coverage tab, we can jump to the full report, where we can see how many lines and branches we have covered.

    Test coverage report

And then, when we click on a class (in this case, CodeCoverage.MyArray), we can navigate to the class details to see which lines have been covered by tests.

    Test coverage details on the MyArray class

    Code Coverage Protector: an Azure DevOps plugin

Now what? We should keep track of the code coverage percentage over time. But opening every build execution to see the progress is not a good idea, is it? We should find another way to see the progress.

    A really useful plugin to manage this use case is Code Coverage Protector, developed by Dave Smits: among other things, it allows you to display the status of code coverage directly on your Azure DevOps Dashboards.

To install it, head to the plugin page on the marketplace and click Get it free.

Code Coverage Protector plugin

    Once you have installed it, you can add one or more of its widgets to your project’s Dashboard, define which Build pipeline it must refer to, select which metric must be taken into consideration (line, branch, class, and so on), and set up a few other options (like the size of the widget).

Code Coverage Protector widget on Azure Dashboard

    So, now, with just one look you can see the progress of your project.

    Wrapping up

    In this article, we’ve seen how to publish code coverage reports for .NET applications on Azure DevOps. We’ve used Cobertura and Coverlet to generate the reports, some YAML configurations to show them in the related build panel, and Code Coverage Protector to show the progress in your Azure DevOps dashboard.

    If you want to do one further step, you could use Code Coverage Protector as a build step to make your builds fail if the current Code Coverage percentage is less than the one from the previous builds.

    Happy coding!






  • [ITA] Azure DevOps: build and release pipelines to deploy with confidence


    About the author

    Davide Bellone is a Principal Backend Developer with more than 10 years of professional experience with Microsoft platforms and frameworks.

He loves learning new things and sharing these learnings with others: that’s why he writes on this blog and is involved as a speaker at tech conferences.

    He’s a Microsoft MVP 🏆, conference speaker (here’s his Sessionize Profile) and content creator on LinkedIn.




  • How to Animate WebGL Shaders with GSAP: Ripples, Reveals, and Dynamic Blur Effects

    How to Animate WebGL Shaders with GSAP: Ripples, Reveals, and Dynamic Blur Effects



    In this tutorial, we’ll explore how to bring motion and interactivity to your WebGL projects by combining GSAP with custom shaders. Working with the Dev team at Adoratorio Studio, I’ll guide you through four GPU-powered effects, from ripples that react to clicks to dynamic blurs that respond to scroll and drag.

    We’ll start by setting up a simple WebGL scene and syncing it with our HTML layout. From there, we’ll move step by step through more advanced interactions, animating shader uniforms, blending textures, and revealing images through masks, until we turn everything into a scrollable, animated carousel.

    By the end, you’ll understand how to connect GSAP timelines with shader parameters to create fluid, expressive visuals that react in real time and form the foundation for your own immersive web experiences.

    Creating the HTML structure

    As a first step, we will set up the page using HTML.

We will create a container without specifying its dimensions, allowing it to extend beyond the page width. Then, we will set the main container’s overflow property to hidden, as the page will later be made interactive through the GSAP Draggable and ScrollTrigger plugins.

    <main>
      <section class="content">
        <div class="content__carousel">
          <div class="content__carousel-inner-static">
            <div class="content__carousel-image">
              <img src="/images/01.webp" alt="" role="presentation">
              <span>Lorem — 001</span>
            </div>
            <div class="content__carousel-image">
              <img src="/images/04.webp" alt="" role="presentation">
              <span>Ipsum — 002</span>
            </div>
            <div class="content__carousel-image">
              <img src="/images/02.webp" alt="" role="presentation">
              <span>Dolor — 003</span>
            </div>
            ...
          </div>
        </div>
      </section>
    </main>

    We’ll style all this and then move on to the next step.

    Sync between HTML and Canvas

    We can now begin integrating Three.js into our project by creating a Stage class responsible for managing all 3D engine logic. Initially, this class will set up a renderer, a scene, and a camera.

    We will pass an HTML node as the first parameter, which will act as the container for our canvas.
    Next, we will update the CSS and the main script to create a full-screen canvas that resizes responsively and renders on every GSAP frame.

import { OrthographicCamera, Scene, WebGLRenderer } from 'three';

export default class Stage {
      constructor(container) {
        this.container = container;
    
        this.DOMElements = [...this.container.querySelectorAll('img')];
    
        this.renderer = new WebGLRenderer({
          powerPreference: 'high-performance',
          antialias: true,
          alpha: true,
        });
        this.renderer.setPixelRatio(Math.min(1.5, window.devicePixelRatio));
        this.renderer.setSize(window.innerWidth, window.innerHeight);
        this.renderer.domElement.classList.add('content__canvas');
    
        this.container.appendChild(this.renderer.domElement);
    
        this.scene = new Scene();
    
        const { innerWidth: width, innerHeight: height } = window;
        this.camera = new OrthographicCamera(-width / 2, width / 2, height / 2, -height / 2, -1000, 1000);
        this.camera.position.z = 10;
      }
    
      resize() {
        // Update camera props to fit the canvas size
        const { innerWidth: screenWidth, innerHeight: screenHeight } = window;
    
        this.camera.left = -screenWidth / 2;
        this.camera.right = screenWidth / 2;
        this.camera.top = screenHeight / 2;
        this.camera.bottom = -screenHeight / 2;
        this.camera.updateProjectionMatrix();
    
        // Update also planes sizes
        this.DOMElements.forEach((image, index) => {
          const { width: imageWidth, height: imageHeight } = image.getBoundingClientRect();
          this.scene.children[index].scale.set(imageWidth, imageHeight, 1);
        });
    
        // Update the render using the window sizes
        this.renderer.setSize(screenWidth, screenHeight);
      }
    
      render() {
        this.renderer.render(this.scene, this.camera);
      }
    }

    Back in our main.js file, we’ll first handle the stage’s resize event. After that, we’ll synchronize the renderer’s requestAnimationFrame (RAF) with GSAP by using gsap.ticker.add, passing the stage’s render function as the callback.

    // Update resize with the stage resize
    function resize() {
      ...
      stage.resize();
    }
    
    // Add render cycle to gsap ticker
    gsap.ticker.add(stage.render.bind(stage));
    
And the corresponding CSS for the canvas:

.content__canvas {
  position: absolute;
  top: 0;
  left: 0;
  width: 100vw;
  height: 100svh;

  z-index: 2;
  pointer-events: none;
}

    It’s now time to load all the images included in the HTML. For each image, we will create a plane and add it to the scene. To achieve this, we’ll update the class by adding two new methods:

    setUpPlanes() {
      this.DOMElements.forEach((image) => {
        this.scene.add(this.generatePlane(image));
      });
    }
    
generatePlane(image) {
  const loader = new TextureLoader();
  const texture = loader.load(image.src);

  // The texture is loaded here and will be wired into a custom material in the next steps
  texture.colorSpace = SRGBColorSpace;
  const plane = new Mesh(
    new PlaneGeometry(1, 1),
    new MeshStandardMaterial(),
  );
    
      return plane;
    }

    We can then call setUpPlanes() within the constructor of our Stage class.
    The result should resemble the following, depending on the camera’s z-position or the planes’ placement—both of which can be adjusted to fit our specific needs.

    The next step is to position the planes precisely to correspond with the location of their associated images and update their positions on each frame. To achieve this, we will implement a utility function that converts screen space (CSS pixels) into world space, leveraging the Orthographic Camera, which is already aligned with the screen.

    const getWorldPositionFromDOM = (element, camera) => {
      const rect = element.getBoundingClientRect();
    
      const xNDC = (rect.left + rect.width / 2) / window.innerWidth * 2 - 1;
      const yNDC = -((rect.top + rect.height / 2) / window.innerHeight * 2 - 1);
    
      const xWorld = xNDC * (camera.right - camera.left) / 2;
      const yWorld = yNDC * (camera.top - camera.bottom) / 2;
    
      return new Vector3(xWorld, yWorld, 0);
    };
// Inside the Stage class:
render() {
  this.renderer.render(this.scene, this.camera);

  // For each plane and each image update the position of the plane to match the DOM element position on page
  this.DOMElements.forEach((image, index) => {
    this.scene.children[index].position.copy(getWorldPositionFromDOM(image, this.camera));
  });
}

    By hiding the original DOM carousel, we can now display only the images as planes within the canvas. Create a simple class extending ShaderMaterial and use it in place of MeshStandardMaterial for the planes.

    const plane = new Mesh(
      new PlaneGeometry(1, 1),
      new PlanesMaterial(),
    );
    ...
    
    import { ShaderMaterial } from 'three';
    import baseVertex from './base.vert?raw';
    import baseFragment from './base.frag?raw';
    
    export default class PlanesMaterial extends ShaderMaterial {
      constructor() {
        super({
          vertexShader: baseVertex,
          fragmentShader: baseFragment,
        });
      }
    }
    
    // base.vert
    varying vec2 vUv;
    
    void main() {
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
      vUv = uv;
    }
    
    // base.frag
    varying vec2 vUv;
    
    void main() {
      gl_FragColor = vec4(vUv.x, vUv.y, 0.0, 1.0);
    }

    We can then replace the shader output with texture sampling based on the UV coordinates, passing the texture to the material and shaders as a uniform.

    ...
    const plane = new Mesh(
      new PlaneGeometry(1, 1),
      new PlanesMaterial(texture),
    );
    ...
    
    export default class PlanesMaterial extends ShaderMaterial {
      constructor(texture) {
        super({
          vertexShader: baseVertex,
          fragmentShader: baseFragment,
          uniforms: {
            uTexture: { value: texture },
          },
        });
      }
    }
    
    // base.frag
    varying vec2 vUv;
    
    uniform sampler2D uTexture;
    
    void main() {
      vec4 diffuse = texture2D(uTexture, vUv);
      gl_FragColor = diffuse;
    }

    Click on the images for a ripple and coloring effect

This step breaks down the creation of an interactive grayscale transition effect, emphasizing the relationship between JavaScript (using GSAP) and GLSL shaders.

    Step 1: Instant Color/Grayscale Toggle

    Let’s start with the simplest version: clicking the image makes it instantly switch between color and grayscale.

    The JavaScript (GSAP)

At this stage, GSAP’s role is to act as a simple “on/off” switch, so let’s create a GSAP Observer to monitor the mouse click interaction:

    this.observer = Observer.create({
      target: document.querySelector('.content__carousel'),
      type: 'touch,pointer',
      onClick: e => this.onClick(e),
    });

Here are the steps:

    • Click Detection: We use an Observer to detect a click on our plane.
    • State Management: A boolean flag, isBw (is Black and White), is toggled on each click.
    • Shader Update: We use gsap.set() to instantly change a uniform in our shader. We’ll call it uGrayscaleProgress.
  • If isBw is true, uGrayscaleProgress becomes 1.0.
  • If isBw is false, uGrayscaleProgress becomes 0.0.

    onClick(e) {
      if (intersection) {
        const { material, userData } = intersection.object;
    
        userData.isBw = !userData.isBw;
    
        gsap.set(material.uniforms.uGrayscaleProgress, {
          value: userData.isBw ? 1.0 : 0.0
        });
      }
    }
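The snippet above assumes an intersection obtained from a Raycaster hit test. Here is a minimal sketch of that lookup, mirroring the onMove logic shown later in this article (the helper name is illustrative):

getIntersection(e) {
  // Normalize pointer coordinates to the -1..1 NDC range the Raycaster expects
  const normCoords = {
    x: (e.x / window.innerWidth) * 2 - 1,
    y: -(e.y / window.innerHeight) * 2 + 1,
  };

  this.raycaster.setFromCamera(normCoords, this.camera);

  // Return the closest plane under the pointer, or undefined
  const [intersection] = this.raycaster.intersectObjects(this.scene.children);
  return intersection;
}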

    The Shader (GLSL)

    The fragment shader is very simple. It receives uGrayscaleProgress and uses it as a switch.

    uniform sampler2D uTexture;
    uniform float uGrayscaleProgress; // Our "switch" (0.0 or 1.0)
    varying vec2 vUv;
    
    vec3 toGrayscale(vec3 color) {
      float gray = dot(color, vec3(0.299, 0.587, 0.114));
      return vec3(gray);
    }
    
    void main() {
      vec3 originalColor = texture2D(uTexture, vUv).rgb;
      vec3 grayscaleColor = toGrayscale(originalColor);
      
       vec3 finalColor = mix(originalColor, grayscaleColor, uGrayscaleProgress);
       gl_FragColor = vec4(finalColor, 1.0);
    }

    Step 2: Animated Circular Reveal

    An instant switch is boring. Let’s make the transition a smooth, circular reveal that expands from the center.

    The JavaScript (GSAP)

    GSAP’s role now changes from a switch to an animator.
    Instead of gsap.set(), we use gsap.to() to animate uGrayscaleProgress from 0 to 1 (or 1 to 0) over a set duration. This sends a continuous stream of values (0.0, 0.01, 0.02, …) to the shader.

    gsap.to(material.uniforms.uGrayscaleProgress, {
      value: userData.isBw ? 1 : 0,
      duration: 1.5,
      ease: 'power2.inOut'
    });

    The Shader (GLSL)

    The shader now uses the animated uGrayscaleProgress to define the radius of a circle.

void main() {
  // ... (texture sampling and grayscale conversion from Step 1)

  // 1. Measure the distance of the current pixel from the center.
  float dist = distance(vUv, vec2(0.5));

  // 2. Create a circular mask.
  float mask = smoothstep(uGrayscaleProgress - 0.1, uGrayscaleProgress, dist);
    
      // 3. Mix the colors based on the mask's value for each pixel.
      vec3 finalColor = mix(originalColor, grayscaleColor, mask);
      gl_FragColor = vec4(finalColor, 1.0);
    }

How smoothstep works here: Pixels where dist is less than uGrayscaleProgress - 0.1 get a mask value of 0. Pixels where dist is greater than uGrayscaleProgress get a value of 1. In between, it’s a smooth transition, creating the soft edge.
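If it helps to see the numbers, here is a quick JavaScript port of the smoothstep built-in (for illustration only; the GPU provides this natively):

// smoothstep(edge0, edge1, x): 0 below edge0, 1 above edge1, a smooth cubic in between
function smoothstep(edge0, edge1, x) {
  const t = Math.min(Math.max((x - edge0) / (edge1 - edge0), 0), 1);
  return t * t * (3 - 2 * t);
}

// With uGrayscaleProgress = 0.5, the soft edge spans dist values 0.4..0.5:
console.log(smoothstep(0.4, 0.5, 0.35)); // 0   -> fully inside the reveal
console.log(smoothstep(0.4, 0.5, 0.45)); // 0.5 -> on the soft edge
console.log(smoothstep(0.4, 0.5, 0.55)); // 1   -> untouched area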

    Step 3: Originating from the Mouse Click

    The effect is much more engaging if it starts from the exact point of the click.

    The JavaScript (GSAP)

    We need to tell the shader where the click happened.

    • Raycasting: We use a Raycaster to find the precise (u, v) texture coordinate of the click on the mesh.
    • uMouse Uniform: We add a uniform vec2 uMouse to our material.
• GSAP Timeline: Before the animation starts, we use .set() on our GSAP timeline to update the uMouse uniform with the intersection.uv coordinates.

    if (intersection) {
      const { material, userData } = intersection.object;
    
      material.uniforms.uMouse.value = intersection.uv;
    
      gsap.to(material.uniforms.uGrayscaleProgress, {
          value: userData.isBw ? 1 : 0
      });
    }

    The Shader (GLSL)

    We simply replace the hardcoded center with our new uMouse uniform.

    ...
    uniform vec2 uMouse; // The (u,v) coordinates from the click
    ...
    
    void main() {
    ...
    
    // 1. Calculate distance from the MOUSE CLICK, not the center.
    float dist = distance(vUv, uMouse);
    }

    Important Detail: To ensure the circular reveal always covers the entire plane, even when clicking in a corner, we calculate the maximum possible distance from the click point to any of the four corners (getMaxDistFromCorners) and normalize our dist value with it: dist / maxDist.

    This guarantees the animation completes fully.
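The article doesn’t show getMaxDistFromCorners itself; it’s plain geometry, so a sketch of the equivalent logic in JavaScript (the GLSL version mirrors this in UV space) might look like:

// Maximum distance from a UV point to any corner of the 0..1 UV square
function getMaxDistFromCorners(mouse) {
  const corners = [[0, 0], [1, 0], [0, 1], [1, 1]];
  return Math.max(...corners.map(([x, y]) => Math.hypot(mouse.x - x, mouse.y - y)));
}

console.log(getMaxDistFromCorners({ x: 0, y: 0 }));     // ≈ 1.414 (opposite corner)
console.log(getMaxDistFromCorners({ x: 0.5, y: 0.5 })); // ≈ 0.707 (any corner)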

    Step 4: Adding the Final Ripple Effect

    The last step is to add the 3D ripple effect that deforms the plane. This requires modifying the vertex shader.

    The JavaScript (GSAP)

    We need one more animated uniform to control the ripple’s lifecycle.

    1. uRippleProgress Uniform: We add a uniform float uRippleProgress.
2. GSAP Keyframes: In the same timeline, we animate uRippleProgress from 0 to 1 and back to 0. This makes the wave rise up and then settle back down.

    gsap.timeline({ defaults: { duration: 1.5, ease: 'power3.inOut' } })
      .set(material.uniforms.uMouse, { value: intersection.uv }, 0)
      .to(material.uniforms.uGrayscaleProgress, { value: 1 }, 0)
      .to(material.uniforms.uRippleProgress, {
          keyframes: { value: [0, 1, 0] } // Rise and fall
      }, 0)

    The Shaders (GLSL)

    High-Poly Geometry: To see a smooth deformation, the PlaneGeometry in Three.js must be created with many segments (e.g., new PlaneGeometry(1, 1, 50, 50)). This gives the vertex shader more points to manipulate.

generatePlane(image) {
      ...
      const plane = new Mesh(
        new PlaneGeometry(1, 1, 50, 50),
        new PlanesMaterial(texture),
      );
    
      return plane;
    }

    Vertex Shader: This shader now calculates the wave and moves the vertices.

#define PI 3.14159265358979

uniform float uRippleProgress;
uniform float uTime; // Advanced from JavaScript while the effect runs
uniform vec2 uMouse;
varying float vRipple; // Pass the ripple intensity to the fragment shader
    
    void main() {
      vec3 pos = position;
      float dist = distance(uv, uMouse);
    
      float ripple = sin(-PI * 10.0 * (dist - uTime * 0.1));
      ripple *= uRippleProgress;
    
      pos.y += ripple * 0.1;
    
      vRipple = ripple;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
    }

    Fragment Shader: We can use the ripple intensity to add a final touch, like making the wave crests brighter.

    varying float vRipple; // Received from vertex shader
    
    void main() {
      // ... (all the color and mask logic from before)
      vec3 color = mix(color1, color2, mask);
    
      // Add a highlight based on the wave's height
      color += vRipple * 2.0;
    
      gl_FragColor = vec4(color, diffuse.a);
    }

    By layering these techniques, we create a rich, interactive effect where JavaScript and GSAP act as the puppet master, telling the shaders what to do, while the shaders handle the heavy lifting of drawing it beautifully and efficiently on the GPU.

    Step 5: Reverse effect on previous tile

    As a final step, we set up a reverse animation of the current tile when a new tile is clicked. Let’s start by creating the reset animation that reverses the animation of the uniforms:

    resetMaterial(object) {
      // Reset all shader uniforms to default values
      gsap.timeline({
        defaults: { duration: 1, ease: 'power2.out' },
    
        onUpdate() {
          object.material.uniforms.uTime.value += 0.1;
        },
        onComplete() {       
          object.userData.isBw = false;
        }
      })
      .set(object.material.uniforms.uMouse, { value: { x: 0.5, y: 0.5} }, 0)
      .set(object.material.uniforms.uDirection, { value: 1.0 }, 0)
      .fromTo(object.material.uniforms.uGrayscaleProgress, { value: 1 }, { value: 0 }, 0)
      .to(object.material.uniforms.uRippleProgress, { keyframes: { value: [0, 1, 0] } }, 0);
    }

Now, at each click, we need to keep track of the current tile, stored on a property initialized in the constructor, so we can pass it to the reset animation. Let’s modify the onClick function like this and analyze it step by step:

    if (this.activeObject && intersection.object !== this.activeObject && this.activeObject.userData.isBw) {
      this.resetMaterial(this.activeObject)
      
      // Stops timeline if active
      if (this.activeObject.userData.tl?.isActive()) this.activeObject.userData.tl.kill();
      
      // Cleans timeline
      this.activeObject.userData.tl = null;
    }
    
    // Setup active object
this.activeObject = intersection.object;

    • If this.activeObject exists (initially set to null in the constructor), we proceed to reset it to its initial black and white state
    • If there’s a current animation on the active tile, we use GSAP’s kill method to avoid conflicts and overlapping animations
    • We reset userData.tl to null (it will be assigned a new timeline value if the tile is clicked again)
    • We then set the value of this.activeObject to the object selected via the Raycaster

    In this way, we’ll have a double ripple animation: one on the clicked tile, which will be colored, and one on the previously active tile, which will be reset to its original black and white state.

    Texture reveal mask effect

    In this tutorial, we will create an interactive effect that blends two images on a plane when the user hovers or touches it.

    Step 1: Setting Up the Planes

    Unlike the previous examples, in this case we need different uniforms for the planes, as we are going to create a mix between a visible front texture and another texture that will be revealed through a mask that “cuts through” the first texture.

    Let’s start by modifying the index.html file, adding a data attribute to all images where we’ll specify the underlying texture:

    <img src="/images/front-texture.webp" alt="" role="presentation" data-back="/images/back-texture.webp">

    Then, inside our Stage.js, we’ll modify the generatePlane method, which is used to create the planes in WebGL. We’ll start by retrieving the second texture to load via the data attribute, and we’ll pass the plane material the parameters with both textures and the aspect ratio of the images:

    generatePlane(image) {
      const loader = new TextureLoader();
      const texture = loader.load(image.src);
      const textureBack = loader.load(image.dataset.back);
    
      texture.colorSpace = SRGBColorSpace;
      textureBack.colorSpace = SRGBColorSpace;
    
      const { width, height } = image.getBoundingClientRect();
    
      const plane = new Mesh(
        new PlaneGeometry(1, 1),
        new PlanesMaterial(texture, textureBack, height / width),
      );
    
      return plane;
    }
    

    Step 2: Material Setup

    import { ShaderMaterial, Vector2 } from 'three';
    import baseVertex from './base.vert?raw';
    import baseFragment from './base.frag?raw';
    
    export default class PlanesMaterial extends ShaderMaterial {
      constructor(texture, textureBack, imageRatio) {
        super({
          vertexShader: baseVertex,
          fragmentShader: baseFragment,
          uniforms: {
            uTexture: { value: texture },
            uTextureBack: { value: textureBack },
            uMixFactor: { value: 0.0 },
            uAspect: { value: imageRatio },
            uMouse: { value: new Vector2(0.5, 0.5) },
          },
        });
      }
    }
    

    Let’s quickly analyze the uniforms passed to the material:

    • uTexture and uTextureBack are the two textures shown on the front and through the mask
    • uMixFactor represents the blending value between the two textures inside the mask
    • uAspect is the aspect ratio of the images used to calculate a circular mask
    • uMouse represents the mouse coordinates, updated to move the mask within the plane

Step 3: The JavaScript (GSAP)

    this.observer = Observer.create({
      target: document.querySelector('.content__carousel'),
      type: 'touch,pointer',
      onMove: e => this.onMove(e),
      onHoverEnd: () => this.hoverOut(),
    });

    Quickly, let’s create a GSAP Observer to monitor the mouse movement, passing two functions:

    • onMove checks, using the Raycaster, whether a plane is being hit in order to manage the opening of the reveal mask
    • onHoverEnd is triggered when the cursor leaves the target area, so we’ll use this method to reset the reveal mask’s expansion uniform value back to 0.0

    Let’s go into more detail on the onMove function to explain how it works:

    onMove(e) {
      const normCoords = {
        x: (e.x / window.innerWidth) * 2 - 1,
        y: -(e.y / window.innerHeight) * 2 + 1,
      };
    
      this.raycaster.setFromCamera(normCoords, this.camera);
    
      const [intersection] = this.raycaster.intersectObjects(this.scene.children);
    
      if (intersection) {
        this.intersected = intersection.object;
        const { material } = intersection.object;
    
        gsap.timeline()
          .set(material.uniforms.uMouse, { value: intersection.uv }, 0)
          .to(material.uniforms.uMixFactor, { value: 1.0, duration: 3, ease: 'power3.out' }, 0);
      } else {
        this.hoverOut();
      }
    }

In the onMove method, the first step is to normalize the mouse coordinates to the -1..1 range, so the Raycaster can work with the coordinate system it expects.

On each pointer move, the Raycaster is then updated to check whether any object in the scene is intersected. If there is an intersection, the code saves the hit object in a variable.

    When an intersection occurs, we proceed to work on the animation of the shader uniforms.

    Specifically, we use GSAP’s set method to update the mouse position in uMouse, and then animate the uMixFactor variable from 0.0 to 1.0 to open the reveal mask and show the underlying texture.

    If the Raycaster doesn’t find any object under the pointer, the hoverOut method is called.

    hoverOut() {
        if (!this.intersected) return;
    
        // Stop any running tweens on the uMixFactor uniform
        gsap.killTweensOf(this.intersected.material.uniforms.uMixFactor);
    
        // Animate uMixFactor back to 0 smoothly
    gsap.to(this.intersected.material.uniforms.uMixFactor, { value: 0.0, duration: 0.5, ease: 'power3.out' });
    
        // Clear the intersected reference
        this.intersected = null;
      }

    This method handles closing the reveal mask once the cursor leaves the plane.

First, we rely on the killTweensOf method to prevent conflicts or overlaps between the mask’s opening and closing animations by stopping all ongoing animations on the uMixFactor uniform.

    Then, we animate the mask’s closing by setting the uMixFactor uniform back to 0.0 and reset the variable that was tracking the currently highlighted object.

    Step 4: The Shader (GLSL)

    uniform sampler2D uTexture;
    uniform sampler2D uTextureBack;
    uniform float uMixFactor;
    uniform vec2 uMouse;
    uniform float uAspect;
    
    varying vec2 vUv;
    
    void main() {
        vec2 correctedUv = vec2(vUv.x, (vUv.y - 0.5) * uAspect + 0.5);
        vec2 correctedMouse = vec2(uMouse.x, (uMouse.y - 0.5) * uAspect + 0.5);
        
        float distance = length(correctedUv - correctedMouse);
        float influence = 1.0 - smoothstep(0.0, 0.5, distance);
    
        float finalMix = uMixFactor * influence;
    
        vec4 textureFront = texture2D(uTexture, vUv);
        vec4 textureBack = texture2D(uTextureBack, vUv);
    
        vec4 finalColor = mix(textureFront, textureBack, finalMix);
    
        gl_FragColor = finalColor;
    }

    Inside the main() function, it starts by normalizing the UV coordinates and the mouse position relative to the image’s aspect ratio. This correction is applied because we are using non-square images, so the vertical coordinates must be adjusted to keep the mask’s proportions correct and ensure it remains circular. Therefore, the vUv.y and uMouse.y coordinates are modified so they are “scaled” vertically according to the aspect ratio.

    At this point, the distance is calculated between the current pixel (correctedUv) and the mouse position (correctedMouse). This distance is a numeric value that indicates how close or far the pixel is from the mouse center on the surface.

We then move on to the actual creation of the mask. The influence value must vary from 1 at the cursor’s center to 0 as we move away from it. We use the smoothstep function to recreate this effect and obtain a soft, gradual transition between the two values, so the effect fades naturally.

The final value for the mix between the two textures, finalMix, is the product of the global factor uMixFactor (the uniform animated from JavaScript) and this local influence value. So the closer a pixel is to the mouse position, the more its color is influenced by the second texture, uTextureBack.

    The last part is the actual blending: the two colors are mixed using the mix() function, which creates a linear interpolation between the two textures based on the value of finalMix. When finalMix is 0, only the front texture is visible.

    When it is 1, only the background texture is visible. Intermediate values create a gradual blend between the two textures.

    Click & Hold mask reveal effect

This section breaks down the creation of an interactive effect that transitions an image from color to grayscale. The effect starts from the user’s click, expanding outwards with a ripple distortion.

    Step 1: The “Move” (Hover) Effect

    In this step, we’ll create an effect where an image transitions to another as the user hovers their mouse over it. The transition will originate from the pointer’s position and expand outwards.

    The JavaScript (GSAP Observer for onMove)

    GSAP’s Observer plugin is the perfect tool for tracking pointer movements without the boilerplate of traditional event listeners.

    • Setup Observer: We create an Observer instance that targets our main container and listens for touch and pointer events. We only need the onMove and onHoverEnd callbacks.
    • onMove(e) Logic:
      When the pointer moves, we use a Raycaster to determine if it’s over one of our interactive images.
      • If an object is intersected, we store it in this.intersected.
      • We then use a GSAP Timeline to animate the shader’s uniforms.
      • uMouse: We instantly set this vec2 uniform to the pointer’s UV coordinate on the image. This tells the shader where the effect should originate.
      • uMixFactor: We animate this float uniform from 0 to 1. This uniform will control the blend between the two textures in the shader.
    • onHoverEnd() Logic:
      • When the pointer leaves the object, Observer calls this function.
      • We kill any ongoing animations on uMixFactor to prevent conflicts.
      • We animate uMixFactor back to 0, reversing the effect.

    Code Example: the “Move” effect

    This code shows how Observer is configured to handle the hover interaction.

    import { gsap } from 'gsap';
    import { Observer } from 'gsap/Observer';
    import { Raycaster } from 'three';
    
    gsap.registerPlugin(Observer);
    
    export default class Effect {
      constructor(scene, camera) {
        this.scene = scene;
        this.camera = camera;
        this.intersected = null;
        this.raycaster = new Raycaster();
    
    	// 1. Create the Observer
    	this.observer = Observer.create({
          target: document.querySelector('.content__carousel'),
          type: 'touch,pointer',
          onMove: e => this.onMove(e),
          onHoverEnd: () => this.hoverOut(), // Called when the pointer leaves the target
        });
      }
    
      hoverOut() {
        if (!this.intersected) return;
    
    	// 3. Animate the effect out
        gsap.killTweensOf(this.intersected.material.uniforms.uMixFactor);
        gsap.to(this.intersected.material.uniforms.uMixFactor, {
          value: 0.0,
          duration: 0.5,
          ease: 'power3.out'
        });
    
        this.intersected = null;
      }
    
      onMove(e) {
    	// ... (Raycaster logic to find intersection)
    	const [intersection] = this.raycaster.intersectObjects(this.scene.children);
    
        if (intersection) {
          this.intersected = intersection.object;
          const { material } = intersection.object;
    
    	  // 2. Animate the uniforms on hover
          gsap.timeline()
            .set(material.uniforms.uMouse, { value: intersection.uv }, 0) // Set origin point
        .to(material.uniforms.uMixFactor, { // Animate the blend
          value: 1.0,
              duration: 3,
              ease: 'power3.out'
            }, 0);
        } else {
          this.hoverOut(); // Reset if not hovering over anything
        }
      }
    }

    The Shader (GLSL)

    The fragment shader receives the uniforms animated by GSAP and uses them to draw the effect.

    • uMouse: Used to calculate the distance of each pixel from the pointer.
    • uMixFactor: Used as the interpolation value in a mix() function. As it animates from 0 to 1, the shader smoothly blends from textureFront to textureBack.
• smoothstep(): We use this function to create a circular mask that expands from the uMouse position. The radius of this circle is controlled by uMixFactor.

    uniform sampler2D uTexture; // Front image
    uniform sampler2D uTextureBack; // Back image
    uniform float uMixFactor; // Animated by GSAP (0 to 1)
    uniform vec2 uMouse; // Set by GSAP on move
    
    // ...
    
    void main() {
      // ... (code to correct for aspect ratio)
    
      // 1. Calculate distance of the current pixel from the mouse
      float distance = length(correctedUv - correctedMouse);
    
      // 2. Create a circular mask that expands as uMixFactor increases
      float influence = 1.0 - smoothstep(0.0, 0.5, distance);
      float finalMix = uMixFactor * influence;
    
      // 3. Read colors from both textures
      vec4 textureFront = texture2D(uTexture, vUv);
      vec4 textureBack = texture2D(uTextureBack, vUv);
    
      // 4. Mix the two textures based on the animated value
      vec4 finalColor = mix(textureFront, textureBack, finalMix);
    	
      gl_FragColor = finalColor;
    }

    Step 2: The “Click & Hold” Effect

    Now, let’s build a more engaging interaction. The effect will start when the user presses down, “charge up” while they hold, and either complete or reverse when they release.

    The JavaScript (GSAP)

    Observer makes this complex interaction straightforward by providing clear callbacks for each state.

• Setup Observer: This time, we configure Observer to use onPress, onMove, and onRelease.
    • onPress(e):
      • When the user presses down, we find the intersected object and store it in this.active.
      • We then call onActiveEnter(), which starts a GSAP timeline for the “charging” animation.
    • onActiveEnter():
      • This function defines the multi-stage animation. We use await with a GSAP tween to create a sequence.
      • First, it animates uGrayscaleProgress to a midpoint (e.g., 0.35) and holds it. This is the “hold” part of the interaction.
      • If the user continues to hold, a second tween completes the animation, transitioning uGrayscaleProgress to 1.0.
      • An onComplete callback then resets the state, preparing for the next interaction.
    • onRelease():
      • If the user releases the pointer before the animation completes, this function is called.
      • It calls onActiveLeve(), which kills the “charging” animation and animates uGrayscaleProgress back to 0, effectively reversing the effect.
    • onMove(e):
      • This is still used to continuously update the uMouse uniform, so the shader’s noise effect tracks the pointer even during the hold.
      • Crucially, if the pointer moves off the object, we call onRelease() to cancel the interaction.

    Code Example: Click & Hold

    This code demonstrates the press, hold, and release logic managed by Observer.

    import { gsap } from 'gsap';
    import { Observer } from 'gsap/Observer';
    
    // ...
    
    export default class Effect {
      constructor(scene, camera) {
    	// ...
    		
        this.active = null; // Currently active (pressed) object
    	this.raycaster = new Raycaster();
    	
    	// 1. Create the Observer for press, move, and release
    	this.observer = Observer.create({
    	  target: document.querySelector('.content__carousel'),
    	  type: 'touch,pointer',
          onPress: e => this.onPress(e),
          onMove: e => this.onMove(e),
    	  onRelease: () => this.onRelease(),
    	});
    	
    	// Continuously update uTime for the procedural effect
    	gsap.ticker.add(() => {
    	  if (this.active) {
    	    this.active.material.uniforms.uTime.value += 0.1;
    	  }
    	});
      }
    
      // 3. The "charging" animation
      async onActiveEnter() {
        gsap.killTweensOf(this.active.material.uniforms.uGrayscaleProgress);
    
        // First part of the animation (the "hold" phase)
    	await gsap.to(this.active.material.uniforms.uGrayscaleProgress, {
          value: 0.35,
          duration: 0.5,
        });
    
    	// Second part, completes after the hold
        gsap.to(this.active.material.uniforms.uGrayscaleProgress, {
          value: 1,
          duration: 0.5,
          delay: 0.12,
          ease: 'power2.in',
          onComplete: () => {/* ... reset state ... */ },
        });
      }
    
      // 4. Reverses the animation on early release
      onActiveLeve(mesh) {
        gsap.killTweensOf(mesh.material.uniforms.uGrayscaleProgress);
        gsap.to(mesh.material.uniforms.uGrayscaleProgress, {
          value: 0,
          onUpdate: () => {
            mesh.material.uniforms.uTime.value += 0.1;
          },
        });
      }
    
      // ... (getIntersection logic) ...
    	
      // 2. Handle the initial press
      onPress(e) {
        const intersection = this.getIntersection(e);
    
        if (intersection) {
          this.active = intersection.object;
          this.onActiveEnter(this.active); // Start the animation
        }
      }
    
      onRelease() {
        if (this.active) {
          const prevActive = this.active;
          this.active = null;
          this.onActiveLeve(prevActive); // Reverse the animation
        }
      }
    
      onMove(e) {
    	// ... (getIntersection logic) ...
    		
    	if (intersection) {
    	  // 5. Keep uMouse updated while holding
    	  const { material } = intersection.object;
          gsap.set(material.uniforms.uMouse, { value: intersection.uv });
        } else {
          this.onRelease(); // Cancel if pointer leaves
        }
      }
    }

    The Shader (GLSL)

    The fragment shader for this effect is more complex. It uses the animated uniforms to create a distorted, noisy reveal.

    • uGrayscaleProgress: This is the main driver, animated by GSAP. It controls both the radius of the circular mask and the strength of a “liquid” distortion effect.
    • uTime: This is continuously updated by gsap.ticker as long as the user is pressing. It’s used to add movement to the noise, making the effect feel alive and dynamic.
• noise() function: A standard GLSL noise function generates procedural, organic patterns. We use this to distort both the shape of the circular mask and the image texture coordinates (UVs).

    // ... (uniforms and helper functions)
    
    void main() {
      // 1. Generate a noise value that changes over time
      float noisy = (noise(vUv * 25.0 + uTime * 0.5) - 0.5) * 0.05;
    
      // 2. Create a distortion that pulses using the main progress animation
      float distortionStrength = sin(uGrayscaleProgress * PI) * 0.5;
      vec2 distortedUv = vUv + vec2(noisy) * distortionStrength;
    
      // 3. Read the texture using the distorted coordinates for a liquid effect
      vec4 diffuse = texture2D(uTexture, distortedUv);
      // ... (grayscale logic)
    	
      // 4. Calculate distance from the mouse, but add noise to it
      float dist = distance(vUv, uMouse);
      float distortedDist = dist + noisy;
    
      // 5. Create the circular mask using the distorted distance and progress
      float maxDist = getMaxDistFromCorners(uMouse);
      float mask = smoothstep(uGrayscaleProgress - 0.1, uGrayscaleProgress, distortedDist / maxDist);
    
      // 6. Mix between the original and grayscale colors
      vec3 color = mix(color1, color2, mask);
    
      gl_FragColor = vec4(color, diffuse.a);
    }
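The noise() source isn’t included in the tutorial; a common choice is a hash-based value noise. Here’s a JavaScript sketch of that pattern, as an assumption rather than the demo’s exact implementation:

const fract = (x) => x - Math.floor(x);

// Pseudo-random hash for a lattice point, mirroring the classic
// fract(sin(dot(p, vec2(127.1, 311.7))) * 43758.5453) GLSL one-liner
const hash = (x, y) => fract(Math.sin(x * 127.1 + y * 311.7) * 43758.5453);

// 2D value noise: hash the four surrounding lattice points,
// then blend them with a smoothstep-shaped interpolation
function noise(x, y) {
  const ix = Math.floor(x), iy = Math.floor(y);
  let fx = fract(x), fy = fract(y);
  fx = fx * fx * (3 - 2 * fx);
  fy = fy * fy * (3 - 2 * fy);
  const a = hash(ix, iy), b = hash(ix + 1, iy);
  const c = hash(ix, iy + 1), d = hash(ix + 1, iy + 1);
  return a + (b - a) * fx + (c - a) * fy + (a - b - c + d) * fx * fy;
}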

    This shader combines noise-based distortion, smooth circular masking, and real-time uniform updates to create a liquid, organic transition that radiates from the click position. As GSAP animates the shader’s progress and time values, the effect feels alive and tactile — a perfect example of how animation logic in JavaScript can drive complex visual behavior directly on the GPU.

    Dynamic blur effect carousel

    Step 1: Create the carousel

    In this final demo, we will create an additional implementation, turning the image grid into a scrollable carousel that can be navigated both by dragging and scrolling.

    First we will implement the Draggable plugin by registering it and targeting the appropriate <div>
    with the desired configuration. Make sure to handle boundary constraints and update them accordingly when the window is resized.

    const carouselInnerRef = document.querySelector('.content__carousel-inner');
    const draggable = new Draggable(carouselInnerRef, {
      type: 'x',
      inertia: true,
      dragResistance: 0.5,
      edgeResistance: 0.5,
      throwResistance: 0.5,
      throwProps: true,
    });
    
    function resize() {
      const innerWidth = carouselInnerRef.scrollWidth;
      const viewportWidth = window.innerWidth;
      maxScroll = Math.abs(Math.min(0, viewportWidth - innerWidth));
    
      draggable.applyBounds({ minX: -maxScroll, maxX: 0 });
    }
    
    window.addEventListener('resize', debounce(resize));
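The resize listener above assumes a debounce utility; any implementation works, for example this minimal sketch:

// Run fn only after `wait` ms have passed without a new call
function debounce(fn, wait = 200) {
  let timeoutId;
  return (...args) => {
    clearTimeout(timeoutId);
    timeoutId = setTimeout(() => fn(...args), wait);
  };
}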

We will also link GSAP Draggable to the scroll functionality using the GSAP ScrollTrigger plugin, allowing us to synchronize both scroll and drag behavior within the same container. Let’s explore this in more detail:

    let maxScroll = Math.abs(Math.min(0, window.innerWidth - carouselInnerRef.scrollWidth));
    
    const scrollTriggerInstance = ScrollTrigger.create({
      trigger: carouselWrapper,
      start: 'top top',
      end: `+=${2.5 * maxScroll}`,
      pin: true,
      scrub: 0.05,
      anticipatePin: 1,
      invalidateOnRefresh: true,
    });
    
    ...
    
    resize() {
      ...
      scrollTriggerInstance.refresh();
    }

    Now that ScrollTrigger is configured on the same container, we can focus on synchronizing the scroll position between both plugins, starting from the ScrollTrigger instance:

    onUpdate(e) {
      const x = -maxScroll * e.progress;
    
      gsap.set(carouselInnerRef, { x });
      draggable.x = x;
      draggable.update();
    }

    We then move on to the Draggable instance, which will be updated within both its onDrag and onThrowUpdate callbacks using the scrollPos variable. This variable will serve as the final scroll position for both the window and the ScrollTrigger instance.

    onDragStart() {},
    onDrag() {
      const progress = gsap.utils.normalize(draggable.maxX, draggable.minX, draggable.x);
      scrollPos = scrollTriggerInstance.start + (scrollTriggerInstance.end - scrollTriggerInstance.start) * progress;
      window.scrollTo({ top: scrollPos, behavior: 'instant' });
    
      scrollTriggerInstance.scroll(scrollPos);
    },
    onThrowUpdate() {
      const progress = gsap.utils.normalize(draggable.maxX, draggable.minX, draggable.x);
      scrollPos = scrollTriggerInstance.start + (scrollTriggerInstance.end - scrollTriggerInstance.start) * progress;
      window.scrollTo({ top: scrollPos, behavior: 'instant' });
    },
    onThrowComplete() {
      scrollTriggerInstance.scroll(scrollPos);
    }

    Step 2: Material setup

    export default class PlanesMaterial extends ShaderMaterial {
      constructor(texture) {
        super({
          vertexShader: baseVertex,
          fragmentShader: baseFragment,
          uniforms: {
            uTexture: { value: texture },
            uBlurAmount: { value: 0 },
          },
        });
      }
    }

    Let’s quickly analyze the uniforms passed to the material:

    • uTexture is the base texture rendered on the plane
    • uBlurAmount represents the blur strength based on the distance from the window center
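
    To see where the material fits, here is a sketch of how each grid image might be attached to a plane, assuming three.js primitives and an illustrative texture path:

    import { Mesh, PlaneGeometry, TextureLoader } from 'three';
    import PlanesMaterial from './PlanesMaterial';
    
    // One plane per grid image; real sizing comes from the layout code.
    const texture = new TextureLoader().load('/images/tile-01.jpg'); // path is illustrative
    const tile = new Mesh(new PlaneGeometry(1, 1), new PlanesMaterial(texture));
    
    scene.add(tile); // `scene` is the shared three.js Scene from the earlier steps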

    Step 3: The JavaScript (GSAP)

    constructor(scene, camera) {
      ...
      this.callback = this.scrollUpdateCallback.bind(this); // bind so `this` survives when passed as a callback
      this.centerX = window.innerWidth / 2;
      ...
    }

    In the constructor we set up two pieces we’ll use to drive the dynamic blur effect:

    • this.callback references the function used inside ScrollTrigger’s onUpdate to refresh the blur amount
    • this.centerX represents the window center on the X axis and is updated on each window resize
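
    How these two pieces are consumed isn’t shown in the excerpt; here is a sketch of the wiring inside the same class, with the wrapper selector as an assumption:

    // Keep the center point fresh on resize...
    window.addEventListener('resize', () => {
      this.centerX = window.innerWidth / 2;
    });
    
    // ...and let ScrollTrigger invoke the callback on every scroll tick.
    ScrollTrigger.create({
      trigger: '.content__carousel', // assumed wrapper selector
      start: 'top top',
      end: () => `+=${2.5 * maxScroll}`,
      scrub: 0.05,
      onUpdate: () => this.callback(), // refreshes uBlurAmount on each tile
    });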

    Let’s dive into the callback passed to ScrollTrigger:

    scrollUpdateCallback() {
      this.tiles.forEach(tile => {
        const worldPosition = tile.getWorldPosition(new Vector3());
        const vector = worldPosition.clone().project(this.camera);
    
        const screenX = (vector.x * 0.5 + 0.5) * window.innerWidth;
    
        const distance = Math.abs(screenX - this.centerX);
        const maxDistance = window.innerWidth / 2;
    
        const blurAmount = MathUtils.clamp(distance / maxDistance * 5, 0.0, 5.0);
    
        gsap.to(tile.material.uniforms.uBlurAmount, {
          value: Math.round(blurAmount / 2) * 2,
          duration: 1.5,
          ease: 'power3.out'
        });
      });
    }
    

    Let’s dive deeper into this:

    • getWorldPosition retrieves each plane’s 3D position in world space; .project(this.camera) converts it to normalized device coordinates (the -1..1 range), which are then scaled to real screen pixel coordinates.
    • screenX is the plane’s horizontal position in 2D screen space.
    • distance measures how far the plane is from the screen center.
    • maxDistance is the largest possible horizontal distance, from the center to the screen edge.
    • blurAmount computes blur strength based on distance from the center; it’s clamped between 0.0 and 5.0 to avoid extreme values that would harm visual quality or shader performance.
    • The uBlurAmount uniform is animated toward the computed blurAmount. Rounding to the nearest even number (Math.round(blurAmount / 2) * 2) helps avoid overly frequent tiny changes that could cause visually unstable blur.

    Step 4: The Shader (GLSL)

    uniform sampler2D uTexture;
    uniform float uBlurAmount;
    
    varying vec2 vUv;
    
    vec4 kawaseBlur(sampler2D tex, vec2 uv, float offset) {
      // textureSize() needs GLSL ES 3.0; on WebGL2, three.js upgrades GLSL1 shaders automatically
      vec2 texelSize = vec2(1.0) / vec2(textureSize(tex, 0));
      
      vec4 color = vec4(0.0);
      
      color += texture2D(tex, uv + vec2(offset, offset) * texelSize);
      color += texture2D(tex, uv + vec2(-offset, offset) * texelSize);
      color += texture2D(tex, uv + vec2(offset, -offset) * texelSize);
      color += texture2D(tex, uv + vec2(-offset, -offset) * texelSize);
      
      return color * 0.25;
    }
    
    vec4 multiPassKawaseBlur(sampler2D tex, vec2 uv, float blurStrength) {
      vec4 baseTexture = texture2D(tex, uv);
      
      vec4 blur1 = kawaseBlur(tex, uv, 1.0 + blurStrength * 1.5);
      vec4 blur2 = kawaseBlur(tex, uv, 2.0 + blurStrength);
      vec4 blur3 = kawaseBlur(tex, uv, 3.0 + blurStrength * 2.5);
      
      float t1 = smoothstep(0.0, 3.0, blurStrength);
      float t2 = smoothstep(3.0, 7.0, blurStrength);
      
      vec4 blurredTexture = mix(blur1, blur2, t1);
      blurredTexture = mix(blurredTexture, blur3, t2);
      
      float mixFactor = smoothstep(0.0, 1.0, blurStrength);
      
      return mix(baseTexture, blurredTexture, mixFactor);
    }
    
    void main() {
      vec4 color = multiPassKawaseBlur(uTexture, vUv, uBlurAmount);
      gl_FragColor = color;
    }
    

    This GLSL fragment receives a texture (uTexture) and a dynamic value (uBlurAmount) indicating how much the plane should be blurred. Based on this value, the shader applies a multi-pass Kawase blur, an efficient technique that produces a soft, pleasing blur at a low per-pixel cost.

    Let’s examine the kawaseBlur function, which applies a light blur by sampling 4 points around the current pixel (uv), each offset positively or negatively.

    • texelSize computes the size of one pixel in UV coordinates so offsets refer to “pixel amounts” regardless of texture resolution.
    • Four samples are taken in a diagonal cross pattern around uv.
    • The four colors are averaged (multiplied by 0.25) to return a balanced result.

    This function is a light single pass. To achieve a stronger effect, we apply it multiple times.

    The multiPassKawaseBlur function does exactly that, progressively increasing blur and then blending the passes:

    vec4 blur1 = kawaseBlur(tex, uv, 1.0 + blurStrength * 1.5);
    vec4 blur2 = kawaseBlur(tex, uv, 2.0 + blurStrength);
    vec4 blur3 = kawaseBlur(tex, uv, 3.0 + blurStrength * 2.5);

    This produces a progressive, visually smooth result.

    Next, we blend the different blur levels using two separate smoothsteps:

    float t1 = smoothstep(0.0, 3.0, blurStrength);
    float t2 = smoothstep(3.0, 7.0, blurStrength);
      
    vec4 blurredTexture = mix(blur1, blur2, t1);
    blurredTexture = mix(blurredTexture, blur3, t2);

    The first mix blends blur1 and blur2, while the second blends that result with blur3. The resulting blurredTexture is the fully Kawase-blurred image, which we finally mix back with the untouched base sample of uTexture.

    Finally, we mix the blurred texture with the original texture based on blurStrength, using another smoothstep from 0 to 1:

    float mixFactor = smoothstep(0.0, 1.0, blurStrength);
    return mix(baseTexture, blurredTexture, mixFactor);

    Final Words

    Bringing together GSAP’s animation power and the creative freedom of GLSL shaders opens up a whole new layer of interactivity for the web. By animating shader uniforms directly with GSAP, we’re able to blend smooth motion design principles with the raw flexibility of GPU rendering — crafting experiences that feel alive, fluid, and tactile.

    From simple grayscale transitions to ripple-based deformations and dynamic blur effects, every step in this tutorial demonstrates how motion and graphics can respond naturally to user input, creating interfaces that invite exploration rather than just observation.

    While these techniques push the boundaries of front-end development, they also highlight a growing trend: the convergence of design, code, and real-time rendering.

    So, take these examples, remix them, and make them your own — because the most exciting part of working with GSAP and shaders is that the canvas is quite literally infinite.



    Source link

  • [ITA] Azure DevOps: plan, build, and release projects | Global Azure Verona



    [ITA] Azure DevOps: plan, build, and release projects | Global Azure Verona



    Source link

  • Ponpon Mania: How WebGL and GSAP Bring a Comic Sheep’s Dream to Life

    Ponpon Mania: How WebGL and GSAP Bring a Comic Sheep’s Dream to Life



    Ponpon Mania is an animated comic featuring Ponpon, a megalomaniac sheep dreaming of becoming a DJ. We wanted to explore storytelling beyond traditional comics by combining playful interactions, smooth GSAP-powered motion, and dynamic visuals. The goal was to create a comic that feels alive, where readers engage directly with Ponpon’s world while following the narrative. The project evolved over several months, moving from early sketches to interactive prototypes.

    About us

    We are Justine Soulié (Art Director & Illustrator) and Patrick Heng (Creative Developer), a creative duo passionate about storytelling through visuals and interaction. Justine brings expertise in illustration, art direction, and design, while Patrick focuses on creative development and interactive experiences. Together, we explore ways to make stories more playful, immersive, and engaging.

    Art Direction

    Our visual direction emphasizes clean layouts, bold colors, and playful details. From the start, we wanted the comic to feel vibrant and approachable while using design to support the story. On the homepage, we aimed to create a simple, welcoming scene that immediately draws the user in, offering many interactive elements to explore and encouraging engagement from the very first moment.

    The comic is mostly black and white, providing a simple and striking visual base. Color appears selectively, especially when Ponpon dreams of being a DJ and is fully immersed in his imagined world, highlighting these key moments and guiding the reader’s attention. Scroll-triggered animations naturally direct focus, while hover effects and clickable elements invite exploration without interrupting the narrative flow.

    To reinforce Ponpon’s connection to music, we designed the navigation to resemble a music player. Readers move through chapters as if they were albums, with each panel functioning like a song. This structure reflects Ponpon’s DJ aspirations, making the reading experience intuitive, dynamic, and closely tied to the story.

    Technical Approach

    Our main goal was to reduce technical friction so we could dedicate our energy to refining the artistic direction, motion design, and animation of the website.

    We used WebGL because it gave us full creative freedom over rendering. Even though the comic has a mostly 2D look, we wanted the flexibility to add depth and apply shader-based effects.

    Starting from Justine’s Illustrator files, every layer and visual element from each panel was exported as an individual image. These assets were then packed into optimized texture atlases using Free TexturePacker.

    Atlas example

    Once exported, the images were further compressed into GPU-friendly formats to reduce memory usage. Using the data generated by the packer, we reconstructed each scene in WebGL by generating planes at the correct size. Finally, everything was placed in a 3D scene where we applied the necessary shaders and animations to achieve the desired visual effects.
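
    As a rough sketch of that reconstruction step: the packer’s JSON describes each sprite’s pixel rect inside the atlas, which translates into a plane size plus a UV offset. The project itself uses ogl, but the idea reads the same with three.js-style primitives; every name below is illustrative:

    import { Mesh, PlaneGeometry, ShaderMaterial, Vector2 } from 'three';
    
    function buildPanel(atlasJson, atlasTexture, scene) {
      const { w: atlasW, h: atlasH } = atlasJson.meta.size;
    
      Object.values(atlasJson.frames).forEach(({ frame }) => {
        const material = new ShaderMaterial({
          vertexShader: atlasVertex,     // shader sources assumed imported
          fragmentShader: atlasFragment,
          uniforms: {
            uTexture: { value: atlasTexture },
            // Scale + offset that map the plane's 0..1 UVs into its atlas region
            uUvScale: { value: new Vector2(frame.w / atlasW, frame.h / atlasH) },
            uUvOffset: { value: new Vector2(frame.x / atlasW, 1.0 - (frame.y + frame.h) / atlasH) },
          },
        });
    
        // Plane sized in pixels; the real project scales panels into scene units.
        scene.add(new Mesh(new PlaneGeometry(frame.w, frame.h), material));
      });
    }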

    Tech Stack & Tools

    Design

    • Adobe Photoshop & Illustrator – illustration and asset preparation
    • Figma – layout and interface design

    Development

    • ogl – WebGL framework for rendering
    • Nuxt.js – frontend framework for structure and routing
    • GSAP – animation library for smooth and precise motion
    • Matter.js – physics engine used on the About page
    • Free TexturePacker – for creating optimized texture atlases from exported assets
    • Tweakpane – GUI tool for real-time debugging and fine-tuning parameters

    Animating using GSAP

    GSAP makes it easy to animate both DOM elements and WebGL objects with a unified syntax. Its timeline system brought structure to complex sequences, while combining it with ScrollTrigger streamlined scroll-based animations. We also used SplitText to handle text animations.
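
    That unified syntax is worth a quick illustration; here is a minimal sketch where a DOM element and a WebGL object share one timeline (both targets are assumptions):

    import gsap from 'gsap';
    
    // GSAP only cares about numeric properties, so DOM styles and
    // WebGL objects (here, an assumed `ponponMesh`) mix freely.
    const tl = gsap.timeline();
    
    tl.from('.panel-title', { opacity: 0, y: 40, duration: 0.6 })
      .to(ponponMesh.position, { y: 1.2, duration: 0.8, ease: 'bounce.out' }, '<0.2');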

    Home page

    For the homepage, we wanted the very first thing users see to feel playful and full of life. It introduces the three main characters, all animated, and sets the tone for the rest of the experience. Every element reacts subtly to the mouse: the Ponpon mask deforms slightly, balloons collide softly, and clouds drift away in gentle repulsion. These micro-interactions make the scene feel tangible and invite visitors to explore the world of Ponpon Mania with curiosity and delight. We used a GSAP timeline to choreograph the intro animation, triggering each element in sequence for a smooth and cohesive reveal; a sketch of that choreography follows the repulsion snippet below.

    // Simple repulsion we used for the clouds in our render function
    const dx = basePosX - mouse.x;
    const dy = basePosY - mouse.y;
    const dist = Math.sqrt(dx * dx + dy * dy);
    
    // Repel the cloud if the mouse is near
    const radius = 2; // interaction radius
    const strength = 1.5; // repulsion force
    const repulsion = Math.max(0, 1 - dist / radius) * strength;
    
    // Apply the repulsion with smooth spring motion
    const targetX = basePosX + dx * repulsion;
    const targetY = basePosY - Math.abs(dy * repulsion) / 2;
    
    velocity.x += (targetX - position.x) * springStrength * deltaTime; // springStrength/deltaTime come from the render loop
    velocity.y += (targetY - position.y) * springStrength * deltaTime;
    
    position.x += velocity.x;
    position.y += velocity.y;
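
    And a sketch of the intro choreography itself, with every target name an assumption rather than the production code:

    // Reveal the scene in sequence: mask, characters, balloons, clouds.
    const intro = gsap.timeline({ defaults: { ease: 'power3.out' } });
    
    intro
      .from(ponponMask.scale, { x: 0, y: 0, duration: 0.8, ease: 'back.out(1.7)' })
      .from(characters.map(c => c.position), { y: -1.5, duration: 0.7, stagger: 0.15 }, '-=0.3')
      .from(balloons.map(b => b.scale), { x: 0, y: 0, stagger: 0.1 }, '-=0.4')
      .from(clouds.map(c => c.material.uniforms.uAlpha), { value: 0, duration: 1 }, '-=0.5');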

    Chapter Selection

    For the chapter selection, we wanted something simple yet evocative of Ponpon’s musical universe. Each chapter is presented as an album cover, inviting users to browse through them as if flipping through a record collection. We aimed for smooth, intuitive navigation: users can drag, scroll, or click to explore, and each chapter snaps into place for an easy and satisfying selection experience (see the snapping sketch below).
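
    The snap-into-place behavior maps directly onto Draggable’s inertia settings; a minimal sketch, with the selector and itemWidth as assumptions:

    // Snap each drag or throw to the nearest album cover.
    Draggable.create('.chapter-carousel', {
      type: 'x',
      inertia: true,
      snap: (value) => Math.round(value / itemWidth) * itemWidth, // itemWidth: cover width in px
    });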

    Panel Animation

    For the panel animations, we wanted each panel to feel alive, bringing Justine’s illustrations to life through motion. We spent a lot of time refining every detail so that each scene feels expressive and unique. Using GSAP timelines made it easy to structure and synchronize the different animations, keeping them flexible and reusable. Here’s an example of a GSAP timeline animating a panel, showing how sequences can be chained together smoothly.

    // Animate ponpons in sequence with GSAP timelines
    const timeline = gsap.timeline({ repeat: -1, repeatDelay: 0.7 });
    const uFlash = { value: 0 };
    const flashTimeline = gsap.timeline({ paused: true });
    
    function togglePonponGroup(index) {
      ponponsGroups.forEach((g, i) => (g.mesh.visible = i === index));
    }
    
    function triggerFlash() {
      const flashes = Math.floor(Math.random() * 2) + 1; // 1–2 flashes
      const duration = 0.4 / flashes;
    
      flashTimeline.clear();
    
      for (let i = 0; i < flashes; i++) {
        flashTimeline
          .set(uFlash, { value: 0.6 }, i * duration) // bright flash
          .to(uFlash, { value: 0, duration: duration * 0.9 }, i * duration + duration * 0.1); // fade out
      }
    
      flashTimeline.play();
    }
    
    ponponMeshes.forEach((ponpon, i) => {
      timeline.fromTo(
        ponpon.position,
        { y: ponpon.initialY - 0.2 },  // start slightly below
        {
          y: ponpon.initialY,          // bounce up
          duration: 1,
          ease: "elastic.out",
          onStart: () => {
            togglePonponGroup(i);      // show active group
            triggerFlash();            // trigger flash
          }
        },
        i * 1.6 // stagger delay between ponpons
      );
    });

    About Page

    On the About page, GSAP ScrollTrigger tracks the scroll progress of each section. These values drive the WebGL scenes, controlling rendering, transitions, and camera movement. This ensures the visuals stay perfectly synchronized with the user’s scrolling.

    const sectionUniform = { progress: { value: 0 } };
    
    // create a ScrollTrigger for one section
    const sectionTrigger = ScrollTrigger.create({
      trigger: ".about-section",
      start: "top bottom",
      end: "bottom top",
      onUpdate: (self) => {
        sectionUniform.progress.value = self.progress; // update uniform
      }
    });
    
    // update scene each frame using trigger values
    function updateScene() {
      const progress = sectionTrigger.progress;  
      const velocity = sectionTrigger.getVelocity(); 
    
      // drive camera movement with scroll progress
      camera.position.y = map(progress, 0.75, 1, -0.4, 3.4);
      camera.position.z =
        5 + map(progress, 0, 0.3, -4, 0) +
            map(progress, 0.75, 1, 0, 2) + velocity * 0.01;
    
      // subtle velocity feedback on ponpon and camera
      ponpon.position.y = ponpon.initialY + velocity * 0.01;
    }
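
    The map() helper used here isn’t a GSAP or ogl built-in; a typical linear remap looks like this (the clamping is our assumption, so each segment holds its end value outside its own progress range):

    // Linear remap: value in [inMin, inMax] -> [outMin, outMax], clamped to the output range
    function map(value, inMin, inMax, outMin, outMax) {
      const t = Math.min(1, Math.max(0, (value - inMin) / (inMax - inMin)));
      return outMin + t * (outMax - outMin);
    }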

    Thanks to the SplitText plugin, we can animate each section title line by line as it comes into view while scrolling.

    // Split the text into lines for staggered animation
    const split = new SplitText(titleDomElement, { type: "lines" });
    const lines = split.lines;
    
    // Create a timeline for the text animation
    const tl = gsap.timeline({ paused: true });
    
    tl.from(lines, {
      x: "100%",
      skewX: () => Math.random() * 50 - 25,
      rotation: 5,
      opacity: 0,
      duration: 1,
      stagger: 0.06,
      ease: "elastic.out(0.7, 0.7)"
    });
    
    // Trigger the timeline when scrolling the section into view
    ScrollTrigger.create({
      trigger: ".about-section",
      start: "top 60%",
      end: "bottom top",
      onEnter: () => tl.play(),
      onLeaveBack: () => tl.reverse()
    });

    Page transitions

    For the page transitions, we wanted to add a sense of playfulness to the experience while keeping navigation snappy and fluid. Each transition was designed to fit the mood of the page, so rather than using a single generic effect, we built variations that keep the journey fresh.

    Technically, the transitions blend two WebGL scenes together using a custom shader, where the previous and next pages are rendered and mixed in real time. The animation of the blend is driven by GSAP tweens, which lets us precisely control the timing and progress of the shader for smooth, responsive transitions.
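
    A compressed sketch of that idea, shown with three.js-style render targets for brevity (the site itself uses ogl, and every name here is illustrative):

    // Render both pages offscreen, then blend them on a fullscreen quad
    // whose shader mixes the two textures by uProgress.
    const transition = { progress: 0 };
    
    function renderTransition() {
      renderer.setRenderTarget(rtFrom);
      renderer.render(previousScene, camera);
    
      renderer.setRenderTarget(rtTo);
      renderer.render(nextScene, camera);
    
      renderer.setRenderTarget(null);
      blendQuad.material.uniforms.uFrom.value = rtFrom.texture;
      blendQuad.material.uniforms.uTo.value = rtTo.texture;
      blendQuad.material.uniforms.uProgress.value = transition.progress;
      renderer.render(blendScene, blendCamera);
    }
    
    // GSAP owns the timing, so easing and interruption behave like any tween.
    gsap.to(transition, { progress: 1, duration: 1.1, ease: 'power2.inOut' });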

    Designing Playful Experiences

    Ponpon Mania pushed us to think beyond traditional storytelling. It was a joy to work on the narrative and micro-interactions that add playfulness and energy to the comic.

    Looking ahead, we plan to create new chapters, expand Ponpon’s story, and introduce small games and interactive experiences within the universe we’ve built. We’re excited to keep exploring Ponpon’s world and share more surprises with readers along the way.

    Thank you for reading! We hope you enjoyed discovering the creative journey behind Ponpon Mania and the techniques we used to bring Ponpon’s world to life.

    If you want to follow Ponpon, check us out on TikTok or Instagram.

    You can also support us on Tipeee!

    Justine Soulié & Patrick Heng





    Source link