Category: Programmers

  • Designer Spotlight: Bimo Tri | Codrops



    I’m Bimo Tri, a multidisciplinary designer and creative developer based in Indonesia. I run a small independent studio called Studio•Bämo.J®, working between Jakarta and Bali — or pretty much anywhere I can find a fast internet connection.

    My focus is on building expressive digital experiences, mostly portfolio sites and brand platforms for creatives, studios, and design-forward brands. With roots in both design and development, I enjoy blending visual precision with motion and interactivity to create work that feels both thoughtful and visceral. I care deeply about craft, story, and making things that resonate beyond just visuals.

    Showcase

    SAISEI

    Saisei is a visionary architecture firm based in Tokyo, Japan, focused on sustainability, culture, and timeless design. I designed and developed the site to reflect their philosophy of merging traditional Japanese aesthetics with clean, contemporary digital design.

    Achievements

    This project was a major milestone in my career. It brought home my first Awwwards Site of the Day and earned recognition from several other platforms. The positive feedback from the design community affirmed my approach to cultural storytelling through digital mediums.

    Personal notes

    Saisei remains one of my favorite works. I’ve always been drawn to the tension between heritage and modernity, and this project gave me the space to explore that deeply. The recognition it received made the process even more meaningful.

    Live Site Link: https://saisei-sbj.webflow.io/

    NĀGARA

    Nagara is a concept project developed in collaboration with my buddy Felixander Yuan, created as part of the #DareToShare24 design challenge by @bentenwordring.

    It reimagines a luxury watch brand that fuses the precision of Swiss watchmaking with the cultural depth of the Majapahit Empire. Each timepiece acts as a tribute not just to technical craftsmanship, but to historical richness and aesthetic symbolism rooted in Indonesian heritage.

    Challenges

    One of the biggest hurdles was exploring AI-generated imagery and motion assets. Using tools like Midjourney and Kling, it took numerous iterations to dial in a visual direction that felt both on-brand and high-end. Getting the product visuals — especially the watches — to look authentic and aligned with the brand’s narrative was far more challenging than anticipated.

    Achievements

    The final result was a fully animated concept site that we were genuinely proud of. Yuan did an amazing job bringing the dev and motion to life. Beyond that, the project ended up winning the monthly challenge, earning recognition and some cool prizes — a nice bonus on top of the creative satisfaction.

    Personal notes

    This one felt personal. The month’s theme was “Luxury” — a space I naturally gravitate toward — and we were allowed to team up for the final challenge. I chose to work with Yuan, someone I’ve respected and known for a while. The entire process felt like a return to roots — storytelling, culture, and collaboration — wrapped inside a luxury narrative.

    Live Site Link: https://nagara-daretoshare.webflow.io/

    HorizonStudio

    Horizon Studio is a conceptual architecture firm based in Los Angeles, created to explore the intersection of art, design, and technology. Inspired by my love for architecture and interior design, the site showcases sleek, avant-garde visuals with a focus on sustainability. I used Midjourney for the visual assets and GPT to shape the narrative, crafting an experience that feels modern and immersive.

    Achievements

    The site received an Honorable Mention from Awwwards — a validating moment for me as it was one of my earliest forays into the architecture space. The feedback highlighted the strength of the design direction and the site’s overall atmosphere.

    Personal notes

    This was the first project where I went all in with generative AI — every asset was made using prompts, and honestly, it was pretty sloppy at first. But through experimentation, I managed to create a cohesive visual style that looked like it came from one photographer. It reminded me how fun it is to dive into the unknown and just explore.

    Live Site Link: https://horizonstudioarchitecture.webflow.io/

    REZN-8

    REZN-8 is a typographic and layout exploration rooted in Swiss design principles. It started as a poster experiment and evolved into a full website — my first time building a motion-heavy site entirely with code. It was all about translating static design into something dynamic, expressive, and functional in a digital format.

    Challenges

    Turning the poster into a functional site was already a challenge, but learning JavaScript on the fly to bring motion into the experience pushed me even further.

    The biggest challenge, though, was researching and presenting accurate information about the legendary designers featured. Some had very little online presence, so I had to dive deep into design history to get the details right.

    Personal notes

    REZN-8 holds a special place in my heart. It completely changed how I see layout, grids, and type — it was the project that shifted my design brain forever. Shoutout to Chris Do and TheFutur’s Typography 01 course, which sparked the whole thing.

    Live Site Link: https://rezn8.webflow.io/

    Notable Explorations & Projects

    Blacksmith Collective

    (Webflow Template) Shameless plug 🫠

    Live Site Link: https://blacksmith-sbj.webflow.io/

    Velthari

    Live Site Link: https://velthari.webflow.io/

    Velvet Blues

    Background

    I didn’t start out as a designer, at least not in the traditional sense. My early work was in a marketing agency where I handled everything from FB ad graphics to SEO landing pages and WordPress articles. It wasn’t glamorous, but it gave me a foundation in how digital systems work.

    Then I stumbled across Webflow — and everything changed. I got completely hooked on web design, especially sites with rich motion and interaction.

    That moment pushed me to quit the agency world and start my own studio. Since then, I’ve been building expressive, story-driven websites for creatives and design-forward brands, blending design, motion, and development into something that feels personal and intentional.

    Design Philosophy

    I’ve always leaned toward minimal design paired with bold, heavy type. To me, you don’t need a lot to make something striking, just the right balance of restraint and intention. If the typography is solid and the layout is thoughtful, even the simplest design can carry emotional weight. I focus on clarity, rhythm, and a strong visual pulse — letting motion, space, and type do the heavy lifting.

    Tools and Techniques

    • Figma for most of the design work
    • Webflow for front-end development and CMS integration
    • GSAP for all things motion and interaction
    • Cursor for dev support (because I wouldn’t call myself a “real dev,” but I make it work)

    Inspiration

    I pull inspiration from a lot of places — music, films, anime — especially the ones that are crafted with insane attention to detail. I’ve always admired how much intention goes into those worlds. There’s so much to steal from them — not just visually, but conceptually and emotionally. I’m also inspired by work that feels personal, raw, and beautifully uncompromising.

    Future Goals

    My main goal is to keep attracting work that aligns with the way I see and do things. I’m not chasing volume — I just want to keep collaborating with people who value design, story, and craft as much as I do. I’m also interested in exploring more personal projects, maybe even merging design with philosophy, fitness, or writing — things that feel more like extensions of who I am, not just what I do.

    Final Thoughts

    Learn from the past, embrace the present moment, and look into the future. You only live once; do what makes you happy and what feels right for you.

    Contact Info

    I’m mostly active on LinkedIn, X (Twitter), and occasionally Instagram.

    Instagram: @bamojk

    LinkedIn: https://www.linkedin.com/in/bamoj/

    X (Twitter): https://x.com/bamojk

    Website: bamoj.com






  • Designing TrueKind: A Skincare Brand’s Journey Through Moodboards, Motion, and Meaning



    Project Backstory

    TrueKind approached us with a clear but ambitious goal: they wanted a skincare website that stood out—not just in the Indian skincare space, but globally.

    The challenge? Most skincare websites (especially local ones) lean heavily commercial. They emphasize offers, discounts, and aggressive product pushes. But TrueKind wanted something gentler, more thoughtful, and centered on one message: honest skincare.

    From the very first conversation, I knew this would require a delicate balance. We wanted to create a site that was visually fresh and a little unconventional, but not so experimental that it alienated everyday customers.

    We set aside around 1–2 months for the design phase, allowing time for multiple iterations and careful refinement. One of the best parts of this project was the incredibly trusting, supportive client team—working with people who are genuinely open to creativity makes all the difference.

    Crafting the Visual Direction

    Every project I work on begins with listening. Before touching any design tools, I immersed myself in the client’s vision, mood, and tone.

    I created a moodboard to align with their aesthetic, making sure the images I pulled weren’t just random “nice” visuals. This is something I see many younger designers overlook: it’s not just about curating pretty pictures; it’s about curating pictures that match the brand’s energy, saturation, color language, and atmosphere.

    🌟 When building moodboards, don’t be afraid to tweak image properties. Adjust exposure, warmth, contrast, and saturation until they feel cohesive. You’re not just grabbing references—you’re crafting a controlled atmosphere.

    For the typefaces, I leaned on my go-to foundry, Pangram Pangram. Their fonts are beautifully made and (for personal projects) wonderfully accessible. For TrueKind, we selected PP Mori (for a modern, clean backbone) and Editorial Neue (to bring in an elegant, editorial touch).

    Even though the client wanted something unconventional, I knew we had to keep the animation and interaction design balanced. Too much movement can be overwhelming. So, we built the visual experience primarily around typography—letting type choices and layouts carry the creative weight.

    On Working Before AI Image Tools

    This project dates back to around 2021, before the surge of AI image generation tools. So when it came to placeholders and visual exploration, I often turned to Behance or similar platforms to source reference imagery that fit the vibe.

    Of course, for the final launch, we didn’t want any copyright issues—so we conducted a professional photoshoot in Worli, Mumbai, capturing clean, fresh product imagery. For the Awwwards showcase, we’ve swapped in AI-generated images purely for display purposes.

    Iteration and Evolution

    Here’s a personal moment of honesty: The first version I designed? I wasn’t thrilled with it.

    It lacked the polish, elegance, and depth I knew the brand deserved. But instead of settling, I went back, refined, iterated, and kept pushing. That’s something I’d tell any designer reading this:

    🌟 Don’t be afraid to walk away from your early drafts. You can feel when something’s not hitting the mark—trust that instinct, and give yourself room to improve.

    Animation & Interaction Design

    I’m a sucker for scroll-based animations. Smooth scrolling, layered reveals, subtle movement—these elements can elevate a static design a hundredfold if used thoughtfully.

    For TrueKind, I didn’t want unnecessary flash. The scroll interactions enhance the content flow without overpowering it. The text reveals, section transitions, and layered elements were designed to add just enough dynamism to keep the user engaged while still respecting the calm, honest tone of the brand.

    Bringing in Reksa: Development Insights

    At a certain point, I knew I needed help to fully do justice to the design. That’s when I reached out to Reksa—a developer I deeply admire, not just for his technical skill but for his meticulous creative eye.

    Handing over a design like this isn’t always easy. But with Reksa, it felt seamless. He understood the nuances, respected the design intention, and delivered 1000%.

    In the dev section below, Reksa will walk you through the stack, architecture, key challenges, and how he brought the design to life with care and precision.

    Tech Stack & Challenges

    Nuxt.js 3 for the frontend: This project was built with Nuxt.js 3 as the frontend framework. It’s my main tech stack and a powerful choice, especially for creative websites. I find Nuxt.js offers far more flexibility than other frameworks.

    SCSS for styling: While many developers prefer CSS frameworks, I lean toward vanilla CSS as my primary approach. SCSS is used here mainly for class scoping and maintainability, but the overall syntax remains vanilla. Writing custom CSS makes the most sense for my needs—especially in creative development, where unique layouts and their connection to animation/motion often demand full styling control.

    Vercel for hosting: It provides a simple, plug-and-play experience for hosting Nuxt.js 3 projects.

    Prismic as CMS: I use Prismic as the headless CMS. It’s my go-to for most projects—straightforward and well-suited to this project’s needs.

    GSAP for animations: For smooth motion experiences, GSAP is unmatched. Its exceptional plugins—like SplitText and DrawSVG—allow me to craft fantastic animations that elevate the design.

    Lenis for smooth scrolling: To enhance the motion and animation quality, implementing smooth scroll is a must. It ensures that animations flow beautifully in sync with the scroll timeline.
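
    The wiring between Lenis and GSAP isn't shown in this write-up, so here's a minimal sketch of the usual integration pattern (illustrative, not the project's actual code): Lenis is driven from GSAP's ticker so both share a single animation loop, and ScrollTrigger stays in sync with the smoothed scroll position.

    import Lenis from 'lenis';
    import gsap from 'gsap';
    import { ScrollTrigger } from 'gsap/ScrollTrigger';

    gsap.registerPlugin(ScrollTrigger);

    const lenis = new Lenis();

    // Keep ScrollTrigger's measurements in sync with the smoothed scroll position
    lenis.on('scroll', ScrollTrigger.update);

    // Drive Lenis from GSAP's ticker so both run on one requestAnimationFrame loop
    gsap.ticker.add((time) => {
      lenis.raf(time * 1000); // Lenis expects milliseconds
    });
    gsap.ticker.lagSmoothing(0);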

    The key challenges for this project were implementing the “floating” layout and ensuring it remained responsive across all screen sizes. Abhishek’s design was beautifully unique, though that uniqueness also posed its own set of difficulties. To bring it to life, I had to carefully apply techniques like position: absolute in CSS to achieve the right structure and layering.

    My favorite part of developing this project was the page transitions and micro-interactions.

    The page transition to the product view uses a solid color from the product background, expands it to full screen, and then switches the page seamlessly. Meanwhile, micro-interactions—like SVG draw motions, button hovers, and click animations—add small but impactful details. These make the site feel more alive and engaging for users.
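
    As a rough illustration of that transition pattern (a simplified sketch, not the production code; overlayEl, productColor, and navigate are placeholder names), the color-expansion effect can be built with GSAP like this:

    import gsap from 'gsap';

    // Hypothetical helper: overlayEl is a fixed, full-screen element scaled down to 0,
    // and productColor is read from the clicked product's background.
    function transitionToProduct(overlayEl, productColor, navigate) {
      gsap.set(overlayEl, { backgroundColor: productColor, scale: 0, autoAlpha: 1 });
      gsap.to(overlayEl, {
        scale: 1,
        duration: 0.8,
        ease: 'power3.inOut',
        onComplete: navigate, // switch the page once the overlay covers the screen
      });
    }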

    Awards & Recognition

    We’re incredibly happy that the project received such a positive response. Some of the awards and recognitions include:

    • Awwwards – Site of the Day & Developer Award
    • Awwwards – E-commerce Honors (Nominee)
    • FWA – FWA of the Day
    • CSSDA – Website of the Day
    • GSAP – Site of the Day
    • Muz.li – Picks Honor
    • Made With GSAP – Showcase Feature

    Reflections

    This project was a joy. Not just because of the outcome, but because of the process: working with thoughtful clients, collaborating with talented partners, and building something that felt true to its mission.

    There was, however, an interesting twist. While the final site looked and felt fresh and unconventional, over time, the client gradually shifted toward simpler, more familiar designs—closer to what everyday users are used to.

    And here’s a reflection for all creatives:

    🌟 Creative websites are a feast for the eyes, but they don’t always convert perfectly. As designers, we thrive on bold, experimental ideas. But businesses often need to balance creativity with practicality. And that’s okay.

    This project left a lasting impression—not just on the client, but on us as creators. It reminded me why we do this work: not just to make things look good, but to tell stories, evoke feelings, and bring meaningful ideas into the world.

    Final Thoughts

    If you’re a young creative reading this: Keep learning, keep experimenting, and keep collaborating. It’s not about chasing perfection—it’s about chasing truth in your work.

    And when you find a team that shares that vision? That’s where the magic happens.

    Thank you for reading.




  • Building an Infinite Marquee Along an SVG Path with React & Motion



    Learn how to create an infinite marquee that follows a custom SVG path using React and Motion.




  • Developer Spotlight: Robin Payot | Codrops



    Hey, I’m Robin, a Creative Developer since 2015, based in Paris and a former HETIC student.

    I’ve worked at agencies like 84.Paris and Upperquad, and I’ve also freelanced with many others, picking up a few web awards along the way. I created Wind Waker.js and started a YouTube channel where I teach WebGL tutorials.

    What really excites me about development is having an idea in mind and being able to see it come to life visually, tweaking it again and again until I find the right solution to achieve the result I want.

    Projects I’m Proud Of

    Wind Waker JS

    When I was a kid, I was a huge fan of a GameCube video game called Zelda: The Wind Waker. It was a vibrant, colorful game where you sailed a boat to explore the world, with a really cool pirate vibe! I wanted to challenge myself, so I decided to try recreating it in Three.js to see how far I could go.

    Luckily for me, a brilliant creative coder named Nathan Gordon had already written an article back in 2016 about recreating the game’s water. That gave me a solid foundation to start from.

    After a lot of effort, I managed to create something I was really proud of, including six islands with LOD (Level of Detail) logic, dynamic day/night and weather cycles, fake physics with objects floating on water, a mini-game similar to Temple Run, and a treasure hunt where you search for the Triforce.

    I faced many challenges along the way, and if you’re curious about how I tackled them, I made two videos explaining everything:

    The project received a lot of positive feedback, and I’m truly grateful I got the chance to pay tribute to this incredible Nintendo game.

    McDonald’s Switzerland – The Golden Slide Game

    Last December, I had the opportunity to create a mobile video game for McDonald’s Switzerland with the Swipe Back team.

    The 3D designer provided us with some really fun, toon-style assets, which made the game look both playful and professional—especially exciting for me, as it was my first time working on a real game project.

    I worked alongside David Ronai, just the two of us as developers, and it was quite a challenge! The game featured weekly quests, unlockable cosmetics, real-world rewards for top players, and a full server-side backend (which David handled).

    David also had this wild idea: to build the game using TSL, a new language in the Three.js ecosystem that automatically converts your JS shaders to WebGPU. I learned it during the project and used it to create the 3D game. At the time, documentation was sparse and the tech was very fresh, but it promised much better performance than WebGL. Despite the challenge, we made it work, and the result was amazing—WebGPU ran incredibly smoothly on Android.

    With all the 3D assets we had, we needed to optimize carefully. One of the key techniques we used was Batched Mesh, combining all obstacles into a single mesh, which didn’t require TSL but helped a lot with performance.
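
    For readers who haven't used it, here's a rough sketch of the idea in recent Three.js versions (the exact BatchedMesh API has changed between releases, and the geometries and obstacle data below are placeholders, not assets from the actual game):

    import * as THREE from 'three';

    // One material, one mesh, many obstacles → a single draw call
    const batched = new THREE.BatchedMesh(
      200,    // max instance count
      50000,  // max vertex count
      150000, // max index count
      new THREE.MeshStandardMaterial()
    );

    // Register each unique geometry once (rockGeometry/crateGeometry are placeholders)
    const rockId = batched.addGeometry(rockGeometry);
    const crateId = batched.addGeometry(crateGeometry);

    // Add one instance per obstacle and give it its own transform
    const matrix = new THREE.Matrix4();
    for (const obstacle of obstacles) {
      const instanceId = batched.addInstance(obstacle.isRock ? rockId : crateId);
      matrix.compose(obstacle.position, obstacle.quaternion, obstacle.scale);
      batched.setMatrixAt(instanceId, matrix);
    }

    scene.add(batched);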

    The website is no longer available since it was part of a Christmas event, but I captured a video of the project that you can check out here.

    Issey Miyake – Le sel d’Issey

    Last year, I worked on a 3D project where users could create their own salt crystal using different ingredients, all as part of a campaign for a new Issey Miyake perfume. It was a really fun experience, and the main technical challenge was achieving a beautiful refraction shader effect.

    I handled the front-end development alone and used React Three Fiber for the first time, a WebGL framework based on Three.js that lets you build 3D scenes using React-style components.

    The library was super helpful for setting things up quickly. As I got deeper into the project, however, I ran into a few minor issues, but I managed to solve them with some custom code. I’d definitely recommend React Three Fiber if you already know a lot about WebGL/Three.js and enjoy working in the React ecosystem.
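
    To give a sense of what that component-style approach looks like, here's a minimal React Three Fiber scene (a generic example, not code from the Issey Miyake project):

    import { Canvas } from '@react-three/fiber';

    export default function Scene() {
      return (
        <Canvas camera={{ position: [0, 0, 3] }}>
          <ambientLight intensity={0.5} />
          <directionalLight position={[2, 2, 2]} />
          {/* A mesh is declared like any other React component */}
          <mesh rotation={[0.4, 0.6, 0]}>
            <boxGeometry args={[1, 1, 1]} />
            <meshStandardMaterial color="hotpink" />
          </mesh>
        </Canvas>
      );
    }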

    This project was awarded Site of the Day (SOTD) on FWA.

    Portfolio 2021

    I’ve included my portfolio as the final case study. Even though it’s an older project and not always up to date, it still means a lot to me.

    I started working on it during a break right after the pandemic. I had a very vague idea at first, so I began designing and programming at the same time. It was a curious way of working because I was never quite sure how it would turn out. With lots of back and forth, trial and error, and restarts, I really enjoyed that creative, spontaneous process—and I’d definitely recommend it if you’re working on a personal project!

    This project received a Site of the Day (SOTD) award on both Awwwards and FWA.

    About me

    I’m a Creative Web Developer with 10 years of experience, based in Paris.

    I studied at a French school called HETIC, where I learned a wide range of web-related skills including design, project management, marketing, and programming. In 2015, I had the chance to do a six-month internship at UNIT9. This is where I discovered WebGL for the first time, and I immediately fell in love with it.

    My very first project involved building a VR version of a horror movie on the web using Three.js, and I found it absolutely fascinating.

    After that, I worked at several agencies: first at 84.Paris in France, then for a year and a half at Upperquad in San Francisco. At these agencies, I learned a lot from other developers about creative development, clean code architecture, and fine-tuning animations. I contributed to multiple award-winning websites (Awwwards, FWA), and in 2021, I finally decided to start freelancing.

    I won my first award solo with my portfolio, and since then I’ve worked with clients around the world, occasionally winning more awards along the way.

    Eventually, I decided it was my turn to share knowledge, so I created a YouTube channel where I teach how to build WebGL effects. I’ve also been part of the FWA jury since 2018, and I had the opportunity to publish Creating a Risograph Grain Light Effect in Three.js and Creating a Bulge Distortion Effect with WebGL on Codrops.

    Philosophy & Workflow

    As a front-end developer, I’ve always enjoyed pushing the limits of web animation. I love experimenting with different effects and sharing them with the team to inspire new ideas. I don’t have a specific workflow, because I work with many agencies all over the world and always have to adapt to new frameworks, workflows, and structures. So I wouldn’t recommend any specific workflow—just try different ones and pick the one that fits best for your project!

    Current learning & challenges

    Currently, I’m learning TSL, the Three.js shading language, which compiles your shader code to WebGPU (with a WebGL fallback) for even better performance! As for current and future challenges, I would love to create a 3D web development course!

    Final Thoughts

    Thank you, Codrops, for inviting me; I’ve always been a fan of the amazing web animation tutorials.

    If you have a project in mind, don’t give up on it! Try to find some free time to at least give it a shot. Stay creative!




  • Building an Infinite Parallax Grid with GSAP and Seamless Tiling



    Hey! Jorge Toloza again, Co-Founder and Creative Director at DDS Studio. In this tutorial, we’re going to build a visually rich, infinitely scrolling grid where images move with a parallax effect based on scroll and drag interactions.

    We’ll use GSAP for buttery-smooth animations, add a sprinkle of math to achieve infinite tiling, and bring it all together with dynamic visibility animations and a staggered intro reveal.

    Let’s get started!

    Setting Up the HTML Container

    To start, we only need a single container to hold all the tiled image elements. Since we’ll be generating and positioning each tile dynamically with JavaScript, there’s no need for any static markup inside. This keeps our HTML clean and scalable as we duplicate tiles for infinite scrolling.

    <div id="images"></div>

    Basic Styling for the Grid Items

    Now that we have our container, let’s give it the foundational styles it needs to hold and animate a large set of tiles.

    We’ll use absolute positioning for each tile so we can freely place them anywhere in the grid. The outer container (#images) is set to relative so that all child .item elements are positioned correctly inside it. Each image fills its tile, and we’ll use will-change: transform to optimize animation performance.

    #images {
      width: 100%;
      height: 100%;
      display: inline-block;
      white-space: nowrap;
      position: relative;
      .item {
        position: absolute;
        top: 0;
        left: 0;
        will-change: transform;
        white-space: normal;
        .item-wrapper {
          will-change: transform;
        }
        .item-image {
          overflow: hidden;
          img {
            width: 100%;
            height: 100%;
            object-fit: cover;
            will-change: transform;
          }
        }
        small {
          width: 100%;
          display: block;
          font-size: 8rem;
          line-height: 1.25;
          margin-top: 12rem;
        }
      }
    }

    Defining Item Positions with JSON from Figma

    To control the visual layout of our grid, we’ll use design data exported directly from Figma. This gives us pixel-perfect placement while keeping layout logic separate from our code.

    I created a quick layout in Figma using rectangles to represent tile positions and dimensions. Then I exported that data into a JSON file, giving us a simple array of objects containing x, y, w, and h values for each tile.

    [
      { "x": 71, "y": 58, "w": 400, "h": 270 },
      { "x": 211, "y": 255, "w": 540, "h": 360 },
      { "x": 631, "y": 158, "w": 400, "h": 270 },
      { "x": 1191, "y": 245, "w": 260, "h": 195 },
      { "x": 351, "y": 687, "w": 260, "h": 290 },
      { "x": 751, "y": 824, "w": 205, "h": 154 },
      { "x": 911, "y": 540, "w": 260, "h": 350 },
      { "x": 1051, "y": 803, "w": 400, "h": 300 },
      { "x": 71, "y": 922, "w": 350, "h": 260 }
    ]
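
    How the data gets into the class doesn't matter for the layout logic. As a minimal sketch, assuming the export is saved as layout.json next to the script (the filename is just a placeholder) and stored on this.data, which the later code expects, loading it could look like this:

    async loadLayout() {
      // Fetch the Figma-exported tile positions and keep them on the instance
      const response = await fetch('./layout.json');
      this.data = await response.json();
    }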

    Generating an Infinite Grid with JavaScript

    With the layout data defined, the next step is to dynamically generate our tile grid in the DOM and enable it to scroll infinitely in both directions.

    This involves three main steps:

    1. Compute the scaled tile dimensions based on the viewport and the original Figma layout’s aspect ratio.
    2. Duplicate the grid in both the X and Y axes so that as one tile set moves out of view, another seamlessly takes its place.
    3. Store metadata for each tile, such as its original position and a random easing value, which we’ll use to vary the parallax animation slightly for a more organic effect.

    The infinite scroll illusion is achieved by duplicating the entire tile set horizontally and vertically. This 2×2 tiling approach ensures there’s always a full set of tiles ready to slide into view as the user scrolls or drags.

    onResize() {
      // Get current viewport dimensions
      this.winW = window.innerWidth;
      this.winH = window.innerHeight;
    
      // Scale tile size to match viewport width while keeping original aspect ratio
      this.tileSize = {
        w: this.winW,
        h: this.winW * (this.originalSize.h / this.originalSize.w),
      };
    
      // Reset scroll state
      this.scroll.current = { x: 0, y: 0 };
      this.scroll.target = { x: 0, y: 0 };
      this.scroll.last = { x: 0, y: 0 };
    
      // Clear existing tiles from container
      this.$container.innerHTML = '';
    
      // Scale item positions and sizes based on new tile size
      const baseItems = this.data.map((d, i) => {
        const scaleX = this.tileSize.w / this.originalSize.w;
        const scaleY = this.tileSize.h / this.originalSize.h;
        const source = this.sources[i % this.sources.length];
        return {
          src: source.src,
          caption: source.caption,
          x: d.x * scaleX,
          y: d.y * scaleY,
          w: d.w * scaleX,
          h: d.h * scaleY,
        };
      });
    
      this.items = [];
    
      // Offsets to duplicate the grid in X and Y for seamless looping (2x2 tiling)
      const repsX = [0, this.tileSize.w];
      const repsY = [0, this.tileSize.h];
    
      baseItems.forEach((base) => {
        repsX.forEach((offsetX) => {
          repsY.forEach((offsetY) => {
            // Create item DOM structure
            const el = document.createElement('div');
            el.classList.add('item');
            el.style.width = `${base.w}px`;
    
            const wrapper = document.createElement('div');
            wrapper.classList.add('item-wrapper');
            el.appendChild(wrapper);
    
            const itemImage = document.createElement('div');
            itemImage.classList.add('item-image');
            itemImage.style.width = `${base.w}px`;
            itemImage.style.height = `${base.h}px`;
            wrapper.appendChild(itemImage);
    
            const img = new Image();
            img.src = `./img/${base.src}`;
            itemImage.appendChild(img);
    
            const caption = document.createElement('small');
            caption.innerHTML = base.caption;
    
            // Split caption into lines for staggered animation
            const split = new SplitText(caption, {
              type: 'lines',
              mask: 'lines',
              linesClass: 'line'
            });
            split.lines.forEach((line, i) => {
              line.style.transitionDelay = `${i * 0.15}s`;
              line.parentElement.style.transitionDelay = `${i * 0.15}s`;
            });
    
            wrapper.appendChild(caption);
            this.$container.appendChild(el);
    
            // Observe caption visibility for animation triggering
            this.observer.observe(caption);
    
            // Store item metadata including offset, easing, and bounding box
            this.items.push({
              el,
              container: itemImage,
              wrapper,
              img,
              x: base.x + offsetX,
              y: base.y + offsetY,
              w: base.w,
              h: base.h,
              extraX: 0,
              extraY: 0,
              rect: el.getBoundingClientRect(),
              ease: Math.random() * 0.5 + 0.5, // Random parallax easing for organic movement
            });
          });
        });
      });
    
      // Double the tile area to account for 2x2 duplication
      this.tileSize.w *= 2;
      this.tileSize.h *= 2;
    
      // Set initial scroll position slightly off-center for visual balance
      this.scroll.current.x = this.scroll.target.x = this.scroll.last.x = -this.winW * 0.1;
      this.scroll.current.y = this.scroll.target.y = this.scroll.last.y = -this.winH * 0.1;
    }
    

    Key Concepts

    • Scaling the layout ensures that your Figma-defined design adapts to any screen size without distortion.
    • 2×2 duplication ensures seamless continuity when the user scrolls in any direction.
    • Random easing values create slight variation in tile movement, making the parallax effect feel more natural.
    • extraX and extraY values will later be used to shift tiles back into view once they scroll offscreen.
    • SplitText animation is used to break each caption (<small>) into individual lines, enabling line-by-line animation.

    Adding Interactive Scroll and Drag Events

    To bring the infinite grid to life, we need to connect it to user input. This includes:

    • Scrolling with the mouse wheel or trackpad
    • Dragging with a pointer (mouse or touch)
    • Smooth motion between input updates using linear interpolation (lerp)

    Rather than instantly snapping to new positions, we interpolate between the current and target scroll values, which creates fluid, natural transitions.
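
    The interpolation itself is a one-liner; the render loop further below inlines it, but as a standalone helper it's simply:

    // Move a fraction `t` of the remaining distance from `current` toward `target`
    const lerp = (current, target, t) => current + (target - current) * t;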

    Scroll and Drag Tracking

    We capture two types of user interaction:

    1) Wheel Events
    Wheel input updates a target scroll position. We multiply the deltas by a damping factor to control sensitivity.

    onWheel(e) {
      e.preventDefault();
      const factor = 0.4;
      this.scroll.target.x -= e.deltaX * factor;
      this.scroll.target.y -= e.deltaY * factor;
    }

    2) Pointer Dragging
    On mouse or touch input, we track when the drag starts, then update scroll targets based on the pointer’s movement.

    onMouseDown(e) {
      e.preventDefault();
      this.isDragging = true;
      document.documentElement.classList.add('dragging');
      this.mouse.press.t = 1;
      this.drag.startX = e.clientX;
      this.drag.startY = e.clientY;
      this.drag.scrollX = this.scroll.target.x;
      this.drag.scrollY = this.scroll.target.y;
    }
    
    onMouseUp() {
      this.isDragging = false;
      document.documentElement.classList.remove('dragging');
      this.mouse.press.t = 0;
    }
    
    onMouseMove(e) {
      this.mouse.x.t = e.clientX / this.winW;
      this.mouse.y.t = e.clientY / this.winH;
    
      if (this.isDragging) {
        const dx = e.clientX - this.drag.startX;
        const dy = e.clientY - this.drag.startY;
        this.scroll.target.x = this.drag.scrollX + dx;
        this.scroll.target.y = this.drag.scrollY + dy;
      }
    }
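
    The tutorial focuses on the handlers themselves; wiring them up could look something like this (a sketch assuming the same class-based structure, with passive: false so preventDefault() works on wheel events):

    initEvents() {
      window.addEventListener('wheel', this.onWheel.bind(this), { passive: false });
      window.addEventListener('mousedown', this.onMouseDown.bind(this));
      window.addEventListener('mousemove', this.onMouseMove.bind(this));
      window.addEventListener('mouseup', this.onMouseUp.bind(this));
      window.addEventListener('resize', this.onResize.bind(this));
    }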

    Smoothing Motion with Lerp

    In the render loop, we interpolate between the current and target scroll values using a lerp function. This creates smooth, decaying motion rather than abrupt changes.

    render() {
      // Smooth current → target
      this.scroll.current.x += (this.scroll.target.x - this.scroll.current.x) * this.scroll.ease;
      this.scroll.current.y += (this.scroll.target.y - this.scroll.current.y) * this.scroll.ease;
    
      // Calculate delta for parallax
      const dx = this.scroll.current.x - this.scroll.last.x;
      const dy = this.scroll.current.y - this.scroll.last.y;
    
      // Update each tile
      this.items.forEach(item => {
        const parX = 5 * dx * item.ease + (this.mouse.x.c - 0.5) * item.rect.width * 0.6;
        const parY = 5 * dy * item.ease + (this.mouse.y.c - 0.5) * item.rect.height * 0.6;
    
        // Infinite wrapping
        const posX = item.x + this.scroll.current.x + item.extraX + parX;
        if (posX > this.winW)  item.extraX -= this.tileSize.w;
        if (posX + item.rect.width < 0) item.extraX += this.tileSize.w;
    
        const posY = item.y + this.scroll.current.y + item.extraY + parY;
        if (posY > this.winH)  item.extraY -= this.tileSize.h;
        if (posY + item.rect.height < 0) item.extraY += this.tileSize.h;
    
        item.el.style.transform = `translate(${posX}px, ${posY}px)`;
      });
    
      this.scroll.last.x = this.scroll.current.x;
      this.scroll.last.y = this.scroll.current.y;
    
      requestAnimationFrame(this.render);
    }

    The scroll.ease value controls how fast the scroll position catches up to the target—smaller values result in slower, smoother motion.

    Animating Item Visibility with IntersectionObserver

    To enhance the visual hierarchy and focus, we’ll highlight only the tiles that are currently within the viewport. This creates a dynamic effect where captions appear and styling changes as tiles enter view.

    We’ll use the IntersectionObserver API to detect when each tile becomes visible and toggle a CSS class accordingly.

    this.observer = new IntersectionObserver(entries => {
      entries.forEach(entry => {
        entry.target.classList.toggle('visible', entry.isIntersecting);
      });
    });
    // …and, as shown in onResize() above, each caption is registered with:
    this.observer.observe(caption);

    Creating an Intro Animation with GSAP

    To finish the experience with a strong visual entry, we’ll animate all currently visible tiles from the center of the screen into their natural grid positions. This creates a polished, attention-grabbing introduction and adds a sense of depth and intentionality to the layout.

    We’ll use GSAP for this animation, utilizing gsap.set() to position elements instantly, and gsap.to() with staggered timing to animate them into place.

    Selecting Visible Tiles for Animation

    First, we filter all tile elements to include only those currently visible in the viewport. This avoids animating offscreen elements and keeps the intro lightweight and focused:

    import gsap from 'gsap';
    initIntro() {
      this.introItems = [...this.$container.querySelectorAll('.item-wrapper')].filter((item) => {
        const rect = item.getBoundingClientRect();
        return (
          rect.x > -rect.width &&
          rect.x < window.innerWidth + rect.width &&
          rect.y > -rect.height &&
          rect.y < window.innerHeight + rect.height
        );
      });
      this.introItems.forEach((item) => {
        const rect = item.getBoundingClientRect();
        const x = -rect.x + window.innerWidth * 0.5 - rect.width * 0.5;
        const y = -rect.y + window.innerHeight * 0.5 - rect.height * 0.5;
        gsap.set(item, { x, y });
      });
    }

    Animating to Final Positions

    Once the tiles are centered, we animate them outward to their natural positions using a smooth easing curve and staggered timing:

    intro() {
      gsap.to(this.introItems.reverse(), {
        duration: 2,
        ease: 'expo.inOut',
        x: 0,
        y: 0,
        stagger: 0.05,
      });
    }
    • x: 0, y: 0 restores the original position set via CSS transforms.
    • expo.inOut provides a dramatic but smooth easing curve.
    • stagger creates a cascading effect, enhancing visual rhythm.

    Wrapping Up

    What we’ve built is a scrollable, draggable image grid with a parallax effect, visibility animations, and a smooth GSAP-powered intro. It’s a flexible base you can adapt for creative galleries, interactive backgrounds, or experimental interfaces.




  • How to Create Interactive, Droplet-like Metaballs with Three.js and GLSL



    Fragment shaders allow us to create smooth, organic visuals that are difficult to achieve with standard polygon-based rendering in WebGL. One powerful example is the metaball effect, where multiple objects blend and deform seamlessly. This can be implemented using a technique called ray marching, directly within a fragment shader.

    In this tutorial, we’ll walk you through how to create droplet-like, bubble spheres using Three.js and GLSL—an effect that responds interactively to your mouse movements. But first, take a look at the demo video below to see the final result in action.

    Overview

    Let’s take a look at the overall structure of the demo and review the steps we’ll follow to build it.

    1. Setting Up the Fullscreen Plane

    We create a fullscreen plane that covers the entire viewport.

    2. Rendering Spheres with Ray Marching

    We’ll render spheres using ray marching in the fragment shader.

    3. From Spheres to Metaballs

    We blend multiple spheres smoothly to create a metaball effect.

    4. Adding Noise for a Droplet-like Appearance

    By adding noise to the surface, we create a realistic droplet-like texture.

    5. Simulating Stretchy Droplets with Mouse Movement

    We arrange spheres along the mouse trail to create a stretchy, elastic motion.

    Let’s get started!

    1. Setup

    We render a single fullscreen plane that covers the entire viewport.

    // Output.ts
    
    const planeGeometry = new THREE.PlaneGeometry(2.0, 2.0);
    const planeMaterial = new THREE.RawShaderMaterial({
        vertexShader: base_vert,
        fragmentShader: output_frag,
        uniforms: this.uniforms,
    });
    const plane = new THREE.Mesh(planeGeometry, planeMaterial);
    this.scene.add(plane);

    We define a uniform variable named uResolution to pass the canvas size to the shader, where Common.width and Common.height represent the width and height of the canvas in pixels. This uniform will be used to normalize coordinates based on the screen resolution.

    // Output.ts
    
    this.uniforms = {
        uResolution: {
            value: new THREE.Vector2(Common.width, Common.height),
        },
    };

    When using RawShaderMaterial, you need to provide your own shaders. Therefore, we prepare both a vertex shader and a fragment shader.

    // base.vert
    
    attribute vec3 position;
    varying vec2 vTexCoord;
    
    void main() {
        vTexCoord = position.xy * 0.5 + 0.5;
        gl_Position = vec4(position, 1.0);
    }

    The vertex shader receives the position attribute.

    Since the xy components of position originally range from -1 to 1, we convert them to a range from 0 to 1 and output them as a texture coordinate called vTexCoord. This is passed to the fragment shader and used to calculate colors or effects based on the position on the screen.

    // output.frag
    
    precision mediump float;
    
    uniform vec2 uResolution;
    varying vec2 vTexCoord;
    
    void main() {
        gl_FragColor = vec4(vTexCoord, 1.0, 1.0);
    }

    The fragment shader receives the interpolated texture coordinate vTexCoord and the uniform variable uResolution representing the canvas size. Here, we temporarily use vTexCoord to output color for testing.

    Now we’re all set to start drawing in the fragment shader!
    Next, let’s move on to actually rendering the spheres.

    2. Ray Marching

    2.1. What is Ray Marching?

    As mentioned at the beginning, we will use a method called ray marching to render spheres. Ray marching proceeds in the following steps:

    1. Define the scene
    2. Set the camera (viewing) direction
    3. Cast rays
    4. Evaluate the distance from the current ray position to the nearest object in the scene.
    5. Move the ray forward by that distance
    6. Check for a hit

    For example, let’s consider a scene with three spheres. These spheres are expressed using SDFs (Signed Distance Functions), which will be explained in detail later.

    First, we determine the camera direction. Once the direction is set, we cast a ray in that direction.

    Next, we evaluate the distance to all objects from the current ray position, and take the minimum of these distances.

    After obtaining this distance, we move the ray forward by that amount.

    We repeat this process until either the ray gets close enough to an object—closer than a small threshold—or the maximum number of steps is reached.
    If the distance is below the threshold, we consider it a “hit” and shade the corresponding pixel.

    For example, suppose a hit is detected on the 8th ray-marching step.

    If the maximum number of steps were set to 7, the 7th step would not have hit anything yet. But since the limit is reached, the loop ends and no hit is detected.

    Therefore, nothing would be rendered at that position. If parts of an object appear to be missing in the final image, it may be due to an insufficient number of steps. However, be aware that increasing the step count will also increase the computational load.

    To better understand this process, try running this demo to see how it works in practice.

    2.2. Signed Distance Function

    In the previous section, we briefly mentioned the SDF (Signed Distance Function).
    Let’s take a moment to understand what it is.

    An SDF is a function that returns the distance from a point to a particular shape. The key characteristic is that it returns a positive or negative value depending on whether the point is outside or inside the shape.

    For example, here is the distance function for a sphere:

    float sdSphere(vec3 p, float s)
    {
        return length(p) - s;
    }

    Here, p is a vector representing the position relative to the origin, and s is the radius of the sphere.

    This function calculates how far the point p is from the surface of a sphere centered at the origin with radius s.

    • If the result is positive, the point is outside the sphere.
    • If negative, it is inside the sphere.
    • If the result is zero, the point is on the surface—this is considered a hit point (in practice, we detect a hit when the distance is less than a small threshold).

    In this demo, we use a sphere’s distance function, but many other shapes have their own distance functions as well.

    If you’re interested, here’s a great article on distance functions.

    2.3. Rendering Spheres

    Let’s try rendering spheres.
    In this demo, we’ll render two slightly overlapping spheres.

    // output.frag
    
    precision mediump float;
    
    const float EPS = 1e-4;
    const int ITR = 16;
    
    uniform vec2 uResolution;
    
    varying vec2 vTexCoord;
    
    // Camera Params
    vec3 origin = vec3(0.0, 0.0, 1.0);
    vec3 lookAt = vec3(0.0, 0.0, 0.0);
    vec3 cDir = normalize(lookAt - origin);
    vec3 cUp = vec3(0.0, 1.0, 0.0);
    vec3 cSide = cross(cDir, cUp);
    
    vec3 translate(vec3 p, vec3 t) {
        return p - t;
    }
    
    float sdSphere(vec3 p, float s)
    {
        return length(p) - s;
    }
    
    float map(vec3 p) {
        float radius = 0.5;
        float d = 1e5;
    
        float sphere0 = sdSphere(translate(p, vec3(0.4, 0.0, 0.0)), radius);
        float sphere1 = sdSphere(translate(p, vec3(-0.4, 0.0, 0.0)), radius);
        d = min(sphere0, sphere1);
    
        return d;
    }
    
    void main() {
        vec2 p = (gl_FragCoord.xy * 2.0 - uResolution) / min(uResolution.x, uResolution.y);
    
        // Orthographic Camera
        vec3 ray = origin + cSide * p.x + cUp * p.y;
        vec3 rayDirection = cDir;
    
        float dist = 0.0;
    
        for (int i = 0; i < ITR; ++i) {
            dist = map(ray);
            ray += rayDirection * dist;
            if (dist < EPS) break;
        }
    
        vec3 color = vec3(0.0);
    
        if (dist < EPS) {
            color = vec3(1.0, 1.0, 1.0);
        }
    
        gl_FragColor = vec4(color, 1.0);
    }

    First, we normalize the screen coordinates:

    vec2 p = (gl_FragCoord.xy * 2.0 - uResolution) / min(uResolution.x, uResolution.y);

    Next, we set up the camera. This demo uses an orthographic camera (parallel projection):

    // Camera Params
    vec3 origin = vec3(0.0, 0.0, 1.0);
    vec3 lookAt = vec3(0.0, 0.0, 0.0);
    vec3 cDir = normalize(lookAt - origin);
    vec3 cUp = vec3(0.0, 1.0, 0.0);
    vec3 cSide = cross(cDir, cUp);
    
    // Orthographic Camera
    vec3 ray = origin + cSide * p.x + cUp * p.y;
    vec3 rayDirection = cDir;

    After that, inside the map function, two spheres are defined and their distances calculated using sdSphere. The variable d is initially set to a large value and updated with the min function to keep track of the shortest distance to the surface.

    float map(vec3 p) {
        float radius = 0.5;
        float d = 1e5;
    
        float sphere0 = sdSphere(translate(p, vec3(0.4, 0.0, 0.0)), radius);
        float sphere1 = sdSphere(translate(p, vec3(-0.4, 0.0, 0.0)), radius);
        d = min(sphere0, sphere1);
    
        return d;
    }

    Then we run a ray marching loop, which updates the ray position by computing the distance to the nearest object at each step. The loop ends either after a fixed number of iterations or when the distance becomes smaller than a threshold (dist < EPS):

    for (int i = 0; i < ITR; ++i) {
        dist = map(ray);
        ray += rayDirection * dist;
        if (dist < EPS) break;
    }

    Finally, we determine the output color. We use black as the default color (background), and render a white pixel only if a hit is detected:

    vec3 color = vec3(0.0);
    
    if ( dist < EPS ) {
    	color = vec3(1.0);
    }

    We’ve successfully rendered two overlapping spheres using ray marching!

    2.4. Normals

    Although we successfully rendered spheres in the previous section, the scene still looks flat and lacks depth. This is because we haven’t applied any shading or visual effects that respond to surface orientation.

    While we won’t implement full shading in this demo, we’ll still compute surface normals, as they’re essential for adding surface detail and other visual effects.

    Let’s look at the code first:

    vec3 generateNormal(vec3 p) {
        return normalize(vec3(
                map(p + vec3(EPS, 0.0, 0.0)) - map(p + vec3(-EPS, 0.0, 0.0)),
                map(p + vec3(0.0, EPS, 0.0)) - map(p + vec3(0.0, -EPS, 0.0)),
                map(p + vec3(0.0, 0.0, EPS)) - map(p + vec3(0.0, 0.0, -EPS))
            ));
    }

    At first glance, this may seem hard to understand. Put simply, this computes the gradient of the distance function, which corresponds to the normal vector.

    If you’ve studied vector calculus, this might be easy to understand. For many others, though, it may seem a bit difficult.

    That’s totally fine—a full understanding of the details isn’t necessary to use the result. If you just want to move on, feel free to skip ahead to the section where we debug normals by visualizing them with color.

    However, for those who are interested in how it works, we’ll now walk through the explanation in more detail.

    The gradient of a scalar function 𝑓(𝑥,𝑦,𝑧) is simply a vector composed of its partial derivatives. It points in the direction of the greatest rate of increase of the function:

    ∇𝑓 = ( ∂𝑓/∂𝑥, ∂𝑓/∂𝑦, ∂𝑓/∂𝑧 )

    To compute this gradient numerically, we can use the central difference method. For example, for the 𝑥 component:

    ∂𝑓/∂𝑥 ≈ ( 𝑓(𝑥+𝜀, 𝑦, 𝑧) − 𝑓(𝑥−𝜀, 𝑦, 𝑧) ) / 2𝜀

    We apply the same idea for the 𝑦 and 𝑧 components.
    Note: The factor 2𝜀 is omitted in the code since we normalize the result using normalize().

    Next, let us consider a signed distance function 𝑓(𝑥,𝑦,𝑧), which returns the shortest distance from any point in space to the surface of an object. By definition, 𝑓(𝑥,𝑦,𝑧)=0 on the surface of the object.

    Assume that 𝑓 is smooth (i.e., differentiable) in the region of interest. When the point (𝑥,𝑦,𝑧) undergoes a small displacement Δ𝒓=(Δ𝑥,Δ𝑦,Δ𝑧), the change in the function value Δ𝑓 can be approximated using the first-order Taylor expansion:

    Δ𝑓 ≈ ∇𝑓 · Δ𝒓

    Here, ∇𝑓 is the gradient vector of 𝑓, and Δ𝒓 is an arbitrary small displacement vector.

    Now, since 𝑓=0 on the surface and remains constant as we move along the surface (i.e., tangentially), the function value does not change, so Δ𝑓=0. Therefore:

    ∇𝑓 · Δ𝒓 = 0

    This means that the gradient vector is perpendicular to any tangent vector Δ𝒓 on the surface. In other words, the gradient vector ∇𝑓 points in the direction of the surface normal.

    Thus, the gradient of a signed distance function gives the surface normal direction at any point on the surface.

    2.5. Visualizing Normals with Color

    To verify that the surface normals are being calculated correctly, we can visualize them using color.

    if ( dist < EPS ) {
    	vec3 normal = generateNormal(ray);
    	color = normal;
    }

    Note that within the if block, ray refers to a point on the surface of the object. So by passing ray to generateNormal, we can obtain the surface normal at the point of intersection.

    When we render the scene, you’ll notice that the surface of the sphere is shaded in red, green, and blue based on the orientation of the normal vectors. This is because we’re mapping the 𝑥, 𝑦, and 𝑧 components of the normal vector to the RGB color channels respectively.

    This is a common and intuitive way to debug normal vectors visually, helping us ensure they are computed correctly.

    3. From Spheres to Metaballs

    When combining two spheres with the standard min() function, a hard edge forms where the shapes intersect, resulting in an unnatural boundary. To avoid this, we can use a blending function called smoothMin, which softens the transition by merging the distance values smoothly.

    // added
    float smoothMin(float d1, float d2, float k) {
        float h = exp(-k * d1) + exp(-k * d2);
        return -log(h) / k;
    }
    
    float map(vec3 p) {
        float radius = 0.5;
        float k = 7.; // added: smoothing factor for metaball effect
        float d = 1e5;
    
        float sphere0 = sdSphere(translate(p, vec3(.4, 0.0, 0.0)), radius);
        float sphere1 = sdSphere(translate(p, vec3(-.4, 0.0, 0.0)), radius);
        d = smoothMin(d, sphere0, k); // modified: blend with smoothing
        d = smoothMin(d, sphere1, k); // modified
    
        return d;
    }

    This function creates a smooth, continuous connection between shapes—producing a metaball-like effect where the forms appear to merge organically.

    The parameter k controls the smoothness of the blend. A higher k value results in a sharper transition (closer to min()), while a lower k produces smoother, more gradual merging.

    For more details, please refer to the following two articles:

    1. wgld.org | GLSL: オブジェクト同士を補間して結合する (in Japanese: interpolating and combining objects)
    2. Inigo Quilez :: computer graphics, mathematics, shaders, fractals, demoscene and more

    4. Adding Noise for a Droplet-like Appearance

    So far, we’ve covered how to calculate normals and how to smoothly blend objects.

    Next, let’s tune the surface appearance to make things feel more realistic.

    In this demo, we’re aiming to create droplet-like metaballs. So how can we achieve that kind of look? The key idea here is to use noise to distort the surface.

    Let’s jump right into the code:

    // output.frag
    
    uniform float uTime;
    
    // ...
    
    float rnd3D(vec3 p) {
        return fract(sin(dot(p, vec3(12.9898, 78.233, 37.719))) * 43758.5453123);
    }
    
    float noise3D(vec3 p) {
        vec3 i = floor(p);
        vec3 f = fract(p);
    
        float a000 = rnd3D(i); // (0,0,0)
        float a100 = rnd3D(i + vec3(1.0, 0.0, 0.0)); // (1,0,0)
        float a010 = rnd3D(i + vec3(0.0, 1.0, 0.0)); // (0,1,0)
        float a110 = rnd3D(i + vec3(1.0, 1.0, 0.0)); // (1,1,0)
        float a001 = rnd3D(i + vec3(0.0, 0.0, 1.0)); // (0,0,1)
        float a101 = rnd3D(i + vec3(1.0, 0.0, 1.0)); // (1,0,1)
        float a011 = rnd3D(i + vec3(0.0, 1.0, 1.0)); // (0,1,1)
        float a111 = rnd3D(i + vec3(1.0, 1.0, 1.0)); // (1,1,1)
    
        vec3 u = f * f * (3.0 - 2.0 * f);
        // vec3 u = f*f*f*(f*(f*6.0-15.0)+10.0);
    
        float k0 = a000;
        float k1 = a100 - a000;
        float k2 = a010 - a000;
        float k3 = a001 - a000;
        float k4 = a000 - a100 - a010 + a110;
        float k5 = a000 - a010 - a001 + a011;
        float k6 = a000 - a100 - a001 + a101;
        float k7 = -a000 + a100 + a010 - a110 + a001 - a101 - a011 + a111;
    
        return k0 + k1 * u.x + k2 * u.y + k3 *u.z + k4 * u.x * u.y + k5 * u.y * u.z + k6 * u.z * u.x + k7 * u.x * u.y * u.z;
    }
    
    vec3 dropletColor(vec3 normal, vec3 rayDir) {
        vec3 reflectDir = reflect(rayDir, normal);
    
        float noisePosTime = noise3D(reflectDir * 2.0 + uTime);
        float noiseNegTime = noise3D(reflectDir * 2.0 - uTime);
    
        vec3 _color0 = vec3(0.1765, 0.1255, 0.2275) * noisePosTime;
        vec3 _color1 = vec3(0.4118, 0.4118, 0.4157) * noiseNegTime;
    
        float intensity = 2.3;
        vec3 color = (_color0 + _color1) * intensity;
    
        return color;
    }
    
    // ...
    
    void main() {
    	// ...
    
    	if ( dist < EPS ) {
    		vec3 normal = generateNormal(ray);
    		color = dropletColor(normal, rayDirection);
    	}
    	
    	 gl_FragColor = vec4(color, 1.0);
    }

    To create the droplet-like texture, we’re using value noise. If you’re unfamiliar with these noise techniques, the following articles provide helpful explanations:

    3D value noise is generated by interpolating random values placed at the eight vertices of a cube. The process involves three stages of linear interpolation:

    1. Bottom face interpolation: First, we interpolate between the four corner values on the bottom face of the cube
    2. Top face interpolation: Similarly, we interpolate between the four corner values on the top face
    3. Final z-axis interpolation: Finally, we interpolate between the results from the bottom and top faces along the z-axis

    This triple interpolation process is called trilinear interpolation.

    The following code demonstrates the trilinear interpolation process for 3D value noise:

    float n = mix(
    	mix( mix( a000, a100, u.x ), mix( a010, a110, u.x ), u.y ),
    	mix( mix( a001, a101, u.x ), mix( a011, a111, u.x ), u.y ),
    	u.z
    );

    The nested mix() functions above can be converted into an explicit polynomial form for better performance:

    float k0 = a000;
    float k1 = a100 - a000;
    float k2 = a010 - a000;
    float k3 = a001 - a000;
    float k4 = a000 - a100 - a010 + a110;
    float k5 = a000 - a010 - a001 + a011;
    float k6 = a000 - a100 - a001 + a101;
    float k7 = -a000 + a100 + a010 - a110 + a001 - a101 - a011 + a111;
    
    float n = k0 + k1 * u.x + k2 * u.y + k3 * u.z + k4 * u.x * u.y + k5 * u.y * u.z + k6 * u.z * u.x + k7 * u.x * u.y * u.z;

    By sampling this noise using the reflection vector as coordinates, we can create a realistic water droplet-like texture. Note that we are using the surface normal obtained earlier to compute this reflection vector. To add time-based variation, we generate noise at positions offset by uTime:

    vec3 reflectDir = reflect(rayDir, normal);
    
    float noisePosTime = noise3D(reflectDir * 2.0 + uTime);
    float noiseNegTime = noise3D(reflectDir * 2.0 - uTime);

    Finally, we blend two noise-influenced colors and scale the result:

    vec3 _color0 = vec3(0.1765, 0.1255, 0.2275) * noisePosTime;
    vec3 _color1 = vec3(0.4118, 0.4118, 0.4157) * noiseNegTime;
    
    float intensity = 2.3;
    vec3 color = (_color0 + _color1) * intensity;

    It’s starting to look quite like a water droplet! However, it still appears a bit murky.
    To improve this, let’s add the following post-processing step:

    // output.frag
    
    if ( dist < EPS ) {
    	vec3 normal = generateNormal(ray);
    	color = dropletColor(normal, rayDirection);
    }
    
    vec3 finalColor = pow(color, vec3(7.0)); // added
    
    gl_FragColor = vec4(finalColor, 1.0); // modified

    Using pow(), darker regions are suppressed, allowing the highlights to pop and creating a more glass-like, translucent surface.
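    For example, raising values to the 7th power maps 0.5 to roughly 0.008 while 0.95 stays around 0.70, so murky mid-tones fall away and only the brightest reflections survive.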

    5. Simulating Stretchy Droplets with Mouse Movement

    Finally, let’s make the droplet stretch and follow the mouse movement, giving it a soft and elastic feel.

    We’ll achieve this by placing multiple spheres along the mouse trail.

    // Output.ts
    
    constructor() {
    	// ...
    	this.trailLength = 15;
    	this.pointerTrail = Array.from({ length: this.trailLength }, () => new THREE.Vector2(0, 0));
    	
    	this.uniforms = {
    	    uTime: { value: Common.time },
    	    uResolution: {
    	        value: new THREE.Vector2(Common.width, Common.height),
    	    },
    	    uPointerTrail: { value: this.pointerTrail },
    	};
    }
    
    // ...
    
    /**
     * # rAF update
     */
    update() {
      this.updatePointerTrail();
      this.render();
    }
    
    /**
     * # Update the pointer trail
     */
    updatePointerTrail() {
      for (let i = this.trailLength - 1; i > 0; i--) {
         this.pointerTrail[i].copy(this.pointerTrail[i - 1]);
      }
      this.pointerTrail[0].copy(Pointer.coords);
    }
    // output.frag
    
    const int TRAIL_LENGTH = 15; // added
    uniform vec2 uPointerTrail[TRAIL_LENGTH]; // added
    
    // ...
    
    // modified
    float map(vec3 p) {
        float baseRadius = 8e-3;
        float radius = baseRadius * float(TRAIL_LENGTH);
        float k = 7.;
        float d = 1e5;
    
        for (int i = 0; i < TRAIL_LENGTH; i++) {
            float fi = float(i);
            vec2 pointerTrail = uPointerTrail[i] * uResolution / min(uResolution.x, uResolution.y);
    
            float sphere = sdSphere(
                    translate(p, vec3(pointerTrail, .0)),
                    radius - baseRadius * fi
                );
    
            d = smoothMin(d, sphere, k);
        }
    
        float sphere = sdSphere(translate(p, vec3(1.0, -0.25, 0.0)), 0.55);
        d = smoothMin(d, sphere, k);
    
        return d;
    }

    Conclusion

    In this tutorial, we explored how to create a dynamic, droplet-like effect using ray marching and shading techniques. Here’s what we covered:

    1. Used ray marching to render spheres in 3D space.
    2. Applied smoothMin to blend the spheres into seamless metaballs.
    3. Added surface noise to give the spheres a more organic appearance.
    4. Simulated stretchy motion by arranging spheres along the mouse trail.

    By combining these techniques, we achieved a soft, fluid visual that responds to user interaction.

    Thanks for following along—I hope you find these techniques useful in your own projects!



    Source link

  • Motion Highlights #9 | Codrops

    Motion Highlights #9 | Codrops





    Source link

  • Try It On: A Playful Drag-and-Drop Styling UI

    Try It On: A Playful Drag-and-Drop Styling UI


    I recently helped my friends with their brand, www.laughwithtic.com, and wanted to create something distinctive for their pre-launch. My design drew inspiration from classic dress-up games, focusing on a playful, interactive element. Initially, we featured a Rat character as the main model. Users could simply drag and drop a selection of t-shirts onto the rat. This approach was effective and added a fresh element to the site.

    Evolving the Design: From Rat to Human

    A few weeks later, I saw a video by @samdape on X, showcasing a similar UI layout, but enhanced with a real human character at an angle. This immediately inspired me to redesign our pre-launch experience, transitioning to a human model in that dynamic pose.

    To further enhance the interaction, I integrated several subtle refinements. A slight shadow behind the character adds depth. When a T-shirt is dragged, it subtly skews and shakes, making the interaction feel more tactile. Perhaps the most engaging detail is how the model raises her hand as you drag a t-shirt nearby, signaling readiness for the change. These small touches contribute to an experience that feels immersive and unexpected. This entire system is built with vanilla JS, HTML, and CSS, operating on the simple principle of changing PNG images based on drag-and-drop collisions.

    The Tech Behind the Interaction

    The core of this experience is a vanilla JavaScript-driven drag-and-drop mechanism, designed to allow users to visually try different t-shirts on a central model.

    Here’s a breakdown of its key phases:

    • Initiation: When a user clicks or touches a t-shirt, it becomes the active element. Its zIndex is raised, and a grabbed CSS class is applied for immediate visual feedback.
    • Dragging: The active t-shirt’s position continuously updates to follow the cursor.
      • Skewing Effect: Horizontal dragging applies CSS classes that subtly skew the t-shirt, adding a dynamic feel. These classes are removed if movement pauses.
      • Model Readiness: The system constantly checks for collision with the model. If the t-shirt hovers over the model, the model’s image changes to a “ready” version (e.g., raising a hand), providing clear feedback.
    • Dropping: Upon release, collision with the model is checked.
      • On Model: If dropped on the model, the model’s image updates to wear the new t-shirt. The dragged t-shirt then resets to its original layout position.
      • Off Model: If dropped elsewhere, the t-shirt animates back to its initial position. The model reverts to its default state if it was in a “ready” pose.
    • Image Preloading: All t-shirt and model images (including hover states) are preloaded on page load using a dedicated function, ensuring smooth visual transitions without flickers.

    This combination of event handling, CSS for nuanced visual effects, and dynamic image swapping creates an engaging and interactive try-on experience. You can check out the full website at www.laughwithtic.com.
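    To make the breakdown above more concrete, here’s a minimal sketch of the core drag-and-drop collision logic in TypeScript. It is not the site’s actual source; the selectors and data attributes (.tshirt, #model, data-ready-src, data-worn-src) are hypothetical placeholders, and the skewing and preloading details are left out.

    // Hypothetical markup: <img id="model" data-ready-src="...">, <img class="tshirt" data-worn-src="...">
    const model = document.querySelector<HTMLImageElement>("#model")!;
    const defaultModelSrc = model.src;

    // Simple AABB collision test between the dragged shirt and the model.
    function intersects(a: DOMRect, b: DOMRect): boolean {
      return a.left < b.right && a.right > b.left && a.top < b.bottom && a.bottom > b.top;
    }

    document.querySelectorAll<HTMLImageElement>(".tshirt").forEach((shirt) => {
      shirt.addEventListener("pointerdown", (down: PointerEvent) => {
        down.preventDefault(); // Prevent the browser's native image drag.
        shirt.setPointerCapture(down.pointerId);
        shirt.classList.add("grabbed"); // Initiation: mark and raise the active shirt.
        shirt.style.zIndex = "10";

        const onMove = (move: PointerEvent) => {
          // Dragging: follow the cursor relative to where the drag started.
          shirt.style.transform = `translate(${move.clientX - down.clientX}px, ${move.clientY - down.clientY}px)`;

          // Model readiness: swap to the "ready" pose while hovering over the model.
          const over = intersects(shirt.getBoundingClientRect(), model.getBoundingClientRect());
          model.src = over ? model.dataset.readySrc! : defaultModelSrc;
        };

        const onUp = () => {
          if (intersects(shirt.getBoundingClientRect(), model.getBoundingClientRect())) {
            model.src = shirt.dataset.wornSrc!; // Dropped on the model: wear this shirt.
          } else {
            model.src = defaultModelSrc; // Dropped elsewhere: revert the model.
          }
          shirt.style.transform = ""; // Snap the shirt back to its layout position.
          shirt.style.zIndex = "";
          shirt.classList.remove("grabbed");
          shirt.removeEventListener("pointermove", onMove);
          shirt.removeEventListener("pointerup", onUp);
        };

        shirt.addEventListener("pointermove", onMove);
        shirt.addEventListener("pointerup", onUp);
      });
    });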

    I hope you find the interaction both fun and inspiring!

    Check out the GitHub repo here.



    Source link

  • How to Create Responsive and SEO-friendly WebGL Text

    How to Create Responsive and SEO-friendly WebGL Text



    Rendering text in WebGL opens up a whole new world of unique visual effects and creative possibilities that are often
    impossible to achieve with traditional HTML, CSS and JavaScript alone. By the end of this tutorial, we’ll have created
    WebGL-rendered text elements that perfectly mimic the underlying HTML structure. Since we’ll be taking an HTML-first
    approach, it will be responsive, SEO-friendly and accessible. We’ll be doing every step manually so you’ll gain a
    solid understanding of the principles behind merging HTML and WebGL, and text-specific stuff like how to translate CSS
    styles into the 3D world.

    We’ll be creating the below demo:

    We’ll start off with a standard styled HTML setup. Then we’ll recreate the text elements we have inside a 3D world.
    From there, we’ll position, scale and make the text responsive with the 3D space. Next, we’ll replicate the “mask
    reveal effect” in WebGL. And finally, we’ll apply some scroll-driven post processing effects to the scene.

    Below are the core steps we’ll follow to achieve the final result:

    1. Create the text as a HTML element and style it regularly using CSS
    2. Create a 3D world and recreate the text element within it
    3. Merge the 3D and 2D world, so that we can style the 3D text by using our viewport’s dimensions
    4. Sync the key properties like position, size and font — from the HTML element to the WebGL text element
    5. Hide the original HTML element
    6. Be left with only the 3D text, fully styled and positioned in sync with the hidden HTML structure
    7. Apply animations and post-processing to enhance our 3D scene

    Necessities and Prerequisites

    We’ll be using the Three.js library to create the 3D world, so you should already be familiar with its basics. For the
    creation of text meshes, we’ll be using the troika-three-text library, but you don’t have to be familiar with it
    beforehand. If you’ve used HTML, CSS and JavaScript and know the basics of Three.js, you’re good to go.

    Let’s get started.

    1. Creating the Regular HTML and Making it Responsive

    Before diving into the WebGL and Three.js implementation, we first need to create the HTML structure that we’ll later
    mimic in the 3D world. I’ve set up a very simple page with some quick responsive content — you can find the
    setup content
    in the demo repository under
    index.html
    and
    styles.css
    .

    HTML
    :

    <div class="content">
      <div class="container">
        <section class="section__heading">
          <h3 data-animation="webgl-text" class="text__2">THREE.JS</h3>
          <h2 data-animation="webgl-text" class="text__1">
            RESPONSIVE AND ACCESSIBLE TEXT
          </h2>
        </section>
        <section class="section__main__content">
          <p data-animation="webgl-text" class="text__2">
            THIS TEXT IS STYLED TO LOOK LIKE A TYPICAL BLOCK OF TEXT ON A STANDARD
            WEBSITE. BUT UNDER THE SURFACE, IT'S BEING RENDERED WITH WEBGL INSTEAD
            OF TRADITIONAL HTML.
          </p>
          <p data-animation="webgl-text" class="text__2">
            THIS OPENS THE DOOR TO CUSTOM SHADER EFFECTS AND INTERACTIONS THAT GO
            BEYOND WHAT'S POSSIBLE WITH TRADITIONAL HTML.
          </p>
          <p data-animation="webgl-text" class="text__2">
            WE KEEP THE UNDERLYING HTML STRUCTURE PRESENT IN THE DOM. RATHER THAN
            CREATING MESHES DIRECTLY IN THREE.JS, THE SCENE IS BUILT BY READING FROM
            THE EXISTING HTML CONTENT. THIS WAY, SCREEN READERS, SEARCH ENGINES, AND
            OTHER TOOLS CAN STILL INTERPRET THE PAGE AS EXPECTED.
          </p>
        </section>
        <section class="section__footer">
          <p data-animation="webgl-text" class="text__3">
            NOW GO CRAZY WITH THE SHADERS :)
          </p>
        </section>
      </div>
    </div>
    

    styles.css

    :root {
      --clr-text: #fdcdf9;
      --clr-selection: rgba(255, 156, 245, 0.3);
      --clr-background: #212720;
    }
    
    @font-face {
      font-family: "Humane";
      src: url("/fonts/Humane-Black.ttf") format("truetype");
      font-weight: 900;
      font-style: normal;
      font-display: swap;
    }
    
    @font-face {
      font-family: "Humane";
      src: url("/fonts/Humane-Bold.ttf") format("truetype");
      font-weight: 700;
      font-style: normal;
      font-display: swap;
    }
    
    @font-face {
      font-family: "Humane";
      src: url("/fonts/Humane-ExtraBold.ttf") format("truetype");
      font-weight: 800;
      font-style: normal;
      font-display: swap;
    }
    
    @font-face {
      font-family: "Humane";
      src: url("/fonts/Humane-ExtraLight.ttf") format("truetype");
      font-weight: 200;
      font-style: normal;
      font-display: swap;
    }
    
    @font-face {
      font-family: "Humane";
      src: url("/fonts/Humane-Light.ttf") format("truetype");
      font-weight: 300;
      font-style: normal;
      font-display: swap;
    }
    
    @font-face {
      font-family: "Humane";
      src: url("/fonts/Humane-Medium.ttf") format("truetype");
      font-weight: 500;
      font-style: normal;
      font-display: swap;
    }
    
    @font-face {
      font-family: "Humane";
      src: url("/fonts/Humane-Regular.ttf") format("truetype");
      font-weight: 400;
      font-style: normal;
      font-display: swap;
    }
    
    @font-face {
      font-family: "Humane";
      src: url("/fonts/Humane-SemiBold.ttf") format("truetype");
      font-weight: 600;
      font-style: normal;
      font-display: swap;
    }
    
    @font-face {
      font-family: "Humane";
      src: url("/fonts/Humane-Thin.ttf") format("truetype");
      font-weight: 100;
      font-style: normal;
      font-display: swap;
    }
    
    body {
      background: var(--clr-background);
    }
    
    canvas {
      position: fixed;
      top: 0;
      left: 0;
      width: 100vw;
      height: 100vh;
      pointer-events: none;
    }
    
    ::selection {
      background-color: var(--clr-selection);
      color: var(--clr-text);
    }
    
    ::-moz-selection {
      background-color: var(--clr-selection);
      color: var(--clr-text);
    }
    
    .text__1,
    .text__2,
    .text__3 {
      color: var(--clr-text);
      text-align: center;
      margin-block-start: 0;
      margin-block-end: 0;
    }
    
    .content {
      width: 100%;
      font-family: Humane;
      font-size: 0.825vw;
    
      @media (max-width: 768px) {
        font-size: 2vw;
      }
    }
    .container {
      display: flex;
      flex-direction: column;
      align-items: center;
    
      width: 70em;
      gap: 17.6em;
      padding: 6em 0;
    
      @media (max-width: 768px) {
        width: 100%;
      }
    }
    
    .container section {
      display: flex;
      flex-direction: column;
      align-items: center;
      height: auto;
    }
    
    .section__main__content {
      gap: 5.6em;
    }
    
    .text__1 {
      font-size: 19.4em;
      font-weight: 700;
      max-width: 45em;
    
      @media (max-width: 768px) {
        font-size: 13.979em;
      }
    }
    
    .text__2 {
      font-size: 4.9em;
      max-width: 7.6em;
      letter-spacing: 0.01em;
    }
    
    .text__3 {
      font-size: 13.979em;
      max-width: 2.4em;
    }
    

    A Few Key Notes about the Setup

    • The <canvas> element is set to cover the entire screen, fixed in place behind the main content, so a full-screen
      canvas always sits behind the page content.
    • All text elements intended for WebGL rendering are marked with data-animation="webgl-text" for clarity and easy
      selection when we begin scripting.

    The purpose of this setup is to function as the “placeholder” that we can mimic in our 3D implementation. So, it’s
    important to
    position and style your text at this stage
    to ensure it matches the final sizing and positioning that you want to achieve. All text formatting properties like
    font-size, letter-spacing, line-height etc. are the properties you want to focus on, because we’ll later read these
    computed styles directly from the DOM during the WebGL phase. Color is optional here, as we can handle text coloring
    later with shaders inside WebGL.

    That’s it for the HTML and CSS setup! It’s all we need for the foundation to move onto our JavaScript and WebGL
    implementation.

    2. Initial 3D World Setup

    Let’s move onto the JavaScript and WebGL implementation. I’ll be using TypeScript, but you can easily follow along
    with vanilla JavaScript if you prefer. I’m assuming you’re already familiar with the basics of Three.js, so I’ll focus
    on explaining the high-level setup rather than covering every detail.

    Below is the starter TypeScript and Three.js base that I’ll be using for this demo.

    // main.ts
    
    import Commons from "./classes/Commons";
    import * as THREE from "three";
    
    /**
     * Main entry-point.
     * Creates Commons and Scenes
     * Starts the update loop
     * Eventually creates Postprocessing and Texts.
      */
    class App {
      private commons!: Commons;
    
      scene!: THREE.Scene;
    
      constructor() {
        document.addEventListener("DOMContentLoaded", async () => {
          await document.fonts.ready; // Important to wait for fonts to load when animating any texts.
    
          this.commons = Commons.getInstance();
          this.commons.init();
    
          this.createScene();
          
          this.addEventListeners();
    
          this.update();
        });
      }
    
      private createScene() {
        this.scene = new THREE.Scene();
      }
    
      /**
       * The main loop handler of the App
       * The update function to be called on each frame of the browser.
       * Calls update on all other parts of the app
       */
      private update() {
        this.commons.update();
    
        this.commons.renderer.render(this.scene, this.commons.camera);
    
        window.requestAnimationFrame(this.update.bind(this));
      }
    
      private addEventListeners() {
        window.addEventListener("resize", this.onResize.bind(this));
      }
    
      private onResize() {
        this.commons.onResize();
      }
    }
    
    export default new App();
    
    // Commons.ts
    
    import { PerspectiveCamera, WebGLRenderer, Clock } from "three";
    
    import Lenis from "lenis";
    
    export interface Screen {
      width: number;
      height: number;
      aspect: number;
    }
    
    export interface Sizes {
      screen: Screen;
      pixelRatio: number
    }
    
    /**
     * Singleton class for Common stuff.
     * Camera
     * Renderer
     * Lenis
     * Time
     */
    export default class Commons {
      private constructor() {}
      
      private static instance: Commons;
    
      lenis!: Lenis;
      camera!: PerspectiveCamera;
      renderer!: WebGLRenderer;
    
      private time: Clock = new Clock();
      elapsedTime!: number;
    
      sizes: Sizes = {
        screen: {
          width: window.innerWidth,
          height: window.innerHeight,
          aspect: window.innerWidth / window.innerHeight,
        },
        pixelRatio: this.getPixelRatio(),
      };
    
      private distanceFromCamera: number = 1000;
    
      /**
       * Function to be called to either create Commons Singleton instance, or to return existing one.
       * TODO AFTER: Call instances init() function.
       * @returns Commons Singleton Instance.
       */
      static getInstance() {
        if (this.instance) return this.instance;
    
        this.instance = new Commons();
        return this.instance;
      }
    
      /**
       * Initializes all-things Commons. To be called after instance is set.
       */
      init() {
        this.createLenis();
        this.createCamera();
        this.createRenderer();
      }
    
      /**
       * Creating Lenis instance.
       * Sets autoRaf to true so we don't have to manually update Lenis on every frame.
       * Resets possible saved scroll position.
       */
      private createLenis() {
        this.lenis = new Lenis({ autoRaf: true, duration: 2 });
      }
    
      private createCamera() {
        this.camera = new PerspectiveCamera(
          70,
          this.sizes.screen.aspect,
          200,
          2000
        );
        this.camera.position.z = this.distanceFromCamera;
        this.camera.updateProjectionMatrix();
      }
    
      /**
       * createRenderer(): Creates the common WebGLRenderer to be used.
       */
      private createRenderer() {
        this.renderer = new WebGLRenderer({
          alpha: true, // Sets scene background to transparent, so our body background defines the background color
        });
    
        this.renderer.setSize(this.sizes.screen.width, this.sizes.screen.height);
    
        this.renderer.setPixelRatio(this.sizes.pixelRatio);
    
    	  // Creating canvas element and appending to body element.
        document.body.appendChild(this.renderer.domElement); 
      }
    
      /**
       * Single source of truth to get pixelRatio.
       */
      getPixelRatio() {
        return Math.min(window.devicePixelRatio, 2);
      }
    
      /**
       * Resize handler function is called from the entry-point (main.ts)
       * Updates the Common screen dimensions.
       * Updates the renderer.
       * Updates the camera.
       */
      onResize() {
        this.sizes.screen = {
          width: window.innerWidth,
          height: window.innerHeight,
          aspect: window.innerWidth / window.innerHeight,
        };
        this.sizes.pixelRatio = this.getPixelRatio();
    
        this.renderer.setSize(this.sizes.screen.width, this.sizes.screen.height);
        this.renderer.setPixelRatio(this.sizes.pixelRatio);
    
        this.onResizeCamera();
      }
    
      /**
       * Handler function that is called from onResize handler.
       * Updates the perspective camera with the new adjusted screen dimensions
       */
      private onResizeCamera() {
        this.camera.aspect = this.sizes.screen.aspect;
        this.camera.updateProjectionMatrix();
      }
    
      /**
       * Update function to be called from entry-point (main.ts)
       */
      update() {
        this.elapsedTime = this.time.getElapsedTime();
      }
    }
    

    A Note About Smooth Scroll

    When syncing HTML and WebGL worlds, you should use a custom scroll. Native browser scrolling updates the scroll
    position at irregular intervals and thus cannot guarantee frame-perfect updates with our requestAnimationFrame loop
    and our WebGL world, which causes jittery and unsynchronized movement.

    By integrating a custom scroll (Lenis in this case), we ensure our scroll updates perfectly match the frame updates of
    our WebGL world.

    Right now we are seeing an empty 3D world, continuously being rendered.

    We’re only missing one thing to get something visible in our 3D world: the creation of the text elements. So let’s
    move onto creating our WebGLText class next.

    3. Creating WebGLText Class and Texts Meshes

    For the creation of the text meshes, we’ll be using
    troika-three-text
    library.

    npm i troika-three-text

    We’ll now create a reusable

    WebGLText
    class

    . This will handle turning each HTML element into a 3D text mesh, using Troika and our Three.js scene.

    Here’s the basic setup:

    // WebGLText.ts
    
    import Commons from "./Commons";
    import * as THREE from "three";
    
    // @ts-ignore
    import { Text } from "troika-three-text";
    
    interface Props {
      scene: THREE.Scene;
      element: HTMLElement;
    }
    
    export default class WebGLText {
      commons: Commons;
    
      scene: THREE.Scene;
      element: HTMLElement;
    
      computedStyle: CSSStyleDeclaration;
      font!: string; // Path to our .ttf font file.
      bounds!: DOMRect;
      color!: THREE.Color;
      material!: THREE.ShaderMaterial;
      mesh!: Text;
    
      // We assign the correct font based on our element's font weight from here
      weightToFontMap: Record<string, string> = {
        "900": "/fonts/Humane-Black.ttf",
        "800": "/fonts/Humane-ExtraBold.ttf",
        "700": "/fonts/Humane-Bold.ttf",
        "600": "/fonts/Humane-SemiBold.ttf",
        "500": "/fonts/Humane-Medium.ttf",
        "400": "/fonts/Humane-Regular.ttf",
        "300": "/fonts/Humane-Light.ttf",
        "200": "/fonts/Humane-ExtraLight.ttf",
        "100": "/fonts/Humane-Thin.ttf",
      };
      
      private y: number = 0; // Scroll-adjusted bounds.top
      
      private isVisible: boolean = false;
    
      constructor({ scene, element }: Props) {
        this.commons = Commons.getInstance();
    
        this.scene = scene;
        this.element = element;
    
        this.computedStyle = window.getComputedStyle(this.element); // Saving initial computed style.
      }
    }
    

    We have access to the Text class from Troika, which allows us to create text meshes and apply styling to them using
    familiar properties like fontSize, letterSpacing, and font. I’ll cover everything you need to style your text
    responsively in this tutorial, but I encourage you to take a look at the full documentation and its possibilities
    here.

    Troika doesn’t ship with TypeScript definitions out of the box, so if you’re using TS, you can quickly get around this
    by creating a type declaration file in the root of your project. It’s not pretty, but it gets the job done while
    keeping TypeScript happy.

    // troika.d.ts
    
    declare module "troika-three-text" {
      const value: any;
      export default value;
    }

    Let’s start by creating new methods called createFont(), createColor() and createMesh().

    createFont()
    : Selects the appropriate .ttf file based on the DOM element’s font-weight. If a match isn’t found, we fall back to
    the regular weight. Adjust the mapping to match your own font files and multiple font families if needed.

    // WebGLText.ts 
    
    private createFont() {
        this.font =
          this.weightToFontMap[this.computedStyle.fontWeight] ||
          "/fonts/Humane-Regular.ttf";
    }

    createColor()
    : Converts the computed CSS color into a THREE.Color instance:

    // WebGLText.ts 
    
    private createColor() {
        this.color = new THREE.Color(this.computedStyle.color);
    }

    createMesh():
    Instantiates the text mesh and sets some basic properties. Copies the text’s inner text and sets it onto the mesh.
    Adds the mesh to our Three.js scene. We anchor the text from the left-center to match typical HTML layout
    expectations.

    // WebGLText.ts 
    private createMesh() {
      this.mesh = new Text();
    
      this.mesh.text = this.element.innerText; // Copying HTML content over to the mesh
      this.mesh.font = this.font;
    
      // Anchor the text to the left-center (instead of center-center)
      this.mesh.anchorX = "0%";
      this.mesh.anchorY = "50%";
    
      this.mesh.color = this.color;
    
      this.scene.add(this.mesh);
    }

    ⚠️ When copying text contents over to the Mesh, avoid using innerHTML or textContent and use innerText instead as it
    gives the most layout-accurate and consistent results.

    setStaticValues():
    Let’s also create a small setStaticValues() method which will set the critical properties of our text mesh based on
    the computedStyle.

    For now, we set values like the font size based on the computed CSS. We’ll expand this as we sync more styles down the line.
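    A minimal first version might look something like the following; only the font size is copied over for now, and we’ll flesh it out later once we start syncing more styles:

    // WebGLText.ts

    private setStaticValues() {
      const { fontSize } = this.computedStyle;

      // Copy the computed pixel font size onto the Troika text mesh.
      this.mesh.fontSize = window.parseFloat(fontSize);
    }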

    We want to call all these methods in the constructor like this:

    // WebGLText.ts 
     constructor({ scene, element }: Props) {
      this.commons = Commons.getInstance();
    
      this.scene = scene;
      this.element = element;
    
      this.computedStyle = window.getComputedStyle(this.element); // Saving initial computed style.
    
      this.createFont();
      this.createColor();
      this.createMesh();
      this.setStaticValues();
    }

    Instantiating Text Elements from DOM

    Finally, let’s update our App class (main.ts), and hook this all up by scanning for DOM elements with a
    data-animation="webgl-text" attribute — creating a WebGLText instance for each one:

    // main.ts
    
    texts!: Array<WebGLText>;
    
    // ...
    
    private createWebGLTexts() {
      const texts = document.querySelectorAll('[data-animation="webgl-text"]');
    
      if (texts) {
        this.texts = Array.from(texts).map((el) => {
          const newEl = new WebGLText({
            element: el as HTMLElement,
            scene: this.scene,
          });
    
          return newEl;
        });
      }
    }
    

    Make sure to call this method in the constructor on initialization. This will populate our scene with styled text
    meshes based on our DOM content.

    That’s all we need to have our text meshes visible, it’s not the prettiest sight to behold, but at least we got
    everything working:

    Next Challenge: Screen vs. 3D Space Mismatch

    Even though we copy the font size directly from the DOM, the scale looks different in 3D. That’s because
    WebGL units don’t map 1:1 with screen pixels
    , and they operate in different coordinate systems. This mismatch will become even more obvious if we start
    positioning and animating elements.

    To get true visual parity between our DOM elements and WebGL text, we need to bridge the gap between screen space and
    3D space. Let’s tackle that next so our text sizes and positions actually match with what we see in the browser.

    4. Syncing Dimensions

    The major problem when syncing HTML and WebGL dimensions is that things between them aren’t exactly pixel-perfect.
    This is because the DOM and WebGL don’t “speak the same units” by default.

    • Web browsers work in screen pixels.
    • WebGL uses arbitrary units

    Our goal is simple:

    💡 Make one unit in the WebGL scene equal one pixel on the screen.

    To achieve this, we’ll adjust the camera’s field of view (FOV) so that the visible area through the camera exactly matches
    the dimensions of the browser window in pixels.

    So, we’ll create a
    syncDimensions()
    function under our Commons class, which calculates our camera’s field of view such that 1 unit in the WebGL scene
    corresponds to 1 pixel on the screen —
    at a given distance from the camera.

     // Commons.ts 
    /**
      * Helper function that is called upon creation and resize
      * Updates the camera's fov according to the new dimensions such that the window's pixels match with that of WebGL scene
      */
    private syncDimensions() {
      this.camera.fov =
        2 *
        Math.atan(this.sizes.screen.height / 2 / this.distanceFromCamera) *
        (180 / Math.PI);
    }

    This function will be called once when we create the camera, and every time that the screen is resized.

    
    //Commons.ts
    
    private createCamera() {
      this.camera = new PerspectiveCamera(
        70,
        this.sizes.screen.aspect,
        200,
        2000
      );
      this.camera.position.z = this.distanceFromCamera;
      this.syncDimensions(); // Syncing dimensions
      this.camera.updateProjectionMatrix();
    }
    
    // ...
    
    private onResizeCamera() {
      this.syncDimensions(); // Syncing dimensions
    
      this.camera.aspect = this.sizes.screen.aspect;
      this.camera.updateProjectionMatrix();
    }

    Let’s break down what’s actually going on here using the image below:

    We know:

    • The height of the screen
    • The distance from camera (Z)
    • The FOV of the camera is the vertical angle (fov y in the image)

    So our main goal is to set how wide (vertical angle) we see according to our screen height.

    Because the Z distance from the camera and half of the screen height form a right triangle, we can solve for the
    angle using some basic trigonometry, and compute the FOV using the inverse tangent (atan) of that triangle.

    Step-by-step Breakdown of the Formula

    this.sizes.screen.height / 2

    → This gives us half the screen’s pixel height — the opposite side of our triangle.

    this.distanceFromCamera

    → This is the adjacent side of the triangle — the distance from the camera to the 3D scene.

    Math.atan(opposite / adjacent)

    → Calculates half of the vertical FOV (in radians).

    *2

    → Since atan only gives half of the angle, we multiply it by 2 to get the full FOV.

    * (180 / Math.PI)

    → Converts the angle from radians to degrees (Three.js expects degrees for PerspectiveCamera’s fov)

    So the final formula comes down to:

    this.camera.fov =
      2 *
      Math.atan(this.sizes.screen.height / 2 / this.distanceFromCamera) *
      (180 / Math.PI);

    That’s all we need to sync our dimensions, and this setup ensures that 1 unit in WebGL = 1 pixel on screen.
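    For example, with a 1080 px tall viewport and distanceFromCamera = 1000, this gives fov = 2 * atan(540 / 1000) * (180 / Math.PI) ≈ 56.7°, and a plane sitting 1000 units from the camera then spans exactly 1080 units vertically, i.e. one unit per pixel.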

    Let’s move back to the text implementation.

    5. Setting Text Properties and Positioning

    Now that we’ve synced the WebGL scene to our screen’s pixel dimensions, we can start mapping HTML styles to our 3D
    text.

    If everything’s wired up correctly, you should see that the font size of the WebGL-rendered text matches the size of
    the underlying HTML, although the positioning is still off.

    Let’s sync more styling properties and positioning.

    Before we can position the 3D text, we need to get the DOM element’s position and size. We’ll create a new method in
    the WebGLText class called
    createBounds()
    ,
    and use the browser’s built-in getBoundingClientRect() method:

    // WebGLText.ts
    
    private createBounds() {
      this.bounds = this.element.getBoundingClientRect();
      this.y = this.bounds.top + this.commons.lenis.actualScroll;
    }

    And call this in the constructor:

      // WebGLText.ts
    
    constructor({ scene, element }: Props) {
      this.commons = Commons.getInstance();
    
      this.scene = scene;
      this.element = element;
    
      this.computedStyle = window.getComputedStyle(this.element); // Saving initial computed style.
    
      this.createFont();
      this.createColor();
      this.createBounds(); // Creating bounds
      this.createMesh();
      this.setStaticValues();
    }

    Next, we’ll pull important typographic properties from the DOM (computed style) and pass them to the 3D mesh, so that
    it behaves like our native HTML text. (Again, you can see the full documentation and possible properties of troika
    here
    ). Below I’ve included the most important ones.

      // WebGLText.ts 
    
    private setStaticValues() {
      const { fontSize, letterSpacing, lineHeight, whiteSpace, textAlign } =
        this.computedStyle;
    
      const fontSizeNum = window.parseFloat(fontSize);
    
      this.mesh.fontSize = fontSizeNum;
    
      this.mesh.textAlign = textAlign;
    
      // Troika defines letter spacing in em's, so we convert to them
      this.mesh.letterSpacing = parseFloat(letterSpacing) / fontSizeNum;
    
      // Same with line height
      this.mesh.lineHeight = parseFloat(lineHeight) / fontSizeNum;
    
      // Important to define maxWidth for the mesh, so that our text doesn't overflow
      this.mesh.maxWidth = this.bounds.width;
    
      // Match whiteSpace behavior (e.g., 'pre', 'nowrap')
      this.mesh.whiteSpace = whiteSpace;
    }

    Troika accepts some of the properties in local em units, so we have to convert the pixel values into ems by dividing
    them by the font size.
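    For example, if .text__2 resolves to a computed font size of 49px, its 0.01em letter-spacing computes to 0.49px, and 0.49 / 49 hands Troika back the original 0.01em.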

    Also, it’s important to set a maximum width (in pixels) to constrain the mesh’s layout — this prevents text from
    overflowing and ensures proper text wrapping.

    And finally, let’s create an
    update()
    function to be called on each frame that consistently positions our mesh according to the underlying DOM position.

    This is what it looks like:

    //WebGLText.ts
    
    update() {
      this.mesh.position.y =
        -this.y +
        this.commons.lenis.animatedScroll +
        this.commons.sizes.screen.height / 2 -
        this.bounds.height / 2;
    
      this.mesh.position.x =
        this.bounds.left - this.commons.sizes.screen.width / 2;
    }

    Breakdown:

    • -this.y moves the mesh down by the element’s absolute page offset (its distance from the top of the document).
    • lenis.animatedScroll moves it back up by the live animated scroll position.
    • Together, they give the element’s current position relative to the viewport.

    Since our WebGL coordinate system is centered in the middle of the screen (Y = 0 is center), we also:

    • Add half the screen height (to convert from DOM top-left origin to WebGL center origin)
    • Subtract half the text height to vertically center the text
    • For the X position, subtract half the screen width (converting from the DOM’s left origin to WebGL’s centered origin)

    Now, we call this update function for each of the text instances in our entry-file:

      // main.ts
    
    private update() {
      this.commons.update();
    
      this.commons.renderer.render(this.scene, this.commons.camera);
    
    
      if (this.texts) {
        this.texts.forEach((el) => el.update());
      }
    
      window.requestAnimationFrame(this.update.bind(this));
    }

    And now, the
    texts will perfectly follow DOM counterparts
    , even as the user scrolls.

    Let’s finalize our base text class implementation before diving into effects:

    Resizing

    We need to ensure that our WebGL text updates correctly on window resize events. This means
    recreating the computedStyle, bounds, and static values
    whenever the window size changes.

    Here’s the resize event handler:

     // WebGLText.ts
    
    onResize() {
      this.computedStyle = window.getComputedStyle(this.element);
      this.createBounds();
      this.setStaticValues();
    }

    And, call it in the entry-point for each of the text instances:

      // main.ts
    
    private onResize() {
      this.commons.onResize();
    
      // Resizing texts
      if (this.texts) {
        this.texts.forEach((el) => el.onResize());
      }
    }

    Once everything is working responsively and perfectly synced with the DOM, we can finally
    hide the original HTML text by setting it transparent
    — but we’ll keep it in place so it’s still selectable and accessible to the user.

    // WebGLText.ts
    
    this.createFont();
    this.createColor();
    this.createBounds();
    this.createMesh();
    this.setStaticValues();
    
    this.element.style.color = "transparent"; // Hide DOM element

    We should now have our perfectly responsive text meshes, and the user only sees the rendered WebGL text, while the DOM
    element remains fully intact for accessibility.

    Let’s add some effects!

    6. Adding a Custom Shader and Replicating Mask Reveal Animations

    Troika also lets us use custom shader materials for meshes, giving us the flexibility to create complex effects beyond
    just setting colors.

    Let’s set up our initial custom shaders:

    Fragment Shader:

    // text.frag
    
    uniform vec3 uColor;
    varying vec2 vUv;
    
    void main() {
      gl_FragColor = vec4(uColor, 1.0); // Applying our custom color.
    }

    The fragment shader defines the color of the text using the uColor uniform.

    Vertex Shader:

    // text.vert
    varying vec2 vUv;
    
    void main() {
      vUv = uv;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }

    The vertex shader passes the texture coordinates (uv) to the fragment shader for the text rendering.

    Shader File Imports using Vite

    To handle shader files more easily, we can use the
    vite-plugin-glsl
    plugin together with Vite to directly import shader files like .frag and .vert in code:

    npm i vite-plugin-glsl -D
    // vite.config.ts
    
    import { defineConfig } from "vite";
    import glsl from "vite-plugin-glsl";
    
    export default defineConfig({
      plugins: [
        glsl({
          include: [
            "**/*.glsl",
            "**/*.wgsl",
            "**/*.vert",
            "**/*.frag",
            "**/*.vs",
            "**/*.fs",
          ],
          warnDuplicatedImports: true,
          defaultExtension: "glsl",
          watch: true,
          root: "/",
        }),
      ],
    });
    

    If you’re using TypeScript, you also need to declare the modules for shader files so TypeScript can understand how to
    import them:

    // shaders.d.ts
    
    declare module "*.frag" {
      const value: string;
      export default value;
    }
    
    declare module "*.vert" {
      const value: string;
      export default value;
    }
    
    declare module "*.glsl" {
      const value: string;
      export default value;
    }

    Creating Custom Shader Materials

    Let’s now create our custom ShaderMaterial and apply it to our mesh:

    // WebGLText.ts
    
    // Importing shaders
    import fragmentShader from "../../shaders/text/text.frag";
    import vertexShader from "../../shaders/text/text.vert";
    
    //...
    
    this.createFont();
    this.createColor();
    this.createBounds();
    this.createMaterial(); // Creating material
    this.createMesh();
    this.setStaticValues();
    
    //...
    
    private createMaterial() {
      this.material = new THREE.ShaderMaterial({
        fragmentShader,
        vertexShader,
        uniforms: {
          uColor: new THREE.Uniform(this.color), // Passing our color to the shader
        },
      });
    }

    In the
    createMaterial()
    method, we define the
    ShaderMaterial
    using the imported shaders and pass in the uColor uniform, which allows us to dynamically control the color of the
    text based on our DOM-element.

    And now, instead of setting the color directly on the default mesh material, we apply our new custom material:

      // WebGLText.ts
    
    private createMesh() {
      this.mesh = new Text();
    
      this.mesh.text = this.element.innerText; // Always use innerText (not innerHTML or textContent).
      this.mesh.font = this.font;
    
      this.mesh.anchorX = "0%";
      this.mesh.anchorY = "50%";
    
      this.mesh.material = this.material; //Using custom material instead of color
    }

    At this point, we are using our custom shader material, but so far, nothing in our output has changed. Let’s now set
    up show and hide animations using our custom shader, and replicate the mask reveal effect.

    Setting up Reveal Animations

    We’ll create an animation that uses a progress uniform (uProgress) to control the visibility and reveal progress of
    the text. The animation will be controlled using the motion library.

    First, we must install
    motion
    and import its
    animate
    and
    inView
    functions to our WebGLText class.

    npm i motion
    // WebGLText.ts
    
    import { inView, animate } from "motion";

    Now, let’s configure our class so that when the text steps into view,
    the show() function is called
    , and when it steps away,
    the hide() function is called
    . These methods also control the current visibility variable
    this.isVisible
    . These functions will control the uProgress variable, and animate it between 0 and 1.

    For this, we also must setup an addEventListeners() function:

     // WebGLText.ts
    
    /**
      * Inits visibility tracking using motion's inView function.
      * Show is called when the element steps into view, and hide is called when the element steps out of view
      */
    private addEventListeners() {
      inView(this.element, () => {
        this.show();
    
        return () => this.hide();
      });
    }
    
    show() {
      this.isVisible = true;
    
      animate(
        this.material.uniforms.uProgress,
        { value: 1 },
        { duration: 1.8, ease: [0.25, 1, 0.5, 1] }
      );
    }
    
    hide() {
      animate(
        this.material.uniforms.uProgress,
        { value: 0 },
        { duration: 1.8, onComplete: () => (this.isVisible = false) }
      );
    }

    Just make sure to call addEventListeners() in your constructor after setting up the class.

    Updating the Shader Material for Animation

    We’ll also add two additional uniform variables in our material for the animations:

    • uProgress
      : Controls the reveal progress (from 0 to 1).
    • uHeight
      : Used by the vertex shader to calculate vertical position offset.

    Updated
    createMaterial()
    method:

     // WebGLText.ts
    
    private createMaterial() {
      this.material = new THREE.ShaderMaterial({
        fragmentShader,
        vertexShader,
        uniforms: {
          uProgress: new THREE.Uniform(0),
          uHeight: new THREE.Uniform(this.bounds.height),
          uColor: new THREE.Uniform(this.color),
        },
      });
    }

    Since the uHeight is dependent on bounds, we also want to update the uniform variable upon resizing:

      // WebGLText.ts
    
    onResize() {
      this.computedStyle = window.getComputedStyle(this.element);
      this.createBounds();
      this.setStaticValues();
      this.material.uniforms.uHeight.value = this.bounds.height;
    }

    We now have the text class instance automatically calling show() and hide(), and animating the uProgress according to
    the visibility of our underlying DOM-element.

    For performance, you might want to update the update() method to only calculate a new position when the mesh is
    visible:

    update() {
      if (this.isVisible) {
        this.mesh.position.y =
          -this.y +
          this.commons.lenis.animatedScroll +
          this.commons.sizes.screen.height / 2 -
          this.bounds.height / 2;
    
        this.mesh.position.x =
          this.bounds.left - this.commons.sizes.screen.width / 2;
      }
    }

    Mask Reveal Theory and Shader Implementation

    Creating a mask reveal effect with custom shaders in WebGL is surprisingly simple when we break it down into two
    separate movements: one happening in the fragment shader and the other in the vertex shader. You might’ve seen this
    effect happen in WebGL on the page of
    Zajno
    , for example.

    Instead of overcomplicating the concept with complex masks or thinking about “lifting it up” behind a window (as we do
    in traditional HTML), we can think of it as two distinct actions that work together.

    1. Fragment Shader
      : We clip the text vertically, revealing it gradually from top to bottom.
    2. Vertex Shader
      : We translate the text’s position from the bottom to the top by its height.

    Together these two movements create the illusion of the text lifting itself up from behind a mask.

    Let’s update our fragment shader code:

    //text.frag
    
    uniform float uProgress; // Our progress value between 0 and 1
    
    uniform vec3 uColor;
    varying vec2 vUv;
    
    void main() {
      // Calculate the reveal threshold (top to bottom reveal)
      float reveal = 1.0 - vUv.y;
      
      // Discard fragments above the reveal threshold based on progress
      if (reveal > uProgress) discard;
    
      // Apply the color to the visible parts of the text
      gl_FragColor = vec4(uColor, 1.0);
    }
    
    • When uProgress is 0, the mesh is fully clipped out, and nothing is visible
    • When uProgress increases towards 1, the mesh reveals itself from top to bottom.

    For the vertex shader, we can simply pass the new uniform called uHeight, which stands for the height of our
    DOM-element (this.bounds.height), and translate the output vertically according to it and uProgress.

    //text.vert
    
    uniform float uProgress;
    uniform float uHeight; // Total height of the mesh passed in from JS
    
    varying vec2 vUv;
    
    void main() {
      vUv = uv;
      
      vec3 transformedPosition = position;
    
      // Push the mesh upward as it reveals
      transformedPosition.y -= uHeight * (1.0 - uProgress);
      
      gl_Position = projectionMatrix * modelViewMatrix * vec4(transformedPosition, 1.0);
    }
    • uHeight: Total height of the DOM-element (and mesh), passed in from JS.
    • When uProgress is 0, the mesh is fully pushed down.
    • As uProgress reaches 1, it resolves to its natural position.

    Now we should have a scene that animates beautifully on scroll, where the texts reveal themselves as they come into
    view, just like regular HTML.

    To spice things up, let’s add some scroll-velocity based post processing effects to our scene as the final step!

    7. Adding Post-processing

    Now that we’ve built our animated WebGL text with custom shaders and scroll-triggered reveals, we can push the visuals
    further with
    post-processing
    .

    Post-processing allows us to apply full-screen visual effects after the scene has been rendered. This is done by
    passing the final image through a series of custom shader passes.

    So, in this final section, we’ll:

    • Set up a PostProcessing class using Three.js’s EffectComposer
    • Add a custom RGB shift and wave distortion effect
    • Drive the distortion strength dynamically using the scroll velocity from our Lenis custom scroll instance

    Creating a PostProcessing class with EffectComposer

    Let’s create a PostProcessing class that will be initialized from our entry-point, and which will handle everything
    regarding postprocessing using Three.js’s EffectComposer. Read more about the EffectComposer class
    here in the Three.js documentation
    . We’ll also create new fragment and vertex shaders for the postprocessing class to use.

    // PostProcessing.ts
    
    import {
      EffectComposer,
      RenderPass,
      ShaderPass,
    } from "three/examples/jsm/Addons.js";
    import Commons from "./Commons";
    import * as THREE from "three";
    
    // Importing postprocessing shaders
    import fragmentShader from "../../shaders/postprocessing/postprocessing.frag";
    import vertexShader from "../../shaders/postprocessing/postprocessing.vert";
    
    interface Props {
      scene: THREE.Scene;
    }
    
    export default class PostProcessing {
      // Scene and utility references
      private commons: Commons;
      private scene: THREE.Scene;
    
      private composer!: EffectComposer;
    
      private renderPass!: RenderPass;
      private shiftPass!: ShaderPass;
    
      constructor({ scene }: Props) {
        this.commons = Commons.getInstance();
    
        this.scene = scene;
    
        this.createComposer();
        this.createPasses();
      }
    
      private createComposer() {
        this.composer = new EffectComposer(this.commons.renderer);
        this.composer.setPixelRatio(this.commons.sizes.pixelRatio);
        this.composer.setSize(
          this.commons.sizes.screen.width,
          this.commons.sizes.screen.height
        );
      }
    
      private createPasses() {
        // Creating Render Pass (final output) first.
        this.renderPass = new RenderPass(this.scene, this.commons.camera);
        this.composer.addPass(this.renderPass);
    
        // Creating Post-processing shader for wave and RGB-shift effect.
        const shiftShader = {
          uniforms: {
            tDiffuse: { value: null },      // Default input from previous pass
            uVelocity: { value: 0 },        // Scroll velocity input
            uTime: { value: 0 },            // Elapsed time for animated distortion
          },
          vertexShader,
          fragmentShader,
        };
    
        this.shiftPass = new ShaderPass(shiftShader);
        this.composer.addPass(this.shiftPass);
      }
    
      /**
       * Resize handler for EffectComposer, called from entry-point.
       */
      onResize() {
        this.composer.setPixelRatio(this.commons.sizes.pixelRatio);
        this.composer.setSize(
          this.commons.sizes.screen.width,
          this.commons.sizes.screen.height
        );
      }
    
      update() {
        this.shiftPass.uniforms.uTime.value = this.commons.elapsedTime;
    
        this.composer.render();
      }
    }
    

    Since we don’t have our postprocessing shaders created yet, make sure you create placeholder postprocessing.frag and
    postprocessing.vert shaders so the imports don’t fail.

    Example placeholders below:

    //postprocessing.frag
    
    uniform sampler2D tDiffuse;
    uniform float uVelocity;
    uniform float uTime;
    
    varying vec2 vUv;
    
    void main() {
        gl_FragColor = texture2D(tDiffuse, vUv);
    }
    
    //postprocessing.vert
    varying vec2 vUv;
    
    void main() {
        vUv = uv;
            
        gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }

    Breakdown of the PostProcessing class

    Constructor:
    Initializes the class by storing the provided scene, grabbing the shared Commons instance, and then calling
    createComposer()
    and
    createPasses()
    .

    createComposer():
    Sets up the EffectComposer with the correct pixel ratio and canvas size:

    • EffectComposer wraps the WebGL renderer and allows chaining of multiple render passes.
    • Sized according to current viewport dimensions and pixel ratio

    createPasses():
    This method sets up all rendering passes applied to the scene.

    • RenderPass
      : The first pass that simply renders the scene with the main camera as regular.
    • ShaderPass (shiftPass)
      : A custom full-screen shader pass that we’ll create and which will create the RGB shift and wavy distortion
      effects.

    update():
    Method called on every frame. Updates the uTime uniform so we can animate effects over time, and renders the final
    post-processed image using
    composer.render()

    Initializing Post-processing

    To wire the post-processing system into our existing app, we update our main.ts:

      //main.ts
    private postProcessing!: PostProcessing;
    
    //....
    
    constructor() {
      document.addEventListener("DOMContentLoaded", async () => {
        await document.fonts.ready;
    
        this.commons = Commons.getInstance();
        this.commons.init();
    
        this.createScene();
        this.createWebGLTexts();
        this.createPostProcessing(); // Creating post-processing
        this.addEventListeners();
    
        this.update();
      });
    }
    
    // ...
    
    private createPostProcessing() {
      this.postProcessing = new PostProcessing({ scene: this.scene });
    }
    
    // ...
    
    private update() {
      this.commons.update();
      
      if (this.texts) {
        this.texts.forEach((el) => el.update());
      }
      
      // Don't need line below as we're rendering everything using EffectComposer.
      // this.commons.renderer.render(this.scene, this.commons.camera);
      
      this.postProcessing.update(); // Post-processing class handles rendering of output from now on
    
      
      window.requestAnimationFrame(this.update.bind(this));
    }
    
    
    private onResize() {
      this.commons.onResize();
    
      if (this.texts) {
        this.texts.forEach((el) => el.onResize());
      }
    
      this.postProcessing.onResize(); // Resize post-processing
    }

    So in the new update() function, instead of rendering directly from there, we now hand off rendering responsibility to
    the PostProcessing class.

    Creating Post-processing Shader and Wiring Scroll Velocity

    Next, we modify the PostProcessing class so that the post-processing fragment shader receives the current scroll velocity from Lenis.

    For this, I’m adding two new properties, lerpedVelocity and lerpFactor, which control the smoothed-out velocity. The raw velocity values from Lenis can be spiky and sudden, especially with fast scrolling or scroll jumps, and passing them straight into the shader produces a really jittery output.

    private lerpedVelocity = 0; // Smoothed scroll velocity for post-processing.
    private lerpFactor = 0.05; // Controls how quickly lerpedVelocity follows the real velocity
    
    // ...
    
    update() {
        this.shiftPass.uniforms.uTime.value = this.commons.elapsedTime;
    
      // Reading the current velocity from the Lenis instance.
      const targetVelocity = this.commons.lenis.velocity;
    
      // We use the lerped velocity as the actual velocity for the shader, just for a smoother experience.
      this.lerpedVelocity +=
        (targetVelocity - this.lerpedVelocity) * this.lerpFactor;
    
      this.shiftPass.uniforms.uVelocity.value = this.lerpedVelocity;
    
      this.composer.render();
    }
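
    A note on lerpFactor: values closer to 1 make the smoothed velocity track the raw value almost immediately, while values closer to 0 give a slower, floatier response. Because this lerp runs once per frame, the perceived smoothing also depends on frame rate; if that matters for your project, one option is an exponential decay based on frame time, sketched below (the deltaTime property on Commons is an assumption — adapt it to however you track frame time).

    // Frame-rate-independent variant (sketch):
    const smoothing = 3.0; // higher = snappier response
    this.lerpedVelocity +=
      (targetVelocity - this.lerpedVelocity) *
      (1 - Math.exp(-smoothing * this.commons.deltaTime)); // deltaTime assumed, in seconds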

    Post-processing Shaders

    For the vertex shader, we keep everything default and simply pass the texture coordinates on to the fragment shader.

    //postprocessing.vert
    
    varying vec2 vUv;
    
    void main() {
      vUv = uv;
            
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }

    And for the fragment shader:

    //postprocessing.frag
    
    uniform sampler2D tDiffuse;
    uniform float uVelocity;
    uniform float uTime;
    
    varying vec2 vUv;
    
    void main() {
      vec2 uv = vUv;
      
      // Calculating wave distortion based on velocity
      float waveAmplitude = uVelocity * 0.0009;
      float waveFrequency = 4.0 + uVelocity * 0.01;
      
      // Applying wave distortion to the UV coordinates
      vec2 waveUv = uv;
      waveUv.x += sin(uv.y * waveFrequency + uTime) * waveAmplitude;
      waveUv.y += sin(uv.x * waveFrequency * 5. + uTime * 0.8) * waveAmplitude;
      
      // Applying the RGB shift to the wave-distorted coordinates
      float r = texture2D(tDiffuse, vec2(waveUv.x, waveUv.y + uVelocity * 0.0005)).r;
      vec2 gb = texture2D(tDiffuse, waveUv).gb;
    
      gl_FragColor = vec4(r, gb, r);
    }

    Breakdown

    // Calculating wave distortion based on velocity
    float waveAmplitude = uVelocity * 0.0009;
    float waveFrequency = 4.0 + uVelocity * 0.01;

    Wave amplitude controls how strongly the wave effect distorts the screen according to our scroll velocity.

    Wave frequency controls how densely the waves repeat across the screen.
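
    For example, at a lerped velocity of 20, waveAmplitude works out to 20 × 0.0009 = 0.018 (in UV space, i.e. 1.8% of the screen) and waveFrequency to 4.0 + 20 × 0.01 = 4.2.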

    Next, we distort the UV coordinates using sine functions and the uTime uniform:

    // Applying wave distortion to the UV coordinates
    vec2 waveUv = uv;
    waveUv.x += sin(uv.y * waveFrequency + uTime) * waveAmplitude;
    waveUv.y += sin(uv.x * waveFrequency * 5. + uTime * 0.8) * waveAmplitude;

    The red channel is offset slightly based on the velocity, creating the RGB shift effect.

    // Applying the RGB shift to the wave-distorted coordinates
    float r = texture2D(tDiffuse, vec2(waveUv.x, waveUv.y + uVelocity * 0.0005)).r;
    vec2 gb = texture2D(tDiffuse, waveUv).gb;
    
    gl_FragColor = vec4(r, gb, r);

    This will create a subtle color separation in the final image that shifts according to our scroll velocity.

    Finally, we combine the shifted red channel with the unshifted green and blue channels into the output color (the red sample is also reused for the alpha component).

    8. Final Result

    And there you have it! We’ve created a responsive text scene with scroll-triggered mask reveal animations and wavy, RGB-shifted post-processing.

    This setup provides a solid, modular foundation for building more advanced WebGL text effects. If you’re curious to explore further, consider adding particles, fluid simulations, audio reactivity, or more complex materials and shaders. If you’re interested in breakdowns of any of these, feel free to reach out on X.

    Thanks so much for following along 🙌




  • Motion Highlights: Rive Special | Codrops
