Category: Programmers

  • When Cells Collide: The Making of an Organic Particle Experiment with Rapier & Three.js

    When Cells Collide: The Making of an Organic Particle Experiment with Rapier & Three.js



    Every project begins with a spark of curiosity. It often emerges from exploring techniques outside the web and imagining how they might translate into interactive experiences. In this case, inspiration came from a dive into particle simulations.

    The Concept

    The core idea for this project came after watching a tutorial on creating cell-like particles using the xParticles plugin for Cinema 4D. The team often draws inspiration from 3D motion design techniques, and the question frequently arises in the studio: “Wouldn’t this be cool if it were interactive?” That’s where the idea was born.

    After building our own setup in C4D based on the example, we created a general motion prototype to demonstrate the interaction. The result was a kind of repelling effect, where the cells were displaced according to the cursor’s position. To create the demo, we added a simple sphere and gave it a collider tag so that the particles would be pushed away as the sphere moved through the simulation, emulating the mouse movement. An easy way to introduce realistic movement is to add a vibrate tag to the collider and play around with the movement levels and frequency until it looks good.

    Art Direction

    With the base particle and interaction demo sorted, we rendered out the sequence and moved into After Effects to start playing around with the look and feel. We knew we wanted to give the particles a unique quality, one that felt more stylised as opposed to ultra-realistic or scientific. After some exploration we landed on a lo-fi gradient-mapped look, which felt like an interesting direction to move forward with. We achieved this by layering up a few effects:

    • Effect > Generate > 4 Colour Gradient: Add this to a new shape layer. This black and white gradient will act as a mask to control the blur intensities.
    • Effect > Blur > Camera Blur: Add this to a new adjustment layer. This general blur will smooth out the particles.
    • Effect > Blur > Compound Blur: Add this to the same adjustment layer as above. Set the blur layer to use the same shape layer we applied the 4 colour gradient to as its mask, and make sure it is set to “Effects & Masks” mode in the dropdown.
    • Effect > Color Correction > Colorama: Add this as a new adjustment layer. This is where the fun starts! You can add custom gradients into the output cycle and play around with the phase shift to customise the look according to your preference.

    Next, we designed a simple UI to match the futuristic cell-based visual direction, a concept we felt would work well for a bio-tech company – so we created a simple brand with key messaging to fit and voilà! That’s the concept phase complete.

    (Hot tip: If you’re doing an interaction concept in 3D software like C4D, create a plane with a cursor texture on it and parent it to your main interaction component – in this case, the sphere collider. Render that out as a sequence so that it matches up perfectly with your simulation – you can then layer it over text, UI and other elements in After Effects.)

    Technical Approach and Tools

    As this was a simple one-page static site with no need for a backend, we used our in-house boilerplate built on Astro with Vite and Three.js. For the physics, we went with Rapier as it handles collision detection efficiently and is compatible with Three.js. That was our main requirement, since we didn’t need simulations or soft-body calculations.

    For the Cellular Technology project, we specifically wanted to show how you can achieve a satisfying result without overcrowding the screen with tons of features or components. Our key focus was the visuals and interactivity – to make this satisfying for the user, it needed to feel smooth and seamless. A fluid-like simulation is a good way to achieve this. At Unseen, we often implement this effect as an added interaction component. For this project, we wanted to take a slightly different approach that would still achieve a similar result.

    Based on the concept from our designers, there were a couple of directions for the implementation to consider. To keep the experience optimised, even at a large scale, having the GPU handle the majority of the calculations is usually the best approach. For this, we’d need the effect to live in a shader and use more complicated implementations such as packing algorithms and custom Voronoi-like patterns. However, after testing the Rapier library, we realised that simple rigid-body collision would suffice to re-create the concept in real time.

    Physics Implementation

    To do so, we needed to create a separate physics world next to our 3D rendered world, as the Rapier library only handles the physics calculations, and the graphics are left for the implementation of the developer’s choosing. 
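
    For context, creating that separate physics world and stepping it on every frame could look roughly like the sketch below (the gravity value and method names are assumptions for illustration, not the project’s exact code):

    // Minimal sketch: a Rapier world with no global gravity, stepped once per frame
    createPhysicsWorld() {
      this.physicsWorld = new RAPIER.World({ x: 0.0, y: 0.0, z: 0.0 })
    }

    tick(delta) {
      this.physicsWorld.timestep = delta // keep the simulation in sync with the frame time
      this.physicsWorld.step()           // advance all rigid bodies and colliders
    }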

    Here’s a snippet from the part where we create the rigid bodies:

    for (let i = 0; i < this.numberOfBodies; i++) {
      const x = Math.random() * this.bounds.x - this.bounds.x * 0.5
      const y = Math.random() * this.bounds.y - this.bounds.y * 0.5
      const z = Math.random() * (this.bounds.z * 0.95) - (this.bounds.z * 0.95) * 0.5
    
      const bodyDesc = RAPIER.RigidBodyDesc.dynamic().setTranslation(x, y, z)
      bodyDesc.setGravityScale(0.0) // Disable gravity
      bodyDesc.setLinearDamping(0.7)
      const body = this.physicsWorld.createRigidBody(bodyDesc)
    
      const radius = MathUtils.mapLinear(Math.random(), 0.0, 1.0, this._cellSizeRange[0], this._cellSizeRange[1])
      const colliderDesc = RAPIER.ColliderDesc.ball(radius)
      const collider = this.physicsWorld.createCollider(colliderDesc, body)
      collider.setRestitution(0.1) // bounciness 0 = no bounce, 1 = full bounce
    
      this.bodies.push(body)
      this.colliders.push(collider)
    }

    The meshes that represent the bodies are created separately, and on each tick, their transforms get updated by those from the physics engine. 

    // update mesh positions
    for (let i = 0; i < this.numberOfBodies; i++) {
      const body = this.bodies[i]
      const position = body.translation()
    
      const collider = this.colliders[i]
      const radius = collider.shape.radius
    
      this._dummy.position.set(position.x, position.y, position.z)
      this._dummy.scale.setScalar(radius)
      this._dummy.updateMatrix()
    
      this.mesh.setMatrixAt(i, this._dummy.matrix)
    }
    
    this.mesh.instanceMatrix.needsUpdate = true
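
    The creation of the instanced mesh itself isn’t shown above, but a minimal setup compatible with that update loop could look like this (the geometry and material here are assumptions, not the project’s actual look):

    // Sketch: one InstancedMesh holds every cell; each instance gets scaled by its collider radius
    const geometry = new SphereGeometry(1, 32, 32)              // illustrative geometry
    const material = new MeshBasicMaterial({ color: 0xffffff }) // illustrative material
    this.mesh = new InstancedMesh(geometry, material, this.numberOfBodies)
    this._dummy = new Object3D() // reused to compose each instance matrix
    this.scene.add(this.mesh)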

    With performance in mind, we first tried the 2D version of the Rapier library. However, it soon became clear that with the cells distributed in a single plane, the visual was not convincing enough; the performance cost of the additional calculations along the Z axis was justified by the improved result.
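
    The repelling interaction also needs the cursor to exist inside the physics world. The exact approach isn’t shown here, but one straightforward option is a kinematic body that follows the pointer, mirroring the sphere collider from the C4D prototype (a sketch with assumed names, not the project’s actual code):

    // Sketch: a kinematic ball collider driven by the pointer pushes the cells away
    const cursorBodyDesc = RAPIER.RigidBodyDesc.kinematicPositionBased()
    this.cursorBody = this.physicsWorld.createRigidBody(cursorBodyDesc)
    this.physicsWorld.createCollider(RAPIER.ColliderDesc.ball(1.5), this.cursorBody)

    // on pointer move, project the cursor into world space and move the body there
    onPointerMove(worldX, worldY) {
      this.cursorBody.setNextKinematicTranslation({ x: worldX, y: worldY, z: 0 })
    }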

    Building the Visual with Post Processing

    Evidently, the post-processing effects play a big role in this project. By far the most important is the blur, which takes the cells from clear, simple rings to a fluid, gooey mass. We implemented the Kawase blur, which is similar to a Gaussian blur but uses box blurring instead of the Gaussian function and is more performant at higher levels of blur. We applied it to only some parts of the screen to keep visual interest.
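
    The project’s exact blur shader isn’t shown, but a single Kawase pass typically samples four diagonal neighbours at a growing offset and averages them; running several passes with an increasing offset approximates a large Gaussian blur cheaply. A sketch (uniform names are assumptions):

    // Sketch of one Kawase blur pass, written as a GLSL string for a ShaderMaterial
    const kawasePassFragment = /* glsl */ `
      uniform sampler2D tDiffuse;
      uniform vec2 uResolution;
      uniform float uOffset; // pass index: 0.0, 1.0, 2.0, ...
      varying vec2 vUv;

      void main() {
        vec2 texel = 1.0 / uResolution;
        vec2 o = (vec2(uOffset) + 0.5) * texel;
        // average the four diagonal samples
        vec4 color = texture2D(tDiffuse, vUv + vec2( o.x,  o.y));
        color += texture2D(tDiffuse, vUv + vec2(-o.x,  o.y));
        color += texture2D(tDiffuse, vUv + vec2( o.x, -o.y));
        color += texture2D(tDiffuse, vUv + vec2(-o.x, -o.y));
        gl_FragColor = color * 0.25;
      }
    `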

    This already brought the implementation closer to the concept. Another vital part of the experience is the color-grading, where we mapped the colours to the luminosity of elements in the scene. We couldn’t resist adding our typical fluid simulation, so the colours get slightly offset based on the fluid movement. 

    // offset the palette based on the fluid simulation (fluidColor / fluid are declared earlier in the full shader)
    if (uFluidEnabled) {
        fluidColor = texture2D(tFluid, screenCoords);
    
        fluid = pow(luminance(abs(fluidColor.rgb)), 1.2);
        fluid *= 0.28;
    }
    
    vec3 color1 = uColor1 - fluid * 0.08;
    vec3 color2 = uColor2 - fluid * 0.08;
    vec3 color3 = uColor3 - fluid * 0.08;
    vec3 color4 = uColor4 - fluid * 0.08;
    
    if (uEnabled) {
        // apply a color grade
        color = getColorRampColor(brightness, uStops.x, uStops.y, uStops.z, uStops.w, color1, color2, color3, color4);
    }
    
    color += color * fluid * 1.5;
    color = clamp(color, 0.0, 1.0);
    
    color += color * fluidColor.rgb * 0.09;
    
    gl_FragColor = vec4(color, 1.0);
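
    The getColorRampColor helper referenced above isn’t included in the snippet; a four-stop ramp along these lines would produce a similar mapping (a sketch kept as a GLSL chunk, not the project’s actual function):

    // Sketch: blend between four colour stops based on where the brightness value falls
    const colorRampChunk = /* glsl */ `
      vec3 getColorRampColor(float t, float s1, float s2, float s3, float s4,
                             vec3 c1, vec3 c2, vec3 c3, vec3 c4) {
        if (t < s2) return mix(c1, c2, smoothstep(s1, s2, t));
        if (t < s3) return mix(c2, c3, smoothstep(s2, s3, t));
        return mix(c3, c4, smoothstep(s3, s4, t));
      }
    `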
    

    Performance Optimisation

    With the computational cost of the physics engine growing quickly with the number of calculations required, we aimed to make the experience as optimised as possible. The first step was to find the minimum number of cells that wouldn’t affect the visual too much, i.e. without making the cells too sparse. To do so, we minimised the area in which the cells get created and made the cells slightly larger.

    Another important step was to make sure no calculation is redundant, meaning each calculation must be justified by a result visible on the screen. To make sure of that, we limited the area in which cells get created to only just cover the screen, regardless of the screen size. This basically means that all cells in the scene are visible in the camera. Usually this approach involves a slightly more complex derivation of the bounding area, based on the camera field of view and distance from the object, however, for this project, we used an orthographic camera, which simplifies the calculations.

    this.camera._width = this.camera.right - this.camera.left
    this.camera._height = this.camera.top - this.camera.bottom
    
    // .....
    
    this.bounds = {
      x: (this.camera._width / this.options.cameraZoom) * 0.5,
      y: (this.camera._height / this.options.cameraZoom) * 0.5,
      z: 0.5
    }
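
    For reference, with a perspective camera the bounds would need the FOV-based derivation mentioned above. A sketch of that math (names assumed, not from the project):

    // Sketch: visible width/height at a given distance in front of a perspective camera
    const fov = MathUtils.degToRad(camera.fov)            // vertical FOV in radians
    const visibleHeight = 2 * Math.tan(fov / 2) * distance
    const visibleWidth = visibleHeight * camera.aspect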

    Check out the live demo.

    We’ve also exposed some of the settings on the live demo so you can adjust colours yourself here.

    Thanks for reading our breakdown of this experiment! If you have any questions, don’t hesitate to write to us @uns__nstudio.





    Source link

  • Reality meets Emotion: The 3D Storytelling of Célia Lopez

    Reality meets Emotion: The 3D Storytelling of Célia Lopez


    Hi, my name is Célia. I’m a French 3D designer based in Paris, with a special focus on color harmony, refined details, and meticulous craftsmanship. I strive to tell stories through groundbreaking interactivity and aim to create designs that truly touch people’s hearts. I collaborate with renowned agencies and always push for exemplary quality in everything I do. I love working with people who share the same dedication and passion for their craft—because that’s when results become something we can all be truly proud of.

    Featured Projects

    Aether1

    This project was carried out with the OFF+BRAND team, with whom I’ve collaborated regularly since February 2025. They wanted to use this product showcase to demonstrate to their future clients how brilliantly they combine storytelling, WebGL, AI integration, and a highly polished, flawlessly coded UI.

    I loved working on this project not only because of the intense team effort in fine-tuning the details, but also because of the creative freedom I was given. In collaboration with Gilles Tossoukpé and Ross Anderson, we built the concept entirely from scratch, each bringing our own expertise. I’m very proud of the result.

    We’ve published a full case study explaining our workflow on Codrops.

    aether1.ai

    My collaboration with OFF+BRAND began thanks to a recommendation from Paul Guilhem Repaux, with whom I had worked on one of the biggest projects of my career: the Dubai World Expo.

    Dubai World Expo

    We recreated over 200 pavilions from 192 countries, delivering a virtual experience for more than 2 million viewers during the inauguration of the Dubai World Expo in 2020.

    This unique experience allowed users to attend countless events, conferences, and performances without traveling to Dubai.

    To bring this to life, we worked as a team of six 3D designers and two developers, under the leadership of the project manager at DOGSTUDIO. I’m truly proud to have contributed to this website, which showcased one of the world’s most celebrated events.

    virtualexpodubai.com/

    Heidelberg CCUS

    The following website was created with Ashfall Studio, another incredible studio whose meticulous work, down to the way they present their projects, inspires me tremendously.

    Here, our mission was nothing short of magic: transforming a website with a theme that, at first glance, wasn’t exactly appealing—tar production—into an experiential site that evokes emotion! I mean, come on, we actually managed to make tar sexy!

    ccus.heidelbergmaterials.com/en/

    Jacquemus

    Do you know the law of attraction? This principle is based on the idea that we can attract what we focus our attention and emotions on. I fell in love with the Jacquemus brand—the story of Simon, its creator, resonates deeply with me because we both grew up in the same place: the beautiful South of France!

    I wanted to create a project for Jacquemus, so I first made it a personal one. I wanted to explore the bridges between reality, 3D, photography, and motion design in one cohesive whole—which you can actually see on my Instagram, where I love mixing 3D and fashion in a harmonious and colorful feed.

    I went to their boutique on Avenue Montaigne and integrated my bag into the space using virtual reality. I also created a motion piece and did a photoshoot with a photographer.

    Céramique

    Last year, a friend of mine gave me a ceramics workshop where I created plates and cups. I loved it! Then in 2025, I decided I wanted to improve my animation skills—so I needed a subject to practice on. I was inspired by that workshop and created a short animation based on the steps involved in making my cups.

    Philosophy

    Are you one of those people who dream big—sometimes too big—and, once they commit to something, push it to the extreme of what it could become? Well, I am. If I make a ceramic plate once, I want to launch my own brand. If I invest in real estate, I want to become financially independent. If I spend my life in stylish cafés or designer gyms I discover on ClassPass, I start imagining opening a coffee shop–fitness space. When I see excellence somewhere, I think: why not me? And I give myself the means to reach my goals. But of course, one has to be realistic: to be truly high-quality, you need to focus on one thing at a time. So yes, I have many future projects—but first, let’s finish the ones already in progress.

    My next steps

    I recently launched my Airbnb in Paris, for which I’ll be creating some content, building a brand identity, and promoting it as much as I can.

    I’ve also launched my lifestyle/furniture brand called LABEGE, named after the village where I grew up. For now, it’s a digital brand, but my goal is to develop it for commercialization. I have no idea how to make that happen just yet.

    Background & Career highlights


    Awwwards class

    There have been many defining moments in my career—or at least, I treat every opportunity as a potential turning point, which is why I invest so much in every project.

    But two moments, in particular, stand out for me. The first was when Awwwards invited me to create a course explaining my 3D WebGL workflow. Today, I might update it with some new insights, but at the time it was extremely valuable because there was nothing like it available online. Combined with the fact that it was one of the first four courses they launched, it gave me great visibility within our community.

    My Awwwards Class

    Spline

    Another milestone was when I joined the Spline team. Back then, the software was still unstable—it was frustrating to spend days creating only to lose all my work to a bug. But over time, the tool became incredibly powerful. The combination of Spline’s excellent social media presence and the growing strength of the software helped it grow from 5K to 75K Twitter followers in just two years, along with thousands of new users.

    Thanks to the tool’s early popularity and the small number of people who mastered it at first, I was able to build a strong reputation in the interactive 3D web field. I shared a lot about Spline on my social channels and even launched a YouTube channel dedicated to tutorials.

    It was fascinating to see how a tool is built, showcase new features to the community, and watch the enthusiasm grow. Being part of such a close-knit, human team—led by founder Alejandro, whose visionary talent inspires me—was an unforgettable experience.

    Tools & Techniques

    • Cinema 4D
    • Redshift
    • Blender
    • Figma
    • Pinterest
    • Marvelous Designer
    • Spline Tool
    • PeachWeb

    Final Thoughts

    Life is short—know your limits and your worth. Set non-negotiable boundaries with anything or anyone that drags you down: no second chances, no comebacks. Be good to people and to the world, but also be selfish in the best way—do what makes you feel alive, happy, and full of magic. Surround yourself with people who are worth your attention, who value you as much as you value them.

    Put yourself in the main role of your own life, dream big, and be grateful to be here.

    LOVE!

    Contact

    Thanks a lot for taking the time to read about me!

    Let’s connect!

    Instagram
    X (Twitter)
    LinkedIn
    Email for new inquiries: hello@celialopez.fr 💌





    Source link

  • Design Has Never Been More Important: Inside Shopify’s Acquisition of Molly

    Design Has Never Been More Important: Inside Shopify’s Acquisition of Molly


    When the conversation turns to artificial intelligence, many assume that design is one of the professions most at risk of automation. But Shopify’s latest move sends a very different message. The e-commerce giant revived the role of Chief Design Officer earlier this year and acquired Brooklyn-based creative studio Molly — signaling that, far from being diminished, design will sit at the center of its AI strategy.

    At the helm is Carl Rivera, Shopify’s Chief Design Officer, who believes this moment is an inflection point not just for the company, but for the design industry as a whole.

    “At a time when the market is saying maybe you don’t need designers anymore,” Rivera told me, “we’re saying the opposite. They’ve never been more important than they are right now.”

    A Statement of Intent

    Shopify has a long history of treating design as a strategic advantage. In its early days, co-founder Daniel Weinand held the title of Chief Design Officer and helped shape Shopify’s user-first approach. But when Weinand left the company, the role disappeared — until now.

    Bringing it back, Rivera argues, is both symbolic and practical. “It’s really interesting to consider that the moment Shopify decides to reinstate the Chief Design Officer role is at the dawn of AI,” he said. “That’s not a coincidence.”

    For Rivera, design is the best tool for navigating uncertainty. “When you face ambiguity and don’t know where the world is going, there’s no better way to imagine that future than through design,” he explained. “Design turns abstract ideas into something you can hold and touch, so everyone can align on the same vision.”

    Why Molly?

    Central to Shopify’s announcement is the acquisition of Molly, the Brooklyn-based design studio co-founded by Jaytel and Marvin Schwaibold. Known for their experimental but disciplined approach, Molly has collaborated with Shopify in the past.

    Rivera recalled how the deal came together almost organically. “I was having dinner with Marvin, and we were talking about the future I wanted to build at Shopify. The alignment was immediate. It was like — of course we should do this together. We could go faster, go further, and it would be more fun.”

    The studio will operate as an internal agency, but Rivera is careful to stress that Molly won’t exist in isolation. “What attracted me to Molly is not just their output, but their culture,” he said. “That culture is exactly the one we want to spread across Shopify. They’ll be a cultural pillar that helps manifest the ways of working we want everyone to embrace.”

    Importantly, the internal agency won’t replace Shopify’s existing design teams. Instead, it will augment them in moments that call for speed, experimentation, or tackling problems shaped by AI. “If something changes in the market and we need to respond quickly, Molly can embed with a team for a few months, supercharging their generative process,” Rivera explained.

    Redefining AI + Design

    Rivera is energized by the possibilities of AI and how it can transform the way people interact with technology. While today’s implementations often serve as early steps in that journey, he believes the real opportunity lies in what comes next.

    He acknowledges that many current products still treat AI as an add-on. “You have the product, which looks the same as it has for ten years, and then a little panel next to it that says AI. That can’t be the future,” Rivera said.

    For him, these early patterns are just the beginning — a foundation to build on. He envisions AI woven deeply into user experiences, reshaping interaction patterns themselves. “If AI had existed ten years ago, I don’t believe products would look the way they do today. We need to move beyond chat as the default interface and create experiences where AI feels native, invisible, and context-aware.”

    That, he argues, is where design proves indispensable. “It’s designers who will define the interaction patterns of AI in commerce. This is our role: to make the abstract real, to imagine the future, and to bring it into the present.”

    Measuring Success: Subjective by Design

    In a world obsessed with metrics, Rivera offers a refreshingly contrarian view of how design success should be measured.

    “Designers have often felt insecure, so they chase numbers to prove their value,” he said. “But to me, the most important measure isn’t a KPI. It’s whether the work feels right. Are we proud of it? Did it accelerate our vision? Does it make the product more delightful? I’m comfortable leaning on instinct.”

    That doesn’t mean ignoring business outcomes. But Rivera wants his teams to be guided first by craft, ambition, and impact on user experience — not by dashboards.

    Advice for Designers in an AI Era

    For independent designers and studio owners — many of whom worry that AI might disrupt their livelihoods — Rivera offers encouragement.

    He believes the most valuable skill today is adaptability: “The best trait a designer can have right now is the ability to quickly learn a new problem and generate many different options. That’s what the agency world trains you to do, and it’s exactly what big companies like Shopify need.”

    In fact, Rivera sees agency and freelance experience as increasingly attractive in large-scale design hiring. “People who have jumped between many problems quickly bring a unique skill set. That adaptability is crucial when technology and user expectations are changing so fast.”

    The Ambition at Shopify

    Rivera is clear about his mandate. He sums it up in three goals:

    1. Build the place where the world’s best designers choose to work.
    2. Enable them to do the best work of their careers.
    3. Define the future interaction patterns of AI in commerce.

    It’s an ambitious vision, but one he believes is within reach. “Ambition begets ambition,” he told his team in a recent message. “By raising expectations for ourselves and each other, we’ll attract people who want that environment, and they’ll keep raising the bar.”

    For Shopify, investing in design now goes beyond aesthetics. It is about shaping the future of commerce itself. As Rivera put it:

    “We don’t need to dream up sci-fi scenarios. The future is already here — just unevenly distributed. Our job is to bring it into the hands of entrepreneurs and make it usable for everyone.”

    Borrowing from William Gibson’s famous line, Rivera frames Shopify’s bet on Molly and design as a way of redistributing that future, through creativity, craft, and culture.





    Source link

  • Between Strategy and Story: Thierry Chopain’s Creative Path

    Between Strategy and Story: Thierry Chopain’s Creative Path


    Hello, I’m Thierry Chopain, a freelance interactive art director, co-founder of type8 studio, and a UX/UI design instructor at SUP de PUB (Lyon).

    Based near Saint-Étienne, I cultivate a balance between creative ambition and local grounding, between high-level design and a more human pace of life. I work remotely with a close-knit team spread between Lyon, Montpellier, and Paris, where we design custom projects that blend strategy, brand identity, and digital experience.

    My approach is deeply collaborative. I believe in lasting relationships built on trust, mutual listening, and the value of each perspective. Beyond aesthetics, my role is to bring clarity, meaning, and visual consistency to every project. Alongside my design practice, I teach at SUP de PUB, where I support students not only in mastering UX/UI concepts, but also in shaping their path as independent designers. Sharing what I’ve learned on the ground (the wins, the struggles, and the lessons) is a mission that matters deeply to me.

    My day-to-day life is a mix of slow living and agility. This hybrid rhythm allows me to stay true to my values while continuing to grow in a demanding and inspiring industry. I collaborate with a trusted network of creatives including Jeremy Fagis, Marine Ferrari, Thomas Aufresne, Jordan Thiervoz, Alexandre Avram, Benoit Drigny and Olivier Marmillon to enrich every project with a shared, high-level creative vision.

    Featured Projects

    OVA INVESTMENT

    OVA is an investment fund built around a strong promise: to invest disruptively in the most valuable assets of our time. Type8 studio partnered with DEPARTMENT Maison de Création and Paul Barbin to design a fully reimagined website that lives up to its bold vision and distinctive positioning. Site structure, visual direction, tone of voice, and user experience were all redefined to reflect the strategic precision, elegance, and forward-thinking nature of the fund.

    The goal of this project: Position OVA as a benchmark combining financial performance, innovation, and rarity, through refined design, a seamless interface, and custom development, in order to strengthen its credibility with a discerning audience and strategic partners.

    Discover the website

    Hocus Pocus Studio

    Hocus Pocus is a Lyon-based animation studio specialized in CGI and visual effects for the television, cinema, and video game industries. The studio offers high-quality services with an ever-higher technical and artistic level of requirement. I worked on this project in collaboration with the Lyon-based studio AKARU, which specializes in tailored and meticulously crafted projects.


    The goal of this project: Develop a coherent and professional digital brand image that highlights visual effects, while boosting visibility and online presence to attract and inspire trust in customers.

    Discover the website

    21 TSI

    21 TSI isn’t your typical sports holding company. Overseeing a portfolio of brands in the sports equipment space, the team set out to break from the mold of the standard corporate website. Instead, they envisioned a digital experience that would reflect their DNA, where innovation, design, and technology converge into a rich, immersive journey. We collaborated with DEPARTMENT Maison de Création and Paul Barbin to create something truly unique.

    The goal of this project: A website that embodies the DNA of 21TSI: innovation, technology, minimalism. An immersive and aesthetic experience, a clean design, and an approach that explores new ways of engaging with sport through AI.

    Discover the website

    Teria

    TERIA is a system that provides real-time centimeter-level positioning, an innovative tool for localization and georeferencing. We set out to create an intuitive and innovative experience that perfectly reflects Teria’s precision and forward-thinking vision. A major part of the work focused on a clean, minimalist design that allows for smooth navigation, making space to highlight the incredible work of Alexandre Avram, showcasing the products through Spline and 3D motion design.

    The goal of this project: Develop a clear and professional digital brand that reflects the brand’s identity and values, showcases product innovation, and boosts visibility to build trust and attract customers.

    Discover the website

    Creating visual identities for musical artists

    In a dense and ever-evolving music scene, standing out requires more than just great sound; it also takes a strong and cohesive visual presence. Whether it’s the cinematic intensity of Lecomte de Brégeot or the raw emotion of Élimane, my approach remains the same: to craft a visual universe that extends and enhances the essence of each artist, regardless of the medium.

    Visual recap – Cover design for “Sequences” (Lecomte de Brégeot)
    Élimane – Weaver of Sounds, Sculptor of Emotions.

    A Defining Moment in My Career

    A turning point in my journey was the transition from working as an independent designer to founding a structured creative studio, type8 Studio. For more than ten years, I worked solo or within informal networks, juggling projects, constantly adapting, and learning how to shape my own freedom. That period gave me a lot—not only in terms of experience, but also in understanding what I truly wanted… and what I no longer wanted.

    Creating a studio was never a predefined goal. It came together progressively, through encounters, shared values, and the growing need to give form to something more collective and sustainable. Type8 was born from this shared intention: bringing together skills and creative ambitions while preserving individual freedom.

    This change was not a rupture but a natural evolution. I didn’t abandon my three identities—independent designer, studio art director, and educator. On the contrary, I integrated them into a more fluid and conscious ecosystem. Today, I can choose the most relevant role depending on the project: sometimes the studio takes the lead, sometimes it’s the freelance spirit that fits best, and at other times, it’s the educator in me who comes forward.

    This hybrid model, which some might see as unstable, is for me a tailor-made balance, deeply aligned with how I envision work: adaptive, intentional, and guided by respect for the project’s purpose and values.

    My Design Philosophy

    I see design as a tool serving meaning, people, and impact beyond mere aesthetics. It’s about creating connection, clarity, and relevance between intention and users. This approach was shaped through my collaboration with my wife, an expert in digital accessibility, who raised my awareness of inclusion and of the real user needs that are often overlooked.

    Today, I bring ethics, care, and respect into every project, focusing on accessible design and core human values: kindness, clarity, usefulness, and respecting user constraints. I prioritize human collaboration, tailoring each solution to the client’s context and values, even if it means going against trends. My design blends strategic thinking, creativity, and personal commitment to create enriching and socially valuable experiences.

    Tools and Techniques

    • Figma: To design, create, and gather ideas collaboratively.
    • Jitter: For crafting smooth and engaging motion designs.
    • Loom: To exchange feedback efficiently with clients.

    Tools evolve, but they’re just means to an end. What really matters is your ability to think and create. If you’re a good designer, you’ll know how to adapt, no matter the tool.

    My Inspirations

    My imagination was shaped somewhere between a game screen and a sketchbook. Among all my influences, narrative video games hold a special place. Titles like “The Last of Us” have had a deep impact on me, not just for their striking art direction, but for their ability to tell a story in an immersive, emotional, and sensory way. What inspires me in these universes isn’t just the gameplay, but how they create atmosphere, build meaningful moments, and evoke emotion without words. Motion design, sound, typography, lighting: all of it is composed like a language. And that’s exactly how I approach interactive design: orchestrating visual and experiential elements to convey a message, an intention, or a feeling.

    But my inspirations go beyond the digital world. They lie at the intersection of street art, furniture design, and sneakers. My personal environment also plays a crucial role in fueling my creativity. Living in a small village close to nature, surrounded by calm and serenity, gives me the mental space I need to create. It’s often in these quiet moments (a walk through the woods, a shared silence, the way light plays on a path) that my strongest ideas emerge.


    I’m a creative who exists at the crossroads: between storytelling and interaction, between city and nature, between aesthetics and purpose. That’s where my work finds its balance.

    Final Thoughts

    For me, design has always been more than a craft: it’s a way to connect ideas, people, and emotions. Every project is an opportunity to tell a story, to create something that feels both meaningful and timeless. Stay curious, stay human, and don’t be afraid to push boundaries. Because the most memorable work is born when passion meets purpose.

    Contact

    Thanks for taking the time to read this article.

    If you’re a brand, studio, or institution looking for a strong and distinctive digital identity, I’d be happy to talk, whether it’s about a project, a potential collaboration, or just sharing a few ideas.





    Source link

  • From Zero to MCP: Simplifying AI Integrations with xmcp

    From Zero to MCP: Simplifying AI Integrations with xmcp



    The AI ecosystem is evolving rapidly, and Anthropic releasing the Model Context Protocol on November 25th, 2024 has certainly shaped how LLMs connect with data. No more building custom integrations for every data source: MCP provides one protocol to connect them all. But here’s the challenge: building MCP servers from scratch can be complex.

    TL;DR: What is MCP?

    Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect devices to various peripherals, MCP provides a standardized way to connect AI models to different data sources, tools, and services. It’s an open protocol that enables AI applications to safely and efficiently access external context – whether that’s your company’s database, file systems, APIs, or custom business logic.

    Source: https://modelcontextprotocol.io/docs/getting-started/intro

    In practice, this means you can hook LLMs into the things you already work with every day. To name a few examples, you could query databases to visualize trends, pull and resolve issues from GitHub, fetch or update content in a CMS, and so on. Beyond development, the same applies to broader workflows: customer support agents can look up and resolve tickets, enterprise search can fetch and read content scattered across wikis and docs, operations can monitor infrastructure or control devices.

    But there’s more to it, and that’s when you really unlock the power of MCP. It’s not just about single tasks, but rethinking entire workflows. Suddenly, we’re reshaping the way we interact with products and even our own computers: instead of adapting ourselves to the limitations of software, we can shape the experience around our own needs.

    That’s where xmcp comes in: a TypeScript framework designed with DX in mind, for developers who want to build and ship MCP servers without the usual friction. It removes the complexity and gets you up and running in a matter of minutes.

    A little backstory

    xmcp was born out of necessity at Basement Studio, where we needed to build internal tools for our development processes. As we dove deeper into the protocol, we quickly discovered how fragmented the tooling landscape was and how much time we were spending on setup, configuration, and deployment rather than actually building the tools our team needed.

    That’s when we decided to consolidate everything we’d learned into a framework. The philosophy was simple: developers shouldn’t have to become experts just to build AI tools. The focus should be on creating valuable functionality, not wrestling with boilerplate code and all sorts of complexities.

    Key features & capabilities

    xmcp shines in its simplicity. With just one command, you can scaffold a complete MCP server:

    npx create-xmcp-app@latest

    The framework automatically discovers and registers tools. No extra setup needed.

    All you need is tools/

    xmcp abstracts the original tool syntax from the TypeScript SDK and follows a separation-of-concerns principle with a simple three-exports structure:

    • Implementation: The actual tool logic.
    • Schema: Define input parameters using Zod schemas with automatic validation
    • Metadata: Specify tool identity and behavior hints for AI models
    // src/tools/greet.ts
    import { z } from "zod";
    import { type InferSchema } from "xmcp";
    
    // Define the schema for tool parameters
    export const schema = {
      name: z.string().describe("The name of the user to greet"),
    };
    
    // Define tool metadata
    export const metadata = {
      name: "greet",
      description: "Greet the user",
      annotations: {
        title: "Greet the user",
        readOnlyHint: true,
        destructiveHint: false,
        idempotentHint: true,
      },
    };
    
    // Tool implementation
    export default async function greet({ name }: InferSchema<typeof schema>) {
      return `Hello, ${name}!`;
    }

    Transport Options

    • HTTP: Perfect for server deployments, enabling tools that fetch data from databases or external APIs
    • STDIO: Ideal for local operations, allowing LLMs to perform tasks directly on your machine

    You can tweak the configuration to your needs by modifying the xmcp.config.ts file in the root directory. Among the options you can find the transport type, CORS setup, experimental features, tools directory, and even the webpack config. Learn more about this file here.

    const config: XmcpConfig = {
      http: {
        port: 3000,
        // The endpoint where the MCP server will be available
        endpoint: "/my-custom-endpoint",
        bodySizeLimit: 10 * 1024 * 1024,
        cors: {
          origin: "*",
          methods: ["GET", "POST"],
          allowedHeaders: ["Content-Type"],
          credentials: true,
          exposedHeaders: ["Content-Type"],
          maxAge: 600,
        },
      },
    
      webpack: (config) => {
        // Add raw loader for images to get them as base64
        config.module?.rules?.push({
          test: /\.(png|jpe?g|gif|svg|webp)$/i,
          type: "asset/inline",
        });
    
        return config;
      },
    };
    

    Built-in Middleware & Authentication

    For HTTP servers, xmcp provides native solutions for adding authentication (JWT, API Key, OAuth). You can also extend your application with custom middlewares, which can even be provided as an array.

    import { type Middleware } from 'xmcp';
    
    const middleware: Middleware = async (req, res, next) => {
      // Custom processing
      next();
    };
    
    export default middleware;
    

    Integrations

    While you can bootstrap an application from scratch, xmcp can also work on top of your existing Next.js or Express project. To get started, run the following command:

    npx init-xmcp@latest

    on your initialized application, and you are good to go! You’ll find a tools directory with the same discovery capabilities. If you’re using Next.js, the handler is set up automatically. If you’re using Express, you’ll have to configure it manually.

    From zero to prod

    Let’s see this in action by building and deploying an MCP server. We’ll create a Linear integration that fetches issues from your backlog and calculates completion rates, perfect for generating project analytics and visualizations.

    For this walkthrough, we’ll use Cursor as our MCP client to interact with the server.

    Setting up the project

    The fastest way to get started is by deploying the xmcp template directly from Vercel. This automatically initializes the project and creates an HTTP server deployment in one click.

    Alternative setup: If you prefer a different platform or transport method, scaffold locally with npx create-xmcp-app@latest

    Once deployed, you’ll see this project structure:
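
    (Roughly speaking, and as an approximation based on the files mentioned throughout this article rather than an exact listing, the template gives you a src/tools/ directory for the auto-discovered tools, an xmcp.config.ts at the root for transport and build options, and the usual package.json and TypeScript config.)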

    Building our main tool

    Our tool will accept three parameters: team name, start date, and end date. It’ll then calculate the completion rate for issues within that timeframe.

    Head to the tools directory, create a file called get-completion-rate.ts and export the three main elements that construct the syntax:

    import { z } from "zod";
    import { type InferSchema, type ToolMetadata } from "xmcp";
    
    export const schema = {
      team: z
        .string()
        .min(1, "Team name is required")
        .describe("The team to get completion rate for"),
      startDate: z
        .string()
        .min(1, "Start date is required")
        .describe("Start date for the analysis period (YYYY-MM-DD)"),
      endDate: z
        .string()
        .min(1, "End date is required")
        .describe("End date for the analysis period (YYYY-MM-DD)"),
    };
    
    export const metadata: ToolMetadata = {
      name: "get-completion-rate",
      description: "Get completion rate analytics for a specific team over a date range",
    };
    
    export default async function getCompletionRate({
      team,
      startDate,
      endDate,
    }: InferSchema<typeof schema>) {
    // tool implementation we'll cover in the next step
    };

    Our basic structure is set. We now have to add the client functionality to actually communicate with Linear and get the data we need.

    We’ll be using Linear’s personal API Key, so we’ll need to instantiate the client using @linear/sdk. We’ll focus on the tool implementation now:

    export default async function getCompletionRate({
      team,
      startDate,
      endDate,
    }: InferSchema<typeof schema>) {
    
        const linear = new LinearClient({
            apiKey: "YOUR_LINEAR_API_KEY", // placeholder, replaced with the header value in the next step
        });
    
    };

    Instead of hardcoding API keys, we’ll use the native headers utilities to accept the Linear API key securely from each request:

    export default async function getCompletionRate({
      team,
      startDate,
      endDate,
    }: InferSchema<typeof schema>) {
    
        // API Key from headers
        const apiKey = headers()["linear-api-key"] as string;
    
        if (!apiKey) {
            return "No linear-api-key header provided";
        }
    
        const linear = new LinearClient({
            apiKey: apiKey,
        });
        
        // rest of the implementation
    }

    This approach allows multiple users to connect with their own credentials. Your MCP configuration will look like:

    "xmcp-local": {
      "url": "http://127.0.0.1:3001/mcp",
      "headers": {
        "linear-api-key": "your api key"
      }
    }

    Moving forward with the implementation, this is what our complete tool file will look like:

    import { z } from "zod";
    import { type InferSchema, type ToolMetadata } from "xmcp";
    import { headers } from "xmcp/dist/runtime/headers";
    import { LinearClient } from "@linear/sdk";
    
    export const schema = {
      team: z
        .string()
        .min(1, "Team name is required")
        .describe("The team to get completion rate for"),
      startDate: z
        .string()
        .min(1, "Start date is required")
        .describe("Start date for the analysis period (YYYY-MM-DD)"),
      endDate: z
        .string()
        .min(1, "End date is required")
        .describe("End date for the analysis period (YYYY-MM-DD)"),
    };
    
    export const metadata: ToolMetadata = {
      name: "get-completion-rate",
      description: "Get completion rate analytics for a specific team over a date range",
    };
    
    export default async function getCompletionRate({
      team,
      startDate,
      endDate,
    }: InferSchema<typeof schema>) {
    
        // API Key from headers
        const apiKey = headers()["linear-api-key"] as string;
    
        if (!apiKey) {
            return "No linear-api-key header provided";
        }
    
        const linear = new LinearClient({
            apiKey: apiKey,
        });
    
        // Get the team by name
        const teams = await linear.teams();
        const targetTeam = teams.nodes.find(t => t.name.toLowerCase().includes(team.toLowerCase()));
    
        if (!targetTeam) {
            return `Team "${team}" not found`
        }
    
        // Get issues created in the date range for the team
        const createdIssues = await linear.issues({
            filter: {
                team: { id: { eq: targetTeam.id } },
                createdAt: {
                    gte: startDate,
                    lte: endDate,
                },
            },
        });
    
        // Get issues completed in the date range for the team (for reporting purposes)
        const completedIssues = await linear.issues({
            filter: {
                team: { id: { eq: targetTeam.id } },
                completedAt: {
                    gte: startDate,
                    lte: endDate,
                },
            },
        });
    
        // Calculate completion rate: percentage of created issues that were completed
        const totalCreated = createdIssues.nodes.length;
        const createdAndCompleted = createdIssues.nodes.filter(issue => 
            issue.completedAt !== undefined && 
            issue.completedAt >= new Date(startDate) && 
            issue.completedAt <= new Date(endDate)
        ).length;
        const completionRate = totalCreated > 0 ? (createdAndCompleted / totalCreated * 100).toFixed(1) : "0.0";
    
        // Structure data for the response
        const analytics = {
            team: targetTeam.name,
            period: `${startDate} to ${endDate}`,
            totalCreated,
            totalCompletedFromCreated: createdAndCompleted,
            completionRate: `${completionRate}%`,
            createdIssues: createdIssues.nodes.map(issue => ({
                title: issue.title,
                createdAt: issue.createdAt,
                priority: issue.priority,
            completed: issue.completedAt !== undefined,
                completedAt: issue.completedAt,
            })),
            allCompletedInPeriod: completedIssues.nodes.map(issue => ({
                title: issue.title,
                completedAt: issue.completedAt,
                priority: issue.priority,
            })),
        };
    
        return JSON.stringify(analytics, null, 2);
    }

    Let’s test it out!

    Start your development server by running pnpm dev (or the package manager you’ve set up)

    The server will automatically restart whenever you make changes to your tools, giving you instant feedback during development. Then, head to Cursor Settings → Tools & Integrations and toggle the server on. You should see it’s discovering one tool file, which is our only file in the directory.

    Let’s now use the tool by asking Cursor to “Get the completion rate of the xmcp project between August 1st 2025 and August 20th 2025”.

    Let’s try using this tool in a more comprehensive way: we want to understand the project’s completion rate across three separate months, June, July and August, and visualize the trend. So we will ask Cursor to retrieve the information for these months and generate a trend chart along with a monthly issue overview:

    Once we’re happy with the implementation, we’ll push our changes and deploy a new version of our server.

    Pro tip: use Vercel’s branch deployments to test new tools safely before merging to production.

    Next steps

    Nice! We’ve built the foundation, but there’s so much more you can do with it.

    • Expand your MCP toolkit with a complete workflow automation. Take this MCP server as a starting point and add tools that generate weekly sprint reports and automatically save them to Notion, or build integrations that connect multiple project management platforms.
    • Extend the application by adding authentication. You can use the OAuth native provider to add Linear’s authentication instead of using API Keys, or use the Better Auth integration to handle custom authentication paths that fit your organization’s security requirements.
    • For production workloads, you may need to add custom middlewares, like rate limiting, request logging, and error tracking. This can be easily set up by creating a middleware.ts file in the source directory (see the sketch below). You can learn more about middlewares here.
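
    As a rough illustration of such a middleware, here is a minimal in-memory rate limiter reusing the Middleware signature shown earlier (the request fields and the window and limit values are assumptions, not a production-ready implementation):

    // src/middleware.ts
    import { type Middleware } from 'xmcp';

    const WINDOW_MS = 60_000;   // 1 minute window (assumed value)
    const MAX_REQUESTS = 100;   // allowed requests per window (assumed value)
    const hits = new Map<string, { count: number; start: number }>();

    const middleware: Middleware = async (req, res, next) => {
      // key requests by forwarded IP header when present (assumed to exist on the request)
      const key = (req.headers?.['x-forwarded-for'] as string) ?? 'unknown';
      const now = Date.now();
      const entry = hits.get(key);

      if (!entry || now - entry.start > WINDOW_MS) {
        hits.set(key, { count: 1, start: now });
      } else if (++entry.count > MAX_REQUESTS) {
        res.statusCode = 429;
        res.end('Too many requests');
        return;
      }

      next();
    };

    export default middleware;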

    Final thoughts

    The best part of what you’ve built here is that xmcp handled all the protocol complexity for you. You didn’t have to learn the intricacies of the Model Context Protocol specification or figure out transport layers: you just focused on solving your actual business problem. That’s exactly how it should be.

    Looking ahead, xmcp’s roadmap includes full MCP specification compliance, bringing support for resources, prompts and elicitation. More importantly, the framework is evolving to bridge the gap between prototype and production, with enterprise-grade features for authentication, monitoring, and scalability.

    If you wish to learn more about the framework, visit xmcp.dev, read the documentation and check out the examples!



    Source link

  • Interactive Video Projection Mapping with Three.js

    Interactive Video Projection Mapping with Three.js



    Projection mapping has long fascinated audiences in the physical world, turning buildings, sculptures, and entire cityscapes into moving canvases. What if you could recreate that same sense of spectacle directly inside the browser?

    With WebGL and Three.js, you can project video not onto walls or monuments but onto dynamic 3D grids made of hundreds of cubes, each one carrying a fragment of the video like a digital mosaic. Many will surely recognize this effect from Rogier de Boevé’s portfolio, which gained wide attention for showcasing exactly this approach.

    In this tutorial we’ll explore how to simulate video projection mapping in a purely digital environment, from building a grid of cubes, to UV-mapping video textures, to applying masks that determine which cubes appear. The demo for this tutorial is inspired by Rogier’s work, which he breaks down beautifully in his case study for anyone interested in the concept behind it.

    The result is a mesmerizing effect that feels both sculptural and cinematic, perfect for interactive installations, portfolio showcases, or simply as a playground to push your creative coding skills further.

    What is Video Projection Mapping in the Real World?

    When describing video projection mapping, it’s easiest to think of huge buildings lit up with animations during festivals, or art installations where a moving image is “painted” onto sculptures.

    Here are some examples of real-world video projections:

    Bringing it to our 3D World

    In 3D graphics, we can do something similar: instead of shining a physical projector, we map a video texture onto objects in a scene.

    Therefore, let’s build a grid of cubes using a mask image that will determine which cubes are visible. A video texture is UV-mapped so each cube shows the exact video fragment that corresponds to its grid cell—together they reconstruct the video, but only where the mask is dark.

    Prerequisites:

    • Three.js r155+
    • A small, high-contrast mask image (e.g. a heart silhouette).
    • A video URL with CORS enabled.

    Our Boilerplate and Starting Point

    Here is a basic starter setup, i.e. the minimum amount of code and structure you need to get a scene rendering in the browser, without worrying about the specific creative content yet.

    export default class Models {
    	constructor(gl_app) {
            ...
            this.createGrid()
        }
    
        createGrid() {
            const geometry = new THREE.BoxGeometry( 1, 1, 1 );
            this.material = new THREE.MeshStandardMaterial( { color: 0xff0000 } );
            const cube = new THREE.Mesh( geometry, this.material );
            this.group.add( cube );
            this.is_ready = true
        }
        
        ...
    }

    The result is a spinning red cube:

    Creating the Grid

    A centered grid of cubes (10×10 by default). Every cube has the same size and material. The grid spacing and overall scale are configurable.

    export default class Models {
    	constructor(gl_app) {
            ...
    
    		this.gridSize = 10;
            this.spacing = 0.75;
            this.createGrid()
        }
    
        createGrid() {
            this.material = new THREE.MeshStandardMaterial( { color: 0xff0000 } );
            
            // Grid parameters
            for (let x = 0; x < this.gridSize; x++) {
                for (let y = 0; y < this.gridSize; y++) {
                    const geometry = new THREE.BoxGeometry(0.5, 0.5, 0.5);
                    const mesh = new THREE.Mesh(geometry, this.material);
                    mesh.position.x = (x - (this.gridSize - 1) / 2) * this.spacing;
                    mesh.position.y = (y - (this.gridSize - 1) / 2) * this.spacing;
                    mesh.position.z = 0;
    
                    this.group.add(mesh);
                }
            }
            this.group.scale.setScalar(0.5)
            ...
        }   
        ...
    }

    Key parameters

    • spacing: World-space distance between cube centers. Increase for larger gaps, decrease to pack tighter.
    • gridSize: How many cells per side. A 10×10 grid ⇒ 100 cubes.

    Creating the Video Texture

    This function creates a video texture in Three.js so you can use a playing HTML <video> as the texture on 3D objects.

    • Creates an HTML <video> element entirely in JavaScript (not added to the DOM).
    • We’ll feed this element to Three.js to use its frames as a texture.
    • loop = true → restarts automatically when it reaches the end.
    • muted = true → most browsers block autoplay for unmuted videos, so muting ensures it plays without user interaction.
    • .play() → starts playback.
    • ⚠️ Some browsers still need a click/touch before autoplay works — you can add a fallback listener if needed (see the sketch after the code below).
    export default class Models {
    	constructor(gl_app) {
            ...
            this.createGrid()
        }
    
        createVideoTexture() {
    		this.video = document.createElement('video')
    		this.video.src = 'https://commondatastorage.googleapis.com/gtv-videos-bucket/sample/BigBuckBunny.mp4'
    		this.video.crossOrigin = 'anonymous'
    		this.video.loop = true
    		this.video.muted = true
    		this.video.play()
    
    		// Create video texture
    		this.videoTexture = new THREE.VideoTexture(this.video)
    		this.videoTexture.minFilter = THREE.LinearFilter
    		this.videoTexture.magFilter = THREE.LinearFilter
    		this.videoTexture.colorSpace = THREE.SRGBColorSpace
    		this.videoTexture.wrapS = THREE.ClampToEdgeWrap
    		this.videoTexture.wrapT = THREE.ClampToEdgeWrap
    
    		// Create material with video texture
    		this.material = new THREE.MeshBasicMaterial({ 
    			map: this.videoTexture,
    			side: THREE.FrontSide
    		})
        }
    
        createGrid() {
            this.createVideoTexture()
            ...
        }
        ...
    }
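
    If autoplay is blocked, a small fallback like this can resume playback on the first interaction (a sketch, not part of the original demo):

    // Fallback: retry playback on the first pointer interaction if autoplay was rejected
    this.video.play().catch(() => {
      const resume = () => {
        this.video.play()
        window.removeEventListener('pointerdown', resume)
      }
      window.addEventListener('pointerdown', resume)
    })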

    This is the video we are using: Big Buck Bunny (without CORS)

    All the meshes have the same texture applied:

    Attributing Projection to the Grid

    We will be turning the video into a texture atlas split into a gridSize × gridSize lattice.
    Each cube in the grid gets its own little UV window (sub-rectangle) of the video so, together, all cubes reconstruct the full frame.

    Why per-cube geometry? Each cube needs its own BoxGeometry because its UVs must be unique. If all cubes shared one geometry, they’d also share the same UVs and show the same part of the video.

    export default class Models {
    	constructor(gl_app) {
            ...
            this.createGrid()
        }
    
        createGrid() {
            ...
    		// Grid parameters
            for (let x = 0; x < this.gridSize; x++) {
                for (let y = 0; y < this.gridSize; y++) {
                    
                    const geometry = new THREE.BoxGeometry(0.5, 0.5, 0.5);
                    
    				// Create individual geometry for each box to have unique UV mapping
    				// Calculate UV coordinates for this specific box
    				const uvX = x / this.gridSize
    				const uvY = y / this.gridSize // no flip needed here: grid Y and texture V both increase upward
    				const uvWidth = 1 / this.gridSize
    				const uvHeight = 1 / this.gridSize
    				
    				// Get the UV attribute
    				const uvAttribute = geometry.attributes.uv
    				const uvArray = uvAttribute.array
    				
    				// Remap this cube's UVs so every face samples the same
    				// sub-region of the video (the cube's cell in the atlas)
    				for (let i = 0; i < uvArray.length; i += 2) {
    					// Map all faces to the same UV region for consistency
    					uvArray[i] = uvX + (uvArray[i] * uvWidth)     // U coordinate
    					uvArray[i + 1] = uvY + (uvArray[i + 1] * uvHeight) // V coordinate
    				}
    				
    				// Mark the attribute as needing update
    				uvAttribute.needsUpdate = true
                    ...
                }
            }
            ...
        }
        ...
    }

    The UV window for cell (x, y)
    For a grid of size N = gridSize:

    • UV origin of this cell:
      – uvX = x / N
      – uvY = y / N
    • UV size of each cell:
      – uvWidth = 1 / N
      – uvHeight = 1 / N

    Result: every face of the box now samples the same sub-region of the video. Mapping all faces to the same window, rather than only the front face, keeps the cube looking consistent from any viewing angle.
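    For example, with N = 10 the cell at (x, y) = (3, 7) gets uvX = 0.3 and uvY = 0.7 with a 0.1 × 0.1 window, so that cube samples the video region from U = 0.3 to 0.4 and V = 0.7 to 0.8.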

    Creating the Mask

    Next, we read a mask image through a small canvas; the mask determines which cubes are visible in the grid.

    • Black (dark) pixels → cube is created.
    • White (light) pixels → cube is skipped.

    To do this, we need to:

    1. Load the mask image.
    2. Scale it down to match our grid size.
    3. Read its pixel color data.
    4. Pass that data into the grid-building step.
    export default class Models {
    	constructor(gl_app) {
            ...
    		this.createMask()
        }
    
    	createMask() {
            // Create a canvas to read mask pixel data
            const canvas = document.createElement('canvas')
            const ctx = canvas.getContext('2d')
    
            const maskImage = new Image()
            maskImage.crossOrigin = 'anonymous'
            maskImage.onload = () => {
                // Get original image dimensions to preserve aspect ratio
                const originalWidth = maskImage.width
                const originalHeight = maskImage.height
                const aspectRatio = originalWidth / originalHeight
    
                // Calculate grid dimensions based on aspect ratio
                if (aspectRatio > 1) {
                    // Image is wider than tall
                    this.gridWidth = this.gridSize
                    this.gridHeight = Math.round(this.gridSize / aspectRatio)
                } else {
                    // Image is taller than wide or square
                    this.gridHeight = this.gridSize
                    this.gridWidth = Math.round(this.gridSize * aspectRatio)
                }
    
                canvas.width = this.gridWidth
                canvas.height = this.gridHeight
                ctx.drawImage(maskImage, 0, 0, this.gridWidth, this.gridHeight)
    
                const imageData = ctx.getImageData(0, 0, this.gridWidth, this.gridHeight)
                this.data = imageData.data
    			this.createGrid()
    		}
    
            maskImage.src = '../images/heart.jpg'
    	}
        ...
    }

    Match mask resolution to grid

    • We don’t want to stretch the mask — this keeps it proportional to the grid.
    • gridWidth and gridHeight are how many mask pixels we’ll sample horizontally and vertically.
    • This matches the logical cube grid, so each cube can correspond to one pixel in the mask.

    Applying the Mask to the Grid

    Let’s combine mask-based filtering with custom UV mapping to decide where boxes should appear in the grid, and how each box maps to a section of the projected video.
    Here’s the concept step by step:

    • Loops through every potential (x, y) position in a virtual grid.
    • At each grid cell, it will decide whether to place a box and, if so, how to texture it.
    • flippedY: Flips the Y-axis because image coordinates start from the top-left, while the grid’s origin starts from the bottom-left.
    • pixelIndex: Locates the pixel in the this.data array.
    • Each pixel stores 4 values: red, green, blue, alpha.
    • Extracts the R, G, and B values for that mask pixel.
    • Brightness is calculated as the average of R, G, B.
    • If the pixel is dark enough (brightness < 128), a cube will be created.
    • White pixels are ignored → those positions stay empty.
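    As a quick worked example: with gridWidth = 10 and gridHeight = 10, the cell (x, y) = (2, 3) samples flippedY = 10 − 1 − 3 = 6, so pixelIndex = (6 × 10 + 2) × 4 = 248, and this.data[248..251] holds that pixel’s RGBA values.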
    export default class Models {
    	constructor(gl_app) {
            ...
    		this.createMask()
        }
    
    	createMask() {
            ...
    	}
    
        createGrid() {
            ...
            for (let x = 0; x < this.gridSize; x++) {
                for (let y = 0; y < this.gridSize; y++) {
                    
                    const geometry = new THREE.BoxGeometry(0.5, 0.5, 0.5);
    
                    // Get pixel color from mask (sample at grid position)
                    // Flip Y coordinate to match image orientation
                    const flippedY = this.gridHeight - 1 - y
                    const pixelIndex = (flippedY * this.gridWidth + x) * 4
                    const r = this.data[pixelIndex]
                    const g = this.data[pixelIndex + 1]
                    const b = this.data[pixelIndex + 2]
    
                    // Calculate brightness (0 = black, 255 = white)
                    const brightness = (r + g + b) / 3
    
                    // Only create box if pixel is dark (black shows, white hides)
                    if (brightness < 128) { // Threshold for black vs white
    
                        // Create individual geometry for each box to have unique UV mapping
                        // Calculate UV coordinates for this specific box
                        const uvX = x / this.gridSize
                        const uvY = y / this.gridSize // no flip needed here: grid Y and texture V both increase upward
                        const uvWidth = 1 / this.gridSize
                        const uvHeight = 1 / this.gridSize
                        
                        // Get the UV attribute
                        const uvAttribute = geometry.attributes.uv
                        const uvArray = uvAttribute.array
                        
                        // Remap this cube's UVs so every face samples the same
                        // sub-region of the video (the cube's cell in the atlas)
                        for (let i = 0; i < uvArray.length; i += 2) {
                            // Map all faces to the same UV region for consistency
                            uvArray[i] = uvX + (uvArray[i] * uvWidth)     // U coordinate
                            uvArray[i + 1] = uvY + (uvArray[i + 1] * uvHeight) // V coordinate
                        }
                        
                        // Mark the attribute as needing update
                        uvAttribute.needsUpdate = true
                        
                        const mesh = new THREE.Mesh(geometry, this.material);
    
                        mesh.position.x = (x - (this.gridSize - 1) / 2) * this.spacing;
                        mesh.position.y = (y - (this.gridSize - 1) / 2) * this.spacing;
                        mesh.position.z = 0;
    
                        this.group.add(mesh);
                    }
                }
            }
            ...
        }
        ...
    }

    Further steps

    • UV mapping is the process of mapping 2D video pixels onto 3D geometry.
    • Each cube gets its own unique UV coordinates corresponding to its position in the grid.
    • uvWidth and uvHeight are how much of the video texture each cube covers.
    • Modifies the cube’s uv attribute so all faces display the exact same portion of the video.

    Here is the result with the mask applied:

    Adding Some Depth and Motion to the Grid

    Adding subtle motion along the Z-axis brings the otherwise static grid to life, making the projection feel more dynamic and dimensional.

    update() {
        if (this.is_ready) {
            this.group.children.forEach((model, index) => {
                model.position.z = Math.sin(Date.now() * 0.005 + index * 0.1) * 0.6
            })
        }
    }
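    A small optional variation: if your render loop already tracks elapsed time (for example with THREE.Clock), you could drive the wave from that instead of Date.now(), which makes the motion easier to pause or time-scale later. A sketch, assuming elapsed is in seconds:

    update(elapsed) {
        if (this.is_ready) {
            this.group.children.forEach((model, index) => {
                // same wave as above (~5 rad/s), driven by the loop's clock instead of Date.now()
                model.position.z = Math.sin(elapsed * 5 + index * 0.1) * 0.6
            })
        }
    }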

    It’s Time for Multiple Grids

    Up until now we’ve been working with a single mask and a single video, but the real fun begins when we start layering multiple projections together. By combining different mask images with their own video sources, we can create a collection of independent grids that coexist in the same scene. Each grid can carry its own identity and motion, opening the door to richer compositions, transitions, and storytelling effects.

    1. A Playlist of Masks and Videos

    export default class Models {
    	constructor(gl_app) {
            ...
            this.grids_config = [
                {
                    id: 'heart',
                    mask: `heart.jpg`,
                    video: `fruits_trail_squared-transcode.mp4`
                },
                {
                    id: 'codrops',
                    mask: `codrops.jpg`,
                    video: `KinectCube_1350-transcode.mp4`
                },
                {
                    id: 'smile',
                    mask: `smile.jpg`,
                    video: `infinte-grid_squared-transcode.mp4`
                },
            ]
            this.grids = [] // initialise the list before the async mask loads kick off
            this.grids_config.forEach((config, index) => this.createMask(config, index))
        }
    ...
    }

    Instead of one mask and one video, we now have a list of mask-video pairs.

    Each object defines:

    • id → name/id for each grid.
    • mask → the black/white image that controls which cubes appear.
    • video → the texture that will be mapped onto those cubes.

    This allows you to have multiple different projections in the same scene.

    2. Looping Over All Grids

    Once we have our playlist of mask–video pairs defined, the next step is to go through each item and prepare it for rendering.

    For every configuration in the list we call createMask(config, index), which takes care of loading the mask image, reading its pixels, and then passing the data along to build the corresponding grid.

    At the same time, we keep track of all the grids by storing them in a this.grids array, so later on we can animate them, show or hide them, and switch between them interactively.

    3. createMask(config, index)

    createMask(config, index) {
        ...
        maskImage.onload = () => {
            ...
            this.createGrid(config, index)
        }
        maskImage.src = `../images/${config.mask}`
    }
    • Loads the mask image for the current grid.
    • When the image is loaded, runs the mask pixel-reading logic (as explained before) and then calls createGrid() with the same config and index.
    • The mask determines which cubes are visible for this specific grid.

    4. createVideoTexture(config, index)

    createVideoTexture(config, index) {
        this.video = document.createElement('video')
        this.video.src = `../videos/${config.video}`
        ...
    }
    • Creates a <video> element using the specific video file for this grid.
    • The video is then converted to a THREE.VideoTexture and assigned as the material for the cubes in this grid.
    • Each grid can have its own independent video playing.
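    Note that this.video and this.material are overwritten on every call; that works here because each grid’s meshes are created synchronously right after its material. If you’d rather keep explicit per-grid references, a small variation could look like this (the materials map is an illustrative addition, not part of the original code):

    createVideoTexture(config, index) {
        const video = document.createElement('video')
        video.src = `../videos/${config.video}`
        video.crossOrigin = 'anonymous'
        video.loop = true
        video.muted = true
        video.play()

        const texture = new THREE.VideoTexture(video)
        texture.colorSpace = THREE.SRGBColorSpace

        // Store one material per grid id so later grids never clobber earlier ones
        this.materials = this.materials || {}
        this.materials[config.id] = new THREE.MeshBasicMaterial({ map: texture })
    }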

    5. createGrid(config, index)

    createGrid(config, index) {
            this.createVideoTexture(config, index)
            const grid_group = new THREE.Group()
            this.group.add(grid_group)
    
            for (let x = 0; x < this.gridSize; x++) {
                for (let y = 0; y < this.gridSize; y++) {
                        ...
                        grid_group.add(mesh);
                }
            }
            grid_group.name = config.id
            this.grids.push(grid_group);
            grid_group.position.z = - 2 * index 
            ...
        }
    • Creates a new THREE.Group for this grid so all its cubes can be moved together.
    • This keeps each mask/video projection isolated.
    • grid_group.name = config.id: gives the group a name so it can be looked up later when switching grids.
    • this.grids.push(grid_group): Stores this grid in an array so you can control it later (e.g., show/hide, animate, change videos).
    • grid_group.position.z: Offsets each grid further back in Z-space so they don’t overlap visually.

    And here is the result for the multiple grids:

    And finally: Interaction & Animations

    Let’s start by creating a simple UI with some buttons on our HTML:

    <ul class="btns">
    	<li class="btns__item">
    		<button class="active" data-id="heart">
    			...
    		</button>
    	</li>
    	<li class="btns__item">
    		<button data-id="codrops">
    			...
    		</button>
    	</li>
    	<li class="btns__item">
    		<button data-id="smile">
    			...
    		</button>
    	</li>
    </ul>

    We’ll also add a data-current="heart" attribute to our canvas element; we’ll use it to change the background-color depending on which button was clicked.

    <canvas id="sketch" data-current="heart"></canvas>

    Let’s now create some colors for each grid using CSS:

    [data-current="heart"] {
    	background-color: #e19800;
    }
    
    [data-current="codrops"] {
    	background-color: #00a00b;
    }
    
    [data-current="smile"] {
    	background-color: #b90000;
    }

    Time to create the interactions:

    createGrid(config, index) {
        ...
        this.initInteractions()
    }
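    One thing to watch: createGrid() runs once per config, so in practice you’ll want to guard this call (for example, only initialise the interactions once, after the last grid has been built), otherwise the click listeners get bound multiple times.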

    1. this.initInteractions()

    initInteractions() {
        this.current = 'heart'
        this.old = null
        this.is_animating = false
        this.duration = 1
    
        this.DOM = {
            $btns: document.querySelectorAll('.btns__item button'),
            $canvas: document.querySelector('canvas')
        }
        this.grids.forEach(grid => {
            if(grid.name != this.current) {
                grid.children.forEach(mesh => mesh.scale.setScalar(0))
            }
        })
        this.bindEvents()
    }
    • this.current → The currently active grid ID. Starts as "heart" so the "heart" grid will be visible by default.
    • this.old → Used to store the previous grid ID when switching between grids.
    • this.is_animating → Boolean flag to prevent triggering a new transition while one is still running.
    • this.duration → How long the animation takes (in seconds).
    • $btns → Selects all the buttons inside .btns__item. Each button corresponds to a grid you can switch to.
    • $canvas → Selects the main <canvas> element where the Three.js scene is rendered.

    Loops through all the grids in the scene.

    • If the grid is not the current one (grid.name != this.current),
    • → It sets all of that grid’s cubes (mesh) to scale = 0 so they are invisible at the start.
    • This means only the "heart" grid will be visible when the scene first loads.

    2. bindEvents()

    bindEvents() {
        this.DOM.$btns.forEach(($btn, index) => {
            $btn.addEventListener('click', () => {
                if (this.is_animating) return
                this.is_animating = true
                this.DOM.$btns.forEach(($btn, btnIndex) => {
                    btnIndex === index ? $btn.classList.add('active') : $btn.classList.remove('active')
                })
                this.old = this.current
                this.current = `${$btn.dataset.id}`
                this.revealGrid()
                this.hideGrid()
            })
        })
    }

    This bindEvents() method wires up the UI buttons so that clicking one will trigger switching between grids in the 3D scene.

    • For each button, attach a click event handler.
    • If an animation is already running, do nothing — this prevents starting multiple transitions at the same time.
    • Sets is_animating to true so no other clicks are processed until the current switch finishes.

    Loops through all buttons again:

    • If this is the clicked button → add the active CSS class (highlight it).
    • Otherwise → remove the active class (unhighlight).
    • this.old → keeps track of which grid was visible before the click.
    • this.current → updates to the new grid’s ID based on the button’s data-id attribute.
      • Example: if the button has data-id="heart", this.current becomes "heart".

    Calls two separate methods:

    • revealGrid() → makes the newly selected grid appear (by scaling its cubes from 0 to full size).
    • hideGrid() → hides the previous grid (by scaling its cubes back down to 0).

    3. revealGrid() & hideGrid()

    revealGrid() {
        // Filter the current grid based on this.current value
        const grid = this.grids.find(item => item.name === this.current);
        
        this.DOM.$canvas.dataset.current = `${this.current}` 
        const tl = gsap.timeline({ delay: this.duration * 0.25, defaults: { ease: 'power1.out', duration: this.duration } })
        grid.children.forEach((child, index) => {
            tl
                .to(child.scale, { x: 1, y: 1, z: 1, ease: 'power3.inOut' }, index * 0.001)
                .to(child.position, { z: 0 }, '<')
        })
    }
    
    hideGrid() {
        // Filter the current grid based on this.old value
        const grid = this.grids.find(item => item.name === this.old);
        const tl = gsap.timeline({
            defaults: { ease: 'power1.out', duration: this.duration },
            onComplete: () => { this.is_animating = false }
        })
        grid.children.forEach((child, index) => {
            tl
                .to(child.scale, { x: 0, y: 0, z: 0, ease: 'power3.inOut' }, index * 0.001)
                .to(child.position, {
                    z: 6, onComplete: () => {
                        gsap.set(child.scale, { x: 0, y: 0, z: 0 })
                        gsap.set(child.position, { z: - 6 })
                    }
                }, '<')
        })
    }

    And that is it! A fully animated and interactive Video Projection Slider, made with hundreds of small cubes (meshes).

    ⚠️ Performance considerations

    The approach used in this tutorial is the simplest and most digestible way to apply the projection concept. However, it can create a lot of draw calls: 100–1,000 cubes are usually fine, but tens of thousands can get slow. If you need a more detailed grid or many more meshes, consider InstancedMesh and shaders.
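    To give an idea of the direction, here is a minimal sketch of building the same cube layout as a single THREE.InstancedMesh (one draw call per grid). This is an illustrative sketch, not part of the original demo; per-cube UV windows would additionally need an instanced attribute plus a small shader tweak, which is beyond its scope:

    createInstancedGrid(material) {
        const geometry = new THREE.BoxGeometry(0.5, 0.5, 0.5)
        const count = this.gridSize * this.gridSize
        const instanced = new THREE.InstancedMesh(geometry, material, count)
        const dummy = new THREE.Object3D()

        let i = 0
        for (let x = 0; x < this.gridSize; x++) {
            for (let y = 0; y < this.gridSize; y++) {
                // same centered layout as the per-mesh grid above
                dummy.position.set(
                    (x - (this.gridSize - 1) / 2) * this.spacing,
                    (y - (this.gridSize - 1) / 2) * this.spacing,
                    0
                )
                dummy.updateMatrix()
                instanced.setMatrixAt(i++, dummy.matrix)
            }
        }
        instanced.instanceMatrix.needsUpdate = true
        this.group.add(instanced)
    }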

    Going further

    This is a fully functional and versatile concept, so it opens up many possibilities. It can be applied in some really cool ways: scrollable storytelling, exhibition simulations, intro animations, portfolio showcases, and more.

    Here are some links for you to get inspired:

    Final Words

    I hope you’ve enjoyed this tutorial. Give it a try in your own projects, or just explore the possibilities by changing the grid parameters, masks, and videos.

    Speaking of the videos, the ones used in this example are screen recordings of the Creative Code lessons on my web animations platform, vwlab.io, where you can learn how to create more interactions and animations like this one.

    Come join us, you will be more than welcome! ☺️❤️



    Source link

  • 7 Must-Know GSAP Animation Tips for Creative Developers

    7 Must-Know GSAP Animation Tips for Creative Developers


    Today we’re going to go over some of my favorite GSAP techniques that can bring you great results with just a little code.

    Although the GSAP documentation is among the best, I find that developers often overlook some of GSAP’s greatest features or perhaps struggle with finding their practical application. 

    The techniques presented here will be helpful to GSAP beginners and seasoned pros. It is recommended that you understand the basics of loading GSAP and working with tweens, timelines and SplitText. My free beginner’s course GSAP Express will guide you through everything you need for a firm foundation.

    If you prefer a video version of this tutorial, you can watch it here:

    https://www.youtube.com/watch?v=EKjYspj9MaM

    Tip 1: SplitText Masking

    GSAP’s SplitText just went through a major overhaul. It has 14 new features and weighs in at roughly 7kb.

    SplitText allows you to split HTML text into characters, lines, and words. It has powerful features to support screen-readers, responsive layouts, nested elements, foreign characters, emoji and more.

    My favorite feature is its built-in support for masking (available in SplitText version 3.13+).

    Prior to this version of SplitText you would have to manually nest your animated text in parent divs that have overflow set to hidden or clip in the CSS.

    SplitText now does this for you by creating “wrapper divs” around the elements that we apply masking to.

    Basic Implementation

    The code below will split the h1 tag into chars and also apply a mask effect, which means the characters will not be visible when they are outside their bounding box.

    const split = SplitText.create("h1", {
    	type:"chars",
    	mask:"chars"
    })
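    To see the mask in action you need to move the characters outside their boxes; a minimal tween you might pair with the split above (illustrative values) is:

    // Slide the characters up from below their masked wrappers
    gsap.from(split.chars, {
    	yPercent: 100,
    	stagger: 0.03,
    	duration: 0.6,
    	ease: "power2.out"
    })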

    Demo: Split Text Masking (Basic)

    See the Pen
    Codrops Tip 1: Split Text Masking – Basic by Snorkl.tv (@snorkltv)
    on CodePen.

    This simple implementation works great and is totally fine.

    However, if you inspect the DOM you will see that 2 new <div> elements are created for each character:

    • an outer div with overflow:clip
    • an inner div with text 

    With 17 characters to split, this creates 34 divs, as shown in the simplified DOM structure below:

    <h1>SplitText Masking
    	<div> <!-- char wrapper with overflow:clip -->
    		<div>S</div>
    	</div>
    	<div> <!-- char wrapper with overflow:clip -->
    		<div>p</div>
    	</div>
    	<div> <!-- char wrapper with overflow:clip -->
    		<div>l</div>
    	</div>
    	<div> <!-- char wrapper with overflow:clip -->
    		<div>i</div>
    	</div>
    	<div> <!-- char wrapper with overflow:clip -->
    		<div>t</div>
    	</div>	
    	...
    </h1>

    The More Efficient Approach

    If you want to minimize the number of DOM elements created, you can split your text into characters and lines. Then you can just set the masking on the lines element like so:

    const split = SplitText.create("h1", {
    	type:"chars, lines",
    	mask:"lines"
    })

    Demo: Split Text Masking (Better with chars and lines)

    See the Pen
    Codrops Tip 1: Split Text Masking – Better with chars and lines by Snorkl.tv (@snorkltv)
    on CodePen.

    Now if you inspect the DOM you will see that there are:

    • 1 line wrapper div with overflow:clip
    • 1 line div
    • 1 div per character 

    With 17 characters to split, this creates only 19 divs in total:

    <h1>SplitText Masking
    	<div> <!-- line wrapper with overflow:clip -->
    		<div> <!-- line -->
    			<div>S</div>
    			<div>p</div>
    			<div>l</div>
    			<div>i</div>
    			<div>t</div>
    			...
    		</div> 
    	</div> 
    </h1>

    Tip 2: Setting the Stagger Direction

    From my experience 99% of stagger animations go from left to right. Perhaps that’s just because it’s the standard flow of written text.

    However, GSAP makes it super simple to add some animation pizzazz to your staggers.

    To change the direction from which staggered animations start, you need to use the object syntax for the stagger value.

    Normal Stagger

    Typically the stagger value is a single number which specifies the amount of time between the start of each target element’s animation.

    gsap.to(targets, {x:100, stagger:0.2}) // 0.2 seconds between the start of each animation

    Stagger Object

    By using the stagger object we can specify multiple parameters to fine-tune our staggers such as each, amount, from, ease, grid and repeat. See the GSAP Stagger Docs for more details.
    Our focus today will be on the from property which allows us to specify from which direction our staggers should start.

    gsap.to(targets, {
      x: 100,
      stagger: {
        each: 0.2,     // amount of time between the start of each animation
        from: "center" // animate from the center of the targets array
      }
    })

    The from property in the stagger object can be any one of these string values

    • “start” (default)
    • “center”
    • “end”
    • “edges”
    • “random”
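    For example, to start the stagger from both outer edges and meet in the middle (an illustrative snippet, separate from the demos below):

    gsap.to(targets, {
    	y: -20,
    	stagger: { each: 0.1, from: "edges" }
    })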

    Demo: Stagger Direction Timeline

    In this demo the characters animate in from center and then out from the edges.

    See the Pen
    Codrops Tip 2: Stagger Direction Timeline by Snorkl.tv (@snorkltv)
    on CodePen.

    Demo: Stagger Direction Visualizer

    See the Pen
    Codrops Tip 2: Stagger Direction Visualizer by Snorkl.tv (@snorkltv)
    on CodePen.

    Tip 3: Wrapping Array Values

    The gsap.utils.wrap() function allows you to pull values from an array and apply them to multiple targets. This is great for allowing elements to animate in from opposite directions (like a zipper), assigning a set of colors to multiple objects and many more creative applications.

    Setting Colors From an Array

    I love using gsap.utils.wrap() with a set() to instantly manipulate a group of elements.

    // split the header
    const split = SplitText.create("h1", {
    	type:"chars"
    })
    
    //create an array of colors
    const colors = ["lime", "yellow", "pink", "skyblue"]
    
    // set each character to a color from the colors array
    gsap.set(split.chars, {color:gsap.utils.wrap(colors)})

    When the last color in the array (skyblue) is chosen GSAP will wrap back to the beginning of the array and apply lime to the next element.

    Animating from Alternating Directions

    In the code below each target will animate in from alternating y values of -50 and 50. 

    Notice that you can define the array directly inside of the wrap() function.

    const tween = gsap.from(split.chars, {
    	y:gsap.utils.wrap([-50, 50]),
    	opacity:0,
    	stagger:0.1
    }) 

    Demo: Basic Wrap

    See the Pen
    Codrops Tip 3: Basic Wrap by Snorkl.tv (@snorkltv)
    on CodePen.

    Demo: Fancy Wrap

    In the demo below there is a timeline that creates a sequence of animations that combine stagger direction and wrap. Isn’t it amazing what GSAP allows you to do with just a few simple shapes and a few lines of code?

    See the Pen
    Codrops Tip 3: Fancy Wrap by Snorkl.tv (@snorkltv)
    on CodePen.

    As you watch the animation be sure to go through the GSAP code to see which tween is running each effect. 

    I strongly recommend editing the animation values and experimenting.

    Tip 4: Easy Randomization with the “random()” String Function

    GSAP has its own random utility function gsap.utils.random() that lets you tap into convenient randomization features anywhere in your JavaScript code.

    // generate a random number between 0 and 450
    const randomNumber = gsap.utils.random(0, 450)

    To randomize values in animations we can use the random string shortcut which saves us some typing.

    //animate each target to a random x value between 0 and 450
    gsap.to(targets, {x:"random(0, 450)"})
    
    //the third parameter sets the value to snap to
    gsap.to(targets, {x:"random(0, 450, 50)"}) // random number will be an increment of 50
    
    //pick a random value from an array for each target
    gsap.to(targets, {fill:"random([pink, yellow, orange, salmon])"})

    Demo: Random String

    See the Pen
    Codrops Tip 4: Random String by Snorkl.tv (@snorkltv)
    on CodePen.

    TIP 5: repeatRefresh:true

    This next tip appears to be pure magic as it allows our animations to produce new results each time they repeat.

    GSAP internally stores the start and end values of an animation the first time it runs. This is a performance optimization so that each time it repeats there is no additional work to do. By default repeating tweens always produce the exact same results (which is a good thing).

    When dealing with dynamic or function-based values such as those generated with the random string syntax “random(0, 100)” we can tell GSAP to record new values on repeat by setting repeatRefresh:true

    You can set repeatRefresh:true in the config object of a single tween OR on a timeline.

    //use on a tween
    gsap.to(target, {x:"random(50, 100)", repeat:10, repeatRefresh:true})
    
    //use on a timeline
    const tl = gsap.timeline({repeat:10, repeatRefresh:true})

    Demo: repeatRefresh Particles

    The demo below contains a single timeline with repeatRefresh:true.

    Each time it repeats the circles get assigned a new random scale and a new random x destination.

    Be sure to study the JS code in the demo. Feel free to fork it and modify the values.

    See the Pen
    Codrops Tip 5: repeatRefresh Particles by Snorkl.tv (@snorkltv)
    on CodePen.

    TIP 6: Tween The TimeScale() of an Animation

    GSAP animations have getter / setter values that allow you to get and set properties of an animation.

    Common Getter / Setter methods:

    • paused() gets or sets the paused state
    • duration() gets or sets the duration
    • reversed() gets or sets the reversed state
    • progress() gets or sets the progress
    • timeScale() gets or sets the timeScale

    Getter / Setter Methods in Use

    animation.paused(true) // sets the paused state to true
    console.log(animation.paused()) // gets the paused state
    console.log(!animation.paused()) // gets the inverse of the paused state

    See it in Action

    In the demo from the previous tip there is code that toggles the paused state of the particle effect.

    //click to pause
    document.addEventListener("click", function(){
    	tl.paused(!tl.paused()) 
    })

    This code means “every time the document is clicked the timeline’s paused state will change to the inverse (or opposite) of what it currently is”.

    If the animation is paused, it will become “unpaused” and vice-versa.

    This works great, but I’d like to show you a trick for making it less abrupt and smoothing it out.

    Tweening Numeric Getter/Setter Values

    We can’t tween the paused() state as it is either true or false.

    Where things get interesting is that we can tween numeric getter / setter properties of animations like progress() and timeScale().

    timeScale() represents a factor of an animation’s playback speed.

    • timeScale(1): playback at normal speed
    • timeScale(0.5): playback at half speed
    • timeScale(2): playback at double speed

    Setting timeScale()

    //create an animation with a duration of 5 seconds
    const animation = gsap.to(box, {x:500, duration:5})
    
    //playback at half-speed making it take 10 seconds to play
    animation.timeScale(0.5)

    Tweening timeScale()

    const animation = gsap.to(box, {x:500, duration:5}) // create a basic tween
    
    // Over the course of 1 second reduce the timeScale of the animation to 0.5
    gsap.to(animation, {timeScale:0.5, duration:1})

    Dynamically Tweening timeScale() for smooth pause and un-pause

    Instead of abruptly changing the paused state of the animation, as the particle demo above does, we are now going to tween the timeScale() for a MUCH smoother effect.

    Demo: Particles with timeScale() Tween

    See the Pen
    Codrops Tip 6: Particles with timeScale() Tween by Snorkl.tv (@snorkltv)
    on CodePen.

    Click anywhere in the demo above to see the particles smoothly slow down and speed up on each click.

    The code below basically says “if the animation is currently playing then we will slow it down, or else we will speed it up”. Every time a click happens, the isPlaying value toggles between true and false so that it can be updated for the next click.
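
    A minimal sketch of that handler might look like the following (the tl and isPlaying names are illustrative; check the demo for the exact code):

    let isPlaying = true;
    
    document.addEventListener("click", function(){
    	// tween timeScale towards 0 to glide to a stop, or back to 1 to resume full speed
    	gsap.to(tl, {timeScale: isPlaying ? 0 : 1, duration:1});
    	isPlaying = !isPlaying;
    })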

    Tip 7: GSDevTools Markers and Animation IDs

    Most of the demos in this article have used GSDevTools to help us control our animations. When building animations I just love being able to scrub at my own pace and study the sequencing of all the moving parts.

    However, there is more to this powerful tool than just scrubbing, playing and pausing.

    Markers

    The in and out markers allow us to loop ANY section of an animation. As an added bonus, GSDevTools remembers the previous position of the markers, so each time we reload, the animation will start and end at the same points.

    This makes it very easy to loop a particular section and study it.

    Image from GSDevTools Docs

    Markers are a huge advantage when building animations longer than 3 seconds.

    To explore, open the Fancy Wrap() demo in a new window, move the markers, and reload.

    Important: The markers are only available on screens wider than 600px. On small screens the UI is minimized to only show basic controls.

    Setting IDs for the Animation Menu

    The animation menu allows us to navigate to different sections of our animation based on an animation id. When dealing with long-form animations this feature is an absolute life saver.

    Since GSAP’s syntax makes creating complex sequences a breeze, it is not uncommon to find yourself working on animations that are beyond 10, 20 or even 60 seconds!

    To set an animation id:

    const tl = gsap.timeline({id:"fancy"})
    
    //Add the animation to GSDevTools based on variable reference
    GSDevTools.create({animation:tl})
    
    //OR add the animation to GSDevTools based on its id
    GSDevTools.create({animation:"fancy"})

    With the code above the name “fancy” will display in GSDevTools.

    Although you can use the id with a single timeline, this feature is most helpful when working with nested timelines as discussed below.

    Demo: GSAP for Everyone

    See the Pen
    Codrops Tip 7: Markers and Animation Menu by Snorkl.tv (@snorkltv)
    on CodePen.

    This demo is 26 seconds long and has 7 child timelines. Study the code to see how each timeline has a unique id that is displayed in the animation menu.

    Use the animation menu to navigate to and explore each section.

    Important: The animation menu is only available on screens wider than 600px.

    Hopefully you can see how useful markers and animation ids can be when working with these long-form, hand-coded animations!

    Want to Learn More About GSAP?

    I’m here to help. 

    I’ve spent nearly 5 years archiving everything I know about GSAP in video format spanning 5 courses and nearly 300 lessons at creativeCodingClub.com.

    I spent many years “back in the day” using GreenSock’s ActionScript tools as a Flash developer, and this experience led to me being hired at GreenSock when they switched to JavaScript. My time at GreenSock had me creating countless demos, videos and learning resources.

    Spending years answering literally thousands of questions in the support forums has left me with a unique ability to help developers of all skill levels avoid common pitfalls and get the most out of this powerful animation library.

    It’s my mission to help developers from all over the world discover the joy of animating with code through affordable, world-class training.

    Visit Creative Coding Club to learn more.



    Source link

  • Design as Rhythm and Rebellion: The Work of Enrico Gisana

    Design as Rhythm and Rebellion: The Work of Enrico Gisana


    My name is Enrico Gisana, and I’m a creative director, graphic and motion designer.

    I’m the co-founder of GG—OFFICE, a small independent visual arts studio based in Modica, Sicily. I consider myself a multidisciplinary designer because I bring together different skills and visual languages. I work across analog and digital media, combining graphic design, typography, and animation, often blending these elements through experimental approaches. My design approach aims to push the boundaries of traditional graphic conventions, constantly questioning established norms to explore new visual possibilities.

    My work mainly focuses on branding, typography, and motion design, with a particular emphasis on kinetic typography.

    Between 2017 and 2025, I led numerous graphic and motion design workshops at various universities and art academies in Italy, including Abadir (Catania), Accademia di Belle Arti di Frosinone, Accademia di Belle Arti di Roma, CFP Bauer (Milan), and UNIRSM (San Marino). Since 2020, I’ve been teaching motion design at Abadir Academy in Catania, and since 2025, kinetic typography at CFP Bauer in Milan.

    Featured work

    TYPEXCEL — Variable font

    I designed an online half-day workshop for high school students on the occasion of an open day at the Academy of Design and Visual Communication Abadir, held in 2021.

    The goal of this workshop was to create a first contact with graphic design, but most of all with typography, using an Excel spreadsheet as a modular grid composed of editable and variable cells, instead of professional software, which requires specific knowledge.

    The cell pattern allowed the students to create letters, icons, and glyphs. It was a stimulating exercise that helped them discover and develop their own design and creative skills.

    This project was published in Slanted Magazine N°40 “Experimental Type”.

    DEMO Festival

    DEMO Festival (Design in Motion Festival) is one of the world’s most prominent motion design festivals, founded by the renowned Dutch studio Studio Dumbar. The festival takes over the entire digital screen network of Amsterdam Central Station, transforming public space into a 24-hour exhibition of cutting-edge motion work from around the globe.

    I’ve had the honor of being selected multiple times to showcase my work at DEMO: in 2019 with EYE SEQUENCE; in 2022 with ALIEN TYPE and VERTICAL; and again in 2025 with ALIEN TRIBE, HELLOCIAOHALLOSALUTHOLA, and FREE JAZZ.

    In the 2025 edition, ALIEN TRIBE and HELLOCIAOHALLOSALUTHOLA were also selected for the Special Screens program, which extended the festival’s presence beyond the Netherlands. These works were exhibited in digital spaces across cities including Eindhoven, Rotterdam, Tilburg, Utrecht, Hamburg, and Düsseldorf, reaching a broader international audience.

    MARCO FORMENTINI

    My collaboration with Italian footwear designer Marco Formentini, based in Amsterdam, began with the creation of his visual identity and gradually expanded into other areas, including apparel experiments and the design of his personal website.

    Each phase of the project reflects his eclectic and process-driven approach to design, while also allowing me to explore form, texture, and narrative through different media.

    Below is a closer look at the three main outputs of this collaboration: logo, t-shirt, and website.

    Logo

    Designed for Italian footwear designer Marco Formentini, this logo reflects his broad, exploratory approach to design. Rather than sticking to a traditional monogram, I fused the letters “M” and “F” into a single, abstract shape, something that feels more like a symbol than a set of initials. The result is a wild, otherworldly mark that evokes movement, edge, and invention, mirroring Marco’s ability to shift across styles and scales while always keeping his own perspective.

    Website

    I conceived Marco Formentini’s website as a container, a digital portfolio without a fixed structure. It gathers images, sketches, prototypes, and renderings not through a linear narrative but through a visual flow that embraces randomness.

    The layout is split into two vertical columns, each filled with different types of visual content. By moving the cursor left or right, the columns dynamically resize, allowing the user to shift focus and explore the material in an intuitive and fluid way. This interactive system reflects Marco’s eclectic approach to footwear design, a space where experimentation and process take visual form.

    Website development by Marco Buccolo.

    Check it out: marco-formentini.com

    T—Shirt

    Shortly after working on his personal brand, I shared with Marco Formentini a few early graphic proposals for a potential t-shirt design, while he happened to be traveling through the Philippines with his friend Jo.

    Without waiting for a full release, he spontaneously had a few pieces printed at a local shop he stumbled upon during the trip, mixing one of the designs on the front with a different proposal on the back. An unexpected real-world test run for the identity, worn into the streets before even hitting the studio.

    Ditroit

    This poster was created to celebrate the 15th anniversary of Ditroit, a motion design and 3D studio based in Milan.

    At the center is an expressive “15”, a tribute to the studio’s founder, a longtime friend and former graffiti companion. The design reconnects the present with our shared creative roots and the formative energy of those early years.

    Silver on black: a color pairing rooted in our early graffiti experiments, reimagined here to celebrate fifteen years of visual exploration.

    Tightype

    A series of typographic animations I created for the launch of Habitas, the typeface designed by Tightype and released in 2021.

    The project explores type in motion, not just as a vehicle for content but as a form of visual expression in itself. Shapes bounce, rotate and multiply, revealing the personality of the font through rhythm and movement.

    Jane Machine

    SH SH SH SH is the latest LP from Jane Machine.

    The cover is defined by the central element of the lips, directly inspired by the album’s title. The lips not only mimic the movement of the “sh” sound but also evoke the noise of tearing paper. I amplified this effect through the creative process by first printing a photograph of the lips and then tearing it, introducing a tactile quality that contrasts with and complements the more electronic aesthetic of the colors and typography.

    Background

    I’m a creative director and graphic & motion designer with a strong focus on typography.

    My visual journey started around the age of 12, shaped by underground culture: I was into graffiti, hip hop, breakdancing, and skateboarding.

    As I grew up, I explored other scenes, from punk to tekno, from drum and bass to more experimental electronic music. What always drew me in, beyond the music itself, was the visual world around it: free party flyers, record sleeves, logos, and type everywhere.

    Between 2004 and 2010, I produced tekno music, an experience that deeply shaped my approach to design. That’s where I first learned about timelines, beats, and rhythm, all elements that today are at the core of how I work with motion.

    Art has also played a major role in shaping my visual culture, from the primitive signs of hieroglyphs to Cubism, Dadaism, Russian Constructivism, and the expressive intensity of Antonio Ligabue.

    The aesthetics and attitude of those worlds continue to influence everything I do and how I see things.

    In 2013, I graduated in Graphic Design from IED Milano and started working with various agencies. In 2014, I moved back to Modica, Sicily, where I’m still based today.

    Some of my animation work has been featured at DEMO Festival, the international motion design event curated by Studio Dumbar, in the 2019, 2022, and 2025 editions.

    In 2022, I was published in Slanted Magazine #40 (EXPERIMENTAL TYPE) with TYPEXCEL, Variable font, a project developed for a typography workshop aimed at high school students, entirely built inside an Excel spreadsheet.

    Since 2020, I’ve been teaching Motion Design at Abadir, Academy of Design and Visual Communication in Catania, and in 2025 I started teaching Type in Motion at Bauer in Milan.

    In 2021, together with Francesca Giampiccolo, I founded GG—OFFICE, a small independent visual studio based in Modica, Sicily.

    GG—OFFICE is a design space where branding and motion meet through a tailored and experimental approach. Every project grows from dialogue, evolves through research, and aims to shape contemporary, honest, and visually forward identities.

    In 2025, Francesca and I gave a talk on the theme of madness at Desina Festival in Naples, a wild, fun, and beautifully chaotic experience.

    Design Philosophy

    My approach to design is rooted in thought, I think a lot, as well as in research, rhythm, and an almost obsessive production of drafts.

    Every project is a unique journey where form always follows meaning, and never simply does what the client says.

    This is not about being contrary; it’s about bringing depth, intention and a point of view to the process.

    I channel the raw energy and DIY mindset of the subcultures that shaped me early on. I’m referring to those gritty, visual sound-driven scenes that pushed boundaries and blurred the line between image and sound. I’m not talking about the music itself, but about the visual culture that surrounded it. That spirit still fuels my creative engine today.

    Typography is my playground, not just a visual tool but a way to express structure, rhythm and movement.

    Sometimes I push letterforms to their limit, to the point where they lose readability and become pure visual matter.

    Whether I’m building a brand identity or animating graphics, I’m always exploring new visual languages, narrative rhythms and spatial poetry.

    Tools and Techniques

    I work across analog and digital tools, but most of my design and animation takes shape in Adobe Illustrator, After Effects, InDesign and Photoshop. And sometimes even Excel 🙂 especially when I want to break the rules and rethink typography in unconventional ways.

    I’m drawn to processes that allow for exploration and controlled chaos. I love building visual systems, breaking them apart and reconstructing them with intention.

    Typography, to me, is a living structure, modular, dynamic and often influenced by visual or musical rhythm.

    My workflow starts with in-depth research and a large amount of hand sketching.

    I then digitize the material, print it, manipulate it manually by cutting, collaging and intervening physically, then scan it again and bring it back into the digital space.

    This back-and-forth between mediums helps me achieve a material quality and a sense of imperfection that pure digital work often lacks.

    Inspiration

    Beyond the underground scenes and art movements I mentioned earlier, my inspiration comes from everything around me. I’m a keen observer and deeply analytical. Since I was a kid, I’ve been fascinated by people’s gestures, movements, and subtle expressions.

    For example, when I used to go to parties, I would often stand next to the DJ, not just to watch their technique, but to study their body language, movements, and micro-expressions. Even the smallest gesture can spark an idea.

    I believe inspiration is everywhere. It’s about being present and training your eye to notice the details most people overlook.

    Future Goals

    I don’t have a specific goal or destination. My main aim is to keep doing things well and to never lose my curiosity. For me, curiosity is the fuel that drives creativity and growth, so I want to stay open, keep exploring, and enjoy the process without forcing a fixed outcome.

    Message to Readers

    Design is not art!

    Design is method, planning, and process. However, that method can, and sometimes should, be challenged, as long as you remain fully aware of what you are doing. It is essential that what you create can be reproduced consistently and, depending on the project, works effectively across different media and formats. I always tell my students that you need to know the rules before you can break them. To do good design, you need a lot of passion and a lot of patience.

    Contact



    Source link

  • A Behind-the-Scenes Look at the New Jitter Website

    A Behind-the-Scenes Look at the New Jitter Website



    If Jitter isn’t on your radar yet, it’s a motion design tool for creative teams that makes creating animated content, from social media assets and ads to product animations and interface mockups, easy and fun.

    Think of it as Figma meets After Effects: intuitive, collaborative, and built for designers who want to bring motion into their workflows without the steep learning curve of traditional tools.

    Why We Redesigned Our Website

    Our previous site had served us well, but it also remained mostly unchanged since we launched Jitter nearly two years ago. The old website focused heavily on the product’s features, but didn’t really communicate its value and use cases. In 2025, we decided it was time for a full refresh.

    The main goal? Not just to highlight what Jitter does, but to articulate why it changes the game for motion design.

    We’ve had hundreds of conversations with creative professionals, from freelancers and brand designers to agencies and startups, and heard four key benefits mentioned consistently:

    1. Ease of use
    2. Creativity
    3. Speed
    4. Collaboration

    These became the pillars of the new site experience.

    We also wanted to make room for growth: a more cohesive brand, better storytelling, real-world customer examples, and educational content to help teams get the most out of Jitter.

    Another major shift was in our audience. The first version of the website was speaking to every designer, highlighting simplicity and familiarity. But as the product evolved, it became clear that Jitter shines the most when used collaboratively across teams. The new website reflects that focus.

    Shaping Our Positioning

    We didn’t define our “how, what, and why” in isolation. Throughout 2024, we spoke to dozens of creative teams, studios, and design leaders, and listened closely.

    We used this ongoing feedback to shape the way we talk about Jitter ourselves: which problems it solves, where it fits in the design workflow, and why teams love it. The new website is a direct result of that research.

    At the same time, we didn’t want Jitter to feel too serious or corporate. Even though it’s built for teams, we aimed to keep the brand light, fun, and relatable. Motion design should be exciting, not intimidating, and we wanted that to come through in the way Jitter sounds and feels.

    Designing With Jitter

    We also walked the talk, using Jitter to design all animations and prototype every interaction across the new site.

    From menu transitions to the way cards animate on scroll, all micro-interactions were designed in Jitter. It gave us speed, clarity, and a single source of truth, and eliminated a lot of the back-and-forth in the handoff process.

    Our development partners at Antinomy Studio and Ingamana used Jitter too. They prototyped transitions and UI motion directly in the tool to validate ideas and communicate back to our team. It was great to see developers using motion as a shared language, not a handoff artifact.

    Building Together with Antinomy Studio

    The development of the new site was handled in collaboration with the talented team at Antinomy Studio.

    The biggest technical challenge was the large horizontal scroll experience on the homepage. It needed to feel natural, responsive, and smooth across devices, and maintain high performance without compromising on the visuals.

    The site was built using React and GSAP for complex, timeline-based animations and transitions.

    “The large horizontal scroll was particularly complicated and required significant responsive changes. Instead of defining overly complex timelines where screen width values would change the logic of the animation in JavaScript, we used progress values as CSS variables. This allowed us to use calc() functions to translate and scale elements, while the GSAP timeline only updates values from 0 to 1. So easy to understand and maintain!”

    — Baptiste Briel, Antinomy
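
    As a rough sketch of that approach (not the production code; ScrollTrigger and the selector names here are assumptions), the scrubbed tween only writes a 0-to-1 value into a CSS custom property, and the stylesheet handles the actual transforms with calc():

    gsap.registerPlugin(ScrollTrigger);
    
    // write a single 0–1 progress value into a CSS variable as the user scrolls
    // (assumes --progress: 0 is declared on .horizontal-track in the CSS)
    gsap.to(".horizontal-track", {
    	"--progress": 1,
    	ease: "none",
    	scrollTrigger: {
    		trigger: ".horizontal-track",
    		start: "top top",
    		end: "+=3000",
    		scrub: true
    	}
    });
    
    // in the CSS, elements translate and scale themselves from that variable, e.g.
    // .card { transform: translateX(calc(var(--progress) * -200vw)); }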

    We’ve promoted the use of CSS as much as possible for high-performance hover effects and transitions. We’ve even used the new linear() easing function to bring a bouncy feeling to our CSS animations.

    There’s a great tool created by Jake Archibald for generating spring-like CSS easing functions that you can paste as CSS variables. It’s so much fun to play with, and it’s also something that the Jitter team has implemented in their software, so it was super easy to review and tweak for both design and engineering teams.

    Jitter animations were exported as Lottie files and integrated directly, making the experience dynamic and lightweight. It’s a modern stack that supports our need for speed and flexibility, both in the frontend and behind the scenes.

    — Baptiste Briel, Antinomy
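
    For reference, dropping an exported Lottie file into a page with the lottie-web player generally takes only a few lines (the container selector and file path below are placeholders, not the site’s actual assets):

    import lottie from "lottie-web";
    
    // mount the exported Jitter animation into a container element
    lottie.loadAnimation({
    	container: document.querySelector("#hero-animation"),
    	renderer: "svg",
    	loop: true,
    	autoplay: true,
    	path: "/animations/hero.json"
    });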

    What We Learned

    This redesign taught us a few valuable lessons:

    • Start with benefits, not features. Users don’t care what your product does until they understand how it can help them.
    • Design with your real audience in mind. Jitter for solo designers and Jitter for teams are two different stories. Clarifying our audience helped us craft a stronger, clearer narrative.
    • Prototyping with Jitter helped us move faster, iterate more confidently, and keep design and development in sync.

    We’ve already seen an impact: a sharper brand perception, higher engagement and conversion across all pages, and a new wave of qualified inbound leads from the best brands in the world, including Microsoft, Dropbox, Anthropic, Lyft, Workday, United Airlines, and more. And this is just the beginning.

    What’s Next?

    We see our new website as a constantly evolving platform. In the coming months, we’ll be adding more:

    • Case studies and customer stories
    • Use case pages
    • Learning resources and motion design tutorials
    • Playful experiments and interactive demos

    Our mission remains the same: to make motion design accessible, collaborative, and fun. Our website is now better equipped to carry that message forward.

    Let us know what you think, and if there’s anything you’d love to see next.

    Thanks for reading, and stay in motion 🚀

    Give Jitter a Try

    Get started with Jitter for free and explore 300+ free templates to jumpstart your next project. Once you’re ready to upgrade, get 25% off the first year of paid annual plans with JITTERCODROPS25.



    Source link