Category: Programmers

  • Designer Spotlight: Ivan Ermakov | Codrops



    Hi, I’m Ivan—a Dubai-based designer focused on fintech products and branding. I run Moonsight, where we craft thoughtful digital experiences and sharp visual identities for financial companies around the world.

    Background

    My path into design wasn’t a childhood calling—I wasn’t drawing wireframes at age ten or dreaming of Helvetica (can you imagine XD). I just knew I didn’t want the typical office life. I wanted freedom, movement, and a way to create things that felt useful. Design turned out to be the sweet spot between independence and impact.

    So I studied design at university by day, and took on agency work by night—what you might call the full-stack student hustle. That rhythm—study, work, repeat—taught me discipline. I also kept learning on the side, exploring tools, trends, and techniques to sharpen my craft.

    Eventually, I found myself gravitating toward fintech.

    Why fintech? Because it’s real. It’s personal. Everyone interacts with money. And when you build something that helps them feel more in control of it—you’re not just improving UX, you’re improving lives.

    You’re designing trust. That’s a responsibility I take seriously.

    From there, I explored both sides of the industry: in-house roles at product companies, and fast-paced agency work. Later, I shifted into consultancy—partnering with fintechs across Europe, the Gulf, and Asia. That chapter taught me a lot—not just about design, but about people, culture, and how different teams think about trust and money.

    All of that led me to start Moonsight—a space where I could bring all those experiences together. Today, we partner with fintechs and financial companies to create sharp, useful, brand-led digital experiences. And while I still stay hands-on, I’m also building a team that’s just as obsessed with clarity, thoughtfulness, and execution as I am.

    Featured Work

    Monetto

    A game-changer in the world of freelancing. Designed to simplify and elevate the financial journey for freelancers, Monetto is more than just an app – it’s a holistic solution that empowers creatives like me to manage their finances with confidence.

    BlastUp

    Blastup’s mission is simple—help users grow their social media presence, fast. We crafted a bold, dynamic identity that reflects Blastup’s energetic and friendly personality, as well as their website.

    Alinma Bank

    This project for Alinma Bank involved a comprehensive redesign across all brand touchpoints: the logo, physical cards, website, and mobile app. The goal was to modernize and streamline the visual identity while maintaining the bank’s core values.

    Coinly

    Coinly is more than just a banking app — it’s a full-fledged financial literacy ecosystem for kids, designed to empower the next generation with money skills that grow with them. Built around an engaging coin mascot and a colorful 3D world, Coinly blends gamification, interactive storytelling, and real financial tools.

    Design Philosophy

    Design should be highly functional and intuitive, solving both business and user problems while delivering an engaging experience that users want to return to.

    Design is clarity. And clarity builds trust.

    Especially in fintech—where most of my projects happen—you don’t have the luxury of vague. Your design has to work, first and foremost. It has to feel smart, trustworthy, smooth. When people trust your interface, they trust your product. And when they trust your product, they’re more likely to use it again. That’s where design really proves its value.

    My job is to make things useful first, beautiful second. But ideally, both at once.

    The way I approach projects is structured but adaptable.

    I start with full immersion—understanding the business, the audience, and the problem we’re solving. From there, I look for a unique angle, something that gives the product or brand a distinct voice. Then I push that idea as far as I can—visually, functionally, and emotionally.

    And no, I don’t believe in reinventing everything 🙂

    Use the patterns that work. But when something feels off or underwhelming, be bold enough to rethink it. That’s where the real creative work lives—not in chaos, but in considered evolution.

    I don’t want to be known for a style. I want to be known for range.

    For every project, I try to find a distinct visual language. That means experimenting—pulling in 3D, motion, illustration—whatever it takes to bring the concept to life.

    And I rarely do it alone.

    I collaborate closely with animators, developers, motion designers, illustrators—the kind of people who not only support the vision, but expand it. When everyone brings their strengths to the table, the result is always richer, sharper, more memorable.

    What matters most is that the end result has presence. That it feels alive, intentional, and built with care.

    And I care deeply about how work is presented. Every project—client or personal—is framed with context, rationale, and craft. Because good design solves problems, but great design tells a story.

    Process In Bits

    My process is structured, but not rigid. Usually, it looks something like this:

    Understand the business
    What’s broken? What’s needed? What are we really solving?

    Understand the user
    What do they expect? What’s familiar to them? What do they fear?

    Explore the visual angle
    Moodboards, motion cues, layout patterns, unexpected directions

    Build and iterate
    Fast feedback loops with clients and the team

    Polish and present
    Clear storytelling. Clean handoff. Confident rationale.

    One benchmark I use: if I don’t understand what I designed, how can I expect a user to?

    For me, good design starts with intention. Every screen, every button, every microinteraction—there should be a reason it exists. So when a feature’s built, I walk through it in my head as if I’ve never seen it before. What would I click? What would I expect next? Can I explain what each part does without second-guessing?

    After working on financial interfaces for so long, you start to internalize these flows—you almost know them by muscle memory. But that doesn’t mean you skip the test. You still go through each stage. You still assume nothing.

    Sometimes, the best insights come from a teammate asking, “Wait, what does this do?” That’s your cue to look closer.

    And when it comes to working with clients?

    I walk clients through every stage—from moodboards to microinteractions—so there are no surprises and no last-minute pivots.

    It’s about mutual trust: they trust my process, and I trust their vision.

    This structure helps me manage expectations, prevent scope drift, and deliver thoughtful work—on time, without the drama.

    What keeps me inspired? Looking outside the bubble.

    I don’t have a list of designers I religiously follow. What inspires me is great work—wherever it lives. Sometimes it’s a slick piece of web design, sometimes a brutalist poster on the street, art style from a video game, or the typography on a jazz record sleeve.

    Music plays a huge role in my creative life—I sing a bit, and I think that kind of rhythm and structure naturally finds its way into how I build interfaces.

    I’m also a huge gamer, and I’m fascinated by how game mechanics influence user behavior. There’s a lot designers can learn from how games guide, reward, and surprise users.

    Sometimes I’ll see a cool effect, a character design, or even just a motion detail and immediately think:

    That could be the anchor for a whole experience

    Not necessarily for the project I’m working on in the moment, but something I’d love to build around later. So I sort, I collect, I sketch.

    I’m often looking for inspiration for one project, but bookmarking ideas for two or three others. It’s not just moodboarding—it’s pattern recognition, and planting seeds for future concepts.

    Inspiration can come from anywhere—but only if you keep your eyes open.

    What’s Next

    Right now, I’m fully focused on building Moonsight into a studio known for bold, strategic fintech design—especially across the MENA region.

    On my personal radar:

    • Master 3D
    • Launch my own product
    • Speak at more design events
    • Make Moonsight’s design conference in Dubai happen
    • Join the Awwwards jury panel
    • Do more meaningful work
    • Mostly? Just grow. As a designer, a founder, and a creative

    Parting Thoughts

    If I could give one piece of advice to younger designers, it would be this:

    Find what excites you. Stay obsessed with it. And don’t waste time comparing yourself to others.

    We’re overexposed to each other’s work these days. It’s easy to feel behind.

    But your only competition is yourself a year ago. That’s where growth lives.

    This industry moves fast. But if you move with intent, your work will always find its place.




  • Three.js Instances: Rendering Multiple Objects Simultaneously



    When building the basement studio site, we wanted to add 3D characters without compromising performance. We used instancing to render all the characters simultaneously. This post introduces instances and how to use them with React Three Fiber.

    Introduction

    Instancing is a performance optimization that lets you render many objects that share the same geometry and material simultaneously. If you have to render a forest, you’d need tons of trees, rocks, and grass. If they share the same base mesh and material, you can render all of them in a single draw call.

    A draw call is a command from the CPU to the GPU to draw something, like a mesh. Each unique geometry or material usually needs its own call. Too many draw calls hurt performance. Instancing reduces that by batching many copies into one.

    Basic instancing

    As an example, let’s start by rendering a thousand boxes the traditional way: we’ll loop over an array and generate some randomly placed boxes:

    const boxCount = 1000
    
    function Scene() {
      return (
        <>
          {Array.from({ length: boxCount }).map((_, index) => (
            <mesh
              key={index}
              position={getRandomPosition()}
              scale={getRandomScale()}
            >
              <boxGeometry />
              <meshBasicMaterial color={getRandomColor()} />
            </mesh>
          ))}
        </>
      )
    }
    Live | Source

    If we add a performance monitor to it, we’ll notice that the number of “calls” matches our boxCount.
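    If you don’t have a dedicated monitor set up, a quick alternative is to read the counters the renderer already exposes. This is only a minimal sketch (the component name is illustrative, and it just logs to the console); drop it anywhere inside the canvas:

    import { useFrame } from "@react-three/fiber"
    
    function DrawCallMonitor() {
      useFrame(({ gl }) => {
        // gl.info.render.calls holds the number of draw calls issued for the last render
        console.log("draw calls:", gl.info.render.calls)
      })
      return null
    }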

    A quick way to implement instances in our project is to use drei/instances.

    The Instances component acts as a provider; it needs a geometry and materials as children that will be used each time we add an instance to our scene.

    The Instance component will place one of those instances in a particular position/rotation/scale. Every Instance will be rendered simultaneously, using the geometry and material configured on the provider.

    import { Instance, Instances } from "@react-three/drei"
    
    const boxCount = 1000
    
    function Scene() {
      return (
        <Instances limit={boxCount}>
          <boxGeometry />
          <meshBasicMaterial />
          {Array.from({ length: boxCount }).map((_, index) => (
            <Instance
              key={index}
              position={getRandomPosition()}
              scale={getRandomScale()}
              color={getRandomColor()}
            />
          ))}
        </Instances>
      )
    }

    Notice how “calls” is now reduced to 1, even though we are showing a thousand boxes.

    Live | Source

    What is happening here? We are sending the geometry of our box and the material to the GPU just once, and telling it to reuse that data a thousand times, so all boxes are drawn simultaneously.

    Notice that each instance can have its own color even though they all share one material, because Three.js supports a per-instance color attribute. Other properties, like the map, have to be the same, because all instances share the exact same material.

    We’ll see how we can hack Three.js to support multiple maps later in the article.

    Having multiple sets of instances

    If we are rendering a forest, we may need different sets of instances: one for trees, another for rocks, and one for grass. However, the example from before only supports one geometry and material per provider. How can we handle that?

    The createInstances() function from drei allows us to create multiple instance sets. It returns two React components: the first is a provider that sets up the instanced mesh, and the second is a component we can use to position one instance in our scene.

    Let’s see how we can set up a provider first:

    import { createInstances } from "@react-three/drei"
    
    const boxCount = 1000
    const sphereCount = 1000
    
    const [CubeInstances, Cube] = createInstances()
    const [SphereInstances, Sphere] = createInstances()
    
    function InstancesProvider({ children }: { children: React.ReactNode }) {
      return (
        <CubeInstances limit={boxCount}>
          <boxGeometry />
          <meshBasicMaterial />
          <SphereInstances limit={sphereCount}>
            <sphereGeometry />
            <meshBasicMaterial />
            {children}
          </SphereInstances>
        </CubeInstances>
      )
    }

    Once we have our instance provider, we can add lots of Cubes and Spheres to our scene:

    function Scene() {
      return (
        <InstancesProvider>
          {Array.from({ length: boxCount }).map((_, index) => (
            <Cube
              key={index}
              position={getRandomPosition()}
              color={getRandomColor()}
              scale={getRandomScale()}
            />
          ))}
    
          {Array.from({ length: sphereCount }).map((_, index) => (
            <Sphere
              key={index}
              position={getRandomPosition()}
              color={getRandomColor()}
              scale={getRandomScale()}
            />
          ))}
        </InstancesProvider>
      )
    }

    Notice how even though we are rendering two thousand objects, we are just running two draw calls on our GPU.

    Live | Source

    Instances with custom shaders

    Until now, all the examples have used Three.js’ built-in materials to add our meshes to the scene, but sometimes we need to create our own materials. How can we add support for instances to our shaders?

    Let’s first set up a very basic shader material:

    import * as THREE from "three"
    
    const baseMaterial = new THREE.RawShaderMaterial({
      vertexShader: /*glsl*/ `
        attribute vec3 position;
        attribute vec3 instanceColor;
        attribute vec3 normal;
        attribute vec2 uv;
        uniform mat4 modelMatrix;
        uniform mat4 viewMatrix;
        uniform mat4 projectionMatrix;
    
        void main() {
          vec4 modelPosition = modelMatrix * vec4(position, 1.0);
          vec4 viewPosition = viewMatrix * modelPosition;
          vec4 projectionPosition = projectionMatrix * viewPosition;
          gl_Position = projectionPosition;
        }
      `,
      fragmentShader: /*glsl*/ `
        void main() {
          gl_FragColor = vec4(1, 0, 0, 1);
        }
      `
    })
    
    export function Scene() {
      return (
        <mesh material={baseMaterial}>
          <sphereGeometry />
        </mesh>
      )
    }

    Now that we have our test object in place, let’s add some movement to the vertices. We’ll shift them along the X axis, using time and amplitude uniforms to create a blob shape:

    const baseMaterial = new THREE.RawShaderMaterial({
      // some uniforms
      uniforms: {
        uTime: { value: 0 },
        uAmplitude: { value: 1 },
      },
      vertexShader: /*glsl*/ `
        attribute vec3 position;
        attribute vec3 instanceColor;
        attribute vec3 normal;
        attribute vec2 uv;
        uniform mat4 modelMatrix;
        uniform mat4 viewMatrix;
        uniform mat4 projectionMatrix;
    
        // Added this code to shift the vertices
        uniform float uTime;
        uniform float uAmplitude;
        vec3 movement(vec3 position) {
          vec3 pos = position;
          pos.x += sin(position.y + uTime) * uAmplitude;
          return pos;
        }
    
        void main() {
          vec3 blobShift = movement(position);
          vec4 modelPosition = modelMatrix * vec4(blobShift, 1.0);
          vec4 viewPosition = viewMatrix * modelPosition;
          vec4 projectionPosition = projectionMatrix * viewPosition;
          gl_Position = projectionPosition;
        }
      `,
      fragmentShader: /*glsl*/ `
        void main() {
          gl_FragColor = vec4(1, 0, 0, 1);
        }
      `,
    });
    
    export function Scene() {
      useFrame((state) => {
        // update the time uniform
        baseMaterial.uniforms.uTime.value = state.clock.elapsedTime;
      });
    
      return (
        <mesh material={baseMaterial}>
          <sphereGeometry args={[1, 32, 32]} />
        </mesh>
      );
    }
    

    Now, we can see the sphere moving around like a blob:

    Live | Source

    Now, let’s render a thousand blobs using instancing. First, we need to add the instance provider to our scene:

    import { createInstances } from '@react-three/drei';
    
    const [BlobInstances, Blob] = createInstances();
    
    function Scene() {
      useFrame((state) => {
        baseMaterial.uniforms.uTime.value = state.clock.elapsedTime;
      });
    
      return (
        <BlobInstances material={baseMaterial} limit={sphereCount}>
          <sphereGeometry args={[1, 32, 32]} />
          {Array.from({ length: sphereCount }).map((_, index) => (
            <Blob key={index} position={getRandomPosition()} />
          ))}
        </BlobInstances>
      );
    }
    

    The code runs successfully, but all spheres are in the same place, even though we added different positions.

    This is happening because, when we calculated the position of each vertex in the vertexShader, we only used attributes and uniforms that are identical for every sphere, so all of them end up in the same spot:

    vec3 blobShift = movement(position);
    vec4 modelPosition = modelMatrix * vec4(blobShift, 1.0);
    vec4 viewPosition = viewMatrix * modelPosition;
    vec4 projectionPosition = projectionMatrix * viewPosition;
    gl_Position = projectionPosition;

    To solve this issue, we need to use a new attribute called instanceMatrix. This attribute will be different for each instance that we are rendering.

      attribute vec3 position;
      attribute vec3 instanceColor;
      attribute vec3 normal;
      attribute vec2 uv;
      uniform mat4 modelMatrix;
      uniform mat4 viewMatrix;
      uniform mat4 projectionMatrix;
      // this attribute will change for each instance
      attribute mat4 instanceMatrix;
    
      uniform float uTime;
      uniform float uAmplitude;
    
      vec3 movement(vec3 position) {
        vec3 pos = position;
        pos.x += sin(position.y + uTime) * uAmplitude;
        return pos;
      }
    
      void main() {
        vec3 blobShift = movement(position);
        // instanceMatrix places this particular instance; following Three.js convention it is applied before the modelMatrix
        vec4 modelPosition = modelMatrix * instanceMatrix * vec4(blobShift, 1.0);
        vec4 viewPosition = viewMatrix * modelPosition;
        vec4 projectionPosition = projectionMatrix * viewPosition;
        gl_Position = projectionPosition;
      }

    Now that we have used the instanceMatrix attribute, each blob is in its corresponding position, rotation, and scale.

    Live | Source

    Changing attributes per instance

    We managed to render all the blobs in different positions, but since the uniforms are shared across all instances, they all end up having the same animation.

    To solve this issue, we need a way to provide custom information for each instance. We actually did this before, when we used the instanceMatrix to move each instance to its corresponding location. Let’s debug the magic behind instanceMatrix, so we can learn how to create our own instanced attributes.

    Taking a look at the implementation of instanceMatrix, we can see that it is backed by something called InstancedBufferAttribute:

    https://github.com/mrdoob/three.js/blob/master/src/objects/InstancedMesh.js#L57

    InstancedBufferAttribute allows us to create variables that will change for each instance. Let’s use it to vary the animation of our blobs.
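    For context, this is roughly what attaching such a per-instance attribute looks like in plain Three.js, without drei. It’s just a sketch that reuses the baseMaterial and sphereCount from the earlier snippets:

    import * as THREE from "three"
    
    const geometry = new THREE.SphereGeometry(1, 32, 32)
    const instancedMesh = new THREE.InstancedMesh(geometry, baseMaterial, sphereCount)
    
    // one float per instance (not per vertex); the vertex shader reads it as
    // `attribute float timeShift;`
    const timeShifts = new Float32Array(sphereCount)
    for (let i = 0; i < sphereCount; i++) timeShifts[i] = Math.random() * 10
    
    geometry.setAttribute("timeShift", new THREE.InstancedBufferAttribute(timeShifts, 1))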

    Drei has a component to simplify this called InstancedAttribute that allows us to define custom attributes easily.

    import { createInstances, InstancedAttribute } from "@react-three/drei"
    
    // Tell TypeScript about our custom attribute
    const [BlobInstances, Blob] = createInstances<{ timeShift: number }>()
    
    function Scene() {
      useFrame((state) => {
        baseMaterial.uniforms.uTime.value = state.clock.elapsedTime
      })
    
      return (
        <BlobInstances material={baseMaterial} limit={sphereCount}>
          {/* Declare an instanced attribute with a default value */}
          <InstancedAttribute name="timeShift" defaultValue={0} />
          
          <sphereGeometry args={[1, 32, 32]} />
          {Array.from({ length: sphereCount }).map((_, index) => (
            <Blob
              key={index}
              position={getRandomPosition()}
              
              // Set the instanced attribute value for this instance
              timeShift={Math.random() * 10}
              
            />
          ))}
        </BlobInstances>
      )
    }

    We’ll use this time shift attribute in our shader material to change the blob animation:

    uniform float uTime;
    uniform float uAmplitude;
    // custom instanced attribute
    attribute float timeShift;
    
    vec3 movement(vec3 position) {
      vec3 pos = position;
      pos.x += sin(position.y + uTime + timeShift) * uAmplitude;
      return pos;
    }

    Now, each blob has its own animation:

    Live | Source

    Creating a forest

    Let’s create a forest using instanced meshes. I’m going to use a 3D model from SketchFab: Stylized Pine Tree Tree by Batuhan13.

    import { useGLTF } from "@react-three/drei"
    import * as THREE from "three"
    import { GLTF } from "three/examples/jsm/Addons.js"
    
    // I always like to type the models so that they are safer to work with
    interface TreeGltf extends GLTF {
      nodes: {
        tree_low001_StylizedTree_0: THREE.Mesh<
          THREE.BufferGeometry,
          THREE.MeshStandardMaterial
        >
      }
    }
    
    function Scene() {
    
      // Load the model
      const { nodes } = useGLTF(
        "/stylized_pine_tree_tree.glb"
      ) as unknown as TreeGltf
    
      return (
        <group>
          {/* add one tree to our scene */ }
          <mesh
            scale={0.02}
            geometry={nodes.tree_low001_StylizedTree_0.geometry}
            material={nodes.tree_low001_StylizedTree_0.material}
          />
        </group>
      )
    }
    

    (I added lights and a ground in a separate file.)

    Now that we have one tree, let’s apply instancing.

    const getRandomPosition = () => {
      return [
        (Math.random() - 0.5) * 10000,
        0,
        (Math.random() - 0.5) * 10000
      ] as const
    }
    
    const [TreeInstances, Tree] = createInstances()
    const treeCount = 1000
    
    function Scene() {
      const { scene, nodes } = useGLTF(
        "/stylized_pine_tree_tree.glb"
      ) as unknown as TreeGltf
    
      return (
        <group>
          <TreeInstances
            limit={treeCount}
            scale={0.02}
            geometry={nodes.tree_low001_StylizedTree_0.geometry}
            material={nodes.tree_low001_StylizedTree_0.material}
          >
            {Array.from({ length: treeCount }).map((_, index) => (
              <Tree key={index} position={getRandomPosition()} />
            ))}
          </TreeInstances>
        </group>
      )
    }

    Our entire forest is being rendered in only three draw calls: one for the skybox, another one for the ground plane, and a third one with all the trees.

    To make things more interesting, we can vary the height and rotation of each tree:

    const getRandomPosition = () => {
      return [
        (Math.random() - 0.5) * 10000,
        0,
        (Math.random() - 0.5) * 10000
      ] as const
    }
    
    function getRandomScale() {
      return Math.random() * 0.7 + 0.5
    }
    
    // ...
    <Tree
      key={index}
      position={getRandomPosition()}
      scale={getRandomScale()}
      rotation-y={Math.random() * Math.PI * 2}
    />
    // ...
    Live | Source

    Further reading

    There are some topics that I didn’t cover in this article, but I think they are worth mentioning:

    • Batched Meshes: Here we rendered one geometry many times, but a batched mesh lets you render different geometries at the same time while sharing the same material. That way you are not limited to a single tree geometry; you can vary the shape of each one (see the sketch after this list).
    • Skeletons: Skinned meshes are not currently supported with instancing. To build the latest basement.studio site we managed to hack our own implementation, and I invite you to read about it there.
    • Morphing with batched mesh: Morphing is supported with instances but not with batched meshes. If you want to implement it yourself, I’d suggest you read these notes.
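    To give a rough idea of the first point, here is a small sketch of THREE.BatchedMesh. It assumes a recent Three.js release that exposes the addGeometry / addInstance / setMatrixAt API, and the geometries and counts are only placeholders:

    import * as THREE from "three"
    
    const material = new THREE.MeshStandardMaterial({ color: "green" })
    const treeGeometry = new THREE.ConeGeometry(1, 4, 8)
    const rockGeometry = new THREE.IcosahedronGeometry(0.6)
    
    // reserve room for up to 200 instances and enough vertices/indices for both geometries
    const batchedMesh = new THREE.BatchedMesh(200, 10000, 30000, material)
    
    const treeId = batchedMesh.addGeometry(treeGeometry)
    const rockId = batchedMesh.addGeometry(rockGeometry)
    
    const matrix = new THREE.Matrix4()
    for (let i = 0; i < 100; i++) {
      // trees and rocks have different geometries, yet share one material and one draw call
      const treeInstance = batchedMesh.addInstance(treeId)
      matrix.setPosition((Math.random() - 0.5) * 100, 0, (Math.random() - 0.5) * 100)
      batchedMesh.setMatrixAt(treeInstance, matrix)
    
      const rockInstance = batchedMesh.addInstance(rockId)
      matrix.setPosition((Math.random() - 0.5) * 100, 0, (Math.random() - 0.5) * 100)
      batchedMesh.setMatrixAt(rockInstance, matrix)
    }
    
    scene.add(batchedMesh) // assuming an existing THREE.Scene called `scene`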




  • How To Create Kinetic Image Animations with React-Three-Fiber




    For the past few months, I’ve been exploring different kinetic motion designs with text and images. The style looks very intriguing, so I decided to create some really cool organic animations using images and React Three Fiber.

    In this article, we’ll learn how to create the following animation using Canvas2D and React Three Fiber.

    Setting Up the View & Camera

    The camera’s field of view (FOV) plays a huge role in this project. Let’s keep it very low so it looks like an orthographic camera. You can experiment with different perspectives later. I prefer using a perspective camera over an orthographic one because we can always try different FOVs. For a more detailed implementation, check the source code.

    <PerspectiveCamera makeDefault fov={7} position={[0, 0, 70]} near={0.01} far={100000} />

    Setting Up Our 3D Shapes

    First, let’s create and position 3D objects that will display our images. For this example, we need to make 2 components:

    Billboard.tsx – This is a cylinder that will show our stack of images

    'use client';
    
    import { useRef } from 'react';
    import * as THREE from 'three';
    
    function Billboard({ radius = 5, ...props }) {
        const ref = useRef(null);
    
        return (
            <mesh ref={ref} {...props}>
                <cylinderGeometry args={[radius, radius, 2, 100, 1, true]} />
                <meshBasicMaterial color="red" side={THREE.DoubleSide} />
            </mesh>
        );
    }

    Banner.tsx – This is another cylinder that will work like a moving banner

    'use client';
    
    import * as THREE from 'three';
    import { useRef } from 'react';
    
    function Banner({ radius = 1.6, ...props }) {
        const ref = useRef(null);
    
        return (
            <mesh ref={ref} {...props}>
                <cylinderGeometry
                args={[radius, radius, radius * 0.07, radius * 80, radius * 10, true]}
                />
                <meshBasicMaterial
                color="blue"
                side={THREE.DoubleSide}
                />
            </mesh>
        );
    }
    
    export default Banner;

    Once we have our components ready, we can use them on our page.

    Now let’s build the whole shape:

    1. Create a wrapper group – We’ll make a group that wraps all our components. This will help us rotate everything together later.

    page.jsx

    'use client';
    
    import styles from './page.module.scss';
    import Billboard from '@/components/webgl/Billboard/Billboard';
    import Banner from '@/components/webgl/Banner/Banner';
    import { View } from '@/webgl/View';
    import { PerspectiveCamera } from '@react-three/drei';
    
    export default function Home() {
        return (
            <div className={styles.page}>
                <View className={styles.view} orbit={false}>
                <PerspectiveCamera makeDefault fov={7} position={[0, 0, 70]} near={0.01} far={100000} /> 
                    <group>
    
                    </group>
                </View>
            </div>
        );
    }

    2. Render Billboard and Banner components in the loop – Inside our group, we’ll create a loop to render our Billboards and Banners multiple times.

    page.jsx

    'use client';
    
    import styles from './page.module.scss';
    import Billboard from '@/components/webgl/Billboard/Billboard';
    import Banner from '@/components/webgl/Banner/Banner';
    import { View } from '@/webgl/View';
    import { PerspectiveCamera } from '@react-three/drei';
    
    export default function Home() {
        return (
            <div className={styles.page}>
                <View className={styles.view} orbit={false}>
                <PerspectiveCamera makeDefault fov={7} position={[0, 0, 70]} near={0.01} far={100000} />
                    <group>
                        {Array.from({ length: COUNT }).map((_, index) => [
                            <Billboard
                            key={`billboard-${index}`}
                            radius={5}
                            />,
                            <Banner
                            key={`banner-${index}`}
                            radius={5}
                            />,
                        ])}
                    </group>
                </View>
            </div>
        );
    }

    3. Stack them up – We’ll use the index from our loop and the y position to stack our items on top of each other. Here’s how it looks so far:

    page.jsx

    'use client';
    
    import styles from './page.module.scss';
    import Billboard from '@/components/webgl/Billboard/Billboard';
    import Banner from '@/components/webgl/Banner/Banner';
    import { View } from '@/webgl/View';
    import { PerspectiveCamera } from '@react-three/drei';
    
    const COUNT = 10;
    const GAP = 3.2;
    
    export default function Home() {
        return (
            <div className={styles.page}>
                <View className={styles.view} orbit={false}>
                <PerspectiveCamera makeDefault fov={7} position={[0, 0, 70]} near={0.01} far={100000} />
                    <group>
                        {Array.from({ length: COUNT }).map((_, index) => [
                            <Billboard
                            key={`billboard-${index}`}
                            radius={5}
                            position={[0, (index - (Math.ceil(COUNT / 2) - 1)) * GAP, 0]}
                            />,
                            <Banner
                            key={`banner-${index}`}
                            radius={5}
                            position={[0, (index - (Math.ceil(COUNT / 2) - 1)) * GAP - GAP * 0.5, 0]}
                            />,
                        ])}
                    </group>
                </View>
            </div>
        );
    }

    4. Add some rotation – Let’s rotate things a bit! First, I’ll hard-code the rotation of our banners to make them more curved and fit nicely with the Billboard component. We’ll also make the radius a bit bigger.

    page.jsx

    'use client';
    
    import styles from './page.module.scss';
    import Billboard from '@/components/webgl/Billboard/Billboard';
    import Banner from '@/components/webgl/Banner/Banner';
    import { View } from '@/webgl/View';
    import { PerspectiveCamera } from '@react-three/drei';
    
    const COUNT = 10;
    const GAP = 3.2;
    
    export default function Home() {
        return (
            <div className={styles.page}>
                <View className={styles.view} orbit={false}>
                <PerspectiveCamera makeDefault fov={7} position={[0, 0, 70]} near={0.01} far={100000} />
                    <group>
                        {Array.from({ length: COUNT }).map((_, index) => [
                            <Billboard
                            key={`billboard-${index}`}
                            radius={5}
                            position={[0, (index - (Math.ceil(COUNT / 2) - 1)) * GAP, 0]}
                            rotation={[0, index * Math.PI * 0.5, 0]} // <-- rotation of the billboard
                            />,
                            <Banner
                            key={`banner-${index}`}
                            radius={5}
                            rotation={[0, 0, 0.085]} // <-- rotation of the banner
                            position={[0, (index - (Math.ceil(COUNT / 2) - 1)) * GAP - GAP * 0.5, 0]}
                            />,
                        ])}
                    </group>
                </View>
            </div>
        );
    }

    5. Tilt the whole thing – Now let’s rotate our entire group to make it look like the Leaning Tower of Pisa.

    page.jsx

    'use client';
    
    import styles from './page.module.scss';
    import Billboard from '@/components/webgl/Billboard/Billboard';
    import Banner from '@/components/webgl/Banner/Banner';
    import { View } from '@/webgl/View';
    import { PerspectiveCamera } from '@react-three/drei';
    
    const COUNT = 10;
    const GAP = 3.2;
    
    export default function Home() {
        return (
            <div className={styles.page}>
                <View className={styles.view} orbit={false}>
                <PerspectiveCamera makeDefault fov={7} position={[0, 0, 70]} near={0.01} far={100000} />
                    <group rotation={[-0.15, 0, -0.2]}>{/* <-- rotate the group */}
                        {Array.from({ length: COUNT }).map((_, index) => [
                            <Billboard
                            key={`billboard-${index}`}
                            radius={5}
                            position={[0, (index - (Math.ceil(COUNT / 2) - 1)) * GAP, 0]}
                            rotation={[0, index * Math.PI * 0.5, 0]}
                            />,
                            <Banner
                            key={`banner-${index}`}
                            radius={5}
                            rotation={[0, 0, 0.085]}
                            position={[0, (index - (Math.ceil(COUNT / 2) - 1)) * GAP - GAP * 0.5, 0]}
                            />,
                        ])}
                    </group>
                </View>
            </div>
        );
    }

    6. Perfect! – Our 3D shapes are all set up. Now we can add our images to them.

    Creating a Texture from Our Images Using Canvas

    Here’s the cool part: we’ll put all our images onto a canvas, then use that canvas as a texture on our Billboard shape.

    To make this easier, I created some helper functions that simplify the whole process.

    getCanvasTexture.js

    import * as THREE from 'three';
    
    /**
    * Preloads an image and calculates its dimensions
    */
    async function preloadImage(imageUrl, axis, canvasHeight, canvasWidth) {
        const img = new Image();
    
        img.crossOrigin = 'anonymous';
    
        await new Promise((resolve, reject) => {
            img.onload = () => resolve();
            img.onerror = () => reject(new Error(`Failed to load image: ${imageUrl}`));
            img.src = imageUrl;
        });
    
        const aspectRatio = img.naturalWidth / img.naturalHeight;
    
        let calculatedWidth;
        let calculatedHeight;
    
        if (axis === 'x') {
            // Horizontal layout: scale to fit canvasHeight
            calculatedHeight = canvasHeight;
            calculatedWidth = canvasHeight * aspectRatio;
            } else {
            // Vertical layout: scale to fit canvasWidth
            calculatedWidth = canvasWidth;
            calculatedHeight = canvasWidth / aspectRatio;
        }
    
        return { img, width: calculatedWidth, height: calculatedHeight };
    }
    
    function calculateCanvasDimensions(imageData, axis, gap, canvasHeight, canvasWidth) {
        if (axis === 'x') {
            const totalWidth = imageData.reduce(
            (sum, data, index) => sum + data.width + (index > 0 ? gap : 0), 0);
    
            return { totalWidth, totalHeight: canvasHeight };
        } else {
            const totalHeight = imageData.reduce(
            (sum, data, index) => sum + data.height + (index > 0 ? gap : 0), 0);
    
            return { totalWidth: canvasWidth, totalHeight };
        }
    }
    
    function setupCanvas(canvasElement, context, dimensions) {
        const { totalWidth, totalHeight } = dimensions;
        const devicePixelRatio = Math.min(window.devicePixelRatio || 1, 2);
    
        canvasElement.width = totalWidth * devicePixelRatio;
        canvasElement.height = totalHeight * devicePixelRatio;
    
        if (devicePixelRatio !== 1) context.scale(devicePixelRatio, devicePixelRatio);
    
        context.fillStyle = '#ffffff';
        context.fillRect(0, 0, totalWidth, totalHeight);
    }
    
    function drawImages(context, imageData, axis, gap) {
        let currentX = 0;
        let currentY = 0;
    
        context.save();
    
        for (const data of imageData) {
            context.drawImage(data.img, currentX, currentY, data.width, data.height);
    
            if (axis === 'x') currentX += data.width + gap;
            else currentY += data.height + gap;
        }
    
        context.restore();
    }
    
    function createTextureResult(canvasElement, dimensions) {
        const texture = new THREE.CanvasTexture(canvasElement);
        texture.needsUpdate = true;
        texture.wrapS = THREE.RepeatWrapping;
        texture.wrapT = THREE.ClampToEdgeWrapping;
        texture.generateMipmaps = false;
        texture.minFilter = THREE.LinearFilter;
        texture.magFilter = THREE.LinearFilter;
    
        return {
            texture,
            dimensions: {
                width: dimensions.totalWidth,
                height: dimensions.totalHeight,
                aspectRatio: dimensions.totalWidth / dimensions.totalHeight,
            },
        };
    }
    
    export async function getCanvasTexture({
        images,
        gap = 10,
        canvasHeight = 512,
        canvasWidth = 512,
        canvas,
        ctx,
        axis = 'x',
    }) {
        if (!images.length) throw new Error('No images');
    
        // Create canvas and context if not provided
        const canvasElement = canvas || document.createElement('canvas');
        const context = ctx || canvasElement.getContext('2d');
    
        if (!context) throw new Error('No context');
    
        // Preload all images in parallel
        const imageData = await Promise.all(
            images.map((image) => preloadImage(image.url, axis, canvasHeight, canvasWidth))
        );
    
        // Calculate total canvas dimensions
        const dimensions = calculateCanvasDimensions(imageData, axis, gap, canvasHeight, canvasWidth);
    
        // Setup canvas
        setupCanvas(canvasElement, context, dimensions);
    
        // Draw all images
        drawImages(context, imageData, axis, gap);
    
        // Create and return texture result
        return createTextureResult(canvasElement, dimensions)
    }

    Then we can also create a useCollageTexture hook that we can easily use in our components.

    useCollageTexture.jsx

    import { useState, useEffect, useCallback } from 'react';
    import { getCanvasTexture } from '@/webgl/helpers/getCanvasTexture';
    
    export function useCollageTexture(images, options = {}) {
        const [textureResults, setTextureResults] = useState(null);
        const [isLoading, setIsLoading] = useState(true);
        const [error, setError] = useState(null);
    
        const { gap = 0, canvasHeight = 512, canvasWidth = 512, axis = 'x' } = options;
    
        const createTexture = useCallback(async () => {
            try {
                setIsLoading(true);
                setError(null);
    
                const result = await getCanvasTexture({
                    images,
                    gap,
                    canvasHeight,
                    canvasWidth,
                    axis,
                });
    
                setTextureResults(result);
            } catch (err) {
                setError(err instanceof Error ? err : new Error('Failed to create texture'));
            } finally {
                setIsLoading(false);
            }
        }, [images, gap, canvasHeight, canvasWidth, axis]);
    
        useEffect(() => {
            if (images.length > 0) createTexture();
        }, [images.length, createTexture]);
    
        return {
            texture: textureResults?.texture || null,
            dimensions: textureResults?.dimensions || null,
            isLoading,
            error,
        };
    }
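    The hook only reads a url from each entry, so the images data module it consumes can be as simple as this (the file paths are placeholders):

    images.js

    const images = [
        { url: '/images/photo-01.jpg' },
        { url: '/images/photo-02.jpg' },
        { url: '/images/photo-03.jpg' },
    ];
    
    export default images;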

    Adding the Canvas to Our Billboard

    Now let’s use our useCollageTexture hook on our page. We’ll create some simple loading logic. It takes a second to fetch all the images and put them onto the canvas. Then we’ll pass our texture and the canvas dimensions into the Billboard component.

    page.jsx

    'use client';
    
    import styles from './page.module.scss';
    import Billboard from '@/components/webgl/Billboard/Billboard';
    import Banner from '@/components/webgl/Banner/Banner';
    import Loader from '@/components/ui/modules/Loader/Loader';
    import images from '@/data/images';
    import { View } from '@/webgl/View';
    import { PerspectiveCamera } from '@react-three/drei';
    import { useCollageTexture } from '@/hooks/useCollageTexture';
    
    const COUNT = 10;
    const GAP = 3.2;
    
    export default function Home() {
        const { texture, dimensions, isLoading } = useCollageTexture(images); // <-- getting the texture and dimensions from the useCollageTexture hook
    
        if (isLoading) return <Loader />; // <-- showing the loader when the texture is loading
    
        return (
            <div className={styles.page}>
                <View className={styles.view} orbit={false}>
                    <PerspectiveCamera makeDefault fov={7} position={[0, 0, 100]} near={0.01} far={100000} />
                    <group rotation={[-0.15, 0, -0.2]}>
                        {Array.from({ length: COUNT }).map((_, index) => [
                            <Billboard
                                key={`billboard-${index}`}
                                radius={5}
                                rotation={[0, index * Math.PI * 0.5, 0]}
                                position={[0, (index - (Math.ceil(COUNT / 2) - 1)) * GAP, 0]}
                                texture={texture} // <--passing the texture to the billboard
                                dimensions={dimensions} // <--passing the dimensions to the billboard
                            />,
                            <Banner
                                key={`banner-${index}`}
                                radius={5.035}
                                rotation={[0, 0, 0.085]}
                                position={[
                                    0,
                                    (index - (Math.ceil(COUNT / 2) - 1)) * GAP - GAP * 0.5,
                                    0,
                                ]}
                            />,
                        ])}
                    </group>
                </View>
            </div>
        );
    }

    Inside the Billboard component, we need to properly map this texture to make sure everything fits correctly. The width of our canvas will match the circumference of the cylinder, and we’ll center the y position of the texture. This way, all the images keep their resolution and don’t get squished or stretched.

    Billboard.jsx

    'use client';
    
    import * as THREE from 'three';
    import { useRef } from 'react';  
    
    function setupCylinderTextureMapping(texture, dimensions, radius, height) {
        const cylinderCircumference = 2 * Math.PI * radius;
        const cylinderHeight = height;
        const cylinderAspectRatio = cylinderCircumference / cylinderHeight;
    
        if (dimensions.aspectRatio > cylinderAspectRatio) {
            // Canvas is wider than cylinder proportionally
            texture.repeat.x = cylinderAspectRatio / dimensions.aspectRatio;
            texture.repeat.y = 1;
            texture.offset.x = (1 - texture.repeat.x) / 2;
        } else {
            // Canvas is taller than cylinder proportionally
            texture.repeat.x = 1;
            texture.repeat.y = dimensions.aspectRatio / cylinderAspectRatio;
        }
    
        // Center the texture
        texture.offset.y = (1 - texture.repeat.y) / 2;
    }
    
    function Billboard({ texture, dimensions, radius = 5, ...props }) {
        const ref = useRef(null);
    
        setupCylinderTextureMapping(texture, dimensions, radius, 2);
    
        return (
            <mesh ref={ref} {...props}>
                <cylinderGeometry args={[radius, radius, 2, 100, 1, true]} />
                <meshBasicMaterial map={texture} side={THREE.DoubleSide} />
            </mesh>
        );
    }
    
    export default Billboard;

    Now let’s animate them using the useFrame hook. The trick to animating these images is to just move the X offset of the texture. This gives us the effect of a rotating mesh, when really we’re just moving the texture offset.

    Billboard.jsx

    'use client';
    
    import * as THREE from 'three';
    import { useRef } from 'react';
    import { useFrame } from '@react-three/fiber';  
    
    function setupCylinderTextureMapping(texture, dimensions, radius, height) {
        const cylinderCircumference = 2 * Math.PI * radius;
        const cylinderHeight = height;
        const cylinderAspectRatio = cylinderCircumference / cylinderHeight;
    
        if (dimensions.aspectRatio > cylinderAspectRatio) {
            // Canvas is wider than cylinder proportionally
            texture.repeat.x = cylinderAspectRatio / dimensions.aspectRatio;
            texture.repeat.y = 1;
            texture.offset.x = (1 - texture.repeat.x) / 2;
        } else {
            // Canvas is taller than cylinder proportionally
            texture.repeat.x = 1;
            texture.repeat.y = dimensions.aspectRatio / cylinderAspectRatio;
        }
    
        // Center the texture
        texture.offset.y = (1 - texture.repeat.y) / 2;
    }
    
    function Billboard({ texture, dimensions, radius = 5, ...props }) {
        const ref = useRef(null);
    
        setupCylinderTextureMapping(texture, dimensions, radius, 2);
    
        useFrame((state, delta) => {
            if (texture) texture.offset.x += delta * 0.001;
        });
    
        return (
            <mesh ref={ref} {...props}>
                <cylinderGeometry args={[radius, radius, 2, 100, 1, true]} />
                <meshBasicMaterial map={texture} side={THREE.DoubleSide} />
            </mesh>
        );
    }
    
    export default Billboard;

    I think it would look even better if we made the back of the images a little darker. To do this, I created MeshImageMaterial – it’s just an extension of MeshBasicMaterial that makes our backface a bit darker.

    MeshImageMaterial.js

    import * as THREE from 'three';
    import { extend } from '@react-three/fiber';
    
    export class MeshImageMaterial extends THREE.MeshBasicMaterial {
        constructor(parameters = {}) {
            super(parameters);
            this.setValues(parameters);
        }
    
        onBeforeCompile = (shader) => {
            shader.fragmentShader = shader.fragmentShader.replace(
                '#include <color_fragment>',
                /* glsl */ `#include <color_fragment>
                if (!gl_FrontFacing) {
                vec3 blackCol = vec3(0.0);
                diffuseColor.rgb = mix(diffuseColor.rgb, blackCol, 0.7);
                }
                `
            );
        };
    }
    
    extend({ MeshImageMaterial });

    Billboard.jsx

    'use client';
    
    import * as THREE from 'three';
    import { useRef } from 'react';
    import { useFrame } from '@react-three/fiber';
    import '@/webgl/materials/MeshImageMaterial';
    
    function setupCylinderTextureMapping(texture, dimensions, radius, height) {
        const cylinderCircumference = 2 * Math.PI * radius;
        const cylinderHeight = height;
        const cylinderAspectRatio = cylinderCircumference / cylinderHeight;
    
        if (dimensions.aspectRatio > cylinderAspectRatio) {
            // Canvas is wider than cylinder proportionally
            texture.repeat.x = cylinderAspectRatio / dimensions.aspectRatio;
            texture.repeat.y = 1;
            texture.offset.x = (1 - texture.repeat.x) / 2;
        } else {
            // Canvas is taller than cylinder proportionally
            texture.repeat.x = 1;
            texture.repeat.y = dimensions.aspectRatio / cylinderAspectRatio;
        }
    
        // Center the texture
        texture.offset.y = (1 - texture.repeat.y) / 2;
    }
    
    function Billboard({ texture, dimensions, radius = 5, ...props }) {
        const ref = useRef(null);
    
        setupCylinderTextureMapping(texture, dimensions, radius, 2);
    
        useFrame((state, delta) => {
            if (texture) texture.offset.x += delta * 0.001;
        });
    
        return (
            <mesh ref={ref} {...props}>
                <cylinderGeometry args={[radius, radius, 2, 100, 1, true]} />
                <meshImageMaterial map={texture} side={THREE.DoubleSide} toneMapped={false} />
            </mesh>
        );
    }
    
    export default Billboard;

    And now we have our images moving around cylinders. Next, we’ll focus on banners (or marquees, whatever you prefer).

    Adding Texture to the Banner

    The last thing we need to fix is our Banner component. I wrapped it with this texture. Feel free to take it and edit it however you want, but remember to keep the proper dimensions of the texture.

    We simply import our texture using the useTexture hook, map it onto our material, and animate the texture offset just like we did in our Billboard component.

    Banner.jsx

    'use client';
    
    import * as THREE from 'three';
    import bannerTexture from '@/assets/images/banner.jpg';
    import { useTexture } from '@react-three/drei';
    import { useFrame } from '@react-three/fiber';
    import { useRef } from 'react';
    
    function Banner({ radius = 1.6, ...props }) {
        const ref = useRef(null);
    
        const texture = useTexture(bannerTexture.src);
        texture.wrapS = texture.wrapT = THREE.RepeatWrapping;
    
        useFrame((state, delta) => {
            if (!ref.current) return;
            const material = ref.current.material;
            if (material.map) material.map.offset.x += delta / 30;
        });
    
        return (
            <mesh ref={ref} {...props}>
                <cylinderGeometry
                    args={[radius, radius, radius * 0.07, radius * 80, radius * 10, true]}
                />
                <meshBasicMaterial
                    map={texture}
                    map-anisotropy={16}
                    map-repeat={[15, 1]}
                    side={THREE.DoubleSide}
                    toneMapped={false}
                />
            </mesh>
        );
    }
    
    export default Banner;

    Nice! Now we have something cool, but I think it would look even cooler if we replaced the backface with something different. Maybe a gradient? For this, I created another extension of MeshBasicMaterial called MeshBannerMaterial. As you probably guessed, we just put a gradient on the backface. That’s it! Let’s use it in our Banner component.

    We replace the MeshBasicMaterial with MeshBannerMaterial and now it looks like this!

    MeshBannerMaterial.js

    import * as THREE from 'three';
    import { extend } from '@react-three/fiber';
    
    export class MeshBannerMaterial extends THREE.MeshBasicMaterial {
        constructor(parameters = {}) {
            super(parameters);
            this.setValues(parameters);
    
            this.backfaceRepeatX = 1.0;
    
            if (parameters.backfaceRepeatX !== undefined) {
                this.backfaceRepeatX = parameters.backfaceRepeatX;
            }
        }
    
        onBeforeCompile = (shader) => {
            shader.uniforms.repeatX = { value: this.backfaceRepeatX * 0.1 };
            shader.fragmentShader = shader.fragmentShader
            .replace(
                '#include <common>',
                /* glsl */ `#include <common>
                uniform float repeatX;
    
                vec3 pal( in float t, in vec3 a, in vec3 b, in vec3 c, in vec3 d ) {
                    return a + b*cos( 6.28318*(c*t+d) );
                }
                `
            )
            .replace(
                '#include <color_fragment>',
                /* glsl */ `#include <color_fragment>
                if (!gl_FrontFacing) {
                diffuseColor.rgb = pal(vMapUv.x * repeatX, vec3(0.5,0.5,0.5),vec3(0.5,0.5,0.5),vec3(1.0,1.0,1.0),vec3(0.0,0.10,0.20) );
                }
                `
            );
        };
    }
    
    extend({ MeshBannerMaterial });

    Banner.jsx

    'use client';
    
    import * as THREE from 'three';
    import bannerTexture from '@/assets/images/banner.jpg';
    import { useTexture } from '@react-three/drei';
    import { useFrame } from '@react-three/fiber';
    import { useRef } from 'react';
    import '@/webgl/materials/MeshBannerMaterial';
    
    function Banner({ radius = 1.6, ...props }) {
        const ref = useRef(null);
    
        const texture = useTexture(bannerTexture.src);
        texture.wrapS = texture.wrapT = THREE.RepeatWrapping;
    
        useFrame((state, delta) => {
            if (!ref.current) return;
    
            const material = ref.current.material;
    
            if (material.map) material.map.offset.x += delta / 30;
        });
    
        return (
            <mesh ref={ref} {...props}>
                <cylinderGeometry
                    args={[radius, radius, radius * 0.07, radius * 80, radius * 10, true]}
                />
                <meshBannerMaterial
                    map={texture}
                    map-anisotropy={16}
                    map-repeat={[15, 1]}
                    side={THREE.DoubleSide}
                    toneMapped={false}
                    backfaceRepeatX={3}
                />
            </mesh>
        );
    }
    
    export default Banner;

    And now we have it ✨

    Check out the demo

    You can experiment with this method in lots of ways. For example, I created 2 more examples with shapes I made in Blender, and mapped canvas textures on them. You can check them out here:
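    As a rough sketch of that idea (the file and node names here are hypothetical), the same collage texture can be dropped onto any custom geometry exported from Blender:

    'use client';
    
    import * as THREE from 'three';
    import { useGLTF } from '@react-three/drei';
    
    function CustomShape({ texture, ...props }) {
        // load a hypothetical Blender export and reuse the canvas texture on it
        const { nodes } = useGLTF('/my-shape.glb');
    
        return (
            <mesh geometry={nodes.Shape.geometry} {...props}>
                <meshBasicMaterial map={texture} side={THREE.DoubleSide} toneMapped={false} />
            </mesh>
        );
    }
    
    export default CustomShape;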

    Final Words

    Check out the final versions of all demos:

    I hope you enjoyed this tutorial and learned something new!

    Feel free to check out the source code for more details!




  • Bloom Paris TV: Where Refined Art Direction Meets World-Class Production




    In today’s oversaturated landscape of production service companies vying for attention, Bloom Paris TV approached our studio with an extraordinarily bold ambition: to distinguish themselves through an uncompromising combination of refined style and substantive expertise. Strategically positioned in the cultural and creative heart of Paris, Bloom offers international productions comprehensive and seamless on-the-ground support throughout France — meticulously handling everything from complex technical logistics to administrative workflows and regulatory requirements.

    But what truly sets Bloom apart is that they don’t merely facilitate shoots — they orchestrate them with exceptional precision, artistic vision, and unwavering reliability. In an industry where every minute counts, their discerning clients demand speed without sacrificing quality, complete trust in execution, and uncompromising excellence at every touchpoint. Bloom consistently delivers all three elements — seemingly effortlessly and with characteristic French sophistication.

    Our mission became crystal clear: design and develop a digital experience that authentically reflects the remarkable sharpness of their creative eye, the methodical structure of their production process, and the sophisticated elegance of their flawless execution across every project they undertake.

    The Concept

    We approached the website design with one unambiguous and defining intention: make an immediate, memorable impact upon first impression.

    Operating in a fast-paced industry where critical decisions are often made in mere seconds, we recognized that the digital experience needed to be simultaneously bold, fluid, and instantaneously engaging. Our strategic approach centered on minimalism with deliberate intent — methodically stripping away all superfluous elements while preserving only the absolute essentials, then thoughtfully amplifying Bloom’s distinctive core identity throughout the interface.

    At the conceptual heart of Bloom’s sophisticated logo lies a deceptively simple dot — subtle in appearance yet powerful in significance. We strategically extended this symbolic element across the entire user interface: integrating it within interactive buttons, intuitive navigation elements, typographic superscripts, and responsive interaction states. This visual motif evolved into the unifying thread throughout the experience, functioning as a recurring punctuation mark that guides users through a clean, cinematic narrative journey.

    Typography & Color System

    After careful consideration, we selected a commanding, contemporary sans-serif typeface specifically chosen to convey professional confidence and exceptional clarity. This distinctive font effectively anchors the entire site within a precisely calibrated, almost editorial layout structure — creating a harmonious balance between the dynamically asymmetric grid system and the meticulously structured, authoritative tone of voice that characterizes Bloom’s communication style.

    The carefully curated color palette features a sophisticated high-contrast dialogue between rich soft black and warm, inviting light grey, consciously avoiding the harshness of traditional monochrome combinations. A strategically placed vibrant yellow accent punctuates key interactive elements throughout the interface — subtly referencing cinematic film titles and professional cue markers, while simultaneously introducing a welcome sense of warmth, energy and approachability to the otherwise restrained interface design.

    Technology Stack

    Beneath the visually striking surface, the site is meticulously constructed with a powerful combination of technologies:

    • WordPress implemented as a robust, infinitely customizable content management system, providing Bloom with comprehensive control over their content strategy and presentation
    • GSAP for implementing buttery-smooth, cinematically-inspired animations and seamless page transitions throughout the experience
    • Custom-developed SVG masking techniques meticulously crafted to achieve elegantly seamless panel-based transitions between content sections
    • A fully responsive, thoroughly performance-optimized front-end architecture that ensures consistent excellence across all devices and connection speeds

    Loader & Page Transitions

    From the earliest conceptual discussions, we were determined to ensure that every transition moment within the experience would feel authentically cinematic and emotionally resonant.

    Each individual page opens with a dynamically animated panel that dramatically reveals the upcoming section title with a sweeping, theatrical gesture. This carefully choreographed visual sequence not only significantly enhances user orientation within the site architecture, but deliberately sets the sophisticated tone for a fluid, immersive journey through Bloom’s professional world.

    The distinctive homepage loader was specifically designed to create instant emotional resonance and connection: a fullscreen mask elegantly opens to dramatically reveal Bloom’s captivating showreel — creating an unforgettable first impression that immediately communicates their production capabilities. Thoughtfully combined with an interactive progress indicator, this element transforms into an engaging interactive curtain, gracefully inviting users to step into Bloom’s compelling narrative universe.

    Project Grid & Hover States

    Throughout the portfolio section, Bloom’s impressive projects are presented within a sophisticated asymmetric editorial grid structure, deliberately breaking the predictable monotony of conventional layouts while thoughtfully echoing the dynamic rhythm of visual storytelling. Individual content sizes and positions shift intuitively throughout the composition, creating intentional moments of both contemplative pause and energetic flow.

    During user interaction, the signature dot elegantly reappears as an intuitive focus indicator, while a smoothly animated marquee title gracefully glides over the preview image — simultaneously drawing attention and adding perceptual depth to the experience. This carefully considered combination creates a remarkably tactile, multi-layered effect that meaningfully rewards user interaction without overwhelming the visual hierarchy or distracting from the exceptional quality of Bloom’s project portfolio.

    Footer

    Thoughtfully designed as a final memorable touchpoint rather than an afterthought, the site’s footer functions as much more than a mere sign-off — it serves as a compelling invitation to further engagement.

    The footer section artfully reprises elements from the initial showreel presentation, elegantly contained within a precisely masked frame that maintains consistent visual language throughout the experience. Both functionally informative and poetically expressive, this distinctive footer ensures that Bloom’s powerful brand experience lingers in the user’s memory — even long after the final scroll action concludes their immediate journey.

    Who We Are

    We proudly define ourselves as a specialized digital design studio operating at the fascinating intersection of compelling narrative, intuitive interaction design, and cutting-edge technology implementation. We fundamentally believe in the transformative power of crafting digital interfaces that move with deliberate intention and purpose, thoughtfully combining minimalist aesthetic principles with boldly distinctive creative identity expressions.

    With each project we undertake, we consistently strive to create memorable digital experiences that communicate with exceptional clarity, move with captivating beauty, and feel genuinely alive and responsive to human interaction.



    Source link

  • From Static to Dynamic: 3 Micro-Animations Every Web Developer Can Master with Rive

    From Static to Dynamic: 3 Micro-Animations Every Web Developer Can Master with Rive


    Interactive web animations have become essential for modern websites, but choosing the right implementation approach can be challenging. CSS, video, and JavaScript are the familiar methods, and each certainly has its place in a developer’s toolkit. When you need your site to have unique custom interactions (while remaining light and performant, of course), that’s where Rive shines.

    Rive animations, whether vector or raster, look crisp at any size, are lightweight (often smaller than equivalent Lottie files), and can respond to user interactions and real-time data through a straightforward JavaScript API.
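
    To give you a feel for that API before we dive into the patterns, here’s a minimal, hedged sketch of embedding a Rive file with the @rive-app/canvas web runtime; the file path, canvas id, and state machine name below are placeholders, not assets from this tutorial:

    import { Rive } from '@rive-app/canvas';

    const heroAnimation = new Rive({
      src: '/animations/taproot-hero.riv',      // placeholder path to a .riv export
      canvas: document.getElementById('hero-canvas'),
      stateMachines: 'Hero State Machine',      // placeholder state machine name
      autoplay: true,
      onLoad: () => {
        // match the drawing surface to the canvas element's CSS size
        heroAnimation.resizeDrawingSurfaceToCanvas();
      },
    });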

    This tutorial will walk you through Rive’s workflow and implementation process using three practical examples. We’ll build them step-by-step using a fictional smart plant care company called “TapRoot” as our case study, so you can see exactly how Rive fits into a real development process and decide if it’s right for your next project.

    There are countless ways to use Rive, but we’ll focus on these three patterns:

    1. Animated Hero Images create an immediate emotional connection and brand personality
    2. Interactive CTAs increase conversion rates by providing clear, satisfying feedback
    3. Flexible Layouts combine elements into an experience that works at any size

    Each pattern builds on the previous one, teaching you progressively more sophisticated Rive techniques while solving real-world UX challenges.

    Pattern 1: The Living Hero Image

    The Static Starting Point

    A static hero section for TapRoot could feature a photo of their smart plant pot with overlay text. It shows the product, but we can do better.

    Creating the Rive Animation

    Let’s create an animated version that transforms this simple scene into a revealing experience that literally shows what makes TapRoot “smarter than it looks.” The animation features:

    • Gently swaying leaves: Constant, subtle motion brings a sense of life to the page.
    • Interior-reveal effect: Hovering over the pot reveals the hidden root system and embedded sensors
    • Product Feature Callouts: Key features are highlighted with interactive callouts

    Although Rive is vector-based, you can also import JPG, PNG, and PSD files. With an embedded image, a mesh can be constructed and a series of bones can be bound to it. Animating the bones creates the subtle swaying motion of the leaves. We’ll loop it at a slow speed so the motion is noticeable, but not distracting.

    Adding Interactivity

    Next we’ll add a hover animation that reveals the inside of the pot. By clipping the image of the front of the pot to a rectangle, we can resize the shape to reveal the layers underneath. Using a joystick allows us to have an animation follow the cursor when it’s within the hit area of the pot and snap back to normal when the cursor leaves the area.

    Feature Callouts

    With a nested artboard, it is easy to build a single layout to create multiple versions of an element. In this case, a feature callout has an updated icon, title, and short description for three separate features.

    The Result

    What was once a simple product photo is now an interactive revelation of TapRoot’s hidden intelligence. The animation embodies the brand message—”smarter than it looks”—by literally revealing the sophisticated technology beneath a beautifully minimal exterior.

    Pattern 2: The Conversion-Boosting Interactive CTA

    Beyond the Basic Button

    Most CTAs are afterthoughts—a colored rectangle with text. But your CTA is often the most important element on your page. Let’s make it irresistible.

    The Static Starting Point

    <button class="cta-button">Get yours today</button>
    .cta-button {
      background: #4CAF50;
      color: white;
      padding: 16px 32px;
      border: none;
      border-radius: 8px;
      font-size: 18px;
      cursor: pointer;
      transition: background-color 0.3s;
    }
    
    .cta-button:hover {
      background: #45a049;
    }

    Looks like this:

    Gets the job done, but we can do better.

    The Rive Animation Design

    Our smart CTA tells a story in three states:

    1. Idle State: Clean, minimal button with an occasional “shine” animation
    2. Hover State: Fingerprint icon begins to follow the cursor
    3. Click State: An animated “tap” of the button
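
    On the web side, these states can then be driven through the state machine’s inputs. The sketch below is an illustration under assumptions rather than the finished TapRoot file: it presumes a state machine named 'CTA' that exposes a boolean 'Hover' input and a 'Press' trigger.

    import { Rive } from '@rive-app/canvas';

    const canvasEl = document.getElementById('cta-canvas'); // placeholder canvas element
    const cta = new Rive({
      src: '/animations/cta.riv',                            // placeholder .riv file
      canvas: canvasEl,
      stateMachines: 'CTA',
      autoplay: true,
      onLoad: () => {
        const inputs = cta.stateMachineInputs('CTA');
        const hover = inputs.find((input) => input.name === 'Hover');
        const press = inputs.find((input) => input.name === 'Press');

        // Drive the animation states from ordinary DOM events
        canvasEl.addEventListener('mouseenter', () => (hover.value = true));
        canvasEl.addEventListener('mouseleave', () => (hover.value = false));
        canvasEl.addEventListener('click', () => press.fire());
      },
    });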

    Pattern 3: Flexible Layout

    Next we can combine the elements into a responsive animated layout that works on any device size. Rive’s layout features familiar row and column arrangements and lets you determine how your animated elements fit within areas as they resize.

    Check this out on the Rive Marketplace to dive into the file or remix it: https://rive.app/community/files/21264-39951-taproot-layout/

    Beyond These Three Patterns

    Once you’re comfortable with hero images, interactive CTAs, and flexible layouts, you can apply the same Rive principles to:

    • Loading states that tell stories while users wait
    • Form validation that guides users with gentle visual feedback
    • Data visualizations that reveal insights through motion
    • Onboarding flows that teach through interaction
    • Error states that maintain user confidence through friendly animation

    Your Next Steps

    1. Start Simple: Choose one existing static element on your site
    2. Design with Purpose: Every animation should solve a real user problem
    3. Test and Iterate: Measure performance and user satisfaction
    4. Explore Further: Check out the Rive Documentation and Community for inspiration

    Conclusion

    The web is becoming more interactive and alive. By understanding how to implement Rive animations—from X-ray reveals to root network interactions—you’re adding tools that create experiences users remember and share.

    The difference between a good website and a great one often comes down to these subtle details: the satisfying feedback of a button click, the smooth transition between themes, the curiosity sparked by hidden technology. These micro-interactions connect with users on an emotional level while providing genuine functional value.



    Source link

  • Invisible Forces: The Making of Phantom.land’s Interactive Grid and 3D Face Particle System

    Invisible Forces: The Making of Phantom.land’s Interactive Grid and 3D Face Particle System


    From the outset, we knew we wanted something that subverted any conventional agency website formulas. Instead,
    inspired by the unseen energy that drives creativity, connection and transformation, we arrived at the idea of
    invisible forces. Could we take the powerful yet intangible elements that shape our world—motion, emotion,
    intuition, and inspiration—and manifest them in a digital space?

    We were excited about creating something that included many custom interactions and a very experiential feel. However,
    our concern was picking a set of tools that would allow most of our developers to contribute to and maintain the site
    after launch.

    We chose to start from a Next / React base, as we often do at Phantom. React also has the advantage of being
    compatible with the excellent React Three Fiber library, which we used to seamlessly bridge the gap between our DOM
    components and the WebGL contexts used across the site. For styles, we are using our very own CSS components as
    well as SASS.

    For interactive behaviours and animation, we chose to use GSAP for two main reasons. Firstly, it contains a lot of
    plugins we know and love, such as SplitText, CustomEase and ScrollTrigger. Secondly, GSAP allows us to use a single
    animation framework across DOM and WebGL components.

    We could go on and on talking about the details behind every single animation and micro-interaction on the site, but
    for this piece we have chosen to focus our attention on two of the most unique components of our site: the homepage
    grid and the scrollable employee face particle carousel.

    The Homepage Grid

    It took us a very long time to get this view to perform and feel just how we wanted it to. In this article, we will focus on the interactive part. For more info on how we made things performant, head to our previous article: Welcome back to Phantomland

    Grid View

    The project’s grid view is integrated into the homepage by incorporating a primitive Three.js object into a React
    Three Fiber scene.

    //GridView.tsx
    const GridView = () => {
      return (
        <Canvas>
          ...
          <ProjectsGrid />
          <Postprocessing />
        </Canvas>
      );
    }
    
    //ProjectsGrid.tsx
    const ProjectsGrid = ({atlases, tiles}: Props) => {
      const {canvas, camera} = useThree();
      
      const grid = useMemo(() => {
        return new Grid(canvas, camera, atlases, tiles);
      }, [canvas, camera, atlases, tiles]);
    
      if(!grid) return null;
      return (
        <primitive object={grid} />
      );
    }

    We initially wanted to write all the code for the grid using React Three Fiber but realised that, due to the
    complexity of our grid component, a vanilla Three.js class would be easier to maintain.

    One of the key elements that gives our grid its iconic feel is our post-processing distortion effect. We implemented
    this feature by creating a custom shader pass within our post-processing pipeline:

    // Postprocessing.tsx
    const Postprocessing = () => {
      const {gl, scene, camera} = useThree();
      
      // Create Effect composer
      const {effectComposer, distortionShader} = useMemo(() => {
        const renderPass = new RenderPass(scene, camera);
        const distortionShader = new DistortionShader();
        const distortionPass = new ShaderPass(distortionShader);
        const outputPass = new OutputPass();
    
        const effectComposer = new EffectComposer(gl);
        effectComposer.addPass(renderPass);
        effectComposer.addPass(distortionPass);
        effectComposer.addPass(outputPass);
    
        return {effectComposer, distortionShader};
      }, []);
      
      // Update distortion intensity
      useEffect(() => {
        if (workgridState === WorkgridState.INTRO) {
          distortionShader.setDistortion(CONFIG.distortion.flat);
        } else {
          distortionShader.setDistortion(CONFIG.distortion.curved);
        }
      }, [workgridState, distortionShader]);
      
      // Render the scene through the effect composer every frame
      useFrame(() => {
        effectComposer.render();
      }, 1);
     
      return null;
    }

    When the grid transitions in and out on the site, the distortion intensity changes to make the transition feel
    natural. This animation is done through a simple tween in our DistortionShader class:

    class DistortionShader extends ShaderMaterial {
      private distortionIntensity = 0;

      constructor() {
        super({
          name: 'DistortionShader',
          uniforms: {
            distortionIntensity: {value: new Vector2()},
            ...
          },
          vertexShader,
          fragmentShader,
        });
      }

      update() {
        // Scale the distortion by the viewport aspect ratio
        const ratio = window.innerWidth / window.innerHeight;
        this.uniforms[DistortionShaderUniforms.DISTORTION].value.set(
          this.distortionIntensity * ratio,
          this.distortionIntensity * ratio,
        );
      }

      setDistortion(value: number) {
        gsap.to(this, {
          distortionIntensity: value,
          duration: 1,
          ease: 'power2.out',
          onUpdate: () => this.update(),
        });
      }
    }

    Then the distortion is applied through our custom shader:

    // fragment.ts
    export const fragmentShader = /* glsl */ `
      uniform sampler2D tDiffuse;
      uniform vec2 distortion;
      uniform float vignetteOffset;
      uniform float vignetteDarkness;

      varying vec2 vUv;
      
      // convert uv range from 0 -> 1 to -1 -> 1
      vec2 getShiftedUv(vec2 uv) {
        return 2. * (uv - .5);
      }
      
      // convert uv range from -1 -> 1 to 0 -> 1
      vec2 getUnshiftedUv(vec2 shiftedUv) {
        return shiftedUv * 0.5 + 0.5;
      }


      void main() {
        vec2 shiftedUv = getShiftedUv(vUv);
        float distanceToCenter = length(shiftedUv);
        
        // Lens distortion effect
        shiftedUv *= (0.88 + distortion * dot(shiftedUv, shiftedUv));
        vec2 transformedUv = getUnshiftedUv(shiftedUv);
        
        // Vignette effect
        float vignetteIntensity = smoothstep(0.8, vignetteOffset * 0.799,  (vignetteDarkness + vignetteOffset) * distanceToCenter);
        
        // Sample render texture and output fragment
        vec3 color = texture2D( tDiffuse, transformedUv ).rgb * vignetteIntensity;
        gl_FragColor = vec4(color, 1.);
      }
    `;

    We also added a vignette effect to our post-processing shader to darken the corners of the viewport, focusing the
    user’s attention toward the center of the screen.

    In order to make our home view as smooth as possible, we also spent a fair amount of time crafting the
    micro-interactions and transitions of the grid.

    Ambient mouse offset

    When the user moves their cursor around the grid, the grid moves slightly in the opposite direction, creating a very
    subtle ambient floating effect. This was simply achieved by calculating the mouse position on the grid and moving the
    grid mesh accordingly:

    getAmbientCursorOffset() {
      // Get the pointer coordinates in UV space ( 0 - 1 ) range
      const uv = this.navigation.pointerUv;
      // Clone so we don't mutate the stored pointer value
      const offset = uv.clone().subScalar(0.5).multiplyScalar(0.2);
      return offset;
    }
    
    update() {
      ...
      // Apply cursor offset to grid position
      const cursorOffset = this.getAmbientCursorOffset();
      this.mesh.position.x += cursorOffset.x;
      this.mesh.position.y += cursorOffset.y;
    }

    Drag Zoom

    When the grid is dragged around, a zoom-out effect occurs and the camera seems to pan away from the grid. We created
    this effect by detecting when the user starts and stops dragging their cursor, then using that to trigger a GSAP
    animation with a custom ease for extra control.

    onPressStart = () => {
      this.animateCameraZ(0.5, 1);
    }
    
    onPressEnd = (isDrag: boolean) => {
      if(isDrag) {
        this.animateCameraZ(0, 1);
      }
    }
    
    animateCameraZ(distance: number, duration: number) {
      gsap.to(this.camera.position, {
        z: distance,
        duration,
        ease: CustomEase.create('cameraZoom', '.23,1,0.32,1'),
      });
    }

    Drag Movement

    Last but not least, when the user drags across the grid and releases their cursor, the grid slides through with a
    certain amount of inertia.

    drag(offset: Vector2) {
      this.dragAction = offset;
    
      // Gradually increase velocity with drag time and distance
      this.velocity.lerp(offset, 0.8);
    }
    
    // Every frame
    update() {
      // positionOffset is later used to move the grid mesh
      if(this.isDragAction) {
        // if the user is dragging their cursor, add the drag value to offset
        this.positionOffset.add(this.dragAction.clone());
      } else {
        // if the user is not dragging, add the velocity to the offset
        this.positionOffset.add(this.velocity);
      }
    
      this.dragAction.set(0, 0);
      // Attenuate velocity with time
      this.velocity.lerp(new Vector2(), 0.1);
    }

    Face Particles

    The second major component we want to highlight is our employee face carousel, which presents team members through a
    dynamic 3D particle system. Built with React Three Fiber’s BufferGeometry and custom GLSL shaders, this
    implementation leverages custom shader materials for lightweight performance and flexibility, allowing us to
    generate entire 3D face representations using only a 2D colour photograph and its corresponding depth map—no 3D
    models required.

    Core Concept: Depth-Driven Particle Generation

    The foundation of our face particle system lies in converting 2D imagery into volumetric 3D representations. We’ve
    kept things efficient, with each face using only two optimized 256×256 WebP images (under 15KB each).

    To capture the images, each member of the Phantom team was 3D scanned using RealityScan from Unreal Engine on
    iPhone, creating a 3D model of their face.

    These scans were cleaned up and then rendered from Cinema4D with a position and colour pass.

    The position pass was converted into a greyscale depth map in Photoshop, and this—along with the colour pass—was
    retouched where needed, cropped, and then exported from Photoshop to share with the dev team.

    Each face is constructed from approximately 78,400 particles (280×280 grid), where each particle’s position and
    appearance is determined by sampling data from our two source textures.

    /* generate positions attributes array */
    const POINT_AMOUNT = 280;
    
    const points = useMemo(() => {
      const length = POINT_AMOUNT * POINT_AMOUNT;
      const vPositions = new Float32Array(length * 3);
      const vIndex = new Float32Array(length * 2);
      const vRandom = new Float32Array(length * 4);
    
      for (let i = 0; i < length; i++) {
          const i2 = i * 2;
          vIndex[i2] = (i % POINT_AMOUNT) / POINT_AMOUNT;
          vIndex[i2 + 1] = i / POINT_AMOUNT / POINT_AMOUNT;
    
          const i3 = i * 3;
          const theta = Math.random() * 360;
          const phi = Math.random() * 360;
          vPositions[i3] = 1 * Math.sin(theta) * Math.cos(phi);
          vPositions[i3 + 1] = 1 * Math.sin(theta) * Math.sin(phi);
          vPositions[i3 + 2] = 1 * Math.cos(theta);
    
          const i4 = i * 4;
          vRandom.set(
            Array(4)
              .fill(0)
              .map(() => Math.random()),
            i4,
          );
      }
    
      return {vPositions, vRandom, vIndex};
    }, []);
    // React Three Fiber component structure 
    const FaceParticleSystem = ({ particlesData, currentDataIndex }) => {
      return (
        <points ref={pointsRef} position={pointsPosition}>
          <bufferGeometry>
            <bufferAttribute attach="attributes-vIndex" 
                 args={[points.vIndex, 2]} />
            <bufferAttribute attach="attributes-position"
                 args={[points.vPositions, 3]} />
            <bufferAttribute attach="attributes-vRandom"
                 args={[points.vRandom, 4]} />
          </bufferGeometry>
          
          <shaderMaterial
            blending={NormalBlending}
            transparent={true}
            fragmentShader={faceFrag}
            vertexShader={faceVert}
            uniforms={uniforms}
          />
        </points>
      );
    };

    The depth map provides normalized values (0–1) that directly translate to Z-depth positioning. A value of 0 represents
    the furthest point (background), while 1 represents the closest point (typically the nose tip).

    /* vertex shader */ 
    
    // sample depth and color data for each particle
    vec3 depthTexture1 = texture2D(depthMap1, vIndex.xy).xyz;
    
    // convert depth to Z-position
    float zDepth = (1. - depthValue.z);
    pos.z = (zDepth * 2.0 - 1.0) * zScale;

    Dynamic Particle Scaling Through Colour Analysis

    One of the key methods that brings our faces to life is utilizing colour data to influence particle scale. In our
    vertex shader, rather than using uniform particle sizes, we analyze the colour density of each pixel so that brighter,
    more colourful areas of the face (like eyes, lips, or well-lit cheeks) generate larger, more prominent particles,
    while darker areas (shadows, hair) create smaller, subtler particles. The result is a more organic, lifelike
    representation that emphasizes facial features naturally.

    /* vertex shader */ 
    
    vec3 colorTexture1 = texture2D(colorMap1, vIndex.xy).xyz;
    
    // calculate color density
    float density = (mainColorTexture.x + mainColorTexture.y + mainColorTexture.z) / 3.;
    
    // map density to particle scale
    float pScale = mix(pScaleMin, pScaleMax, density);

    The calibration below demonstrates the influence of colour (contrast, brightness, etc.) on the final 3D particle formation.

    Ambient Noise Animation

    To prevent static appearances and maintain visual interest, we apply continuous noise-based animation to all
    particles. This ambient animation system uses curl noise to create subtle, flowing movement across the entire
    face structure.

    /* vertex shader */ 
    
    // primary curl noise for overall movement 
    pos += curlNoise(pos * curlFreq1 + time) * noiseScale * 0.1;
    // animation updates in React Three Fiber
    
    useFrame((state, delta) => {
      if (!materialRef.current) return;
      
      materialRef.current.uniforms.time.value = state.clock.elapsedTime * NOISE_SPEED;
      
      // update rotation based on mouse interaction
      easing.damp(pointsRef.current.rotation, 'y', state.mouse.x * 0.12 * Math.PI, 0.25, delta);
      easing.damp(pointsRef.current.rotation, 'x', -state.pointer.y * 0.05 * Math.PI, 0.25, delta);
    
    });

    Face Transition Animation

    When transitioning between different team members, we combine timeline-based interpolation with visual effects written
    in shader materials.

    GSAP-Driven Lerp Method

    The transition foundation uses GSAP timelines to animate multiple shader parameters simultaneously:

    timelineRef.current = gsap
      .timeline()
      .fromTo(uniforms.transition, {value: 0}, {value: 1.3, duration: 1.6})
      .to(uniforms.posZ, {value: particlesParams.offset_z, duration: 1.6}, 0)
      .to(uniforms.zScale, {value: particlesParams.face_scale_z, duration: 1.6}, 0);

    And the shader handles the visual blending between two face states:

    /* vertex shader */ 
    
    // smooth transition curve
    float speed = clamp(transition * mix(0.8, .9, transition), 0., 1.0); 
    speed = smoothstep(0.0, 1.0, speed); 
    
    // blend textures 
    vec3 mainColorTexture = mix(colorTexture1, colorTexture2, speed); 
    vec3 depthValue = mix(depthTexture1, depthTexture2, speed);

    To add visual interest during transitions, we further inject additional noise that’s strongest at the midpoint of the
    transition. This creates a subtle “disturbance” effect where particles temporarily deviate from their target
    positions, making transitions feel more dynamic and organic.

    /* vertex shader */ 
    
    // secondary noise movement applied for transition
    float randomZ = vRandom.y + cnoise(pos * curlFreq2 + t2) * noiseScale2;
    
    float smoothTransition = abs(sin(speed * PI)); 
    pos.x += nxScale * randomZ * 0.1 * smoothTransition; 
    pos.y += nyScale * randomZ * 0.1 * smoothTransition;
    pos.z += nzScale * randomZ * 0.1 * smoothTransition;

    Custom Depth of Field Effect

    To enhance the three-dimensional perception, we implemented a custom depth of field effect directly in our shader
    material. It calculates view-space distance for each particle and modulates both opacity and size based on proximity
    to a configurable focus plane.

    /* vertex shader - calculate view distance */
    
    vec4 viewPosition = viewMatrix * modelPosition;
    vDistance = abs(focus + viewPosition.z);
    
    // apply distance to point size for blur effect 
    gl_PointSize = pointSize * pScale * vDistance * blur * totalScale;
    /* fragment shader - calculate distance-based alpha for DOF */
    
    
    float alpha = (1.04 - clamp(vDistance * 1.5, 0.0, 1.0));
    gl_FragColor = vec4(color, alpha);

    Challenges: Unifying Face Scales

    One of the challenges we faced was achieving visual consistency across different team members’ photos. Each photograph
    was captured under slightly different conditions—varying lighting, camera distances, and facial proportions.
    Therefore, we went through each face to calibrate multiple scaling factors:

    • Depth scale calibration to ensure no nose protrudes too aggressively
    • Colour density balancing to maintain consistent particle size relationships
    • Focus plane optimization to prevent excessive blur on any individual face
    // individual face parameters requiring manual tuning 
    
    particle_params: { 
      offset_z: 0,           // overall Z-position
      z_depth_scale: 0,      // depth map scaling factor
      face_size: 0,          // overall face scale 
    }

    Final Words

    Our face particle system demonstrates how simple yet careful technical implementation can create fun visual
    experiences from minimal assets. By combining lightweight WebP textures, custom shader materials, and animations,
    we’ve created a system that transforms simple 2D portraits into interactive 3D figures.

    Check out the full site.

    Curious about what we’re up to in the Phantom studio? Or have a project you think we’d be interested in? Get in touch.



    Source link

  • Motion Highlights #10

    Motion Highlights #10



    A fresh collection of motion designs and animations from the creative community.



    Source link

  • Inside the Frontier of AI, WebXR & Real-Time 3D: Crafting KODE Immersive

    Inside the Frontier of AI, WebXR & Real-Time 3D: Crafting KODE Immersive


    KODE Immersive fuses AR, VR, real-time 3D, and spatial computing to craft high-impact, interactive experiences. It’s not just a platform – it’s a portal. Designed to ignite emotion, shatter expectations, and redefine digital engagement.

    Our challenge? To bring this pioneering vision to life, not just by explaining what KODE Immersive is, but by making visitors experience what it’s like to be inside it.

    Background

    Our relationship with KODE began in 2022 when we extended their brand identity and reimagined their digital home. What started as a brand refresh quickly evolved into a creative partnership rooted in shared values and a mutual obsession with crafted brand experience and beautiful design.

    In late 2024, KODE approached us with a new venture. This time, they were diving headfirst into emerging technologies (AI, WebXR, and real-time 3D) to expand their service offering. We knew immediately, this was the kind of project you dream about. It was a timely opportunity and got us excited to push boundaries.

    The Brief

    The brief was as open as it gets. Beyond a few core parameters (namely, budget and timeline), there were no strict constraints. We received a three-slide deck: a name, a positioning statement, three brand pillars (CREATE, IDEATE, DELIVER), and a few straplines.

    No case studies. No visual identity. Just a bold vision.

    And that freedom became our greatest asset. We built everything from scratch: visual language, tone, interactions, all while staying mindful of budget and speed. Our approach: move fast, iterate often, and push boundaries.

    To pull it off, we adopted a phased R&D process. We teamed up with the brilliant Francesco Michelini (who previously helped build the Malvah website). Francesco lives and breathes WebGL. He once spent a week refining a mechanic we had already agreed to abandon, just because he couldn’t accept defeat. That kind of drive made him the perfect collaborator.

    Our Process

    We used KODE Immersive as a live testing ground for our refined four-phase process, aimed at delivering the best creative solutions while avoiding unnecessary feedback loops. Here’s how it shaped the final outcome.

    01 Discover

    We kicked things off with an in-depth strategy session where we unpacked the brief, explored concepts, discussed competitors, and mapped out technical possibilities. Style tiles helped form the foundation of our visual language.

    Typography was the key differentiator. We knew the right typeface would communicate innovation and intent. After multiple rounds, we landed on Brut by Off-Type – an unconventional mono-inspired form that struck just the right balance of structure and tension.

    Colour took cues from the parent brand, but we flipped the hierarchy. Orange became the dominant tone, with bold black moments layered throughout. Familiar, yet distinctive.

    Iconography evolved from KODE’s chevron mark. We repurposed it as a modular, dynamic system to guide interactions and reflect the brand’s three core pillars.

    02 Create

    This phase became interesting: since the experience would rely heavily on user interaction, it was driven more by prototyping than traditional design. We worked in tight, iterative loops with the client across design, 3D, and development to test feasibility early and often. It became an extremely organic process, ideal for reaching the deadline while stretching limitations.

    From the start, we knew we didn’t just want users to interact—we wanted them to feel immersed. To lose track of time by being emotionally and mentally engaged. 

    We developed a range of 3D concepts in Cinema 4D and funnelled them through R&D cycles. The process required a lot of iterating and relooking creative solutions, but was always collaborative – and ultimately, essential for innovation.

    03 Craft

    This is where the magic happens. 

    Our craft is what we consider our special sauce at Malvah – this is where we like to push, refine, and design with intent and clarity. It’s hard not to get lost in the sauce. Massive respect for Francesco during this phase, as it is the most intense in terms of iterations: from shader logic to ambient lighting to the haptic quality of cursor interactions, every component was built to feel immersive yet effortless. Luckily, Francesco is an actual living wizard and provided us with testing environments where we could craft all these elements seamlessly.

    Still, something was missing! The high-fidelity 3D was clashing with the flat backgrounds. The fix? A subtle layer of pixel distortion and soft noise texture. Minimal, but transformative. Suddenly, the whole experience felt unified – like everything finally belonged together.

    04 Deliver

    By final QA, most of the heavy lifting was done. We stress-tested performance across browsers, devices, and connection speeds. We refined micro-interactions and polished details based on early user feedback.

    Tech Stack

    Nerd stuff alert.

    From the outset, this was always going to be a Three.js and WebGL project – not for the novelty, but for the storytelling power. Real-time 3D let us turn a static brand into a living, breathing experience. We used Cinema 4D for concepting and prototyping, from early ideation through to final modelling and matcap creation.

    One of the most impactful performance optimisations came through the use of BatchedMesh, which enabled us to draw multiple meshes sharing the same material in a single draw call. Since draw calls are among the most expensive operations in WebGL, this dramatically improved efficiency, reducing calls from 40 or 50 down to just one. You’ll see this in action in both the hero section and the footer, where we also implemented the Rapier physics engine for dynamic interaction.
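
    As a rough illustration of that batching idea (not KODE’s production code, and note that the exact BatchedMesh API has shifted slightly between recent Three.js releases), it might look something like this:

    import * as THREE from 'three';

    const material = new THREE.MeshStandardMaterial({ color: 0xff5500 });

    // Reserve room for up to 50 instances that share one material -> one draw call
    const batched = new THREE.BatchedMesh(50, 5000, 15000, material);

    const chevronGeometry = new THREE.ConeGeometry(0.5, 1, 4); // stand-in geometry
    const geometryId = batched.addGeometry(chevronGeometry);

    for (let i = 0; i < 50; i++) {
      const instanceId = batched.addInstance(geometryId);
      const matrix = new THREE.Matrix4().setPosition(
        (Math.random() - 0.5) * 10,
        (Math.random() - 0.5) * 10,
        (Math.random() - 0.5) * 10
      );
      batched.setMatrixAt(instanceId, matrix);
    }

    scene.add(batched); // assuming an existing Three.js scene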

    The real breakthrough, however, was moving the rendering of our most resource-intensive canvases to an OffscreenCanvas, with all related logic handled inside a WebWorker. This shift happened later in the project and required significant reworking, but the gains in performance and responsiveness were undeniable. It was a technically ambitious move, but one that paid off. 
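
    For anyone curious to try the same pattern, here is a simplified sketch of the OffscreenCanvas-plus-worker setup; the file names, message shape, and demo scene are ours for illustration, not KODE’s:

    // main.js – hand the canvas over to a worker so the main thread stays free
    const canvas = document.querySelector('#heavy-canvas');
    const offscreen = canvas.transferControlToOffscreen();
    const worker = new Worker(new URL('./render.worker.js', import.meta.url), { type: 'module' });
    worker.postMessage({ canvas: offscreen, width: innerWidth, height: innerHeight }, [offscreen]);

    // render.worker.js – all WebGL work happens off the main thread
    import * as THREE from 'three';

    self.onmessage = ({ data }) => {
      const renderer = new THREE.WebGLRenderer({ canvas: data.canvas, antialias: true });
      renderer.setSize(data.width, data.height, false);

      const scene = new THREE.Scene();
      const camera = new THREE.PerspectiveCamera(75, data.width / data.height, 0.1, 100);
      camera.position.z = 5;

      const mesh = new THREE.Mesh(
        new THREE.IcosahedronGeometry(1, 2),
        new THREE.MeshNormalMaterial()
      );
      scene.add(mesh);

      renderer.setAnimationLoop(() => {
        mesh.rotation.y += 0.01;
        renderer.render(scene, camera);
      });
    };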

    Features

    The site follows a continuous scroll narrative—a careful dance between interactivity, emotion, and information. The primary goal is to provoke curiosity and invite deep engagement. From top to bottom, here’s a rundown of our favourite features.

    Chevron

    We land on the hero of the brand, the logo-mark. The chevron is the anchor, both literally and metaphorically, and the driving force behind the iconography that funnels through the experience. We wanted the entry point to set the tone: bold, dynamic, and intuitive for the user to explore.

    Shifting Text

    One of those happy accidents. Inspired by a line that didn’t make the final copy, we developed a mechanic where text splits and shifts as the cursor moves. A metaphor for deconstruction and reformation – fluid, dynamic, alive.

    Icons

    A playful space to explore, discover, and interact. Designed to echo the brand’s chevron and embody its core pillars.

    Menu

    One of our favourite elements. It subverts the typical UX pattern by growing from the base and transforming into the footer as users scroll; a small disruption that makes a big impact.

    SFX

    Sound is often the unsung hero. We follow the 80/20 rule here, also known as the Pareto Principle: just the right amount to amplify emotion without overwhelming the experience. From section transitions to hover feedback, the layered soundscape adds depth and atmosphere. The transition from the landing section to the services leaves the user feeling as if they are entering a new realm.

    We worked with Martin Leitner from Sounds Good to curate the sound elements, supporting the experience and bringing the interaction with the 3D elements to life. This was such a great collaboration, and Martin’s enthusiasm helped drive the process and the team’s excitement.

    Easter Egg

    We always planned for an easter egg; we just didn’t know what it was until it revealed itself.

    A sketch mechanic, pulled from KODE’s visual identity, was integrated into the cursor. Users can draw on the screen to reveal a hidden layer; a playful nod to the analogue-digital duality of the brand.

    Early testers missed it entirely, so we added a subtle auto-activation trigger at just the right moment. Problem solved.

    Reflections

    This project reminded us that the best results often emerge from ambiguity. No case studies. No visual assets. No roadmap. Just vision and trust.

    While we’re proud of what we’ve delivered, we’ve only scratched the surface. Phase Two will introduce interactive case studies and deeper storytelling. We’re especially excited to explore a z-axis scroll journey through each service, bringing dimension and discovery to the next level. For now, KODE Immersive is live.

    And it’s only getting started.



    Source link

  • Coding a 3D Audio Visualizer with Three.js, GSAP & Web Audio API

    Coding a 3D Audio Visualizer with Three.js, GSAP & Web Audio API


    Sound is vibration, vision is vibration you can see. I’m always chasing the moment those waves overlap. For a recent Webflow & GSAP community challenge focusing on GSAP Draggable and Inertia Plugin, I decided to push the idea further by building a futuristic audio-reactive visualizer. The concept was to create a sci-fi “anomaly detector” interface that reacts to music in real time, blending moody visuals with sound.

    The concept began with a simple image in my mind: a glowing orange-to-white sphere sitting alone in a dark void, the core that would later pulse with the music. To solidify the idea, I ran this prompt through Midjourney: “Glowing orange and white gradient sphere, soft blurry layers, smooth distortion, dark black background, subtle film-grain, retro-analog vibe, cinematic lighting.” After a few iterations I picked the frame that felt right, gave it a quick color pass in Photoshop, and used that clean, luminous orb as the visual foundation for the entire audio-reactive build.

    Midjourney explorations

    The project was originally built as an entry for the Webflow × GSAP Community Challenge (Week 2: “Draggable & Inertia”), which encouraged the use of GSAP’s dragging and inertia capabilities. This context influenced the features: I made the on-screen control panels draggable with momentum, and even gave the 3D orb a subtle inertia-driven movement when “flung”. In this article, I’ll walk you through the entire process – from setting up the Three.js scene and analyzing audio with the Web Audio API, to creating custom shaders and adding GSAP animations and interactivity. By the end, you’ll see how code, visuals, and sound come together to create an immersive audio visualizer.

    Setting Up the Three.js Scene

    To build the 3D portion, I used Three.js to create a scene containing a dynamic sphere (the “anomaly”) and other visual elements. 

    We start with the usual Three.js setup: a scene, a camera, and a renderer. I went with a perspective camera to get a nice 3D view of our orb and placed it a bit back so the object is fully in frame. 

    An OrbitControls instance allows basic click-and-drag orbiting around the object (with some damping for smoothness). Here’s a simplified snippet of the initial setup:

    // Initialize Three.js scene, camera, renderer
    const scene = new THREE.Scene();
    const camera = new THREE.PerspectiveCamera(75, window.innerWidth/window.innerHeight, 0.1, 100);
    camera.position.set(0, 0, 10);  // camera back a bit from origin
    
    const renderer = new THREE.WebGLRenderer({ antialias: true });
    renderer.setSize(window.innerWidth, window.innerHeight);
    document.body.appendChild(renderer.domElement);
    
    // Add OrbitControls for camera rotation
    const controls = new THREE.OrbitControls(camera, renderer.domElement);
    controls.enableDamping = true;
    controls.dampingFactor = 0.1;
    controls.rotateSpeed = 0.5;
    controls.enableZoom = false; // lock zoom for a more fixed view

    Next, I created the anomaly object. This is the main feature: a spiky wireframe sphere that reacts to audio. Three.js provides shapes like SphereGeometry or IcosahedronGeometry that we can use for a sphere. I chose an icosahedron geometry because it gives an interesting multi-sided look and allows easy control of detail (via a subdivision level). The anomaly is actually composed of two overlapping parts:

    • Outer wireframe sphere: An IcosahedronGeometry with a custom ShaderMaterial that draws it as a glowing wireframe. This part will distort based on music (imagine it “vibrating” and morphing with the beat).
    • Inner glow sphere: A slightly larger SphereGeometry drawn with a semi-transparent, emissive shader (using the backside of the geometry) to create a halo or aura around the wireframe. This gives the orb a warm glow effect, like an energy field.

    I also added in some extra visuals: a field of tiny particles floating in the background (for a depth effect, like dust or sparks) and a subtle grid overlay in the UI (more on the UI later). The scene’s background is set to a dark color, and I layered a background image (the edited Midjourney visual) behind the canvas to create the mysterious-alien landscape horizon. This combination of 3D objects and 2D backdrop creates the illusion of a holographic display over a planetary surface.
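
    For reference, a background dust field like that can be as simple as a THREE.Points cloud; the count and sizes below are illustrative rather than the demo’s exact values:

    const particleCount = 800;                       // illustrative count
    const positions = new Float32Array(particleCount * 3);
    for (let i = 0; i < positions.length; i++) {
      positions[i] = (Math.random() - 0.5) * 40;     // scatter inside a 40-unit cube
    }

    const particleGeometry = new THREE.BufferGeometry();
    particleGeometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));

    const particleMaterial = new THREE.PointsMaterial({
      color: 0xff8866,
      size: 0.05,
      transparent: true,
      opacity: 0.6,
      depthWrite: false,
    });

    scene.add(new THREE.Points(particleGeometry, particleMaterial));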

    Integrating the Web Audio API for Music Analysis

    With the 3D scene in place, the next step was making it respond to music. This is where the Web Audio API comes in. I allowed the user to either upload an audio file or pick one of the four provided tracks. When the audio plays, we tap into the audio stream and analyze its frequencies in real-time using an AnalyserNode. The AnalyserNode gives us access to frequency data. This is a snapshot of the audio spectrum (bass, mids, treble levels, etc.) at any given moment, which we can use to drive animations.

    To set this up, I created an AudioContext and an AnalyserNode, and connected an audio source to it. If you’re using an <audio> element for playback, you can create a MediaElementSource from it and pipe that into the analyser. For example:

    // Create AudioContext and Analyser
    const audioContext = new (window.AudioContext || window.webkitAudioContext)();
    const analyser = audioContext.createAnalyser();
    analyser.fftSize = 2048;                  // Use an FFT size of 2048 for analysis
    analyser.smoothingTimeConstant = 0.8;     // Smooth out the frequencies a bit
    
    // Connect an audio element source to the analyser
    const audioElement = document.getElementById('audio-player');  // <audio> element
    const source = audioContext.createMediaElementSource(audioElement);
    source.connect(analyser);
    analyser.connect(audioContext.destination);  // connect to output so sound plays

    Here we set fftSize to 2048, which means the analyser will break the audio into 1024 frequency bins (frequencyBinCount is half of fftSize). We also set a smoothingTimeConstant to make the data less jumpy frame-to-frame. Now, as the audio plays, we can repeatedly query the analyser for data. The method analyser.getByteFrequencyData(array) fills an array with the current frequency magnitudes (0–255) across the spectrum. Similarly, getByteTimeDomainData gives waveform amplitude data. In our animation loop, I call analyser.getByteFrequencyData() on each frame to get fresh data:

    const frequencyData = new Uint8Array(analyser.frequencyBinCount);
    let audioLevel = 0;  // declared outside the loop so the visuals can read it
    
    function animate() {
      requestAnimationFrame(animate);
    
      // ... update Three.js controls, etc.
      if (analyser) {
        analyser.getByteFrequencyData(frequencyData);
        // Compute an average volume level from frequency data
        let sum = 0;
        for (let i = 0; i < frequencyData.length; i++) {
          sum += frequencyData[i];
        }
        const average = sum / frequencyData.length;
        audioLevel = average / 255;  // normalize to 0.0–1.0
        // Apply a sensitivity scaling (from a UI slider) 
        audioLevel *= (sensitivity / 5.0);
        // Now audioLevel represents the intensity of the music (0 = silence, ~1 = very loud)
      }
    
      // ... (use audioLevel to update visuals)
      renderer.render(scene, camera);
    }

    In my case, I also identified a “peak frequency” (the frequency bin with the highest amplitude at a given moment) and some other metrics just for fun, which I display on the UI (e.g. showing the dominant frequency in Hz, amplitude, etc., as “Anomaly Metrics”). But the key takeaway is the audioLevel – a value representing overall music intensity – which we’ll use to drive the 3D visual changes.
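
    For completeness, here is roughly how such a peak-frequency readout can be derived from the same frequencyData array (the variable names are illustrative):

    // Find the loudest bin and convert its index to Hz; each bin spans
    // sampleRate / fftSize Hz of the spectrum
    let peakIndex = 0;
    for (let i = 1; i < frequencyData.length; i++) {
      if (frequencyData[i] > frequencyData[peakIndex]) peakIndex = i;
    }
    const peakFrequencyHz = peakIndex * audioContext.sampleRate / analyser.fftSize;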

    Syncing Audio with Visuals: Once we have audioLevel, we can inject it into our Three.js world. I passed this value into the shaders as a uniform every frame, and also used it to tweak some high-level motion (like rotation speed). Additionally, GSAP animations were triggered by play/pause events (for example, a slight camera zoom when music starts, which we’ll cover next). The result is that the visuals move in time with the music: louder or more intense moments in the audio make the anomaly glow brighter and distort more, while quiet moments cause it to settle down.

    Creating the Audio-Reactive Shaders

    To achieve the dynamic look for the anomaly, I used custom GLSL shaders in the material. Three.js lets us write our own shaders via THREE.ShaderMaterial, which is perfect for this because it gives fine-grained control over vertex positions and fragment colors. This might sound difficult if you’re new to shaders, but conceptually we did two major things in the shader:

    1. Vertex Distortion with Noise: We displace the vertices of the sphere mesh over time to make it wobble and spike. I included a 3D noise function (Simplex noise) in the vertex shader – it produces a smooth pseudo-random value for any 3D coordinate. For each vertex, I calculate a noise value based on its position (plus a time factor to animate it). Then I move the vertex along its normal by an amount proportional to that noise. We also multiply this by our audioLevel and a user-controlled distortion factor. Essentially, when the music is intense (high audioLevel), the sphere gets spikier and more chaotic; when the music is soft or paused, the sphere is almost smooth.
    2. Fresnel Glow in Fragment Shader: To make the wireframe edges glow and fade realistically, I used a fresnel effect in the fragment shader. This effect makes surfaces more luminous at glancing angles. We calculate it by taking the dot product of the view direction and the vertex normal – it results in a value that’s small on edges (grazing angles) and larger on faces directly facing the camera. By inverting and exponentiating this, we get a nice glow on the outline of the sphere that intensifies at the edges. I modulated the fresnel intensity with the audioLevel as well, so the glow pulsates with the beat.

    Let’s look at a simplified version of the shader code for the outer wireframe sphere material:

    const outerMaterial = new THREE.ShaderMaterial({
      uniforms: {
        time:      { value: 0 },
        audioLevel:{ value: 0 },            // this will be updated each frame
        distortion:{ value: 1.0 },
        color:     { value: new THREE.Color(0xff4e42) }  // a reddish-orange base color
      },
      wireframe: true,
      transparent: true,
      vertexShader: `
        uniform float time;
        uniform float audioLevel;
        uniform float distortion;
        varying vec3 vNormal;
        varying vec3 vPosition;
        // (noise function omitted for brevity)
    
        void main() {
          // Start with the original position
          vec3 pos = position;
          // Calculate procedural noise value for this vertex (using its position and time)
          float noise = snoise(pos * 0.5 + vec3(0.0, 0.0, time * 0.3));
          // Displace vertex along its normal
          pos += normal * noise * distortion * (1.0 + audioLevel);
          // Pass world-space normal and position to the fragment shader
          vNormal = normalize(mat3(modelMatrix) * normal);
          vPosition = (modelMatrix * vec4(pos, 1.0)).xyz;
          // Standard transformation
          gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
        }
      `,
      fragmentShader: `
        uniform vec3 color;
        uniform float time;
        uniform float audioLevel;
        varying vec3 vNormal;
        varying vec3 vPosition;
        
        void main() {
          // Calculate fresnel (view-angle dependent) term
          vec3 viewDir = normalize(cameraPosition - vPosition);
          float fresnel = 1.0 - max(0.0, dot(viewDir, vNormal));
          fresnel = pow(fresnel, 2.0 + audioLevel * 2.0);
          // Make the fragment color brighter on edges (fresnel) and pulse it slightly with time
          float pulse = 0.8 + 0.2 * sin(time * 2.0);
          vec3 emissiveColor = color * fresnel * pulse * (1.0 + audioLevel * 0.8);
          // Alpha fade out a bit when audio is high (to make spikes more ethereal)
          float alpha = fresnel * (0.7 - audioLevel * 0.3);
          gl_FragColor = vec4(emissiveColor, alpha);
        }
      `
    });

    In this shader, snoise is a Simplex noise function (not shown above) producing values ~-1 to 1. The vertex shader uses it to offset each vertex (pos += normal * noise * …). We multiply the noise by (1.0 + audioLevel) so that when audioLevel rises, the displacement increases. The distortion uniform is controlled by a slider in the UI, so the user can manually dial the overall spikiness. The fragment shader calculates a fresnel factor to make the wireframe edges glow. Notice how audioLevel factors into the power and into the final color intensity – louder audio makes the fresnel exponent higher (sharper glow) and also increases brightness a bit. We also included a gentle pulsing (sin(time)) independent of audio, just to give a constant breathing motion.

    For the inner glow sphere, we used a separate ShaderMaterial: it’s basically a sphere drawn with side: THREE.BackSide (so we see the inner surface) and Additive Blending to give a blooming halo. Its fragment shader also uses a fresnel term, but with a much lower alpha so it appears as a soft haze around the orb. The inner sphere’s size is slightly larger (I used about 1.2× the radius of the outer sphere) so that the glow extends beyond the wireframe. When combined, the outer and inner shaders create the effect of a translucent, energy-filled orb whose surface ripples with music.
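
    As a hedged sketch of what that inner glow material might look like (the uniform names and exact falloff values are assumptions based on the description above, not the project’s actual shader):

    const glowMaterial = new THREE.ShaderMaterial({
      uniforms: {
        time:       { value: 0 },
        audioLevel: { value: 0 },
        color:      { value: new THREE.Color(0xff4e42) }
      },
      side: THREE.BackSide,             // draw the inside of the sphere
      blending: THREE.AdditiveBlending, // additive blending for a bloom-like halo
      transparent: true,
      depthWrite: false,                // assumption: avoid occluding the wireframe
      vertexShader: `
        varying vec3 vNormal;
        varying vec3 vPosition;

        void main() {
          vNormal = normalize(mat3(modelMatrix) * normal);
          vPosition = (modelMatrix * vec4(position, 1.0)).xyz;
          gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
        }
      `,
      fragmentShader: `
        uniform vec3 color;
        uniform float audioLevel;
        varying vec3 vNormal;
        varying vec3 vPosition;

        void main() {
          // Fresnel term: strongest at grazing angles so the haze hugs the silhouette
          vec3 viewDir = normalize(cameraPosition - vPosition);
          float fresnel = pow(1.0 - abs(dot(viewDir, vNormal)), 3.0);
          // Much lower alpha than the wireframe so it reads as a soft aura
          float alpha = fresnel * (0.15 + audioLevel * 0.2);
          gl_FragColor = vec4(color, alpha);
        }
      `
    });

    // Inner glow sphere, roughly 1.2x the outer radius (a radius of 1 is assumed here)
    const glowMesh = new THREE.Mesh(new THREE.SphereGeometry(1.2, 64, 64), glowMaterial);
    scene.add(glowMesh);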

    To tie it all together, every frame in the render loop I update the shader uniforms with the current time and audio level:

    // in the animation loop:
    outerMaterial.uniforms.time.value = elapsedTime;
    outerMaterial.uniforms.audioLevel.value = audioLevel;
    outerMaterial.uniforms.distortion.value = currentDistortion; 
    glowMaterial.uniforms.time.value = elapsedTime;
    glowMaterial.uniforms.audioLevel.value = audioLevel;

    The result is a 3D object that truly feels alive with the music: it oscillates, pulses, and glows in sync with whatever track is playing, even one you upload yourself.

    Animations and Interactions with GSAP

    With the visuals reacting to sound, I added GSAP to handle smooth animations and user interactions. GSAP is great for creating timeline sequences and tweening properties with easing, and it also comes with plugins that were perfect for this project: Draggable for click-and-drag UI, and InertiaPlugin for momentum. Best of all, every GSAP plugin is now completely free to use. Below are the key ways I used GSAP in the project:

    Intro Animation & Camera Movement: When the user selects a track and hits play, I trigger a brief “activation” sequence. This involves some text appearing in the “terminal” and a slight camera zoom-in toward the orb to signal that the system is online. The camera movement was done with a simple GSAP tween of the camera’s position. For example, I defined a default camera position and a slightly closer “zoomed” position. On play, I use gsap.to() to interpolate the camera position to the zoomed-in coordinates, and on pause/stop I tween it back out. GSAP makes this kind of 3D property animation straightforward:

    const defaultCameraPos = { x: 0, y: 0, z: 10 };
    const zoomedCameraPos = { x: 0, y: 0, z: 7 }; // move camera closer on zoom
    
    function zoomCameraForAudio(zoomIn) {
      const target = zoomIn ? zoomedCameraPos : defaultCameraPos;
      gsap.to(camera.position, {
        x: target.x,
        y: target.y,
        z: target.z,
        duration: 1.5,
        ease: "power2.inOut"
      });
    }
    
    // When audio starts:
    zoomCameraForAudio(true);
    // When audio ends or is stopped:
    zoomCameraForAudio(false);

    This smooth zoom adds drama when the music kicks in, drawing the viewer into the scene. The power2.inOut easing gives it a nice gentle start and stop. I also used GSAP timelines for any other scripted sequences (like fading out the “Analyzing…” overlay text after a few seconds), since GSAP’s timeline control is very handy for orchestrating multiple animations in order.
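    For example, a timeline for that overlay might look something like this (a sketch with placeholder class names, not the project's exact code):

    const introTl = gsap.timeline();
    introTl
      .to(".analyzing-overlay", { opacity: 1, duration: 0.4 })            // fade the text in
      .to(".analyzing-overlay", { opacity: 0, duration: 0.6, delay: 2 })  // hold for a moment, then fade out
      .set(".analyzing-overlay", { display: "none" });                    // remove it once hidden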

    Draggable UI Panels: The interface has a few UI components overlaying the 3D canvas – e.g. an “Anomaly Controls” panel (with sliders for rotation speed, distortion amount, etc.), an “Audio Spectrum Analyzer” panel (showing a bar graph of frequencies and track selection buttons), and a “System Terminal” readout (displaying log messages like a console). To make the experience playful, I made these panels draggable. Using GSAP’s Draggable plugin, I simply turned each .panel element into a draggable object:

    Draggable.create(".panel", {
      type: "x,y",
      bounds: "body",         // confine dragging within the viewport
      inertia: true,          // enable momentum after release
      edgeResistance: 0.65,   // a bit of resistance at the edges
      onDragStart: () => { /* bring panel to front, etc. */ },
      onDragEnd: function() {
        // Optionally, log the velocity or other info for fun
        console.log("Panel thrown with velocity:", this.getVelocity());
      }
    });

    Setting inertia: true means when the user releases a panel, it will continue moving in the direction they tossed it, gradually slowing to a stop (thanks to InertiaPlugin). This little touch makes the UI feel more tactile and real – you can flick the panels around and they slide with some “weight.” According to GSAP’s docs, Draggable will automatically handle the physics when inertia is enabled, so it was plug-and-play. I also constrained dragging within the body bounds so panels don’t get lost off-screen. Each panel has a clickable header (a drag handle area), set via the handle option, to restrict where a user can grab it. Under the hood, InertiaPlugin calculates the velocity of the drag and creates a tween that smoothly decelerates the element after you let go, mimicking friction.

    Interactive Orb Drag (Bonus): As a creative experiment, I even made the 3D anomaly orb itself draggable. This was a bit more involved since it’s not a DOM element, but I implemented it by raycasting for clicks on the 3D object and then rotating the object based on mouse movement. I applied a similar inertia effect manually: when you “throw” the orb, it keeps spinning and slowly comes to rest. This wasn’t using GSAP’s Draggable directly (since that works in screen space), but I did use the InertiaPlugin concept by capturing the drag velocity and then using an inertial decay on that velocity each frame. It added a fun way to interact with the visualizer – you can nudge the orb and see it respond physically. For example, if you drag and release quickly, the orb will continue rotating with momentum. This kind of custom 3D dragging is outside the scope of a basic tutorial, but it shows how you can combine your own logic with GSAP’s physics concepts to enrich interactions.
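    Here is a simplified sketch of that manual approach (names and constants are placeholders), just to show the idea of raycasting the orb and decaying the drag velocity each frame:

    const raycaster = new THREE.Raycaster();
    const pointer = new THREE.Vector2();
    let dragging = false;
    let lastX = 0, lastY = 0;
    let velX = 0, velY = 0; // rotational velocity carried over after release

    window.addEventListener("pointerdown", (e) => {
      pointer.set((e.clientX / innerWidth) * 2 - 1, -(e.clientY / innerHeight) * 2 + 1);
      raycaster.setFromCamera(pointer, camera);
      if (raycaster.intersectObject(orbMesh).length > 0) {
        dragging = true;
        lastX = e.clientX;
        lastY = e.clientY;
      }
    });

    window.addEventListener("pointermove", (e) => {
      if (!dragging) return;
      velX = (e.clientX - lastX) * 0.005; // pixel delta -> radians
      velY = (e.clientY - lastY) * 0.005;
      orbMesh.rotation.y += velX;
      orbMesh.rotation.x += velY;
      lastX = e.clientX;
      lastY = e.clientY;
    });

    window.addEventListener("pointerup", () => { dragging = false; });

    // in the animation loop: keep spinning with the leftover velocity, then decay it
    if (!dragging) {
      orbMesh.rotation.y += velX;
      orbMesh.rotation.x += velY;
      velX *= 0.95; // friction-like falloff
      velY *= 0.95;
    }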

    GSAP Draggable and Inertia in action

    In summary, GSAP handles all the non-audio animations: the camera moves, panel drags, and little transitions in the UI. The combination of sound-reactive shader animations (running every frame based on audio data) and event-based GSAP tweens (triggered on user actions or certain times) gives a layered result where everything feels responsive and alive.

    UI and Atmosphere

    Finally, a few words about the surrounding UI/atmosphere which glue the experience together. The visualizer’s style was inspired by sci-fi control panels, so I leaned into that:

    Control Panels and Readouts: I built the overlay UI with HTML/CSS, keeping it minimalistic (just semi-transparent dark panels with light text and a few sliders/buttons). Key controls include rotation speed (how fast the orb spins), resolution (tessellation level of the icosahedron mesh), distortion amount, audio reactivity (scaling of audio impact), and sensitivity (which adjusts how the audio’s volume is interpreted). Changing these in real-time immediately affects the Three.js scene – for example, dragging the “Resolution” slider rebuilds the icosahedron geometry with more or fewer triangles, which is a cool way to see the orb go from coarse to finely subdivided. The “Audio Spectrum Analyzer” panel displays a classic bar graph of frequencies (drawn on a canvas using the analyser data) so you have a 2D visualization accompanying the 3D one. There’s also a console-style terminal readout that logs events (like “AUDIO ANALYSIS SYSTEM INITIALIZED” or the velocity of drags in a playful GSAP log format) to reinforce the concept of a high-tech system at work.
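    The resolution control is the simplest of these to illustrate: swap the mesh's geometry when the slider changes. A rough sketch (element and variable names are placeholders; the real project may wrap the geometry differently):

    resolutionSlider.addEventListener("input", (e) => {
      const detail = parseInt(e.target.value, 10); // icosahedron subdivision level
      orbMesh.geometry.dispose();                  // free the old geometry
      orbMesh.geometry = new THREE.IcosahedronGeometry(radius, detail);
    });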

    Design elements: To boost the sci-fi feel, I added a subtle grid overlay across the whole screen. This was done with pure CSS – a pair of repeating linear gradients forming horizontal and vertical lines (1px wide, very transparent) over a transparent background. It’s barely noticeable but gives a technical texture, especially against the glow of the orb. I also added some drifting ambient particles (tiny dots) floating slowly in the background, implemented as simple divs animated with JavaScript. They move in pseudo-random orbits.
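    The particles are simple enough to sketch in a few lines (counts, sizes, and class names here are placeholders):

    const particles = Array.from({ length: 40 }, () => {
      const el = document.createElement("div");
      el.className = "ambient-particle";
      document.body.appendChild(el);
      return {
        el,
        cx: Math.random() * innerWidth,   // orbit center
        cy: Math.random() * innerHeight,
        radius: 20 + Math.random() * 60,  // orbit radius in px
        speed: 0.1 + Math.random() * 0.3, // radians per second
        angle: Math.random() * Math.PI * 2
      };
    });

    function driftParticles(dt) { // call each frame with the delta time in seconds
      for (const p of particles) {
        p.angle += p.speed * dt;
        p.el.style.transform =
          `translate(${p.cx + Math.cos(p.angle) * p.radius}px, ${p.cy + Math.sin(p.angle) * p.radius}px)`;
      }
    }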

    Soundtrack: I curated three atmospheric, moody tracks, plus one of my own unreleased tracks made under my music alias LXSTNGHT; it was produced in Ableton and is still unfinished. The end result is an experience where design, code, and music production collide in real time.

    Bringing all these elements together, the final result is an interactive art piece: you load a track, the “Audio ARK” system comes online with a flurry of text feedback, the ambient music starts playing, and the orb begins to pulse and mutate in sync with the sound. You can tweak controls or toss around panels (or the orb itself) to explore different visuals.

    Final result

    The combination of Three.js (for rendering and shader effects), Web Audio API (for sound analysis), and GSAP (for polished interactions) showcases how creative coding tools can merge to produce an immersive experience that engages multiple senses.

    And that’s a wrap, thanks for following along!




  • Modeling the World in 280 Characters

    Modeling the World in 280 Characters


    Hi, I’m Xor. As a graphics programmer, my job is essentially to make pixels prettier using math formulas. I work on
    video effects like lighting, reflections, post-processing, and more for games and animated backgrounds in software.

    For fun, I like to unwind by writing compact little shader programs that fit in a “tweet” (280 characters or less).
    You may have seen some of these posted on X/Twitter. The process of shrinking code while maintaining its functionality
    is called “code golfing.”

    Here’s an animated galaxy I wrote in just 197 characters of GLSL code:

    This little piece of code runs in real time for every pixel on the screen and generates a unique output color using
    some fancy math and logic. I build these demos using a tool called Twigl.app, an online shader editor designed for
    sharing mini-shaders. It makes exporting videos super easy, and in its “geekiest” mode, it also takes care of the
    generic header code and shortens built-in variable names.

    I even managed to fit a voxel DDA raytracer with edge detection into just 190 characters:

    Today, I’d like to explain why I make these, share my creation process, and show you how you can try it yourself if
    you’re interested. Let’s start with the “why.”

    Motivation

    Why do I write these? Well, there are several factors. Since I like lists, I’ll go ahead and present them in order of
    relevance:

    • Curiosity and Passion
      : Sometimes I get struck by a new idea and just want to play around with it. I like Twigl because it helps lower my
      expectations and lets me start doodling. There’s less room for overplanning, and it’s super easy to jump in.
    • Learning and Discovery
      : Working within constraints forces me to think through problems differently. By optimizing for code size, I often
      find ways to simplify or approximate. It doesn’t always lead to more performant code (but often it does) and I’ve
      learned how to squeeze the most out of every byte. Having very little code makes it easier to experiment with
      formulas and variations without getting overwhelmed.
    • Challenge
      : Writing tiny code is both challenging and stimulating. It keeps my brain sharp, and I’m constantly developing new
      skills. It’s basically become a game for me. I’ve accidentally learned a ton of math while trying to solve these
      technical problems.
    • Community
      : I’ve connected with so many interesting people through this process—artists, designers, math folks, game devs,
      engineers, tech enthusiasts, and more. Sharing my work has led to some exciting encounters. (More on some notable
      people later!)

    So, in short, it’s fun, thought-provoking, and engaging, and it’s a great way to spark interest in graphics
    programming. Now, what even is a shader?

    Shader Introduction

    In case you haven’t heard of shaders before, they are programs that run on the GPU (Graphics Processing Unit) instead
    of the CPU (Central Processing Unit). CPUs excel at complicated or branching operations, which are computed
    sequentially, one at a time (I’m simplifying here). GPUs are designed to process billions or trillions of predictable
    operations per second in parallel. This sounds like a lot, but a 4K screen at 60 frames per second outputs nearly 500M
    pixels per second. Each pixel could have 100s or 1,000s of operations, not to mention anything else the GPU might be
    used for.

    There are several different types of shaders: vertex shaders, fragment shaders, compute shaders, and more, but these
    tweet shaders are specifically fragment shaders, also known as “pixel shaders,” because they run on every pixel. In
    essence, fragment shaders take the input fragment coordinates and output a color and opacity (or alpha). Fragment
    coordinates give you the position of the center of each pixel on screen, so (0.5, 0.5) is the bottom-left (or
    top-left). One pixel to the right is (1.5, 0.5), and so on to (width – 0.5, height – 0.5). The coordinates variable is
    called “FC” in Twigl. The output color, “o”, has 4 RGBA components: red, green, blue, and alpha, each ranging from 0.0
    to 1.0.

    (1.0, 1.0, 1.0, 1.0) is pure white, (0.0, 0.0, 0.0, 1.0) is opaque black, and (1.0, 0.0, 0.0, 1.0) is pure red in
    the RGBA color format. From here, you can already make simple color gradients:

    o = vec4(0.0, FC.y/100.0, 0.0, 1.0);

    Remember, this is run on every pixel, so each pixel will have a unique Fragment Coordinate. That formula makes a
    simple gradient that starts black at the bottom of the screen (FC.y = 0.0), and the green output value reaches 1.0
    when FC.y reaches 100.0.

    So you have an output color “o”, the input fragment coordinates “FC”, and four “uniform” inputs which are shared among
    all pixels: “r” is the shader screen resolution in pixels, “t” is the time in seconds, and also the less commonly used
    mouse position “m” and the backbuffer texture “b”. And that’s the core of it! From there, it’s a lot of math and logic
    to control the output colors and generate cool images.
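    As a tiny (non-golfed) illustration of those inputs, here is a snippet in Twigl's conventions that centers the coordinates with r and animates the color with t. It's a made-up example rather than one of the shaders above:

    // center the coordinates and divide by the height so the pattern keeps its aspect ratio
    vec2 p = (FC.xy * 2.0 - r) / r.y;
    // phase-shifted cosine waves driven by time and distance from the center
    o = vec4(0.5 + 0.5 * cos(t + length(p) * 6.0 + vec3(0, 2, 4)), 1.0);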

    I’m going to skip ahead a bit, but if you’re interested in learning more, try starting here!

    My Process

    People often ask me whether I write my shaders in a compact form from the start or if I write them expanded and then
    reduce the code afterward. The answer is the former. I’ve practiced code golfing so much that I find it easier to
    prototype ideas in compact form, and I tend not to get lost in tiny shaders. Code golfing shaders requires finding the
    right balance between code size, render performance, artistic appeal, design, and mathematical function. It’s a
    delicate balance that definitely challenges both sides of my brain. I’ve learned a ton about math, art, and design
    through writing these!

    To start one, you need an idea. When writing the “Milky” stars shader, I knew I wanted to create some kind of galaxy, so that was my initial spark.

    My shaders typically start with centering and scaling so that they look good at various resolutions and aspect ratios. For the stars, I looped through 100 point lights revolving around the center. I love glowing effects, and they are pretty easy to create. You just need to know the distance from the current pixel to the light source and use the inverse for the pixel brightness (close pixels are brighter, far pixels are darker).
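    In GLSL, that inverse-distance glow takes only a couple of lines per light. Here is a rough sketch (not the exact “Milky” code; p is assumed to be a centered coordinate, and the constants are placeholders):

    vec3 col = vec3(0.0);
    for (float i = 0.0; i < 100.0; i++) {
      // the real shader computes each star's position with orbiting/trig math
      vec2 lightPos = vec2(cos(i), sin(i)) * 0.5;
      // inverse distance: pixels near the light get bright, far pixels stay dark
      col += vec3(0.001) / length(p - lightPos);
    }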

    I played around with the positions of the particles using some trigonometry and gave the disk a slight skew. For the coloring, I love to use some sine waves with a phase shift for the RGB channels. Sine waves are also useful for picking pseudo-random numbers, so that’s how I select the colors for each star. Using the sine formula, you can get palettes like these:

    I ended up with a slight alteration of the second one from the left. It has a nice range of temperatures and brightness. I also added some variation to the star brightness, which made the image much more interesting to look at.
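    The phase-shift trick looks roughly like this: offsetting each RGB channel inside the sine gives each star a different hue, and feeding in the star index makes the choice pseudo-random (a sketch with placeholder constants):

    // pseudo-random palette entry for star i: one sine wave per channel, phase-shifted
    vec3 starColor = 0.5 + 0.5 * sin(i * 7.0 + vec3(0.0, 2.1, 4.2));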

    Next, I applied some tonemapping with the hyperbolic tangent function, chosen because it costs so few characters. Tonemapping prevents the harsh overexposure and hue shifts that happen when a color channel hits its maximum brightness value (left is original, right is with tonemapping):

    Any good shader that has High Dynamic Range lighting should apply some tonemapping, and tweet shaders are no
    exception! Finally, I played with animation. It could have revolved or twisted, but in the end, I liked the
    contraction effect most. I also created a loop so that new stars faded in when the old stars reached the center. You
    can read about my design process in more detail here!
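    As a quick aside, the tanh tonemap itself is essentially a one-liner applied to the accumulated HDR color just before output (a paraphrase, not the exact “Milky” code):

    // squash unbounded HDR values smoothly into 0..1, per channel
    o = vec4(tanh(col), 1.0);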

    Code Golfing

    As you can imagine, there are hundreds of little techniques that I have developed (and continue to discover) in the
    process of shrinking the code down, but I can give you the abridged version! My generalized code-golfing process can
    be listed like so:

    • Reduce names:
      It may be challenging initially, but you can get used to single-letter variables and function names. You may
      sometimes forget what variables are for, but this is actually helpful for code golfing. It forces you to reread your
      code, and you’ll often find better ways to write it when doing so. Like anything else, your memory will improve with
      practice, and over time you will establish some standards (for me: p = position, c = color, O = frag output, I =
      input, etc.).
    • Reduce numbers:
      This is pretty self-explanatory. 1.0 == 1., and 1000.0 == 1e3. Don’t forget that with vector constructors, you can
      use any data type as an input, and it gets converted (“cast”) to the new type: vec4(1.0, 1.0, 1.0, 1.0) == vec4(1).
      If you’re multiplying by 10.0, you could instead divide by .1.
    • Minimize initializations:
      If you have two floats, “x” and “y”, try to initialize them together like so:
      float x = 0., y = 1.;
      Look for opportunities to share data types. If you have a color vec3 and a vec4, make them both vec4s. Avoid
      float/int conversions.
    • Avoid ifs:
      If statements in GLSL take up a bit of space, especially if you need an else if. Try using a ternary instead. For
      example, if (x>y) O = vec4(1,0,0,1); else O = vec4(0,1,0,1); becomes O = x>y ? vec4(1,0,0,1) : vec4(0,1,0,1);
      which is much shorter, and there’s a lot you can do with it. You can even set multiple variables between ? and :.
    • for(;;) > while():
      for and while use the same number of characters, but for has a spot for initializing (before the first semicolon)
      and a spot for the final step after each iteration (after the last semicolon). These are free slots that can be
      used for lines that would otherwise have to end with a semicolon. Also, avoid using break, and use the condition
      spot instead! You can also remove the brackets if each line ends with a comma (so it doesn’t work with nested
      for-loops). There’s a small before/after example combining these tricks right after this list.
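    To make those rules concrete, here is a small, made-up before/after pair (not taken from any particular shader) that combines shorter numbers, a ternary instead of if/else, and the for-loop's free slots:

    // expanded version
    vec4 color = vec4(0.0, 0.0, 0.0, 1.0);
    for (float i = 0.0; i < 50.0; i += 1.0) {
      if (i > 25.0) {
        color += vec4(0.01, 0.0, 0.0, 0.0);
      } else {
        color += vec4(0.0, 0.01, 0.0, 0.0);
      }
    }
    o = color;

    // golfed: single-letter name, shortened numbers, a ternary, and the accumulation
    // moved into the loop's increment slot so the body (and its brackets) disappears
    vec4 c = vec4(0, 0, 0, 1);
    for (float i = 0.; i < 50.; c += i++ > 25. ? vec4(.01, 0, 0, 0) : vec4(0, .01, 0, 0));
    o = c;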

    Beyond that, I use some function substitutions to reduce the code further. More on that over here!

    I’ve put together a
    ShaderToy demo
    with some additional variables, formatting, and comments for clarity. Every shader is different and requires using
    different techniques, approximations, and concepts, but that is precisely what makes it so fun for me! I’m still
    learning new stuff nearly every day!

    Questions and Answers

    Here are some questions I was asked on X.

    Do you have a favorite “trick” or “technique”? If so, what is it?

    I go through phases. I loved Bokeh DoF, then volumetric shadows and fractals, but currently, my favorite has to be
    “turbulence.” It can be used for some awesome magic effects, clouds, or fire.

    How did you develop the intuition for related maths?

    It takes lots of time and patience. I had to push through many times when I thought a topic was over my head. If you
    take it in small pieces, take breaks, and sleep on it, you can learn a lot! I wrote about some of the
    conceptualization techniques
    that I’ve picked up over the years. That might save you some time!

    Do you start writing the shader in code-golfing mode, or is it a process until you reach the most optimized code? Which is the best editor for normal shaders and for code-golfing shaders?

    Yes, I write in code-golfing mode because I’ve developed an intuition for it, and it feels faster to prototype at this
    point. I still have to refine the code when I find a look that I like, though. I’m a big fan of Twigl.app, but
    ShaderToy is great too. ShaderToy is best for its community and wealth of knowledge. I try to use it when explaining
    my tweet shaders.

    How did you start writing cool shaders, and what did you use to learn it?

    Well, I’ll explain more about my background later, but it started with an interest in game development. Shaders have
    tons of applications in video game graphics—that’s what sparked my curiosity to learn.

    Do you have regrets related to sacrificing readability?

    Nope. I’m more concerned with size optimizations that lead to slower code, but I don’t mind the unreadable code. To
    me, that’s part of the magic of it.

    What’s your background that got you to the point where you could effectively learn the material?

    It’s story time…

    My Story

    Growing up, I was interested in video games, especially those with “fancy” 3D graphics. When I was around 10, my friend showed me a tool called GameMaker. I tinkered around with it and learned some of the basics of drag ‘n’ drop programming, variables, and conditionals.

    Over time, I started experimenting with 3D graphics in GM, even though it was (and still is) primarily a 2D game engine. It was enough to learn the basics of how 3D rendering works and the render pipeline. Later, GameMaker introduced this thing called “shaders,” which allowed developers to create more advanced effects. At the time, there weren’t many resources available, so it took a while for me to pick it up. I started posting my shaders on the GameMaker forums and got some helpful feedback from the community (shoutout to “xygthop3” for his helpful examples)!

    Game development was a great place to learn about shaders because you have performance constraints (you don’t want a game to stutter), and you learn a lot about the entire rendering process in that context. In 2014, I started posting my earliest shader tutorials, sharing techniques as I learned them. The early tutorials weren’t great, but I’m glad I wrote them. In 2015, I started exploring ShaderToy, and that’s where my skills really developed.

    There were so many great examples to learn from, and it was a good place to get feedback on my ideas. In 2021, I launched a new introductory tutorial series for GameMaker with GLSL 1.00. Now I post more generalized tutorials on all kinds of graphics topics, ranging from math to art to design to code and more. This is definitely my best series yet, and they continue to get better. If you are interested in video games and graphics, I highly recommend starting with GameMaker or Godot. They are relatively easy to learn while still powerful enough to teach you the ropes. If software or web dev is more your thing, you can’t go wrong with ShaderToy or compute.toys.

    Here are some of the great people who have helped me, directly or indirectly, along the way:

    xygthop3 – This guy’s free shader examples were probably the greatest help along the way. His examples were a pivotal point in my understanding of a variety of graphics techniques, so thanks, Michael!

    Inigo Quilez – Inigo is the author of ShaderToy and the king of raymarching. His Signed Distance Field functions are still foundational to this day. An absolute legend!

    Fabrice Neyret – Fabrice is probably the best shader code golfer there is, and many shaders are inspired by his work. He has taught me so many techniques over the years.

    Yonatan “zozuar” – Another major inspiration for me. Yonatan’s work convinced me to try code golfing for real on Twitter, and his brain is amazing.

    Yohei Nishitsuji – This guy is a legend when it comes to tiny fractals. Love his work. Yohei also wrote Rendering the Simulation Theory: Exploring Fractals, GLSL, and the Nature of Reality here on Codrops.

    I’m sure there are many others whose names are eluding me at the moment, but I want to thank the entire shader
    community for their feedback and encouragement.

    Arsenal

    I’ll wrap this up with a few of my favorite tweet shaders so far:

    If you want to see more, you can find hundreds on my personal website, my X, Bluesky, or Instagram. To learn more about shaders, try my tutorials, and if you want to hire me for custom work, go here.

    Thank you for reading! Have a great day!

    -Xor




