
  • Making Animations Smarter with Data Binding: Creating a Dynamic Gold Calculator in Rive



    Designing visuals that respond to real-time data or user input usually means switching between multiple tools — one for animation, another for logic, and yet another for implementation. This back-and-forth can slow down iteration, make small changes cumbersome, and create a disconnect between design and behavior.

    If you’ve spent any time with Rive, you know it’s built to close that gap. It lets you design, animate, and add interaction all in one place — and with features like state machines and data binding, you can make your animations respond directly to variables and user actions.

    To demonstrate how we use data binding in Rive, we built a small interactive project — a gold calculator. The task was simple: calculate the price of 5g and 10g gold bars, from 1 to 6 bars, using external data for the current gold price per gram. The gold price can be dynamic, typically coming from market data, but in this case we used a manually set value.

    Let’s break down how the calculator is built, step by step, starting with the layout and structure of the file.

    1. File Structure

    The layout is built for mobile, using a 440×900 px artboard. It’s structured around three layout groups:

    1. Title with gold price per gram
    2. Controls for choosing gold bar amount and weight
    3. Gold bar illustration

    The title section includes a text layout made of two text runs: one holds static text like the label, while the other is dynamic and connected to external data using data binding. This allows the gold price to update in real time when the data changes.

    In the controls section, we added plus and minus buttons to set the number of gold bars. These are simple layouts with icons inside. Below them, there are two buttons to switch between 5g and 10g options. They’re styled as rounded layouts with text inside.

    In the state machine, two timelines define the tab states: one for when the 10g button is active, using a solid black background and white text, and another for 5g, with reversed styles. Switching between these two updates the active tab visually.

    The total price section also uses two text runs — one for the currency icon and one for the total value. This value changes based on the selected weight and quantity, and is driven by data binding.

    2. Gold Bar Illustration

    The illustration is built using a nested artboard with a single vector gold bar. Inside the calculator layout, we duplicated this artboard to show anywhere from 1 to 6 bars depending on the user’s selection.

    Since there are two weight options, we made the gold bar resize visually — wider for 10g and narrower for 5g. To do that, we used N-Slices so that the edges stay intact and only the middle stretches. The sliced group sits inside a fixed-size layout, and the artboard is set to Hug its contents, which lets it resize automatically.

    We created two timelines to control the bar size: one where the width is 88px for 10g, and another at 74px for 5g. The switch between them is controlled by a number variable called Size-gram gold, where 0 represents 5g and 1 represents 10g, with 1 set as the default value.

    In the state machine, we connected this variable to the two timelines, with the 10g timeline set as the default: when the variable is 0, the layout switches to 5g; when it’s 1, it switches to 10g. This updates the size based on the user’s selection without any manual switching. To keep the transition smooth, we added a 150ms animation duration.

    3. Visualizing 1–6 Gold Bars

    To show different quantities of gold bars in the main calculator layout, we created a tiered structure using three stacked layout groups with a vertical gap of -137px. Each tier is offset vertically to form a simple pyramid arrangement, with everything positioned in the bottom-left corner of the screen.

    The first tier contains three duplicated nested artboards of a single gold bar. Each of these is wrapped in a Hug layout, which allows them to resize correctly based on the weight. The second tier includes two gold bars and an empty layout. This empty layout is used for spacing — it creates a visual shift when we need to display exactly four bars. The top tier has just one gold bar centered.

    All three tiers are bottom-centered, which keeps the pyramid shape consistent as bars are added or removed.

    To control how many bars are visible, we created 6 timelines in Animate mode — one for each quantity from 1 to 6. To hide or show each gold bar, two techniques are used: adjusting the opacity of the nested artboard (100% to show, 0% to hide) and modifying the layout that wraps it. When a bar is hidden, the layout is set to a fixed width of 0px; when visible, it uses Hug settings to restore its size automatically.

    Each timeline has its own combination of these settings depending on which bars should appear. For example, in the timeline with 4 bars, we needed to prevent the fourth bar from jumping to the center of the row. To keep it properly spaced, we assigned a fixed width of 80px to the empty layout used for shifting. On the other timelines, that same layout is hidden by setting its width to 0px.
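    As a plain-TypeScript sketch of the scheme above (the slot type and both functions are ours and purely illustrative; only the opacity values, the 0px/Hug widths, and the 80px spacer come from the file):

    ```typescript
    // Illustrative: which settings each of the six bar slots gets for a given
    // quantity, plus the empty spacer layout used only on the 4-bar timeline.
    type BarSlot = { opacity: number; width: 'hug' | 0 };

    function slotsForCount(count: number): BarSlot[] {
      return Array.from({ length: 6 }, (_, i) =>
        i < count
          ? { opacity: 100, width: 'hug' } // visible: Hug restores its natural size
          : { opacity: 0, width: 0 },      // hidden: collapsed to a 0px-wide layout
      );
    }

    function spacerWidth(count: number): number {
      return count === 4 ? 80 : 0; // fixed 80px only when exactly four bars show
    }
    ```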

    This system makes it easy to switch between quantities while preserving the visual structure.

    4. State Machine and Data Binding Setup

    With the visuals and layouts ready, we moved on to setting up the logic with data binding and state transitions.

    4.1 External Gold Price

    First, we created a number variable called Gold price gram. This value can be updated externally — for example, connected to a trading database — so the calculator always shows the current market price of gold. In our case, we used a static value of 151.75, which can also be updated manually by the user.

    To display this in the UI, we bound Text Run 2 in the title layout to this variable. A converter in the Strings tab called “Convert to String Price” is then created and applied to that text run. This converter formats the number correctly for display and will be reused later.

    4.2 Gold Bar Size Control

    We already had a number variable called Size-gram gold, which controls the weight of the gold bar used in the nested artboard illustration.

    In the Listeners panel, two listeners are created. The first is set to target the 5g tab, uses a Pointer Down action, and assigns Size-gram gold = 0. The second targets the 10g tab, also with a Pointer Down action, and assigns Size-gram gold = 1.

    Next, two timelines (one for each tab state) are brought into the state machine. The 10g timeline is used as the default state, with transitions added: one from 10g to 5g when Size-gram gold = 0, and one back to 10g when Size-gram gold = 1. Each transition has a duration of 100ms to keep the switching smooth.

    4.3 Gold Bar Quantity

    Next, we added another number variable, Quantity-gold, to track the number of selected bars; its default value is set to 1. In the Converters under Numeric, we created two “Calculate” converters: a “+1” converter that adds 1 and a “-1” converter that subtracts 1.

    In the Listeners panel, the plus button is assigned an action: Quantity-gold = Quantity-gold, using the “+1” converter. This way, clicking the plus button increases the count by 1. The same is done for the minus button, assigning Quantity-gold = Quantity-gold and attaching the “-1” converter. Clicking the minus button decreases the count by 1.

    Inside the state machine, six timelines are connected to represent bar counts from 1 to 6. Each transition uses the Quantity-gold value to trigger the correct timeline.

    By default, the plus button would keep increasing the value endlessly, but the goal is to limit the max to six bars. On the timeline where six gold bars are active, the plus button is disabled by setting its click area scale to 0 and lowering its opacity to create a “disabled” visual state. On all other timelines, those properties are returned to their active values.

    The same logic is applied to the minus button to prevent values lower than one. On the timeline with one bar, the button is disabled, and on all others, it returns to its active state.
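    Put together, the two buttons implement a clamped counter. As a hedged sketch in plain TypeScript (the function is ours; in Rive this logic lives in the listeners, converters, and disabled-button timelines):

    ```typescript
    // Illustrative sketch of the +/- button behaviour with the 1–6 limits
    // that the disabled-button timelines enforce.
    const MIN_BARS = 1;
    const MAX_BARS = 6;

    function stepQuantity(quantity: number, delta: 1 | -1): number {
      return Math.min(MAX_BARS, Math.max(MIN_BARS, quantity + delta));
    }
    ```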

    Almost there!

    4.4 Total Price Logic

    For the 5g bar price, we calculated it using this formula:

    Total Price = Gold price gram * Quantity-gold * 5

    In Converters → Numeric, a Formula converter was created and named Total Price 5g Formula to calculate the total price. In the example, it looked like:

    {{View Model Price/Gold price gram}}*{{View Model Price/Quantity-gold}}*5.0

    Since we needed to display this number as text, the Total Price number variable was also converted into a string. For that, we used an existing converter called “Convert to String Price.”

    To use both converters together, a Group of converters was created and named Total Price 5g Group, which included the Total Price 5g Formula converter followed by the Convert to String Price converter.

    Then, the text for the price variable was data bound by adding the Total Price variable in the Property field and selecting Total Price 5g Group in the Convert field.
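    The converter chain (formula first, then string formatting) computes the equivalent of this small TypeScript sketch; the dollar formatting shown here is an assumption, not taken from the file:

    ```typescript
    // Illustrative re-implementation of the "Total Price 5g/10g Formula" +
    // "Convert to String Price" converter group; the exact display format in
    // the Rive file may differ.
    function totalPriceLabel(goldPricePerGram: number, quantity: number, weightGrams: 5 | 10): string {
      const total = goldPricePerGram * quantity * weightGrams; // formula converter
      return `$${total.toFixed(2)}`;                           // string converter (assumed format)
    }
    ```

    With the static price of 151.75, two 5g bars give `totalPriceLabel(151.75, 2, 5)` → `"$1517.50"`.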

    To handle the 10g case, which is double the price, two options are explored — either creating a new converter that multiplies by 10 or multiplying the existing result by 2.

    Eventually, a second text element is added along with a new group of converters specifically for 10g. This includes a new formula:

    Total Price = Gold price gram * Quantity-gold * 10

    A formula converter and a group with both that formula and the string converter are created and named “Total Price 10g Group.”

    Using timelines where the 5g and 10g buttons are in their active states, we adjusted the transparency of the text elements. This way, the total price connected to the 5g converters group is visible when the 5g button is selected, and the price from the 10g converters group appears when the 10g button is selected.

    It works perfectly.

    After this setup, the Gold price gram variable can be connected to live external data, allowing the gold price in the calculator to reflect the current market value in real time.

    Wrapping Up

    This gold calculator project is a simple example, but it shows how data binding in Rive can be used to connect visual design with real-time logic — without needing to jump between separate tools or write custom code. By combining state machines, variables, and converters, you can build interfaces that are not only animated but also smart and responsive.

    Whether you’re working on a product UI, a prototype, or a standalone interactive graphic, Rive gives you a way to bring together motion and behavior in a single space. If you’re already experimenting with Rive, data binding opens up a whole new layer of possibilities to explore.




  • Invisible Forces: The Making of Phantom.land’s Interactive Grid and 3D Face Particle System



    From the outset, we knew we wanted something that subverted any conventional agency website formulas. Instead,
    inspired by the unseen energy that drives creativity, connection and transformation, we arrived at the idea of
    invisible forces
    . Could we take the powerful yet intangible elements that shape our world—motion, emotion, intuition, and
    inspiration—and manifest them in a digital space?

    We were excited about creating something that included many custom interactions and a very experiential feel. However,
    our concern was picking a set of tools that would allow most of our developers to contribute to and maintain the site
    after launch.

    We chose to start from a Next / React base, as we often do at Phantom. React also has the advantage of being
    compatible with the excellent React Three Fiber library, which we used to seamlessly bridge the gap between our DOM
    components and the WebGL contexts used across the site. For styles, we are using our very own
    CSS components
    as well as SASS.

    For interactive behaviours and animation, we chose to use GSAP for two main reasons. Firstly, it contains a lot of
    plugins we know and love, such as SplitText, CustomEase and ScrollTrigger. Secondly, GSAP allows us to use a single
    animation framework across DOM and WebGL components.

    We could go on and on talking about the details behind every single animation and micro-interaction on the site, but
    for this piece we have chosen to focus our attention on two of the most unique components of our site: the homepage
    grid and the scrollable employee face particle carousel.

    The Homepage Grid

    It took us a very long time to get this view to perform and feel just how we wanted it to. In this article, we will focus on the interactive part. For more info on how we made things performant, head to our previous article: Welcome back to Phantomland.

    Grid View

    The project’s grid view is integrated into the homepage by incorporating a primitive Three.js object into a React
    Three Fiber scene.

    //GridView.tsx
    const GridView = () => {
      return (
        <Canvas>
          ...
          <ProjectsGrid />
          <Postprocessing />
        </Canvas>
      );
    }
    
    //ProjectsGrid.tsx
    const ProjectsGrid = ({atlases, tiles}: Props) => {
      const {canvas, camera} = useThree();
      
      const grid = useMemo(() => {
        return new Grid(canvas, camera, atlases, tiles);
      }, [canvas, camera, atlases, tiles]);
    
      if(!grid) return null;
      return (
        <primitive object={grid} />
      );
    }

    We initially wanted to write all the code for the grid using React Three Fiber but realised that, due to the
    complexity of our grid component, a vanilla
    Three.js
    class would be easier to maintain.

    One of the key elements that gives our grid its iconic feel is our post-processing distortion effect. We implemented
    this feature by creating a custom shader pass within our post-processing pipeline:

    // Postprocessing.tsx
    const Postprocessing = () => {
      const {gl, scene, camera} = useThree();
      
      // Create Effect composer
      const {effectComposer, distortionShader} = useMemo(() => {
        const renderPass = new RenderPass(scene, camera);
        const distortionShader = new DistortionShader();
        const distortionPass = new ShaderPass(distortionShader);
        const outputPass = new OutputPass();
    
        const effectComposer = new EffectComposer(gl);
        effectComposer.addPass(renderPass);
        effectComposer.addPass(distortionPass);
        effectComposer.addPass(outputPass);
    
        return {effectComposer, distortionShader};
      }, []);
      
      // Update distortion intensity
      useEffect(() => {
        if (workgridState === WorkgridState.INTRO) {
          distortionShader.setDistortion(CONFIG.distortion.flat);
        } else {
          distortionShader.setDistortion(CONFIG.distortion.curved);
        }
      }, [workgridState, distortionShader]);
      
      // Update distortion intensity
      useFrame(() => {
        effectComposer.render();
      }, 1);
     
      return null;
    }

    When the grid transitions in and out on the site, the distortion intensity changes to make the transition feel
    natural. This animation is done through a simple tween in our
    DistortionShader
    class:

    class DistortionShader extends ShaderMaterial {
      private distortionIntensity = 0;
    
      constructor() {
        super({
          name: 'DistortionShader',
          uniforms: {
            distortionIntensity: {value: new Vector2()},
            ...
          },
          vertexShader,
          fragmentShader,
        });
      }
    
      update() {
        const ratio = window.innerWidth / window.innerHeight;
        this.uniforms[DistortionShaderUniforms.DISTORTION].value.set(
          this.distortionIntensity * ratio,
          this.distortionIntensity * ratio,
        );
      }
    
      setDistortion(value: number) {
        gsap.to(this, {
          distortionIntensity: value,
          duration: 1,
          ease: 'power2.out',
          onUpdate: () => this.update(),
        });
      }
    }

    Then the distortion is applied through our custom shader:

    // fragment.ts
    export const fragmentShader = /* glsl */ `
      uniform sampler2D tDiffuse;
      uniform vec2 distortion;
      uniform float vignetteOffset;
      uniform float vignetteDarkness;
    
      varying vec2 vUv;
      
      // convert uv range from 0 -> 1 to -1 -> 1
      vec2 getShiftedUv(vec2 uv) {
        return 2. * (uv - .5);
      }
      
      // convert uv range from -1 -> 1 to 0 -> 1
      vec2 getUnshiftedUv(vec2 shiftedUv) {
        return shiftedUv * 0.5 + 0.5;
      }
    
      void main() {
        vec2 shiftedUv = getShiftedUv(vUv);
        float distanceToCenter = length(shiftedUv);
        
        // Lens distortion effect
        shiftedUv *= (0.88 + distortion * dot(shiftedUv, shiftedUv));
        vec2 transformedUv = getUnshiftedUv(shiftedUv);
        
        // Vignette effect
        float vignetteIntensity = smoothstep(0.8, vignetteOffset * 0.799, (vignetteDarkness + vignetteOffset) * distanceToCenter);
        
        // Sample render texture and output fragment
        vec3 color = texture2D(tDiffuse, transformedUv).rgb * vignetteIntensity;
        gl_FragColor = vec4(color, 1.);
      }
    `;

    We also added a vignette effect to our post-processing shader to darken the corners of the viewport, focusing the
    user’s attention toward the center of the screen.

    In order to make our home view as smooth as possible, we also spent a fair amount of time crafting the
    micro-interactions and transitions of the grid.

    Ambient mouse offset

    When the user moves their cursor around the grid, the grid moves slightly in the opposite direction, creating a very
    subtle ambient floating effect. This was simply achieved by calculating the mouse position on the grid and moving the
    grid mesh accordingly:

    getAmbientCursorOffset() {
      // Get the pointer coordinates in UV space ( 0 - 1 ) range
      const uv = this.navigation.pointerUv;
      // Clone so we don't mutate the stored pointer vector
      const offset = uv.clone().subScalar(0.5).multiplyScalar(0.2);
      return offset;
    }
    
    update() {
      ...
      // Apply cursor offset to grid position
      const cursorOffset = this.getAmbientCursorOffset();
      this.mesh.position.x += cursorOffset.x;
      this.mesh.position.y += cursorOffset.y;
    }

    Drag Zoom

    When the grid is dragged around, a zoom-out effect occurs and the camera seems to pan away from the grid. We created
    this effect by detecting when the user starts and stops dragging their cursor, then using that to trigger a GSAP
    animation with a custom ease for extra control.

    onPressStart = () => {
      this.animateCameraZ(0.5, 1);
    }
    
    onPressEnd = (isDrag: boolean) => {
      if(isDrag) {
        this.animateCameraZ(0, 1);
      }
    }
    
    animateCameraZ(distance: number, duration: number) {
      gsap.to(this.camera.position, {
        z: distance,
        duration,
        ease: CustomEase.create('cameraZoom', '.23,1,0.32,1'),
      });
    }

    Drag Movement

    Last but not least, when the user drags across the grid and releases their cursor, the grid slides through with a
    certain amount of inertia.

    drag(offset: Vector2) {
      this.dragAction = offset;
    
      // Gradually increase velocity with drag time and distance
      this.velocity.lerp(offset, 0.8);
    }
    
    // Every frame
    update() {
      // positionOffset is later used to move the grid mesh
      if(this.isDragAction) {
        // if the user is dragging their cursor, add the drag value to offset
        this.positionOffset.add(this.dragAction.clone());
      } else {
        // if the user is not dragging, add the velocity to the offset
        this.positionOffset.add(this.velocity);
      }
    
      this.dragAction.set(0, 0);
      // Attenuate velocity with time
      this.velocity.lerp(new Vector2(), 0.1);
    }

    Face Particles

    The second major component we want to highlight is our employee face carousel, which presents team members through a
    dynamic 3D particle system. Built with React Three Fiber’s
    BufferGeometry
    and custom GLSL shaders, this implementation leverages custom shader materials for lightweight performance and
    flexibility, allowing us to generate entire 3D face representations using only a 2D colour photograph and its
    corresponding depth map—no 3D models required.

    Core Concept: Depth-Driven Particle Generation

    The foundation of our face particle system lies in converting 2D imagery into volumetric 3D representations. We’ve
    kept things efficient, with each face using only two optimized 256×256 WebP images (under 15KB each).

    To capture the images, each member of the Phantom team was 3D scanned using
    RealityScan
    from Unreal Engine on iPhone, creating a 3D model of their face.

    These scans were cleaned up and then rendered from Cinema4D with a position and colour pass.

    The position pass was converted into a greyscale depth map in Photoshop, and this—along with the colour pass—was
    retouched where needed, cropped, and then exported from Photoshop to share with the dev team.

    Each face is constructed from approximately 78,400 particles (280×280 grid), where each particle’s position and
    appearance is determined by sampling data from our two source textures.

    /* generate positions attributes array */
    const POINT_AMOUNT = 280;
    
    const points = useMemo(() => {
      const length = POINT_AMOUNT * POINT_AMOUNT;
      const vPositions = new Float32Array(length * 3);
      const vIndex = new Float32Array(length * 2);
      const vRandom = new Float32Array(length * 4);
    
      for (let i = 0; i < length; i++) {
          const i2 = i * 2;
          vIndex[i2] = (i % POINT_AMOUNT) / POINT_AMOUNT;
          vIndex[i2 + 1] = i / POINT_AMOUNT / POINT_AMOUNT;
    
          const i3 = i * 3;
          const theta = Math.random() * 360;
          const phi = Math.random() * 360;
          vPositions[i3] = 1 * Math.sin(theta) * Math.cos(phi);
          vPositions[i3 + 1] = 1 * Math.sin(theta) * Math.sin(phi);
          vPositions[i3 + 2] = 1 * Math.cos(theta);
    
          const i4 = i * 4;
          vRandom.set(
            Array(4)
              .fill(0)
              .map(() => Math.random()),
            i4,
          );
      }
    
      return {vPositions, vRandom, vIndex};
    }, []);
    // React Three Fiber component structure 
    const FaceParticleSystem = ({ particlesData, currentDataIndex }) => {
      return (
        <points ref={pointsRef} position={pointsPosition}>
          <bufferGeometry>
            <bufferAttribute attach="attributes-vIndex" 
                 args={[points.vIndex, 2]} />
            <bufferAttribute attach="attributes-position"
                 args={[points.vPositions, 3]} />
            <bufferAttribute attach="attributes-vRandom"
                 args={[points.vRandom, 4]} />
          </bufferGeometry>
          
          <shaderMaterial
            blending={NormalBlending}
            transparent={true}
            fragmentShader={faceFrag}
            vertexShader={faceVert}
            uniforms={uniforms}
          />
        </points>
      );
    };

    The depth map provides normalized values (0–1) that directly translate to Z-depth positioning. A value of 0 represents
    the furthest point (background), while 1 represents the closest point (typically the nose tip).

    /* vertex shader */ 
    
    // sample depth data for each particle (depthValue blends this with a
    // second face's depth texture during transitions; see below)
    vec3 depthTexture1 = texture2D(depthMap1, vIndex.xy).xyz;
    
    // convert depth to Z-position
    float zDepth = 1. - depthValue.z;
    pos.z = (zDepth * 2.0 - 1.0) * zScale;

    Dynamic Particle Scaling Through Colour Analysis

    One of the key methods that brings our faces to life is utilizing colour data to influence particle scale. In our
    vertex shader, rather than using uniform particle sizes, we analyze the colour density of each pixel so that brighter,
    more colourful areas of the face (like eyes, lips, or well-lit cheeks) generate larger, more prominent particles,
    while darker areas (shadows, hair) create smaller, subtler particles. The result is a more organic, lifelike
    representation that emphasizes facial features naturally.

    /* vertex shader */ 
    
    vec3 colorTexture1 = texture2D(colorMap1, vIndex.xy).xyz;
    
    // calculate color density
    float density = (mainColorTexture.x + mainColorTexture.y + mainColorTexture.z) / 3.;
    
    // map density to particle scale
    float pScale = mix(pScaleMin, pScaleMax, density);

    The calibration below demonstrates the influence of colour (contrast, brightness, etc.) on the final 3D particle formation.

    Ambient Noise Animation

    To prevent static appearances and maintain visual interest, we apply continuous noise-based animation to all
    particles. This ambient animation system uses curl noise to create subtle, flowing movement across the entire
    face structure.

    /* vertex shader */ 
    
    // primary curl noise for overall movement 
    pos += curlNoise(pos * curlFreq1 + time) * noiseScale * 0.1;
    
    // animation updates in React Three Fiber
    useFrame((state, delta) => {
      if (!materialRef.current) return;
      
      materialRef.current.uniforms.time.value = state.clock.elapsedTime * NOISE_SPEED;
      
      // update rotation based on pointer interaction
      easing.damp(pointsRef.current.rotation, 'y', state.pointer.x * 0.12 * Math.PI, 0.25, delta);
      easing.damp(pointsRef.current.rotation, 'x', -state.pointer.y * 0.05 * Math.PI, 0.25, delta);
    });

    Face Transition Animation

    When transitioning between different team members, we combine timeline-based interpolation with visual effects written
    in shader materials.

    GSAP-Driven Lerp Method

    The transition foundation uses GSAP timelines to animate multiple shader parameters simultaneously:

    timelineRef.current = gsap
      .timeline()
      .fromTo(uniforms.transition, {value: 0}, {value: 1.3, duration: 1.6})
      .to(uniforms.posZ, {value: particlesParams.offset_z, duration: 1.6}, 0)
      .to(uniforms.zScale, {value: particlesParams.face_scale_z, duration: 1.6}, 0);

    And the shader handles the visual blending between two face states:

    /* vertex shader */ 
    
    // smooth transition curve
    float speed = clamp(transition * mix(0.8, .9, transition), 0., 1.0); 
    speed = smoothstep(0.0, 1.0, speed); 
    
    // blend textures 
    vec3 mainColorTexture = mix(colorTexture1, colorTexture2, speed); 
    vec3 depthValue = mix(depthTexture1, depthTexture2, speed);

    To add visual interest during transitions, we further inject additional noise that’s strongest at the midpoint of the
    transition. This creates a subtle “disturbance” effect where particles temporarily deviate from their target
    positions, making transitions feel more dynamic and organic.

    /* vertex shader */ 
    
    // secondary noise movement applied for transition
    float randomZ = vRandom.y + cnoise(pos * curlFreq2 + t2) * noiseScale2;
    
    float smoothTransition = abs(sin(speed * PI)); 
    pos.x += nxScale * randomZ * 0.1 * smoothTransition; 
    pos.y += nyScale * randomZ * 0.1 * smoothTransition;
    pos.z += nzScale * randomZ * 0.1 * smoothTransition;

    Custom Depth of Field Effect

    To enhance the three-dimensional perception, we implemented a custom depth of field effect directly in our shader
    material. It calculates view-space distance for each particle and modulates both opacity and size based on proximity
    to a configurable focus plane.

    /* vertex shader - calculate view distance */
    
    vec4 viewPosition = viewMatrix * modelPosition;
    vDistance = abs(focus + viewPosition.z); 
    
    // apply distance to point size for blur effect 
    gl_PointSize = pointSize * pScale * vDistance * blur * totalScale;
    
    /* fragment shader - calculate distance-based alpha for DOF */
    
    float alpha = 1.04 - clamp(vDistance * 1.5, 0.0, 1.0);
    gl_FragColor = vec4(color, alpha);

    Challenges: Unifying Face Scales

    One of the challenges we faced was achieving visual consistency across different team members’ photos. Each photograph
    was captured under slightly different conditions—varying lighting, camera distances, and facial proportions.
    Therefore, we went through each face to calibrate multiple scaling factors:

    • Depth scale calibration
      to ensure no nose protrudes too aggressively
    • Colour density balancing
      to maintain consistent particle size relationships
    • Focus plane optimization
      to prevent excessive blur on any individual face
    // individual face parameters requiring manual tuning 
    
    particle_params: { 
      offset_z: 0,           // overall Z-position
      z_depth_scale: 0,      // depth map scaling factor
      face_size: 0,          // overall face scale 
    }

    Final Words

    Our face particle system demonstrates how simple yet careful technical implementation can create fun visual
    experiences from minimal assets. By combining lightweight WebP textures, custom shader materials, and animations,
    we’ve created a system that transforms simple 2D portraits into interactive 3D figures.

    Check out the full site.

    Curious about what we’re up to in the Phantom studio? Or have a project you think we’d be interested in? Get in touch.




  • Turning Music Into Motion: The Making of the 24/7 Artists Launch Page



    In this article, we’ll explore the behind-the-scenes process of how Waaark brought 24/7 Artists’ new product launch landing page to life. See how creative vision, design, and development came together to shape the final result.

    Brief

    24/7 Artists reached out after discovering our work on AW Portfolio. They came to us with a clear challenge: help them break through a creative deadlock and redesign their site to support an upcoming product launch—on a tight deadline.

    At Waaark, having time to think, breathe, and work at our own pace is key. We typically avoid last-minute projects, but this one felt like a puzzle worth solving. We saw a narrow but feasible path forward and accepted the challenge.

    Creative research

    We kicked off the project by exploring ways to visually represent music. After some wandering sessions on platforms like Pinterest and Behance, we narrowed our direction toward visualiser aesthetics—particularly the use of lines to suggest sound waves.

    The client also emphasised their desire to introduce depth and dimensionality into the site. We collected inspiration reflecting this concept and organised everything into a Milanote moodboard, including ideas around color, typography, layout, and impactful hero sections to set a clear creative direction.

    Given the time constraints, it was important to focus on bold, achievable visuals—techniques we had already mastered.

    Design

    Storyboard

    For a storytelling-focused, long-scrolling landing page like this, we replaced our typical UI wireframes with a full storyboard. This storyboard mapped out each step of the user journey, along with transitions between sections.

    Our goal was twofold: to provide the client with a clear visual direction and to start shaping the flow and pacing on our end.

    Creative Direction

    With both the moodboard and storyboard approved, we began merging them to define the site’s visual language.

    Right from the hero section, we wanted the message to be loud and clear: music meets tech. We envisioned a dark, immersive intro with circular lines evoking vinyl records or sound waves. Layered on top: a bold sans-serif headline and a ticket-style navigation bar to reinforce the music industry vibe.

    To instantly capture user attention, we imagined a mouse-trail animation where artist photos appear in an equalizer-style movement.

    To contrast the dark intro, we introduced a more colorful palette throughout the rest of the site, showcasing the diversity of music and the artists’ unique sensibilities.

    Implementation

    Tech stack

    We used our go-to stack, which the client was already familiar with: WordPress. It provided a solid foundation—easy to manage, flexible for the frontend, and scalable.

    For the front-end experience, we integrated a few select libraries:

    • GSAP for fluid, expressive animations
    • Luge to manage the overall page lifecycle
    • Lenis for smooth scrolling

    We aimed to minimise external dependencies, instead relying on native CSS 3D transformations and lightweight JS/Canvas-based animations—especially for effects mimicking depth.

    Animation

    To save time, all the animations were directly coded based on what we had envisioned and mapped out in the storyboard. Some of them worked exactly as imagined from the start, while others needed a bit of fine-tuning to integrate fully into the overall experience.

    Scroll Animations

    To keep users engaged while presenting 24/7 Artists’ vision and offering, we crafted a sequence of scroll-driven animations—alternating between smooth flows and unexpected reveals.

    Micro-Interactions

    On a product launch page, micro-interactions are key. They spark curiosity, highlight key elements, and subtly guide the user toward action.

    For the main call to action, we designed a distinctive interaction using the same equalizer-like shape seen in the photo animations. On hover, it animates like a music player—both playful and thematic.

    Tile Grid Setup
    We began by constructing a grid made of 1×1 and 2×2 tiles.

    Z-Axis Scroll Effect
    Since we weren’t using true 3D, we faked depth using scale transforms. We calculated the scale needed to have the grid’s central hole (where content would go) expand to fill the viewport. Then, we transitioned each tile from its original size and position to the final state using GSAP.

    Playing with GSAP staggered animation adds more depth to the motion.
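    The scale calculation can be sketched as a pure function (the name and signature are ours; in the real build each tile is then tweened to this scaled state with GSAP):

    ```typescript
    // Illustrative: the scale factor that makes the grid's central hole grow
    // to cover the viewport. Using max() of the two ratios guarantees the
    // hole fills the screen on both axes.
    function holeFillScale(holeW: number, holeH: number, viewportW: number, viewportH: number): number {
      return Math.max(viewportW / holeW, viewportH / holeH);
    }
    ```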

    Simulated Cube Depth
    To simulate 3D cubes, we calculated the back-face vertices based on a smaller grid to keep the illusion of perspective. We then drew side faces accordingly, making sure to hide vertices behind the front face.
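    A hedged sketch of that back-face computation (names are ours): each back vertex is the front vertex pulled toward the grid centre by a depth factor below 1, so the smaller back grid reads as perspective.

    ```typescript
    type Point = { x: number; y: number };

    // Illustrative: scale a front-face vertex toward the grid centre to get
    // its fake-perspective back-face counterpart.
    function backFaceVertex(front: Point, centre: Point, depth: number): Point {
      return {
        x: centre.x + (front.x - centre.x) * depth,
        y: centre.y + (front.y - centre.y) * depth,
      };
    }
    ```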

    Canvas-Based Content Reveal
    To finish the effect, we redrew the 2×2 tiles’ content in Canvas and added a cover layer that scrolls at a faster rate, revealing the content below.

    Conclusion

    The 24/7 Artists landing page was a bold and fast-paced project that pushed us to distill ideas quickly and trust our creative instincts.

    Through strong visual metaphors, smooth storytelling, and carefully crafted motion, we built a launchpad that sets the tone for the brand’s next chapter.

    This first release is just the beginning. The site was designed with scalability in mind, and additional sections and pages are already being added to support future growth and evolving needs.

    When the vision is clear and the momentum is right, great things can happen—fast.


