
  • Self Doubt and the Quest for Fun: How I Ended up Turning my Portfolio into a Game




    At the beginning of 2025, I finally decided to build myself a new portfolio. I still pretty much liked the one I made back in 2021, but I felt the need to put to good use all the cool stuff I’ve learned these past couple years working with WebGPU. And, besides, half of the projects featured in my case studies had been put offline anyway, so it was about time.

    I didn’t really know where I was going at this point, except that:

    • It would, of course, feature multiple procedurally generated WebGPU scenes. I already had a few concepts to explore in mind, like particles or boids simulation.
    • I wanted to take care of the design myself. It may seem weird, especially since I was very happy with what Gilles came up with when designing my last portfolio, and also because I do suck at design. But this would give me more freedom, and I’ve also always liked building things from scratch on my own.
    • Last but not least, it had to be fun!

    1. The journey

    The (tough) design and content process

    Don’t do this!

    At first, I had no idea what to do design-wise. Fonts, colors: there are so many things that could go wrong.

    I started with simple light and dark colors, kept the fonts Gilles had chosen for my previous portfolio and started to copy/paste its old text content. It didn’t feel that great, and it wasn’t fun for sure.

    The very first design iterations… Still a long way to go!

    I definitely needed colors. I could have wasted a few hours (or days) choosing the right pairing, but instead I decided this could be the right opportunity to use this random color palette generator utility I had coded a few years ago. I cleaned the code a bit, created a repo, published it to npm and added it to my project. I also slightly changed the tone of the copywriting, which led me to something still not that great, but a bit more fun.

    Slowly getting there

    I let it sit for a while and started working on other parts of the site, such as integrating the CMS or experimenting with the WebGPU scenes. It was only after a long iteration process that I finally settled on this kind of old-school retro video game vibe mixed with a more cheerful, cartoonish aesthetic, almost Candy Crush-esque. Impactful headings, popping animations, banded gradients… you name it.

    Of course, I never went as far as creating a Figma project (I did select a few reference images as a moodboard though) and just tested a ton of stuff directly in code until I felt it wasn’t that bad anymore. All in all, it was a very long and painful process, and I guess every designer would agree at this point: don’t do this!

    A few images from my final moodboard – all credits go to their respective authors.

    Do you actually read portfolio content?

    Another painful point was settling on the actual content and overall structure of the site. Do I need detailed case study pages? Do I need pages at all? Will users even read all those long blocks of text I will struggle to write?

    In the end, I chose to drop the case studies pages. I had a couple of reasons to do so:

    • Oftentimes, projects end up being taken offline for various reasons, and you end up showcasing something the user cannot visit anymore. This is exactly what happened on my previous portfolio.
    • Most of the client work I’ve been doing these past years has been for agencies, and I’m not always allowed to share it publicly. I have no problem with that, but it slightly reduced the number of projects I could highlight.

    From there on, it was a quick decision to just go with a single landing page. I’d put direct links to the projects I could highlight and small videos of all the other projects or personal works I could feature. On top of that, I’d add a few “about” sections mixed with my WebGPU scenes, and that’d be the gist of it.

    Speaking of the WebGPU scenes, I really wanted them to be meaningful, not just a technical demonstration of what I could do. But we’ll get to that later.

    The final UX twist

    After a few months, I felt like I was entering the final stage of development. The page structure was mostly done, all my various sections were there and I was working on the final animation and micro-interaction tweaks.

    So I took a step back, and looked back at my initial expectations. I had my WebGPU scenes showcasing my various technical skills. I had handled the design myself, and it wasn’t that bad. But were the flashy colors and animations enough to make it a really fun experience overall?

    I think you already know the answer. Something was missing.
    Except for the random color palette switcher, the UX basically consisted of scroll-driven animations. Most of the 3D scenes interactions were rudimentary. I needed an idea.

    The design already had this video game cheerful look. So… What if I turned my whole portfolio into a game?
    Once again, I started writing down my ideas:

    • The user would need to interact with the different UI elements to unlock the theme switcher and color palette generator buttons.
    • Each WebGPU scene could serve as a way to unlock the following content, acting as a very basic “puzzle” game.
    • Keep track of the user’s overall progress.
    • Allow the user to skip the whole game process if they want to.

    This means most users wouldn’t ever make it to the footer, or use this random palette generator tool I had struggled to implement. This might very well be the riskiest, stupidest decision I’ve made so far. But it would give my portfolio this unique and fun touch I was looking for in the first place, so I went all in.

    Of course, it goes without saying that this implied a major refactoring of the whole codebase, and I needed to come up with original interaction ideas for the WebGPU scenes, but I like to think it was worth it.

    Gamification mechanisms: unlocking content and rewarding message
    Are you one of the few that unlocked the color palette generator button?

    2. Technical study

    Now that you know all the whys, let’s have a look at the hows!

    Tech stack

    I decided to try Sanity Studio as I had never worked with it before, and since I knew this would be a relatively small project, it felt like the perfect fit to start using it. Even though I felt like I had just scratched its surface, I liked the overall developer experience it provided. On the other hand, I already had a good experience working with Nuxt 3, so this was an easy choice.

    No need to mention why I chose GSAP and Lenis — everyone knows those are great tools to deliver smooth animated websites.

    Of course, the WebGPU scenes had to be done with gpu-curtains, the 3D engine I spent so much time working on these past two years. It was a great way to test it in a real-life scenario and gave me the opportunity to fix a few bugs or add a couple features along the way.

    And since I wanted the whole process to be as transparent as possible, I’ve published the whole source code as a monorepo on GitHub.

    Animations

    I won’t go too deep into how I handled the various animations, simply because I’ve essentially used CSS and a bit of GSAP here and there, mostly for canvas animations, SplitText effects or the videos carousel using ScrollTrigger observer.
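
    Just to give an idea, a typical heading reveal would be something along these lines (a simplified sketch, not the actual site code, assuming the SplitText and ScrollTrigger plugins are available):

    import gsap from "gsap";
    import { ScrollTrigger } from "gsap/ScrollTrigger";
    import { SplitText } from "gsap/SplitText";

    gsap.registerPlugin(ScrollTrigger, SplitText);

    // Simplified sketch: reveal a heading character by character as it scrolls in
    const heading = document.querySelector<HTMLElement>(".section-title");

    if (heading) {
      const split = new SplitText(heading, { type: "chars" });

      gsap.from(split.chars, {
        yPercent: 100,
        opacity: 0,
        stagger: 0.02,
        ease: "back.out(1.7)",
        scrollTrigger: {
          trigger: heading,
          start: "top 80%", // start once the heading enters the viewport
        },
      });
    }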

    The basic scenes

    There are a lot of components on the website that needed to draw something onto a <canvas> and react to the theme and/or color palette changes.

    To handle that, I created a Scene.ts class:

    import type { ColorPalette, ColorModelBase } from "@martinlaxenaire/color-palette-generator";
    
    export interface SceneParams {
      container: HTMLElement;
      progress?: number;
      palette?: ColorPalette;
      colors?: ColorModelBase[];
    }
    
    export class Scene {
      #progress: number;
      container: HTMLElement;
      colors: ColorModelBase[];
      isVisible: boolean;
    
      constructor({ container, progress = 0, colors = [] }: SceneParams) {
        this.container = container;
        this.colors = colors;
        this.#progress = progress;
        this.isVisible = true;
      }
    
      onResize() {}
    
      onRender() {}
    
      setSceneVisibility(isVisible: boolean = true) {
        this.isVisible = isVisible;
      }
    
      setColors(colors: ColorModelBase[]) {
        this.colors = colors;
      }
    
      get progress(): number {
        return this.#progress;
      }
    
      set progress(value: number) {
        this.#progress = isNaN(value) ? 0 : value;
        this.onProgress();
      }
    
      forceProgressUpdate(progress: number = 0) {
        this.progress = progress;
      }
    
      lerp(start = 0, end = 1, amount = 0.1) {
        return (1 - amount) * start + amount * end;
      }
    
      onProgress() {}
    
      destroy() {}
    }

    Since switching theme from light to dark (or vice versa) also updates the color palette by tweaking the HSV value component of the colors a bit, I’ve just put a setColors() method in there to handle these changes.

    The progress handling here is actually a remnant of when the WebGPU scenes’ animations were mostly scroll-driven (before I introduced the game mechanisms), but since a few scenes still used it, I kept it in there.

    All the 2D canvas scenes extend that class, including the WebGPU fallback scenes, the theme switcher button or the dynamic favicon generator (did you notice that?).
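
    As an example, a (completely hypothetical) 2D canvas scene extending it could look like this; the only extra assumption here is that each ColorModelBase exposes a hex string:

    import { Scene } from "./Scene";
    import type { SceneParams } from "./Scene";

    // Hypothetical example of a 2D canvas scene extending the Scene class above
    export class DiscsScene extends Scene {
      canvas: HTMLCanvasElement;
      ctx: CanvasRenderingContext2D | null;

      constructor(params: SceneParams) {
        super(params);

        this.canvas = document.createElement("canvas");
        this.container.appendChild(this.canvas);
        this.ctx = this.canvas.getContext("2d");
        this.onResize();
      }

      override onResize() {
        this.canvas.width = this.container.clientWidth;
        this.canvas.height = this.container.clientHeight;
      }

      override onRender() {
        if (!this.ctx || !this.isVisible) return;

        const { width, height } = this.canvas;
        this.ctx.clearRect(0, 0, width, height);

        // One disc per palette color, scaled by the current progress
        this.colors.forEach((color, i) => {
          this.ctx!.fillStyle = color.hex; // assuming ColorModelBase exposes a hex value
          this.ctx!.beginPath();
          this.ctx!.arc(
            ((i + 1) / (this.colors.length + 1)) * width,
            height * 0.5,
            10 + 20 * this.progress,
            0,
            Math.PI * 2
          );
          this.ctx!.fill();
        });
      }

      override destroy() {
        super.destroy();
        this.canvas.remove();
      }
    }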

    The WebGPU scenes

    One of the very cool features introduced by WebGPU is that you can render to multiple <canvas> elements using only one WebGPU device. I used this to build four different scenes (we’ll take a closer look at each of them below), which all extend a WebGPUScene.ts class:

    import { GPUCurtains } from "gpu-curtains";
    import type { ComputeMaterial, RenderMaterial } from "gpu-curtains";
    import { Scene } from "./Scene";
    import type { SceneParams } from "./Scene";
    import {
      QualityManager,
      type QualityManagerParams,
    } from "./utils/QualityManager";
    
    export interface WebGPUSceneParams extends SceneParams {
      gpuCurtains: GPUCurtains;
      targetFPS?: QualityManagerParams["targetFPS"];
    }
    
    export class WebGPUScene extends Scene {
      gpuCurtains: GPUCurtains;
    
      qualityManager: QualityManager;
      quality: number;
      _onVisibilityChangeHandler: () => void;
    
      constructor({
        gpuCurtains,
        container,
        progress = 0,
        colors = [],
        targetFPS = 55,
      }: WebGPUSceneParams) {
        super({ container, progress, colors });
    
        this.gpuCurtains = gpuCurtains;
    
        this._onVisibilityChangeHandler =
          this.onDocumentVisibilityChange.bind(this);
    
        this.qualityManager = new QualityManager({
          label: `${this.constructor.name} quality manager`,
          updateDelay: 2000,
          targetFPS,
          onQualityChange: (newQuality) => this.onQualityChange(newQuality),
        });
    
        this.quality = this.qualityManager.quality.current;
    
        document.addEventListener(
          "visibilitychange",
          this._onVisibilityChangeHandler
        );
      }
    
      override setSceneVisibility(isVisible: boolean = true) {
        super.setSceneVisibility(isVisible);
    
        this.qualityManager.active = isVisible;
      }
    
      onDocumentVisibilityChange() {
        this.qualityManager.active = this.isVisible && !document.hidden;
      }
    
      compileMaterialOnIdle(material: ComputeMaterial | RenderMaterial) {
        if (!this.isVisible && "requestIdleCallback" in window) {
          window.requestIdleCallback(() => {
            material.compileMaterial();
          });
        }
      }
    
      override onRender(): void {
        super.onRender();
        this.qualityManager.update();
      }
    
      onQualityChange(newQuality: number) {
        this.quality = newQuality;
      }
    
      override destroy(): void {
        super.destroy();
    
        document.removeEventListener(
          "visibilitychange",
          this._onVisibilityChangeHandler
        );
      }
    }

    In the real version, this class also handles the creation of a Tweakpane GUI folder (useful for debugging or tweaking values), but for the sake of clarity I removed the related code here.

    As you can see, each of these scenes closely monitors its own performance using a custom QualityManager class. We’ll talk about that later, in the performance section.
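
    And remember how I said a single WebGPU device can render to multiple <canvas> elements? gpu-curtains handles this for me, but in raw WebGPU terms the idea boils down to something like this (a minimal sketch, not the actual renderer code):

    // Minimal sketch: one GPUDevice shared by several <canvas> elements
    const adapter = await navigator.gpu?.requestAdapter();
    if (!adapter) throw new Error("WebGPU is not supported");
    const device = await adapter.requestDevice();

    const format = navigator.gpu.getPreferredCanvasFormat();

    const contexts = Array.from(
      document.querySelectorAll<HTMLCanvasElement>("canvas.webgpu-scene")
    ).map((canvas) => {
      const context = canvas.getContext("webgpu") as GPUCanvasContext;
      // Every canvas is configured with the very same device
      context.configure({ device, format, alphaMode: "premultiplied" });
      return context;
    });

    const render = () => {
      const encoder = device.createCommandEncoder();

      // Each scene records its own render pass targeting its own canvas texture
      for (const context of contexts) {
        const pass = encoder.beginRenderPass({
          colorAttachments: [
            {
              view: context.getCurrentTexture().createView(),
              clearValue: { r: 0, g: 0, b: 0, a: 0 },
              loadOp: "clear",
              storeOp: "store",
            },
          ],
        });
        // ...scene specific draw calls would go here
        pass.end();
      }

      device.queue.submit([encoder.finish()]);
      requestAnimationFrame(render);
    };

    requestAnimationFrame(render);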

    Okay, now that we have the basic architecture in mind, let’s break down each of the WebGPU scenes!

    Since WebGPU is not fully supported yet, I’ve created fallback versions using the 2D canvas API and the Scene class we’ve seen above for each of the following scenes.

    Hero scene

    The scenes featured in the portfolio roughly follow an order of increasing complexity, meaning the further you advance through the portfolio, the more technically involved the scenes become.

    In that way, the hero scene is by far the simplest, technically speaking, but it had to look particularly striking and engaging to immediately capture the user’s attention. It was conceived as some sort of mobile puzzle game splash screen.

    It’s made of a basic, single fullscreen quad. The idea here is to first rotate its UV components each frame, map them to polar coordinates and use that to create colored triangle segments.

    // Center UVs at (0.5, 0.5)
    var centeredUV = uv - vec2f(0.5);
    
    // Apply rotation using a 2D rotation matrix
    let angleOffset = params.time * params.speed; // Rotation angle in radians
    let cosA = cos(angleOffset);
    let sinA = sin(angleOffset);
    
    // Rotate the centered UVs
    centeredUV = vec2<f32>(
      cosA * centeredUV.x - sinA * centeredUV.y,
      sinA * centeredUV.x + cosA * centeredUV.y
    );
    
    // Convert to polar coordinates
    let angle = atan2(centeredUV.y, centeredUV.x); // Angle in radians
    let radius = length(centeredUV);
    
    // Map angle to triangle index
    let totalSegments  = params.numTriangles * f32(params.nbColors) * params.fillColorRatio;
    let normalizedAngle = (angle + PI) / (2.0 * PI); // Normalize to [0,1]
    let triIndex = floor(normalizedAngle * totalSegments); // Get triangle index
    
    // Compute fractional part for blending
    let segmentFraction = fract(normalizedAngle * totalSegments); // Value in [0,1] within segment
    
    let isEmpty = (i32(triIndex) % i32(params.fillColorRatio)) == i32(params.fillColorRatio - 1.0);
    let colorIndex = i32(triIndex / params.fillColorRatio) % params.nbColors; // Use half as many color indices
    
    let color = select(vec4(params.colors[colorIndex], 1.0), vec4f(0.0), isEmpty);

    There’s actually a wavy noise applied to the UV beforehand using concentric circles, but you get the idea.

    Interestingly enough, the most difficult part was to achieve the rounded rectangle entering animation while preserving the correct aspect ratio. This was done using this function:

    fn roundedRectSDF(uv: vec2f, resolution: vec2f, radiusPx: f32) -> f32 {
        let aspect = resolution.x / resolution.y;
    
        // Convert pixel values to normalized UV space
        let marginUV = vec2f(radiusPx) / resolution;
        let radiusUV = vec2f(radiusPx) / resolution;
    
        // Adjust radius X for aspect ratio
        let radius = vec2f(radiusUV.x * aspect, radiusUV.y);
    
        // Center UV around (0,0) and apply scale (progress)
        var p = uv * 2.0 - 1.0;       // [0,1] → [-1,1]
        p.x *= aspect;                // fix aspect
        p /= max(0.0001, params.showProgress); // apply scaling
        p = abs(p);
    
        // Half size of the rounded rect
        let halfSize = vec2f(1.0) - marginUV * 2.0 - radiusUV * 2.0;
        let halfSizeScaled = vec2f(halfSize.x * aspect, halfSize.y);
    
        let d = p - halfSizeScaled;
        let outside = max(d, vec2f(0.0));
        let dist = length(outside) + min(max(d.x, d.y), 0.0) - radius.x * 2.0;
    
        return dist;
    }

    Highlighted videos slider scene

    Next up is the highlighted videos slider. The original idea came from an old WebGL prototype I had built a few years ago and never used.

    The idea is to displace the planes’ vertices to wrap them around a cylinder.

    var position: vec3f = attributes.position;
    
    // curve
    let angle: f32 = 1.0 / curve.nbItems;
    
    let cosAngle = cos(position.x * PI * angle);
    let sinAngle = sin(position.x * PI * angle);
          
    position.z = cosAngle * curve.itemWidth;
    position.x = sinAngle;

    I obviously used this for the year titles, whereas the videos and trail effects behind them are distorted using a post-processing pass.

    While this was originally tied to the vertical scroll values (and I really liked the feeling it produced), I had to update its behavior when I switched to the whole gamification idea, making it a horizontal carousel.

    Thanks to gpu-curtains’ DOM-to-WebGPU syncing capabilities, it was relatively easy to set up the videos grid prototype using the Plane class.
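
    To give you a rough idea, creating one Plane per DOM element looks something like this (a stripped-down sketch, not the actual slider code, and the exact options differ in the real project):

    import { Plane } from "gpu-curtains";
    import type { GPUCurtains } from "gpu-curtains";

    // Stripped-down sketch: one Plane per ".plane" element, kept in sync with the DOM
    export function createVideoPlanes(
      gpuCurtains: GPUCurtains,
      vertexShaderCode: string,
      fragmentShaderCode: string
    ) {
      const planeElements = document.querySelectorAll<HTMLElement>(".plane");

      return Array.from(planeElements).map(
        (element) =>
          new Plane(gpuCurtains, element, {
            widthSegments: 20, // enough vertices to bend the plane around the cylinder
            shaders: {
              vertex: { code: vertexShaderCode },
              fragment: { code: fragmentShaderCode },
            },
          })
      );
    }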

    The trail effect is done using a compute shader writing to a storage texture. The compute shader only runs when necessary, that is, when the slider is moving. I’m sure it could have been done in a thousand different ways, but it was a good excuse to play with compute shaders and storage textures. Here’s the compute shader involved:

    struct Rectangles {
      sizes: vec2f,
      positions: vec2f,
      colors: vec4f
    };
    
    struct Params {
      progress: f32,
      intensity: f32
    };
    
    @group(0) @binding(0) var backgroundStorageTexture: texture_storage_2d<rgba8unorm, write>;
    
    @group(1) @binding(0) var<uniform> params: Params;
    @group(1) @binding(1) var<storage, read> rectangles: array<Rectangles>;
    
    fn sdfRectangle(center: vec2f, size: vec2f) -> f32 {
        let dxy = abs(center) - size;
        return length(max(dxy, vec2(0.0))) + max(min(dxy.x, 0.0), min(dxy.y, 0.0));
    }
    
    @compute @workgroup_size(16, 16) fn main(
        @builtin(global_invocation_id) GlobalInvocationID: vec3<u32>
    ) {
        let bgTextureDimensions = vec2f(textureDimensions(backgroundStorageTexture));
    
        if(f32(GlobalInvocationID.x) <= bgTextureDimensions.x && f32(GlobalInvocationID.y) <= bgTextureDimensions.y) {
            let uv = vec2f(f32(GlobalInvocationID.x) / bgTextureDimensions.x - params.progress,
            f32(GlobalInvocationID.y) / bgTextureDimensions.y);
    
            var color = vec4f(0.0, 0.0, 0.0, 0.0); // Default to black
            let nbRectangles: u32 = arrayLength(&rectangles);
    
            for (var i: u32 = 0; i < nbRectangles; i++) {
                let rectangle = rectangles[i];
    
                let rectDist = sdfRectangle(uv - rectangle.positions, vec2(rectangle.sizes.x * params.intensity, rectangle.sizes.y));
    
                color = select(color, rectangle.colors * params.intensity, rectDist < 0.0);
            }
    
            textureStore(backgroundStorageTexture, vec2<i32>(GlobalInvocationID.xy), color);
        }
    }

    I thought I was done here, but while running production build tests I stumbled upon an issue. Unfortunately, preloading all those videos to use as WebGPU textures resulted in a huge initial payload and also significantly affected the CPU load. To mitigate that, I implemented sequential video preloading, waiting for each video to have enough data before loading the next one. This gave a huge boost in terms of initial load time and CPU overhead.
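
    In practice, that sequential preloading boils down to something like this (a simplified sketch; the real code also handles errors and uploads each video as a texture):

    // Simplified sketch of sequential video preloading: wait until each video
    // has enough data before touching the next one.
    function preloadVideo(video: HTMLVideoElement): Promise<HTMLVideoElement> {
      return new Promise((resolve) => {
        // HAVE_ENOUGH_DATA means the video can be used as a texture right away
        if (video.readyState >= video.HAVE_ENOUGH_DATA) {
          resolve(video);
          return;
        }

        video.addEventListener("canplaythrough", () => resolve(video), {
          once: true,
        });

        video.load();
      });
    }

    export async function preloadVideosSequentially(videos: HTMLVideoElement[]) {
      for (const video of videos) {
        // Awaiting here is what makes the loading sequential
        await preloadVideo(video);
      }
    }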

    Invoices scene

    The third WebGPU scene was initially supposed to be my own take on 3D boids simulation, using instancing and a compute shader. After a bit of work, I had a bunch of instances following my mouse, but the end result wasn’t living up to my expectations. The spheres were sometimes overlapping each other, or disappearing behind the edges of the screen. I kept improving it, adding self-collision, edge detection and attraction/repulsion mechanisms until I was happy enough with the result.

    I like to call it the “invoices” scene, because the sphere instances here represent all the invoices I actually issued during my freelance career, scaled based on their amounts. Since I’m using Google Sheets to handle most of my accounting, I made a little script that gathers all my invoice amounts in a single, separate private sheet each time I update my accounting sheets. I then fetch and parse that sheet to create the instances. It was a fun little side exercise and turns this scene into an ironically meaningful experiment: each time you click and hold, you kind of help me collect my money.
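
    The fetch-and-parse part is nothing fancy; here is a rough sketch of the idea (the published sheet URL and CSV layout below are hypothetical, not my actual private sheet):

    // Rough sketch: fetch a published Google Sheet as CSV and turn each invoice
    // amount into a normalized instance scale. URL and CSV layout are hypothetical.
    const SHEET_CSV_URL =
      "https://docs.google.com/spreadsheets/d/e/<PUBLISHED_ID>/pub?output=csv";

    export async function getInvoiceScales(): Promise<number[]> {
      const response = await fetch(SHEET_CSV_URL);
      const csv = await response.text();

      // One amount per line, the first line being a header
      const amounts = csv
        .split("\n")
        .slice(1)
        .map((line) => parseFloat(line.split(",")[0]))
        .filter((amount) => !isNaN(amount));

      const maxAmount = Math.max(...amounts);

      // Scale each sphere instance relative to the biggest invoice
      return amounts.map((amount) => 0.25 + 0.75 * (amount / maxAmount));
    }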

    The compute shader uses a buffer ping-pong technique: you start with two identically filled buffers (e.g. packed raw data) then at each compute dispatch call, you read the data from the first buffer and update the second one accordingly. Once done, you swap the two buffers before the next call and repeat the process.
    If you’re familiar with WebGL, this is often done with textures. WebGPU and compute shaders allow us to do so with buffers, which is way more powerful. Here is the complete compute shader code:

    struct ParticleB {
      position: vec4f,
      velocity: vec4f,
      rotation: vec4f,
      angularVelocity: vec4f,
      data: vec4f
    };
    
    struct ParticleA {
      position: vec4f,
      velocity: vec4f,
      rotation: vec4f,
      angularVelocity: vec4f,
      data: vec4f
    };
    
    struct SimParams {
      deltaT: f32,
      mousePosition: vec3f,
      mouseAttraction: f32,
      spheresRepulsion: f32,
      boxReboundFactor: f32,
      boxPlanes: array<vec4f, 6>
    };
    
    @group(0) @binding(0) var<uniform> params: SimParams;
    @group(0) @binding(1) var<storage, read> particlesA: array<ParticleA>;
    @group(0) @binding(2) var<storage, read_write> particlesB: array<ParticleB>;
    
    
    fn constrainToFrustum(pos: vec3<f32>, ptr_velocity: ptr<function, vec3<f32>>, radius: f32) -> vec3<f32> {
        var correctedPos = pos;
    
        for (var i = 0u; i < 6u; i++) { // Loop through 6 frustum planes
            let plane = params.boxPlanes[i];
            let dist = dot(plane.xyz, correctedPos) + plane.w;
    
            if (dist < radius) { // If inside the plane boundary (radius = 1)
                // Move the point inside the frustum
                let correction = plane.xyz * (-dist + radius); // Push inside the frustum
                
                // Apply the position correction
                correctedPos += correction;
    
                // Reflect velocity with damping
                let normal = plane.xyz;
                let velocityAlongNormal = dot(*(ptr_velocity), normal);
                
                if (velocityAlongNormal < 0.0) { // Ensure we only reflect if moving towards the plane
                    *(ptr_velocity) -= (1.0 + params.boxReboundFactor) * velocityAlongNormal * normal;
                }
            }
        }
        return correctedPos;
    }
    
    fn quaternionFromAngularVelocity(omega: vec3f, dt: f32) -> vec4f {
        let theta = length(omega) * dt;
        if (theta < 1e-5) {
            return vec4(0.0, 0.0, 0.0, 1.0);
        }
        let axis = normalize(omega);
        let halfTheta = 0.5 * theta;
        let sinHalf = sin(halfTheta);
        return vec4(axis * sinHalf, cos(halfTheta));
    }
    
    fn quaternionMul(a: vec4f, b: vec4f) -> vec4f {
        return vec4(
            a.w * b.xyz + b.w * a.xyz + cross(a.xyz, b.xyz),
            a.w * b.w - dot(a.xyz, b.xyz)
        );
    }
    
    fn integrateQuaternion(q: vec4f, angularVel: vec3f, dt: f32) -> vec4f {
        let omega = vec4(angularVel, 0.0);
        let dq = 0.5 * quaternionMul(q, omega);
        return normalize(q + dq * dt);
    }
    
    @compute @workgroup_size(64) fn main(
        @builtin(global_invocation_id) GlobalInvocationID: vec3<u32>
    ) {
        var index = GlobalInvocationID.x;
        
        var vPos = particlesA[index].position.xyz;
    
        var vVel = particlesA[index].velocity.xyz;
        var collision = particlesA[index].velocity.w;
    
        var vQuat = particlesA[index].rotation;
        var angularVelocity = particlesA[index].angularVelocity.xyz;
    
        var vData = particlesA[index].data;
    
        let sphereRadius = vData.x;
        var newCollision = vData.y;
    
        
        collision += (newCollision - collision) * 0.2;
        collision = smoothstep(0.0, 1.0, collision);
        newCollision = max(0.0, newCollision - 0.0325);
    
        let mousePosition: vec3f = params.mousePosition;
        let minDistance: f32 = sphereRadius; // Minimum allowed distance between spheres
    
        // Compute attraction towards sphere 0
        var directionToCenter = mousePosition - vPos;
        let distanceToCenter = length(directionToCenter);
    
        // Slow down when close to the attractor
        var dampingFactor = smoothstep(0.0, minDistance, distanceToCenter);
        
        if (distanceToCenter > minDistance && params.mouseAttraction > 0.0) { // Only attract if outside the minimum distance
            vVel += normalize(directionToCenter) * params.mouseAttraction * dampingFactor;
            vVel *= 0.95;
        }
        
        // Collision Handling: Packing spheres instead of pushing them away
        var particlesArrayLength = arrayLength(&particlesA);
        
        for (var i = 0u; i < particlesArrayLength; i++) {
            if (i == index) {
                continue;
            }
            
            let otherPos = particlesA[i].position.xyz;
            let otherRadius = particlesA[i].data.x;
            let collisionMinDist = sphereRadius + otherRadius;
            let toOther = otherPos - vPos;
            let dist = length(toOther);
    
            if (dist < collisionMinDist) { 
                let pushDir = normalize(toOther);
                let overlap = collisionMinDist - dist;
                let pushStrength = otherRadius / sphereRadius; // radius
                
                // Push away proportionally to overlap
                vVel -= pushDir * (overlap * params.spheresRepulsion) * pushStrength;
                newCollision = min(1.0, pushStrength * 1.5);
    
                let r = normalize(cross(pushDir, vVel));
                angularVelocity += r * length(vVel) * 0.1 * pushStrength;
            }
        }
    
        let projectedVelocity = dot(vVel, directionToCenter); // Velocity component towards mouse
    
        let mainSphereRadius = 1.0;
    
        if(distanceToCenter <= (mainSphereRadius + minDistance)) {
            let pushDir = normalize(directionToCenter);
            let overlap = (mainSphereRadius + minDistance) - distanceToCenter;
            
            // Push away proportionally to overlap
            vVel -= pushDir * (overlap * params.spheresRepulsion) * (2.0 + params.mouseAttraction);
    
            newCollision = 1.0;
    
            if(params.mouseAttraction > 0.0) {
                vPos -= pushDir * overlap;
            }
    
            let r = normalize(cross(pushDir, vVel));
            angularVelocity += r * length(vVel) * 0.05;
        }
    
        vPos = constrainToFrustum(vPos, &vVel, sphereRadius);
    
        // Apply velocity update
        vPos += vVel * params.deltaT;
    
        angularVelocity *= 0.98;
        let updatedQuat = integrateQuaternion(vQuat, angularVelocity, params.deltaT);
        
        // Write back      
        particlesB[index].position = vec4(vPos, 0.0);
        particlesB[index].velocity = vec4(vVel, collision);
        particlesB[index].data = vec4(vData.x, newCollision, vData.z, vData.w);
        particlesB[index].rotation = updatedQuat;
        particlesB[index].angularVelocity = vec4(angularVelocity, 1.0);
    }
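
    On the JavaScript side, the ping-pong described above essentially comes down to two bind groups used alternately. gpu-curtains wraps all of this for me, but a minimal raw WebGPU sketch would look like this:

    // Minimal raw WebGPU sketch of the buffer ping-pong (gpu-curtains wraps this).
    // particleBuffers[0] and [1] start with identical data; each frame we read from
    // one and write to the other, then "swap" by simply using the other bind group.
    function createPingPongBindGroups(
      device: GPUDevice,
      layout: GPUBindGroupLayout,
      paramsBuffer: GPUBuffer,
      particleBuffers: [GPUBuffer, GPUBuffer]
    ): GPUBindGroup[] {
      return [0, 1].map((i) =>
        device.createBindGroup({
          layout,
          entries: [
            { binding: 0, resource: { buffer: paramsBuffer } },
            { binding: 1, resource: { buffer: particleBuffers[i] } }, // read
            { binding: 2, resource: { buffer: particleBuffers[(i + 1) % 2] } }, // read_write
          ],
        })
      );
    }

    let frameIndex = 0;

    function dispatchParticlesUpdate(
      device: GPUDevice,
      pipeline: GPUComputePipeline,
      bindGroups: GPUBindGroup[],
      particleCount: number
    ) {
      const encoder = device.createCommandEncoder();
      const pass = encoder.beginComputePass();

      pass.setPipeline(pipeline);
      // Alternating between the two bind groups is the actual "swap"
      pass.setBindGroup(0, bindGroups[frameIndex % 2]);
      pass.dispatchWorkgroups(Math.ceil(particleCount / 64)); // matches @workgroup_size(64)
      pass.end();

      device.queue.submit([encoder.finish()]);
      frameIndex++;
    }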

    One of my main inspirations for this scene was this awesome demo by Patrick Schroen. I spent a lot of time looking for the right rendering tricks to use and finally settled on volumetric lighting. The implementation is quite similar to what Maxime Heckel explained in this excellent breakdown article. Funnily enough, I was already deep into my own implementation when he released that piece, and I owe him the idea of using a blue noise texture.

    As a side note, during the development phase this was the first scene that required an actual user interaction and it played a pivotal role in my decision to turn my folio into a game.

    Open source scene

    For the last scene, I wanted to experiment a bit more with particles and curl noise because I’ve always liked how organic and beautiful it can get. I had already published an article using these concepts, so I had to come up with something different. Jaume Sanchez’ Polygon Shredder definitely was a major inspiration here.

    Since this experiment was part of my open source commitment section, I had the idea to use my GitHub statistics as a data source for the particles. Each statistic (number of commits, followers, issues closed and so on) is assigned to a color and turned into a bunch of particles. You can even toggle them on and off using the filters in the information pop-up. Once again, this changed a rather technical demo into something more meaningful.
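
    Fetching those statistics is the easy part; a minimal sketch using the public GitHub REST API could look like this (the real scene uses more metrics and its own color mapping):

    // Sketch: turn a few public GitHub statistics into particle group definitions.
    // The mapping below is illustrative only.
    interface ParticleGroup {
      label: string;
      count: number;
      color: string;
    }

    export async function getGitHubParticleGroups(
      username: string
    ): Promise<ParticleGroup[]> {
      const response = await fetch(`https://api.github.com/users/${username}`);
      const user = await response.json();

      return [
        { label: "Followers", count: user.followers, color: "#ff595e" },
        { label: "Public repos", count: user.public_repos, color: "#1982c4" },
        { label: "Public gists", count: user.public_gists, color: "#8ac926" },
      ];
    }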

    While working on the portfolio, I was also exploring new rendering techniques with gpu-curtains such as planar reflections. Traditionally used for mirror effects or floor reflections, it consists of rendering a part of your scene a second time but from a different camera angle and projecting it onto a plane. Having nailed this, I thought it would be a perfect match there and added it to the scene.

    Last but not least, and as a reminder of the retro video games vibe, I wanted to add a pixelated mouse trail post-processing effect. I soon realized it would be way too much though, and ended up showing it only when the user is actually drawing a line, making it more subtle.

    Using the filters can actually help you unlock features!

    Performance and accessibility

    On such highly interactive and immersive pages, performance is key. Here are a few tricks I’ve used to try to maintain the most fluid experience across all devices.

    Dynamic imports

    I’ve used Nuxt’s dynamically imported components and lazy hydration for almost every non-critical component of the page. In the same way, all WebGPU scenes are dynamically loaded only if WebGPU is supported. This significantly decreased the initial page load time.

    // pseudo code
    import type { WebGPUHeroScene } from "~/scenes/hero/WebGPUHeroScene";
    import { CanvasHeroScene } from "~/scenes/hero/CanvasHeroScene";
    
    let scene: WebGPUHeroScene | CanvasHeroScene | null;
    const canvas = useTemplateRef("canvas");
    
    const { colors } = usePaletteGenerator();
    
    onMounted(async () => {
      const { $gpuCurtains, $hasWebGPU, $isReducedMotion } = useNuxtApp();
    
      if ($hasWebGPU && canvas.value) {
        const { WebGPUHeroScene } = await import("~/scenes/hero/WebGPUHeroScene");
    
        scene = new WebGPUHeroScene({
          gpuCurtains: $gpuCurtains,
          container: canvas.value,
          colors: colors.value,
        });
      } else if (canvas.value) {
        scene = new CanvasHeroScene({
          container: canvas.value,
          isReducedMotion: $isReducedMotion,
          colors: colors.value,
        });
      }
    });

    I’m not particularly fond of Lighthouse reports but as you can see the test result is quite good (note that it’s running without WebGPU though).

    PageSpeed Insights report

    Monitoring WebGPU performance in real time

    I’ve briefly mentioned it earlier, but each WebGPU scene actually monitors its own performance by keeping track of its FPS rate in real time. To do so, I’ve written two separate classes: FPSWatcher, which records the average FPS over a given period of time, and QualityManager, which uses an FPSWatcher to set a current quality rating on a 0 to 10 scale based on the average FPS.

    This is what they look like:

    export interface FPSWatcherParams {
      updateDelay?: number;
      onWatch?: (averageFPS: number) => void;
    }
    
    export default class FPSWatcher {
      updateDelay: number;
      onWatch: (averageFPS: number) => void;
    
      frames: number[];
      lastTs: number;
      elapsedTime: number;
      average: number;
    
      constructor({
        updateDelay = 1000, // ms
        onWatch = () => {}, // callback called every ${updateDelay}ms
      }: FPSWatcherParams = {}) {
        this.updateDelay = updateDelay;
        this.onWatch = onWatch;
    
        this.frames = [];
    
        this.lastTs = performance.now();
        this.elapsedTime = 0;
    
        this.average = 0;
      }
    
      restart() {
        this.frames = [];
        this.elapsedTime = 0;
        this.lastTs = performance.now();
      }
    
      update() {
        const delta = performance.now() - this.lastTs;
        this.lastTs = performance.now();
        this.elapsedTime += delta;
    
        this.frames.push(delta);
    
        if (this.elapsedTime > this.updateDelay) {
          const framesTotal = this.frames.reduce((a, b) => a + b, 0);
    
          this.average = (this.frames.length * 1000) / framesTotal;
    
          this.frames = [];
          this.elapsedTime = 0;
    
          this.onWatch(this.average);
        }
      }
    }

    It’s very basic: I just record the elapsed time between two render calls, put that into an array and run a callback every updateDelay milliseconds with the latest FPS average value.
    It is then used by the QualityManager class, that does all the heavy lifting to assign an accurate current quality score:

    import type { FPSWatcherParams } from "./FPSWatcher";
    import FPSWatcher from "./FPSWatcher";
    
    export interface QualityManagerParams {
      label?: string;
      updateDelay?: FPSWatcherParams["updateDelay"];
      targetFPS?: number;
      onQualityChange?: (newQuality: number) => void;
    }
    
    export class QualityManager {
      label: string;
      fpsWatcher: FPSWatcher;
      targetFPS: number;
      #lastFPS: number | null;
      #active: boolean;
      onQualityChange: (newQuality: number) => void;
    
      quality: {
        current: number;
        min: number;
        max: number;
      };
    
      constructor({
        label = "Quality manager",
        updateDelay = 1000,
        targetFPS = 60,
        onQualityChange = (newQuality) => {},
      }: QualityManagerParams = {}) {
        this.label = label;
        this.onQualityChange = onQualityChange;
    
        this.quality = {
          min: 0,
          max: 10,
          current: 7,
        };
    
        this.#active = true;
    
        this.targetFPS = targetFPS;
        this.#lastFPS = null;
    
        this.fpsWatcher = new FPSWatcher({
          updateDelay,
          onWatch: (averageFPS) => this.onFPSWatcherUpdate(averageFPS),
        });
      }
    
      get active() {
        return this.#active;
      }
    
      set active(value: boolean) {
        if (!this.active && value) {
          this.fpsWatcher.restart();
        }
    
        this.#active = value;
      }
    
      onFPSWatcherUpdate(averageFPS = 0) {
        const lastFpsRatio = this.#lastFPS
          ? Math.round(averageFPS / this.#lastFPS)
          : 1;
        const fpsRatio = (averageFPS + lastFpsRatio) / this.targetFPS;
    
        // if fps ratio is over 0.95, we should increase
        // else we decrease
        const boostedFpsRatio = fpsRatio / 0.95;
    
        // smooth change multiplier avoid huge changes in quality
        // except if we've seen a big change from last FPS values
        const smoothChangeMultiplier = 0.5 * lastFpsRatio;
    
        // quality difference that should be applied (number with 2 decimals)
        const qualityDiff =
          Math.round((boostedFpsRatio - 1) * 100) * 0.1 * smoothChangeMultiplier;
    
        if (Math.abs(qualityDiff) > 0.25) {
          const newQuality = Math.min(
            Math.max(
              this.quality.current + Math.round(qualityDiff),
              this.quality.min
            ),
            this.quality.max
          );
    
          this.setCurrentQuality(newQuality);
        }
    
        this.#lastFPS = averageFPS;
      }
    
      setCurrentQuality(newQuality: number) {
        this.quality.current = newQuality;
        this.onQualityChange(this.quality.current);
      }
    
      update() {
        if (this.active) {
          this.fpsWatcher.update();
        }
      }
    }
    

    The most difficult part here is to smoothly handle the quality changes to avoid huge drops or gains in quality. You also don’t want to fall into a loop where, for example:

    1. The average FPS is poor, so you degrade your current quality.
    2. You detect a quality loss and therefore decide to switch off an important feature, such as shadow mapping.
    3. Removing the shadow mapping gives you an FPS boost, and after the expected delay the current quality is upgraded.
    4. You detect a quality gain, decide to re-enable shadow mapping and, soon enough, you’re back to step 1.

    Typically, the quality rating is used to update things such as the current pixel ratio of the scene, frame buffer resolutions, the number of shadow map PCF samples, volumetric raymarching steps and so on. In worst-case scenarios, it can even disable shadow mapping or post-processing effects.
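
    To make that more concrete, here is a purely hypothetical example of mapping the 0 to 10 score to actual settings (the real scenes each tweak their own parameters):

    // Hypothetical example: mapping the 0-10 quality score to concrete settings.
    // The parameter names are illustrative, not the actual scene options.
    interface SceneQualitySettings {
      pixelRatio: number;
      shadowPCFSamples: number;
      raymarchingSteps: number;
      useShadowMapping: boolean;
      usePostProcessing: boolean;
    }

    function qualityToSettings(quality: number): SceneQualitySettings {
      const t = quality / 10; // normalized 0..1

      return {
        pixelRatio: Math.min(window.devicePixelRatio, 0.75 + t * 1.25),
        shadowPCFSamples: quality >= 7 ? 9 : quality >= 4 ? 4 : 1,
        raymarchingSteps: Math.round(16 + t * 48),
        // Worst-case scenarios: drop the expensive features entirely
        useShadowMapping: quality >= 2,
        usePostProcessing: quality >= 1,
      };
    }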

    Accessibility

    Finally, the site had to respect at least the basic accessibility standards. I’m not an accessibility expert and I may have made a few mistakes here and there, but the key points are that the HTML is semantically correct, it is possible to navigate using the keyboard, and the prefers-reduced-motion preference is respected. I achieved that last point by entirely disabling the gamification concept for these users, removing every CSS and JavaScript animation, and making the scenes fall back to their 2D canvas versions, without any animation at all.
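
    The reduced motion detection itself is just a media query check, something along these lines (a simplified sketch, not the actual plugin code):

    // Simplified sketch: expose prefers-reduced-motion as a boolean and react to changes
    const reducedMotionQuery = window.matchMedia("(prefers-reduced-motion: reduce)");

    export let isReducedMotion = reducedMotionQuery.matches;

    reducedMotionQuery.addEventListener("change", (event) => {
      isReducedMotion = event.matches;
    });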

    Conclusion

    Well, it was a long journey, wasn’t it?

    Working on my portfolio these past 6 months has been a truly demanding task, technically but also emotionally. I still have a lot of self-doubt about the overall design, key UX choices or level of creativity. But I also honestly think it kind of sums up who I am, as a developer but also as a person. In the end, it’s probably what matters most.

    I hope that you’ve learnt a few things reading this case study, whether it be about technical stuff or my own creative process. Thank you all, and remember: stay fun!




  • Where Silence Speaks: Kakeru Taira on Transforming Everyday Spaces into Liminal Experiences



    In the vast field of digital art, few creators manage to transform the familiar into something quietly unsettling as convincingly as Kakeru Taira. Working primarily in Blender, the self-taught Japanese artist has gained international attention for his meticulously crafted liminal spaces — laundromats, apartments, train stations, bookstores — places that feel both intimately real and strangely out of reach.

    What makes his work remarkable is not only its technical precision but also the atmosphere it carries. These environments are steeped in silence and suggestion, capturing the in-between quality of spaces that are usually overlooked. They can feel nostalgic, eerie, or comforting, depending on the viewer — and that ambiguity is intentional. Taira resists defining his own works, believing that each person should encounter them freely, bringing their own memories, feelings, and interpretations.

    For our community of designers and developers, his work offers both inspiration and insight: into craft, persistence, and the power of detail. In this conversation, I spoke with Taira about his journey into 3D, the challenges of mastering Blender, his thoughts on liminal spaces, and his perspective on where CGI art is headed.

    For readers who may be discovering your work for the first time, how would you like to introduce yourself?

    Nice to meet you. My name is Kakeru Taira. I use Blender to create CG works with the theme of the “discomfort” and “eeriness” that lurk in everyday life. By adding a slight sense of distortion and unease to spaces that we would normally overlook, I aim to create works that stimulate the imagination of the viewer.

    If someone only saw one of your works to understand who you are, which would you choose and why?

    “An apartment where a man in his early twenties likely lives alone”

    https://www.youtube.com/watch?v=N4zHLdC1osI

    This work is set in a small apartment, a typical Japanese setting.

    I think even first-time viewers will enjoy my work, as it captures the atmosphere of Japanese living spaces, the clutter of objects, and the sense that something is lurking.

    You began with illustration before discovering Blender. What shifted in your way of thinking about space and composition when you moved into 3D?

    When I was drawing illustrations, I didn’t draw backgrounds or spaces, and instead focused mainly on female characters. My main concern was “how to make a person look attractive” within a single picture.

    However, since moving to 3DCG, I often don’t have a clear protagonist character. As a result, it has become necessary to draw the eye to the space itself and let the overall composition speak for the atmosphere.

    As a result, I now spend more time on elements that I hadn’t previously paid much attention to, such as “where to place objects” and “what kind of atmosphere to create with lighting.” I think the “elements that make a person look impressive” that I developed when drawing characters have now evolved into “a perspective that makes the space speak like a person.”

    When you spend long hours building a scene, how do you keep perspective on the overall atmosphere while working on small details?

    When I work, I am always conscious of whether the scene feels “pleasant” when viewed from the camera’s point of view. In my work, I place particular emphasis on arranging objects so that the viewer’s gaze converges toward the center, and on symmetry to create a balance between the left and right sides, in order to tighten up the overall scene.

    Your scenes often feel uncanny because of subtle details. Which kind of detail do you think has the greatest impact on atmosphere, even if most viewers might overlook it?

    In my works, I believe that elements such as the overall color, camera shake, and the “converging lines that converge at the center of the screen” created by the placement of objects have a particularly large influence on the atmosphere.

    Color dominates the impression of the entire space, while camera shake expresses the tension and desperation of the characters and the situation. By placing objects so that the viewer’s eyes naturally converge at the center, I devise a way for them to intuitively sense the overall atmosphere and eeriness of the scene, even if they are looking absentmindedly.

    Many of your works depict ordinary Japanese places. In your opinion, what makes these overlooked everyday spaces such powerful subjects for digital art?

    My works are set in ordinary Japanese spaces that are usually overlooked and no one pays any attention to them. It is precisely because they are overlooked that with just a little modification they have the power to create a different atmosphere and an extraordinary impression. I believe that by bringing out the subtle incongruity and atmosphere that lurks in the everyday through light, color and the placement of objects, it is possible to create a strong and memorable expression even in ordinary places.

    People outside Japan often feel nostalgia in your works, even if they’ve never experienced those locations. Why do you think these atmospheres can feel universally familiar?

    I believe the reason why people outside of Japan feel a sense of nostalgia when they see my works, even in places they’ve never been to, is largely due to the concept of “liminal space,” which has become a hot topic online. One thing my works have in common with liminal space is that, despite the fact that they are spaces where people are meant to come and go and be used, no people are visible on screen. At the same time, however, traces of people’s past, such as the scrapes on the floor and the presence of placed objects, float about, evoking a faint sense of life amid the silence.

    I believe that this “coexistence of absence and traces” stimulates memories that lie deep within the hearts of people of all countries. Even in places that have never been visited, an atmosphere that everyone has experienced at least once is evoked—a universal feeling that perhaps connects to nostalgia and familiarity.

    You’ve said you don’t want to define your works, leaving each viewer free to imagine. Why do you feel that openness is especially important in today’s fast, online culture?

    I believe that prioritizing speed alone would limit the expression I truly want to do, putting the cart before the horse. Of course, I want my work to reach as many people as possible, but I think what’s more important is to “first give form to the video I truly want to make.”

    On top of that, by leaving room for viewers to freely interpret it, I believe my work will not be bound by the times or trends, and will continue to have new meanings for each person. That’s why I feel there is value in being intentionally open, even in today’s fast-paced online culture.

    Working for weeks on a single piece requires persistence. What do you tell yourself in the moments when motivation is low?

    I love my own work, so my biggest motivation is the desire to see the finished product as soon as possible. Sometimes my motivation drops along the way, but each time that happens I tell myself that it will be interesting once it’s finished, and that I’ll be its first audience, and that helps me move forward.

    Creating something is a difficult process, but imagining the finished product naturally lifts my spirits, and I think that’s what allows me to persevere.

    Recently, you’ve shared works where you used Adobe Firefly to generate textures and experiment with new elements. How do you see AI fitting into your creative workflow alongside Blender?

    For me, using AI feels “similar to outsourcing”. For example, I leave detailed work that CG artists aren’t necessarily good at, such as creating textures for product packaging, to AI, as if I were asking a specialized artist. This allows me to focus on core aspects like composition and spatial design, which improves the overall finish and speed of the work.

    By combining modeling in Blender with assistance from AI, I can utilize the strengths of each to advance production, which is of great significance to my current workflow.

    Note: At Kakeru’s request, we’d like to clarify that Adobe Firefly’s learning data is based solely on Adobe Stock and copyright-free content. The tool was developed with copyright considerations in mind to ensure safe use. He asked us to share this so readers can better understand how Firefly is positioned in his workflow.

    You’ve mentioned that AI can speed up some tasks, like texture creation. In your view, which parts of your process should be efficient, and which should remain slow and deliberate?

    I can’t leave the core parts, such as designing the composition or developing the entire work, to AI, as these are the most important elements that reflect my own sense and narrative. On the other hand, I feel that processes such as creating textures and considering variations can be made more efficient by using AI.

    In other words, I value drawing the line between “taking my time carefully to decide the direction and atmosphere of the work” and “having AI help with repetitive tasks and auxiliary parts.” I believe that by being conscious of the balance between efficiency and deliberation, I can take advantage of the convenience of AI while also protecting the originality of my own expression.

    Some artists worry AI reduces originality. How do you approach using AI in a way that still keeps your signature atmosphere intact?

    I use AI solely as a “tool to assist my creation,” and I always make sure to come up with the core story and atmosphere of my work myself. If I become too dependent on AI, I won’t be able to truly say that my work is my own. Ultimately, humans are the main actors, and AI merely exists to make work more efficient and provide opportunities to draw out new ideas.

    For this reason, during the production process, I am always conscious of “at what stage and to what extent should I borrow the power of AI?” By prioritizing my own sense and expression while incorporating the strengths of AI in moderation, I believe I can expand the possibilities of new expression while retaining my own unique atmosphere in my work.

    Outside of Blender, are there experiences — in film, architecture, music, or daily routines — that you feel shape the way you design your environments?

    I am particularly drawn to the works of directors Yasujiro Ozu and Stanley Kubrick, where you can sense their passion for backgrounds and spatial design. Both directors have a very unique way of perceiving space, and even cutting out a portion of the screen has a sense of tension and beauty that makes it stand out as a “picture.” I have been greatly influenced by their approach, and in my own creations I aim to create “spaces that can be appreciated like a painting,” rather than just backgrounds.

    By incorporating the awareness of space I have learned from film works into my own CG expressions, I hope to be able to create a mysterious sense of depth and atmosphere even in everyday scenes.

    If you were giving advice to someone just starting with Blender, what would you say that goes beyond technical skill — about patience, mindset, or approach?

    One of Blender’s biggest strengths is that, unlike other CG software, it is free to start using. There are countless tutorials on YouTube, so you can learn at your own pace without spending money on training or learning. And the more you create, the more models you accumulate as your own assets, which can be motivating when you look back and see how much you’ve grown.

    Furthermore, when continuing your learning journey, it is important to adopt a patient and persistent attitude. At first, things may not go as planned, but the process of trial and error itself is valuable experience. Once you have completed a project, I also recommend sharing it on social media. Due to the influence of algorithms, it is difficult to predict which works will gain attention on social media today. Even a small challenge can catch the eye of many people and lead to unexpected connections or recognition. I hope that this content will be of some assistance to your creative endeavors.

    Step Into Kakeru’s Spaces

    Thank you, Kakeru, for sharing your journey and insights with us!

    Your ability to turn everyday spaces into something quietly profound reminds us of the power of detail, patience, and imagination in creative work. For those curious to experience his atmospheres firsthand, we invite you to explore Kakeru Taira’s works — they are pieces of digital art that blur the line between the familiar and the uncanny, and that might just stir memories you didn’t know you carried.

    Public bathroom
    Downtown diner

    Explore more of his works on X (Twitter), Instagram, TikTok and YouTube.

    I hope you found this interview inspiring. Which artist should I interview next? Let me know 🙂






  • A Deep Dive into the UNC6040 Cyber Attack



    Executive Summary

    In early June 2025, Google’s corporate Salesforce instance (used to store contact data for small- and medium-sized business clients) was compromised through a sophisticated vishing-extortion campaign orchestrated by the threat groups tracked as UNC6040 & UNC6240 (part of the online cybercrime collective known as “The Com” and linked to “ShinyHunters”).

    The attackers combined three core vectors:

    1. Voice‑phishing (vishing) – Impersonating IT staff in a convincing phone call to persuade a Google employee to approve a malicious application connected to Salesforce, followed by a rapid‑reply extortion scheme demanding Bitcoin payment within 72 hours.
    2. OAuth app abuse – the deployment of custom Python scripts that emulate Salesforce’s DataLoader, allowing automated bulk exports.
    3. Anonymity layers – Mullvad VPN‑initiated calls followed by TOR‑based data exfiltration, which anonymized the actors’ true location.

    Though Google confirmed that no user passwords were stolen, the breached dataset included business names, email addresses, phone numbers and related notes. The implications reach far beyond the affected small and medium business customers, touching compliance, brand integrity, partner security, and regulatory scrutiny of SaaS risk management practices.

    Meanwhile, the Salesloft Drift attack orchestrated by UNC6395 has emerged as one of the most significant cyber incidents of late 2025, compromising the Salesloft Drift AI chatbot/assistant used for its Salesforce integration. The theft of OAuth tokens appears to have allowed the actor to run SOQL queries against Salesforce databases that held objects such as cases, accounts, users and opportunities. The attack affected hundreds of Salesforce customers, impacting not just Salesforce users but also other third-party integrations. Salesloft said: “Initial findings have shown that the actor’s primary objective was to steal credentials, specifically focusing on sensitive information like AWS access keys, passwords and Snowflake-related access tokens”. Google explicitly warned of the breach’s extensive scope beyond its own systems.

    Primary Tactics & Attack Vectors:

    • Initial Access: Unauthorized OAuth apps installed via trial accounts (using legitimate email domains) and later via compromised accounts from unrelated orgs.
    • Vishing / Social Engineering: Voice phishing calls to employees.
    • Exfiltration: Custom Python scripts that replicate DataLoader operations.
    • Infrastructure: Initial calls routed via Mullvad VPN IPs; data transfer via TOR exit nodes.
    • Extortion: Requesting immediate Bitcoin payment.

    Threat Attribution

    UNC5537, UNC6040 & UNC6240 are likely linked with “Scattered LAPSUS$ Hunters” (a chaotic hub for leaks and threats), which exhibits similar attack patterns.

    A Telegram channel called “Scattered LAPSUS$ Hunters”, blending the names of the ShinyHunters, Scattered Spider and Lapsus$ groups, emerged; researchers describe it as a chaotic hub for leaks and threats. The group focuses on exploiting the human element to gain access to company networks. The channel ran public polls where members voted on which victim’s data to fully dump, and advertised zero-day exploits and a supposed new ransomware toolkit, touting the collective’s actions.

    GOOGLE - SALESFORCE BREACH

    UNC6395 shared the theme of abusing OAuth mechanisms for Salesforce access via a compromised third-party integration, evolving these tactics against cloud ecosystems. Meanwhile, UNC6040 uses vishing and OAuth abuse to access Salesforce through social engineering. Overlapping TTPs indicate the targeting of trusted access applications, and the name ShinyHunters appears across these incidents. At the same time, although Google tracks this cluster separately as UNC6395, the ShinyHunters extortion group initially told BleepingComputer that they were behind the Salesloft Drift attack.

    Parallel Campaigns

    Similar tactics were applied in attacks targeting Adidas, Qantas, Allianz Life, LVMH brands (Louis Vuitton, Dior, Tiffany & Co.), Chanel, AT&T, Santander, Starbucks Singapore, the Snowflake breach at Ticketmaster, Cisco, Pandora, Bouygues Telecom, Tokopedia, Homechef, Chatbooks, Portail Orange, Farmers Insurance, TransUnion, UK Legal Aid Agency, Gucci, Salesforce, Fairhaven Homes, Workday, Mazars.fr, Air France-KLM, Phantom Wallet, Neiman Marcus, Coca-Cola, ZScaler.

    • Qantas Airways: Employee credentials & sensitive flight/customer records targeted. Attack blended SIM swapping + SaaS compromise.
    • Air France-KLM: Airline loyalty accounts and CRM cloud environment probed.
    • Retailers (generalized set) → Used social engineering and SIM-swap vishing to gain access to IT/helpdesk portals.
    • Okta: Service provider breach led to downstream impact on multiple clients (identity federation exploited).
    • MGM Resorts: Social engineering of IT desk led to ransomware deployment, slot machines & hotel services down for days.
    • Caesars Entertainment: Extortion campaign where ransom was allegedly paid; loyalty program records got leaked.
    • AT&T: Call metadata (500M+ records, including phone numbers, call/SMS logs) stolen and advertised for sale.
    • Ticketmaster (Live Nation): ~560M customer records including event ticketing details, addresses, payment info leaked.
    • Advance Auto Parts: Data set of supply chain and retail customer info stolen.
    • Santander Bank: Customer financial records compromised; reported 30M records affected.
    • LendingTree: Customer PII and loan data exposed.
    • Neiman Marcus: Customer loyalty and credit program data targeted.
    • Los Angeles Unified School District (LAUSD): Student/employee data exfiltrated from Snowflake environment.
    • Pandora, Adidas, LVMH (Louis Vuitton, Chanel, Dior): Retail brand data exposed (customer PII + sales info).
    • ZScaler: UNC6395 compromised its Salesforce instance through Salesloft Drift and stole customer data.

    With the compromise of the Salesloft Drift OAuth tokens, any data exfiltrated from the affected databases (holding information on users, accounts, cases, etc.) can be used by the attacker in various ways. The stolen data could be sold to third parties, used to access email (as reported for a very small number of Google Workspace accounts), or leveraged to launch further credential-reuse attacks on other SaaS accounts.

    Indicators of Compromise:

    UNC6040, UNC6240 & UNC6395

    IP addresses:
    81.17.28.95, 31.133.0.210, 45.138.16.69, 45.90.185.109, 45.141.215.19, 45.90.185.115, 45.90.185.107, 37.114.50.27, 45.90.185.118, 179.43.159.201, 38.135.24.30, 91.199.42.164, 192.159.99.74, 208.68.36.90, 44.215.108.109, 154.41.95.2, 176.65.149.100, 179.43.159.198, 185.130.47.58, 185.207.107.130, 185.220.101.133, 185.220.101.143, 185.220.101.164, 185.220.101.167, 185.220.101.169, 185.220.101.180, 185.220.101.185, 185.220.101.33, 192.42.116.179, 192.42.116.20, 194.15.36.117, 195.47.238.178, 195.47.238.83

    Email addresses:
    shinycorp@tuta[.]com, shinygroup@tuta[.]com, shinyhuntersgroups@tutamail[.]com

    Domains:
    ticket-dior[.]com, ticket-nike[.]com, ticket-audemarspiguet[.]com

    User agents:
    Salesforce-Multi-Org-Fetcher/1.0, Salesforce-CLI/1.0, python-requests/2.32.4, Python/3.11 aiohttp/3.12.15

    In both campaigns, Google observed TOR exit nodes being used to access compromised Salesforce accounts.

    • The majority of attacks orchestrated by UNC6040 and UNC6240 (ShinyHunters) could be traced to TOR exit nodes hosted in the Netherlands or Poland, primarily at Macarne or Private Layer INC.
    • Attackers blended TOR traffic with legitimate OAuth sessions to obscure their origin and make detection harder. Attacks orchestrated by UNC6395 could be traced to TOR exit nodes hosted in Germany or the Netherlands, primarily at Stiftung Erneuerbare Freiheit.
    • Many suspicious SOQL queries (data exfiltration) and deletions of scheduled jobs were initiated from TOR IP addresses, indicating adversaries were anonymizing data theft operations.

    Similarly, Scattered Spider used TOR exit IPs as a cover for account takeovers and extortion activity.

    • Attackers combined vishing (helpdesk calls) with credential access, then routed subsequent access through Tor.
    • Tor traffic was especially noted when adversaries escalated privileges or accessed sensitive SaaS applications.
    • Exit nodes were Europe-heavy, with a notable U.S. presence.

    Common Threads Across Both Campaigns

    • TOR IPs were consistently used as operational cover to hide adversary infrastructure.
    • Identity-based intrusions: both groups abused identity trust rather than exploiting zero-days.
    • Overlap with Scattered Spider tradecraft: both campaigns show attackers mixing social engineering or stolen credentials with TOR.
    • The TOR exit nodes sit in different ASNs, but both campaigns leverage NL exit nodes, and ASN 58087 (Florian Kolb, DE) overlaps across both campaigns.

    Threat Landscape

    Threat actors such as UNC6040 (ShinyHunters-affiliated), Scattered Spider (UNC3944), and UNC5537 have targeted organizations in the hospitality, retail, and education sectors in the Americas and Europe.

    Scattered Spider (UNC3944) is known for sophistication and stealth:

    • Routinely uses commercial VPN services to mask origin: Mullvad VPN, ExpressVPN, NordVPN, Ultrasurf, Easy VPN, ZenMate.
    • Employs Tools and TTPs including disabling Antivirus/EDR, lateral movement via ADRecon, credential dumping with Mimikatz/LaZagne, and persistence via RMM and cloud VMs.

    “The Com”, short for The Community, is less a formal hacking group and more a sociopathic cybercriminal subculture:

    • Comprises 1,000+ members, mostly aged 11–25, operating across Canada, the U.S., and the U.K.
    • Engages in SIM swapping, cryptocurrency theft, swatting, sextortion, spear-phishing, and even extreme coercion or violence.
    • Intel471 reports that members are recruited via social media/gaming and coerced into crimes ranging from grooming to violent acts; the network has also issued a manual (“The Bible”) detailing techniques such as ATM skimming, IP grabbing, doxxing, extortion, and grooming.
    Source: DHS’s Joint Regional Intelligence Center and the Central California Intelligence Center

    UNC5537 orchestrated a large-scale breach targeting Snowflake customer environments:

    • In April–June 2024, it accessed over 160 organizations including AT&T, Ticketmaster/Live Nation, Santander, Advance Auto Parts, LendingTree, Neiman Marcus, and LA Unified School District – via stolen credentials, often sourced from infostealers, and aided by a lack of MFA.
    • Data stolen included sensitive PII, event tickets, DEA numbers, and call/text metadata (500M+ records in aggregate).
    • Targets were later advertised and extorted through forums.

    DataBreaches.net received screenshots of a Telegram message from ShinyHunters claiming to outpace law enforcement, mocking the capabilities of agencies like the NSA and stating: “Even the NSA can’t stop or identify us anymore. The FBI… is irrelevant and incompetent…”. In conversation, “Shiny” asserted that Scattered Spider sources the voice calls and shares access, and hinted at a future “Snowflake 3.0” campaign, promising even greater operations ahead.

    Source: DataBreaches.Net

    Cross-Actor Victim Overlaps

    • Cloud SaaS as a hub: Salesforce (UNC6040), Okta (Scattered Spider), and Snowflake (UNC5537) breaches show pivot via cloud identity/data platforms.
    • Retail & hospitality: Multiple actors target customer/loyalty records
      • Scattered Spider targeted casinos.
      • UNC6040 targeted retailers.
      • UNC5537 targeted luxury brands.
    • Education: UNC6040 and UNC5537 both hit educational institutions, stealing student/faculty data.
    • Financial institutions: Santander (UNC5537) vs smaller fintech/payment targets by The Com/Scattered Spider (SIM swaps).

    Detection & Monitoring Guidance

    Additional indicators and associated detection rules for these threat groups are made available through STI and SMAP.

    What we recommend

    • Monitoring Logs
      Continuously scan for LOGIN events from unfamiliar IP ranges (especially Mullvad or TOR exit nodes), and flag any API activity exhibiting an unusually high volume of requests per hour; a minimal sketch of this kind of check is shown after this list.
    • OAuth App Watch‑list
      Maintain a dynamic registry of approved apps. Trigger alerts on new or anomalous app registrations. Enforce a mandatory admin sign‑off workflow. The detection rule below is an example of detecting suspicious sign-in events involving OAuth 2.0:
      `SigninLogs | where ResultType == "0" | where AuthenticationDetails has "OAuth:2.0" | where AppDisplayName startswith "Salesforce" | summarize count() by UserPrincipalName, AppDisplayName, IPAddress | where count_ > 5`
    • Vishing Detection
      Implement caller‑ID verification, deploy voice‑analytics modules that detect key phrases (e.g., “please pay”, “this is Google”) and cross‑reference against known threat‑intelligence feeds. Integrate with your call‑center platform to surface suspicious calls in real time.
    • Network Traffic Analysis
      Inspect outbound traffic for TOR exit nodes and VPN tunnels that deviate from corporate baselines. Use DPI to spot unusually large, encrypted payloads.
    • Threat‑Intelligence Feeds
      Subscribe to the latest ATT&CK and IOC updates for UNC6040/ShinyHunters. Monitor public Telegram channels for freshly disclosed IOCs.
    • Zero‑Trust IAM to reduce credential‑compromise impact
      MFA, least‑privilege, RBAC for all Salesforce users.
    • OAuth App Governance to stop rogue app installations
      Manual approval + periodic review
    • IP‑Based Restrictions to limit exfiltration paths
      Allow only corporate VPN IPs; block TOR exits
    • Endpoint Security to stop malicious code execution
      EDR to detect custom Python scripts
    • Call‑Center Hardening to mitigate human‑facing social engineering
      Caller‑ID verification, recorded scripts, staff training
    • Data Loss Prevention to detect anomalous data movements
      DLP on outbound exports (volume limits + alerts)
    • Strategic Initiative: SaaS Posture Management – continuous inventory & policy enforcement for third‑party integrations. Early rogue‑app detection is our key takeaway.
    • Revoke and rotate tokens/credentials: Immediately revoke OAuth tokens tied to Salesloft Drift and reset all exposed API keys.
    • Audit activity logs: Review SOQL queries and job deletions between Aug 8–18, 2025 for suspicious access.
    • Limit OAuth permissions: Enforce least privilege, review app scopes regularly, and tighten approval workflows.
    • Govern tokens: Ensure short-lived tokens, track their use, and revoke unused ones.
    • Secure stored credentials: Move AWS keys, Snowflake tokens, and other secrets out of Salesforce objects into vaults.
    • Enhance monitoring: Use UEBA to detect unusual SaaS behavior and consolidate logs across Salesforce, identity providers, and third-party apps.
    • Restrict integrations: Apply IP/network restrictions and remove untrusted apps until validated.
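
    As a purely illustrative companion to the “Monitoring Logs” recommendation above, here is a minimal TypeScript sketch (our own, not taken from any vendor advisory) that flags Salesforce login events whose source IP appears in a block list, such as the IoC IPs listed earlier or a TOR exit-node feed. The event shape (userName, sourceIp, eventDate) and the way the events are obtained are assumptions; adapt them to your SIEM or Salesforce Event Monitoring export.

    // detect-suspicious-logins.ts – minimal sketch, not production code.
    // Flags LOGIN events whose source IP appears in a block list
    // (e.g. TOR exit nodes or the IoC IPs published for this campaign).
    interface LoginEvent {
      userName: string;  // assumed field names; adjust to your log schema
      sourceIp: string;
      eventDate: string;
    }

    // A few IoC IPs from this report; extend with a full TOR exit-node feed.
    const BLOCK_LIST = new Set<string>([
      '81.17.28.95',
      '185.220.101.33',
      '192.42.116.20',
    ]);

    export function flagSuspiciousLogins(events: LoginEvent[]): LoginEvent[] {
      return events.filter((e) => BLOCK_LIST.has(e.sourceIp));
    }

    // Example usage with a hypothetical export of login events.
    const sample: LoginEvent[] = [
      { userName: 'alice@example.com', sourceIp: '203.0.113.10', eventDate: '2025-08-10T08:00:00Z' },
      { userName: 'bob@example.com', sourceIp: '185.220.101.33', eventDate: '2025-08-10T08:05:00Z' },
    ];

    for (const hit of flagSuspiciousLogins(sample)) {
      console.warn(`Suspicious login: ${hit.userName} from ${hit.sourceIp} at ${hit.eventDate}`);
    }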

    Strategic Outlook

    • TTP Evolution – The ShinyHunters group hints at a potential pivot towards ransomware‑as‑a‑service (ShinySP1D3R).
    • Broader Targeting – High‑profile brands (Adidas, Qantas, Chanel, etc.) demonstrate that the same methodology can be scaled.
    • Regulatory Momentum – Expect stricter SaaS risk‑management mandates, amplifying the need for proactive controls.
    • Attribution Difficulty – Continued use of VPN/TOR & compromised third‑party accounts will heighten detection complexity; behavioral analytics will become indispensable.

    Final Note from Our Research Team

    The Google Salesforce breach is a textbook illustration of how modern threat actors blend technical supply‑chain exploitation with fast‑turnover social engineering. For organizations that rely on cloud‑native platforms, we see a critical need to:

    • Revisit SaaS integration policies – treat every third‑party app as a potential attack vector.
    • Strengthen human‑facing security – call‑center hardening and real‑time vishing detection should become a standard part of the security stack.
    • Adopt a data‑centric risk perspective – even smaller datasets can fuel large-scale phishing campaigns.

    Our threat‑intelligence platform continues to actively monitor the ShinyHunters/Tor‑Mullvad threat chain and will update clients with emerging IOCs and risk indicators. We encourage you to integrate these insights into your defensive posture and to collaborate with our team for a tailored, intelligence‑driven response.

    Conclusion

    The Google internal Salesforce breach orchestrated by UNC6040 (“ShinyHunters”) underscores critical vulnerabilities in modern SaaS environments. The attack demonstrates that even data traditionally considered “low-sensitivity” can be weaponized for targeted phishing and extortion schemes, while also posing significant regulatory, reputational, operational, and financial risks. Organizations must adopt robust Identity & Access Management controls, enforce strict OAuth governance, and integrate comprehensive monitoring to mitigate evolving threats.

    The UNC6395 campaign highlights how third-party OAuth integrations can undermine SaaS security. By abusing trusted tokens, attackers bypassed MFA and exfiltrated sensitive data from hundreds of organizations. This attack reinforces that SaaS ecosystems, not just core apps, are prime targets. Strong governance over OAuth apps, token lifecycles, and SaaS behaviors is critical to reducing risk. Proactive monitoring, least privilege, and credential hygiene are essential to defending against token-based intrusions like this.

     

    Authors

    Deepak Thomas Philip

    Kartikkumar Jivani

    Sathwik Ram Prakki

    Subhajeet Singha

    Rhishav Kanjilal

    Shayak Tarafdar



    Source link

  • Hello Robo’s Rebrand: Distilling Complex Tech Into Interfaces Anyone Can Use

    Hello Robo’s Rebrand: Distilling Complex Tech Into Interfaces Anyone Can Use



    Hello Robo is a New York based digital product design agency that turns complex technology into intuitive, usable interfaces. We work with forward-thinking teams to create market-ready digital products that are easy to use and hard to ignore.

    Earlier this year, the design team at Hello Robo decided to update our brand and website to speak the language of our current clients — AI, space, aviation, and robotics — after realizing the old, “startup-y” look sold us short.

    The new design and copy showcase our ability to tame complex systems with clear thinking and precise interfaces, signaling to deep-tech teams that we understand their world and can make their products make sense.

    We wanted our site to do only two things, but do them well:

    • Have a design language that appeals to our existing and new target clients.
    • Since most of our work cannot be shared publicly, let design, motion and interaction give visitors a sense of what we are great at.

    Research

    Before sketching a single screen, our design lead on this project, Daria Krauskopf, did what we do before starting any project at Hello Robo: she talked with our customers. We asked every existing client two questions:

    1. What do you think we do?
    2. What’s one thing you think we’re absolutely great at?

    The replies were almost word-for-word:

    “You do excellent product design—not crazy, unachievable vision design, and not MVPs either. You’re absolutely great at taking complex, technical systems and turning them into beautiful interfaces that our users actually love to use.”

    That became the foundation for how we approached the new site.

    Design & Art Direction

    We love robots—and robotics inspires everything we do. For the new site, we moved away from soft colors and rounded corners and leaned into a more hi-tech visual language: dark backgrounds, thin lines, sharper shapes. Daria wanted the design to feel more precise, more engineered—something that would resonate with the kind of clients we work with in aviation, robotics, and defense. Every visual choice was about clarity, control, and intention.

    A few boards from Hello Robo’s new brand, reimagined by our designer Hanna Shpak

    Animation and Interaction

    All of our interface work is rooted in interaction and motion—because real-world products aren’t static. They always change and respond to user input and actions. We wanted the site to reflect that. Not with flashy effects or distracting transitions, but with just enough subtle animation to guide, respond, and feel alive. Everything moves with purpose—quiet, responsive, and smooth.

    Case Studies

    We didn’t want our case studies to be just a scroll of pretty images. Each one is built as a story—showing not just what we made, but how it worked and why it mattered. We walk through key features, the thinking behind UX decisions, and the problems we solved for each client. It’s less about showing off visuals, and more about showing how we think.

    Final words

    In the end, we got what we set out to build: a clearer visual and verbal language that reflects who we are and who we work with. The site feels more aligned with the complexity and ambition of our clients—and with the way we approach design: thoughtful, precise, and grounded in real product work. It’s not trying to impress with noise. It’s built to resonate with the kind of teams who care about clarity, systems, and getting things right.

    Credits

    Web designer: Daria Krauskopf

    Brand design: Hanna Shpak

    UX design: Vlad Duhnov

    Webflow development: Miron Umantsev

    Design director: Shakir Dzheyranov



    Source link

  • Integrating Rive into a React Project: Behind the Scenes of Valley Adventures

    Integrating Rive into a React Project: Behind the Scenes of Valley Adventures


    Bringing new tools into a workflow is always exciting—curiosity bumps up against the comfort of familiar methods. But when our longtime client, Chumbi Valley, came to us with their Valley Adventures project, we saw the perfect opportunity to experiment with Rive and craft cartoon-style animations that matched the playful spirit of the brand.

    Rive is a powerful real-time interactive design tool with built-in support for interactivity through State Machines. In this guide, we’ll walk you through how we integrated a .riv file into a React environment and added mouse-responsive animations.

    We’ll also walk through a modernized integration method using Rive’s newer Data Binding feature—our current preferred approach for achieving the same animation with less complexity and greater flexibility.

    Animation Concept & File Preparation

    Valley Adventures is a gamified Chumbi NFT staking program, where magical creatures called Chumbi inhabit an enchanted world. The visual direction leans heavily into fairytale book illustrations—vibrant colors, playful characters, and a whimsical, cartoon-like aesthetic.

    To immediately immerse users in this world, we went with a full-section hero animation on the landing page. We split the animation into two parts:

    • an idle animation that brings the scene to life;
    • a cursor-triggered parallax effect, adding depth and interactivity.

    Several elements animate simultaneously—background layers like rustling leaves and flickering fireflies, along with foreground characters that react to movement. The result is a dynamic, storybook-like experience that invites users to explore.

    The most interesting—and trickiest—part of the integration was tying animations to mouse tracking. Rive provides a built-in way to handle this: by applying constraints with varying strengths to elements within a group that’s linked to Mouse Tracking, which itself responds to the cursor’s position.

    However, we encountered a limitation with this approach: the HTML buttons layered above the Rive asset were blocking the hover state, preventing it from triggering the animation beneath.

    To work around this, we used a more robust method that gave us finer control and avoided those problems altogether. 

    Here’s how we approached it:

    1. Create four separate timelines, each with a single keyframe representing an extreme position of the animation group:
      • Far left
      • Far right
      • Top
      • Bottom
    2. Add two animation layers, each responsible for blending between opposite keyframes:
      • Layer 1 blends the far-left and far-right timelines
      • Layer 2 blends the top and bottom timelines
    3. Tie each layer’s blend amount to a numeric input—one for the X axis, one for the Y axis.

    By adjusting the values of these inputs based on the cursor’s position, you can control how tightly the animation responds on each axis. This approach gives you a smoother, more customizable parallax effect—and prevents unexpected behavior caused by overlapping UI.

    Once the animation is ready, simply export it as a .riv file—and leave the rest of the magic to the devs.

    How We Did It: Integrating a Rive File into a React Project

    Before we dive further, let’s clarify what a .riv file actually is.

    A .riv file is the export format from the Rive editor. It can include:

    • vector graphics,
    • timeline animations,
    • a State Machine with input parameters.

    In our case, we’re using a State Machine with two numeric inputs: Axis_X and Axis_Y. These inputs are tied to how we control animation in Rive, using values from the X and Y axes of the cursor’s position.

    These inputs drive the movement of different elements—like the swaying leaves, fluttering fireflies, and even subtle character reactions—creating a smooth, interactive experience that responds to the user’s mouse.

    Step-by-Step Integration

    Step 1: Install the Rive React runtime

    Install the official package:

    npm install @rive-app/react-canvas

    Step 2: Create an Animation Component

    Create a component called RiveBackground.tsx to handle loading and rendering the animation.

    Step 3: Connect animation

    const { rive, setCanvasRef, setContainerRef } = useRive({
      src: 'https://cdn.rive.app/animations/hero.riv',
      autoplay: true,
      layout: new Layout({ fit: Fit.Cover, alignment: Alignment.Center }),
      onLoad: () => setIsLoaded(true),
      enableRiveAssetCDN: true,
    });
    

    For a better understanding, let’s take a closer look at each prop you’ll typically use when working with Rive in React:

    What each option does:

    Property Description
    src Path to your .riv file — can be local or hosted via CDN
    autoplay Automatically starts the animation once it’s loaded
    layout Controls how the animation fits into the canvas (we’re using Cover and Center)
    onLoad Callback that fires when the animation is ready — useful for setting isLoaded
    enableRiveAssetCDN Allows loading of external assets (like fonts or textures) from Rive’s CDN

    Step 4: Connect State Machine Inputs

    const numX = useStateMachineInput(rive, 'State Machine 1', 'Axis_X', 0);
    const numY = useStateMachineInput(rive, 'State Machine 1', 'Axis_Y', 0);

    This setup connects directly to the input values defined inside the State Machine, allowing us to update them dynamically in response to user interaction.

    • State Machine 1 — the name of your State Machine, exactly as defined in the Rive editor
    • Axis_X and Axis_Y — numeric inputs that control movement based on cursor position
    • 0 — the initial (default) value for each input

    ☝️ Important: Make sure your .riv file includes the exact names: Axis_X, Axis_Y, and State Machine 1. These must match what’s defined in the Rive editor — otherwise, the animation won’t respond as expected.

    Step 5: Handle Mouse Movement

    useEffect(() => {
      if (!numX || !numY) return;
    
      const handleMouseMove = (e: MouseEvent) => {
        const { innerWidth, innerHeight } = window;
        numX.value = (e.clientX / innerWidth) * 100;
        numY.value = 100 - (e.clientY / innerHeight) * 100;
      };
    
      window.addEventListener('mousemove', handleMouseMove);
      return () => window.removeEventListener('mousemove', handleMouseMove);
    }, [numX, numY]);

    What’s happening here:

    • We use clientX and clientY to track the mouse position within the browser window.
    • The values are normalized to a 0–100 range, matching what the animation expects.
    • These normalized values are then passed to the Axis_X and Axis_Y inputs in the Rive State Machine, driving the interactive animation.

    ⚠️ Important: Always remember to remove the event listener when the component unmounts to avoid memory leaks and unwanted behavior. 

    Step 6: Cleanup and Render the Component

    useEffect(() => {
      return () => rive?.cleanup();
    }, [rive]);

    And the render:

    return (
      <div
        ref={setContainerRef}
        className={`rive-container ${className ?? ''} ${isLoaded ? 'show' : 'hide'}`}
      >
        <canvas ref={setCanvasRef} />
      </div>
    );
    • cleanup() — frees up resources when the component unmounts. Always call this to prevent memory leaks.
    • setCanvasRef and setContainerRef — these must be connected to the correct DOM elements in order for Rive to render the animation properly.

    And here’s the complete code:

    import {
      useRive,
      useStateMachineInput,
      Layout,
      Fit,
      Alignment,
    } from '@rive-app/react-canvas';
    import { useEffect, useState } from 'react';
    
    export function RiveBackground({ className }: { className?: string }) {
      const [isLoaded, setIsLoaded] = useState(false);
    
      const { rive, setCanvasRef, setContainerRef } = useRive({
        src: 'https://cdn.rive.app/animations/hero.riv',
        animations: ['State Machine 1', 'Timeline 1', 'Timeline 2'],
        autoplay: true,
        layout: new Layout({ fit: Fit.Cover, alignment: Alignment.Center }),
        onLoad: () => setIsLoaded(true),
        enableRiveAssetCDN: true,
      });
    
      const numX = useStateMachineInput(rive, 'State Machine 1', 'Axis_X', 0);
      const numY = useStateMachineInput(rive, 'State Machine 1', 'Axis_Y', 0);
    
      useEffect(() => {
        if (!numX || !numY) return;
    
        const handleMouseMove = (e: MouseEvent) => {
          if (!numX || !numY) {
            return;
          }
    
          const { innerWidth, innerHeight } = window;
          numX.value = (e.clientX / innerWidth) * 100;
          numY.value = 100 - (e.clientY / innerHeight) * 100;
        };
    
        window.addEventListener('mousemove', handleMouseMove);
        return () => window.removeEventListener('mousemove', handleMouseMove);
      }, [numX, numY]);
    
      useEffect(() => {
        return () => {
          rive?.cleanup();
        };
      }, [rive]);
    
      return (
        <div
          ref={setContainerRef}
          className={`rive-container ${className ?? ''} ${isLoaded ? 'show' : 'hide'}`}
        >
          <canvas ref={setCanvasRef} />
        </div>
      );
    }
    

    Step 7: Use the Component

    Now you can use the RiveBackground like any other component:

    <RiveBackground className="hero-background" />

    Step 8: Preload the WASM File

    To avoid loading the .wasm file at runtime—which can delay the initial render—you can preload it in App.tsx:

    import riveWASMResource from '@rive-app/canvas/rive.wasm';
    
    <link
      rel="preload"
      href={riveWASMResource}
      as="fetch"
      crossOrigin="anonymous"
    />

    This is especially useful if you’re optimizing for first paint or overall performance.

    Simple Parallax: A New Approach with Data Binding

    In the first part of this article, we used a classic approach with a State Machine to create the parallax animation in Rive. We built four separate animations (top, bottom, left, right), controlled them using input variables, and blended their states to create smooth motion. This method made sense at the time, especially before Data Binding support was introduced.

    But now that Data Binding is available in Rive, achieving the same effect is much simpler—just a few steps. Data binding in Rive is a system that connects editor elements to dynamic data and code via view models, enabling reactive, runtime-driven updates and interactions between design and development.

    In this section, we’ll show how to refactor the original Rive file and code using the new approach.

    Updating the Rive File

    1. Remove the old setup:
      • Go to the State Machine.
      • Delete the input variables: top, bottom, left, right.
      • Remove the blending states and their associated animations.
    2. Group the parallax layers:
      • Wrap all the parallax layers into a new group—e.g., ParallaxGroup.
    3. Create binding parameters:
      • Select ParallaxGroup and add:
        • pointerX (Number)
        • pointerY (Number)
    4. Bind coordinates:
      • In the properties panel, set:
        • X → pointerX
        • Y → pointerY

    Now the group will move dynamically based on values passed from JavaScript.

    The Updated JS Code

    Before we dive into the updated JavaScript, let’s quickly define an important concept:

    When using Data Binding in Rive, viewModelInstance refers to the runtime object that links your Rive file’s bindable properties (like pointerX or pointerY) to your app’s logic. In the Rive editor, you assign these properties to elements like positions, scales, or rotations. At runtime, your code accesses and updates them through the viewModelInstance—allowing for real-time, declarative control without needing a State Machine.

    With that in mind, here’s how the new setup replaces the old input-driven logic:

    import { useRive } from '@rive-app/react-canvas';
    import { useEffect, useState } from 'react';
    
    export function ParallaxEffect({ className }: { className?: string }) {
      const [isLoaded, setIsLoaded] = useState(false);
    
      const { rive, setCanvasRef, setContainerRef } = useRive({
        src: 'https://cdn.rive.app/animations/hero.riv',
        autoplay: true,
        autoBind: true,
        onLoad: () => setIsLoaded(true),
      });
    
      useEffect(() => {
        if (!rive) return;
    
        const vmi = rive.viewModelInstance;
        const pointerX = vmi?.number('pointerX');
        const pointerY = vmi?.number('pointerY');
    
        if (!pointerX || !pointerY) return;
    
        const handleMouseMove = (e: MouseEvent) => {
          const { innerWidth, innerHeight } = window;
          const x = (e.clientX / innerWidth) * 100;
          const y = 100 - (e.clientY / innerHeight) * 100;
          pointerX.value = x;
          pointerY.value = y;
        };
    
        window.addEventListener('mousemove', handleMouseMove);
    
        return () => {
          window.removeEventListener('mousemove', handleMouseMove);
          rive.cleanup();
        };
      }, [rive]);
    
      return (
        <div
          ref={setContainerRef}
          className={`rive-container ${className ?? ''} ${isLoaded ? 'show' : 'hide'}`}
        >
          <canvas ref={setCanvasRef} />
        </div>
      );
    }

    The Result

    You get the same parallax effect, but:

    • without input variables or blending;
    • without a State Machine;
    • with simple control via the ViewModel.

    Official Live Example from Rive

    👉 CodeSandbox: Data Binding Parallax

    Conclusion

    Data Binding is a major step forward for interactive Rive animations. Effects like parallax can now be set up faster, more reliably, and with cleaner logic. We strongly recommend this approach for new projects.

    Final Thoughts

    So why did we choose Rive over Lottie for this project?

    • Interactivity: With Lottie, achieving the same level of interactivity would’ve required building a custom logic layer from scratch. With Rive, we got that behavior baked into the file—plug and play.
    • Optimization: Rive gives you more control over each asset inside the .riv file, and the output tends to be lighter overall.

    Our biggest takeaway? Don’t be afraid to experiment with new tools—especially when they feel like the right fit for your project’s concept. Rive matched the playful, interactive vibe of Valley Adventures perfectly, and we’re excited to keep exploring what it can do.



    Source link

  • Integrating Psychology into Software Development | by Ulas Can Cengiz


    Nov 10, 2023

    Photo by Bret Kavanaugh on Unsplash

    Imagine sitting down at your desk to untangle a particularly complex piece of software code. Your eyes scan lines packed with logical operations and function calls. Somewhere in this intricate weave, a bug lurks, derailing the application’s performance. This scenario, familiar to many developers, isn’t just a test of technical skill; it’s a psychological challenge. The frustration and cognitive fatigue that often accompany such tasks can cloud judgment and prolong resolution. It’s in moments like these that the intersection of psychology and software development comes into sharp focus.

    Cognitive load theory, originally applied to educational psychology, has profound implications for managing complexity in software projects. It posits that our working memory has a limited capacity for processing new information. In the context of software development, this translates to the need for clean, readable code and well-architected systems that minimize the cognitive load on developers. By understanding and applying this theory, we can create development environments that reduce unnecessary complexity and allow developers to allocate their cognitive resources…



    Source link

  • Turning Music Into Motion: The Making of the 24/7 Artists Launch Page

    Turning Music Into Motion: The Making of the 24/7 Artists Launch Page


    In this article, we’ll explore the behind-the-scenes process of how Waaark brought 24/7 Artists’ new product launch landing page to life. See how creative vision, design, and development came together to shape the final result.

    Brief

    24/7 Artists reached out after discovering our work on AW Portfolio. They came to us with a clear challenge: help them break through a creative deadlock and redesign their site to support an upcoming product launch—on a tight deadline.

    At Waaark, having time to think, breathe, and work at our own pace is key. We typically avoid last-minute projects, but this one felt like a puzzle worth solving. We saw a narrow but feasible path forward and accepted the challenge.

    Creative research

    We kicked off the project by exploring ways to visually represent music. After some wandering sessions on platforms like Pinterest and Behance, we narrowed our direction toward visualiser aesthetics—particularly the use of lines to suggest sound waves.

    The client also emphasised their desire to introduce depth and dimensionality into the site. We collected inspiration reflecting this concept and organised everything into a Milanote moodboard, including ideas around color, typography, layout, and impactful hero sections to set a clear creative direction.

    Given the time constraints, it was important to focus on bold, achievable visuals—techniques we had already mastered.

    Design

    Storyboard

    For a storytelling-focused, long-scrolling landing page like this, we replaced our typical UI wireframes with a full storyboard. This storyboard mapped out each step of the user journey, along with transitions between sections.

    Our goal was twofold: to provide the client with a clear visual direction and to start shaping the flow and pacing on our end.

    Creative Direction

    With both the moodboard and storyboard approved, we began merging them to define the site’s visual language.

    Right from the hero section, we wanted the message to be loud and clear: music meets tech. We envisioned a dark, immersive intro with circular lines evoking vinyl records or sound waves. Layered on top: a bold sans-serif headline and a ticket-style navigation bar to reinforce the music industry vibe.

    To instantly capture user attention, we imagined a mouse-trail animation where artist photos appear in an equalizer-style movement.

    To contrast the dark intro, we introduced a more colorful palette throughout the rest of the site, showcasing the diversity of music and the artists’ unique sensibilities.

    Implementation

    Tech stack

    We used our go-to stack, which the client was already familiar with: WordPress. It provided a solid foundation—easy to manage, flexible for the frontend, and scalable.

    For the front-end experience, we integrated a few select libraries:

    • GSAP for fluid, expressive animations
    • Luge to manage the overall page lifecycle
    • Lenis for smooth scrolling

    We aimed to minimise external dependencies, instead relying on native CSS 3D transformations and lightweight JS/Canvas-based animations—especially for effects mimicking depth.

    Animation

    To save time, all the animations were directly coded based on what we had envisioned and mapped out in the storyboard. Some of them worked exactly as imagined from the start, while others needed a bit of fine-tuning to integrate fully into the overall experience.

    Scroll Animations

    To keep users engaged while presenting 24/7 Artists’ vision and offering, we crafted a sequence of scroll-driven animations—alternating between smooth flows and unexpected reveals.

    Micro-Interactions

    On a product launch page, micro-interactions are key. They spark curiosity, highlight key elements, and subtly guide the user toward action.

    For the main call to action, we designed a distinctive interaction using the same equalizer-like shape seen in the photo animations. On hover, it animates like a music player—both playful and thematic.

    Tile Grid Setup
    We began by constructing a grid made of 1×1 and 2×2 tiles.

    Z-Axis Scroll Effect
    Since we weren’t using true 3D, we faked depth using scale transforms. We calculated the scale needed to have the grid’s central hole (where content would go) expand to fill the viewport. Then, we transitioned each tile from its original size and position to the final state using GSAP.

    Playing with GSAP staggered animation adds more depth to the motion.
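
    As a rough illustration of that technique, here is a minimal GSAP sketch. It assumes .tile elements inside a .tile-grid container and uses ScrollTrigger to scrub the tween with the scrollbar; the class names, scale value and trigger settings are our assumptions, and ScrollTrigger stands in here for whatever scroll hook the production setup (Luge) provides.

    import { gsap } from 'gsap';
    import { ScrollTrigger } from 'gsap/ScrollTrigger';

    gsap.registerPlugin(ScrollTrigger);

    // Scale every tile up as the user scrolls through the section,
    // so the grid's central hole appears to rush toward the viewer.
    gsap.to('.tile', {
      scale: 3,                                 // assumed value: enough for the hole to fill the viewport
      ease: 'none',
      stagger: { each: 0.02, from: 'center' },  // staggering from the center adds depth to the motion
      scrollTrigger: {
        trigger: '.tile-grid',
        start: 'top top',
        end: '+=150%',
        scrub: true,                            // tie the tween's progress to the scrollbar
        pin: true,
      },
    });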

    Simulated Cube Depth
    To simulate 3D cubes, we calculated the back-face vertices based on a smaller grid to keep the illusion of perspective. We then drew side faces accordingly, making sure to hide vertices behind the front face.
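
    Below is a simplified Canvas 2D sketch of that idea, assuming each tile is described by the four corners of its front face and that the vanishing point sits at the center of the grid; the depth factor and colors are illustrative assumptions rather than the production values.

    // Draw a fake 3D cube for one tile: the back face is the front face
    // scaled toward a vanishing point, and the side faces connect the two.
    type Point = { x: number; y: number };

    function drawFakeCube(
      ctx: CanvasRenderingContext2D,
      front: Point[],   // 4 corners of the front face, in drawing order
      vanishing: Point, // typically the center of the grid
      depth = 0.8       // 1 = no depth, smaller values = deeper cube
    ) {
      // Back-face vertices: pull each front corner toward the vanishing point.
      const back = front.map((p) => ({
        x: vanishing.x + (p.x - vanishing.x) * depth,
        y: vanishing.y + (p.y - vanishing.y) * depth,
      }));

      // Side faces: one quad per edge, drawn before the front face
      // so the front face hides the back vertices behind it.
      for (let i = 0; i < 4; i++) {
        const j = (i + 1) % 4;
        ctx.beginPath();
        ctx.moveTo(front[i].x, front[i].y);
        ctx.lineTo(front[j].x, front[j].y);
        ctx.lineTo(back[j].x, back[j].y);
        ctx.lineTo(back[i].x, back[i].y);
        ctx.closePath();
        ctx.fillStyle = '#222'; // assumed side-face color
        ctx.fill();
      }

      // Front face on top.
      ctx.beginPath();
      front.forEach((p, i) => (i === 0 ? ctx.moveTo(p.x, p.y) : ctx.lineTo(p.x, p.y)));
      ctx.closePath();
      ctx.fillStyle = '#555'; // assumed front-face color
      ctx.fill();
    }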

    Canvas-Based Content Reveal
    To finish the effect, we redrew the 2×2 tiles’ content in Canvas and added a cover layer that scrolls at a faster rate, revealing the content below.

    Conclusion

    The 24/7 Artists landing page was a bold and fast-paced project that pushed us to distill ideas quickly and trust our creative instincts.

    Through strong visual metaphors, smooth storytelling, and carefully crafted motion, we built a launchpad that sets the tone for the brand’s next chapter.

    This first release is just the beginning. The site was designed with scalability in mind, and additional sections and pages are already being added to support future growth and evolving needs.

    When the vision is clear and the momentum is right, great things can happen—fast.



    Source link