Tag: Interactive

  • Interactive Video Projection Mapping with Three.js

    Interactive Video Projection Mapping with Three.js



    Projection mapping has long fascinated audiences in the physical world, turning buildings, sculptures, and entire cityscapes into moving canvases. What if you could recreate that same sense of spectacle directly inside the browser?

    With WebGL and Three.js, you can project video not onto walls or monuments but onto dynamic 3D grids made of hundreds of cubes, each one carrying a fragment of the video like a digital mosaic. Many will surely recognize this effect from Rogier de Boevé’s portfolio, which gained wide attention for showcasing exactly this approach.

    In this tutorial we’ll explore how to simulate video projection mapping in a purely digital environment, from building a grid of cubes, to UV-mapping video textures, to applying masks that determine which cubes appear. The demo for this tutorial is inspired by Rogier’s work, which he breaks down beautifully in his case study for anyone interested in the concept behind it.

    The result is a mesmerizing effect that feels both sculptural and cinematic, perfect for interactive installations, portfolio showcases, or simply as a playground to push your creative coding skills further.

    What is Video Projection Mapping in the Real World?

    When describing video projection mapping, it’s easiest to think of huge buildings lit up with animations during festivals, or art installations where a moving image is “painted” onto sculptures.

    Here are some examples of real-world video projections:

    Bringing it to our 3D World

    In 3D graphics, we can do something similar: instead of shining a physical projector, we map a video texture onto objects in a scene.

    Therefore, let’s build a grid of cubes using a mask image that will determine which cubes are visible. A video texture is UV-mapped so each cube shows the exact video fragment that corresponds to its grid cell—together they reconstruct the video, but only where the mask is dark.

    Prerequisites:

    • Three.js r155+
    • A small, high-contrast mask image (e.g. a heart silhouette).
    • A video URL with CORS enabled.

    Our Boilerplate and Starting Point

    Here is a basic starter setup, i.e. the minimum amount of code and structure you need to get a scene rendering in the browser, without worrying about the specific creative content yet.

    export default class Models {
    	constructor(gl_app) {
            ...
            this.createGrid()
        }
    
        createGrid() {
            const geometry = new THREE.BoxGeometry( 1, 1, 1 );
            this.material = new THREE.MeshStandardMaterial( { color: 0xff0000 } );
            const cube = new THREE.Mesh( geometry, this.material );
            this.group.add( cube );
            this.is_ready = true
        }
        
        ...
    }
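
    The class above assumes a small host app (referred to as gl_app) that owns the scene, camera, renderer and render loop. That boilerplate isn’t shown in this write-up, so here is a minimal sketch of what such a wrapper could look like; the names and structure are assumptions, not the demo’s actual code:

    import * as THREE from 'three'
    import Models from './Models.js'

    // Hypothetical wrapper: the real boilerplate may differ.
    class GlApp {
        constructor(canvas) {
            this.scene = new THREE.Scene()
            this.camera = new THREE.PerspectiveCamera(45, innerWidth / innerHeight, 0.1, 100)
            this.camera.position.z = 10

            this.renderer = new THREE.WebGLRenderer({ canvas, antialias: true })
            this.renderer.setSize(innerWidth, innerHeight)

            // MeshStandardMaterial needs some light to be visible
            this.scene.add(new THREE.AmbientLight(0xffffff, 0.6))
            this.scene.add(new THREE.DirectionalLight(0xffffff, 1))

            // Models is assumed to add its group to gl_app.scene in its constructor
            this.models = new Models(this)

            this.renderer.setAnimationLoop(() => {
                if (this.models.update) this.models.update()
                this.renderer.render(this.scene, this.camera)
            })
        }
    }

    new GlApp(document.querySelector('#sketch'))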

    The result is a spinning red cube:

    Creating the Grid

    A centered grid of cubes (10×10 by default). Every cube has the same size and material. The grid spacing and overall scale are configurable.

    export default class Models {
    	constructor(gl_app) {
            ...
    
    		this.gridSize = 10;
            this.spacing = 0.75;
            this.createGrid()
        }
    
        createGrid() {
            this.material = new THREE.MeshStandardMaterial( { color: 0xff0000 } );
            
            // Grid parameters
            for (let x = 0; x < this.gridSize; x++) {
                for (let y = 0; y < this.gridSize; y++) {
                    const geometry = new THREE.BoxGeometry(0.5, 0.5, 0.5);
                    const mesh = new THREE.Mesh(geometry, this.material);
                    mesh.position.x = (x - (this.gridSize - 1) / 2) * this.spacing;
                    mesh.position.y = (y - (this.gridSize - 1) / 2) * this.spacing;
                    mesh.position.z = 0;
    
                    this.group.add(mesh);
                }
            }
            this.group.scale.setScalar(0.5)
            ...
        }   
        ...
    }

    Key parameters

    • spacing → world-space distance between cube centers. Increase for larger gaps, decrease to pack tighter.
    • gridSize → how many cells per side. A 10×10 grid ⇒ 100 cubes.
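
    For example, with gridSize = 10 and spacing = 0.75 the centering term (gridSize - 1) / 2 equals 4.5, so the first column sits at (0 - 4.5) * 0.75 = -3.375 and the last at (9 - 4.5) * 0.75 = 3.375, keeping the whole grid centered on the origin.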

    Creating the Video Texture

    This function creates a video texture in Three.js so you can use a playing HTML <video> as the texture on 3D objects.

    • Creates an HTML <video> element entirely in JavaScript (not added to the DOM).
    • We’ll feed this element to Three.js to use its frames as a texture.
    • loop = true → restarts automatically when it reaches the end.
    • muted = true → most browsers block autoplay for unmuted videos, so muting ensures it plays without user interaction.
    • .play() → starts playback.
    • ⚠️ Some browsers still need a click/touch before autoplay works; you can add a fallback listener if needed (see the sketch after the code below).
    export default class Models {
    	constructor(gl_app) {
            ...
            this.createGrid()
        }
    
        createVideoTexture() {
    		this.video = document.createElement('video')
    		this.video.src = 'https://commondatastorage.googleapis.com/gtv-videos-bucket/sample/BigBuckBunny.mp4'
    		this.video.crossOrigin = 'anonymous'
    		this.video.loop = true
    		this.video.muted = true
    		this.video.play()
    
    		// Create video texture
    		this.videoTexture = new THREE.VideoTexture(this.video)
    		this.videoTexture.minFilter = THREE.LinearFilter
    		this.videoTexture.magFilter = THREE.LinearFilter
    		this.videoTexture.colorSpace = THREE.SRGBColorSpace
    		this.videoTexture.wrapS = THREE.ClampToEdgeWrapping
    		this.videoTexture.wrapT = THREE.ClampToEdgeWrapping
    
    		// Create material with video texture
    		this.material = new THREE.MeshBasicMaterial({ 
    			map: this.videoTexture,
    			side: THREE.FrontSide
    		})
        }
    
        createGrid() {
            this.createVideoTexture()
            ...
        }
        ...
    }
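
    As flagged in the autoplay note above, here is a minimal fallback sketch (a hypothetical helper, not part of the original demo) that retries playback on the first pointer interaction if the browser blocks autoplay:

    // Hypothetical helper: tries to autoplay, and falls back to the first
    // pointer interaction if the browser blocks it.
    function playWithFallback(video) {
        video.play().catch(() => {
            const resume = () => {
                video.play()
                window.removeEventListener('pointerdown', resume)
            }
            window.addEventListener('pointerdown', resume)
        })
    }

    // Inside createVideoTexture() you could then call:
    // playWithFallback(this.video)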

    This is the video we are using: Big Buck Bunny (a CORS-enabled sample video)

    All the meshes have the same texture applied:

    Mapping the Projection onto the Grid

    We will be turning the video into a texture atlas split into a gridSize × gridSize lattice.
    Each cube in the grid gets its own little UV window (sub-rectangle) of the video so, together, all cubes reconstruct the full frame.

    Why per-cube geometry? We create a new BoxGeometry for each cube because the UVs must be unique per cube. If all cubes shared one geometry, they’d also share the same UVs and show the same part of the video.

    export default class Models {
    	constructor(gl_app) {
            ...
            this.createGrid()
        }
    
        createGrid() {
            ...
    		// Grid parameters
            for (let x = 0; x < this.gridSize; x++) {
                for (let y = 0; y < this.gridSize; y++) {
                    
                    const geometry = new THREE.BoxGeometry(0.5, 0.5, 0.5);
                    
    				// Create individual geometry for each box to have unique UV mapping
    				// Calculate UV coordinates for this specific box
    				const uvX = x / this.gridSize
    				const uvY = y / this.gridSize // No Y flip needed: UV space and the grid both start at the bottom
    				const uvWidth = 1 / this.gridSize
    				const uvHeight = 1 / this.gridSize
    				
    				// Get the UV attribute
    				const uvAttribute = geometry.attributes.uv
    				const uvArray = uvAttribute.array
    				
    				// Map each face of the box to show the same portion of video
    				// We'll focus on the front face (face 4) for the main projection
    				for (let i = 0; i < uvArray.length; i += 2) {
    					// Map all faces to the same UV region for consistency
    					uvArray[i] = uvX + (uvArray[i] * uvWidth)     // U coordinate
    					uvArray[i + 1] = uvY + (uvArray[i + 1] * uvHeight) // V coordinate
    				}
    				
    				// Mark the attribute as needing update
    				uvAttribute.needsUpdate = true
                    ...
                }
            }
            ...
        }
        ...
    }

    The UV window for cell (x, y)
    For a grid of size N = gridSize:

    • UV origin of this cell:
      – uvX = x / N
      – uvY = y / N
    • UV size of each cell:
      – uvWidth = 1 / N
      – uvHeight = 1 / N

    Result: every face of the box now samples the same sub-region of the video. The comment above mentions focusing on the front face, but for consistency this approach maps all faces to that region.
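
    For example, with N = 10 the cell at (x, y) = (3, 7) gets uvX = 0.3 and uvY = 0.7, so its UVs are squeezed into the window U ∈ [0.3, 0.4], V ∈ [0.7, 0.8] of the video frame.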

    Creating the Mask

    We need to create a canvas to read a mask image that determines which cubes are visible in the grid.

    • Black (dark) pixels → cube is created.
    • White (light) pixels → cube is skipped.

    To do this, we need to:

    1. Load the mask image.
    2. Scale it down to match our grid size.
    3. Read its pixel color data.
    4. Pass that data into the grid-building step.
    export default class Models {
    	constructor(gl_app) {
            ...
    		this.createMask()
        }
    
    	createMask() {
            // Create a canvas to read mask pixel data
            const canvas = document.createElement('canvas')
            const ctx = canvas.getContext('2d')
    
            const maskImage = new Image()
            maskImage.crossOrigin = 'anonymous'
            maskImage.onload = () => {
                // Get original image dimensions to preserve aspect ratio
                const originalWidth = maskImage.width
                const originalHeight = maskImage.height
                const aspectRatio = originalWidth / originalHeight
    
                // Calculate grid dimensions based on aspect ratio
                if (aspectRatio > 1) {
                    // Image is wider than tall
                    this.gridWidth = this.gridSize
                    this.gridHeight = Math.round(this.gridSize / aspectRatio)
                } else {
                    // Image is taller than wide or square
                    this.gridHeight = this.gridSize
                    this.gridWidth = Math.round(this.gridSize * aspectRatio)
                }
    
                canvas.width = this.gridWidth
                canvas.height = this.gridHeight
                ctx.drawImage(maskImage, 0, 0, this.gridWidth, this.gridHeight)
    
                const imageData = ctx.getImageData(0, 0, this.gridWidth, this.gridHeight)
                this.data = imageData.data
    			this.createGrid()
    		}
    
            maskImage.src = '../images/heart.jpg'
    	}
        ...
    }

    Match mask resolution to grid

    • We don’t want to stretch the mask — this keeps it proportional to the grid.
    • gridWidth and gridHeight are how many mask pixels we’ll sample horizontally and vertically.
    • This matches the logical cube grid, so each cube can correspond to one pixel in the mask.
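
    For example, a 300×200 mask with gridSize = 10 has an aspect ratio of 1.5, so gridWidth stays at 10 while gridHeight becomes Math.round(10 / 1.5) = 7, and the mask is sampled as a 10×7 grid of pixels.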

    Applying the Mask to the Grid

    Let’s combine mask-based filtering with custom UV mapping to decide where in the grid boxes should appear, and how each box maps to a section of the projected video.
    Here’s the concept step by step:

    • Loops through every potential (x, y) position in a virtual grid.
    • At each grid cell, it will decide whether to place a box and, if so, how to texture it.
    • flippedY: Flips the Y-axis because image coordinates start from the top-left, while the grid’s origin starts from the bottom-left.
    • pixelIndex: Locates the pixel in the this.data array.
    • Each pixel stores 4 values: red, green, blue, alpha.
    • Extracts the R, G, and B values for that mask pixel.
    • Brightness is calculated as the average of R, G, B.
    • If the pixel is dark enough (brightness < 128), a cube will be created.
    • White pixels are ignored → those positions stay empty.
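
    For example, a near-black mask pixel of (30, 30, 30) has a brightness of 30, below the 128 threshold, so a cube is created; a near-white pixel of (240, 240, 240) has a brightness of 240 and that grid cell stays empty.
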
    export default class Models {
    	constructor(gl_app) {
            ...
    		this.createMask()
        }
    
    	createMask() {
            ...
    	}
    
        createGrid() {
            ...
            for (let x = 0; x < this.gridSize; x++) {
                for (let y = 0; y < this.gridSize; y++) {
                    
                    const geometry = new THREE.BoxGeometry(0.5, 0.5, 0.5);
    
                    // Get pixel color from mask (sample at grid position)
                    // Flip Y coordinate to match image orientation
                    const flippedY = this.gridHeight - 1 - y
                    const pixelIndex = (flippedY * this.gridWidth + x) * 4
                    const r = this.data[pixelIndex]
                    const g = this.data[pixelIndex + 1]
                    const b = this.data[pixelIndex + 2]
    
                    // Calculate brightness (0 = black, 255 = white)
                    const brightness = (r + g + b) / 3
    
                    // Only create box if pixel is dark (black shows, white hides)
                    if (brightness < 128) { // Threshold for black vs white
    
                        // Create individual geometry for each box to have unique UV mapping
                        // Calculate UV coordinates for this specific box
                        const uvX = x / this.gridSize
                        const uvY = y / this.gridSize // No Y flip needed: UV space and the grid both start at the bottom
                        const uvWidth = 1 / this.gridSize
                        const uvHeight = 1 / this.gridSize
                        
                        // Get the UV attribute
                        const uvAttribute = geometry.attributes.uv
                        const uvArray = uvAttribute.array
                        
                        // Map each face of the box to show the same portion of video
                        // We'll focus on the front face (face 4) for the main projection
                        for (let i = 0; i < uvArray.length; i += 2) {
                            // Map all faces to the same UV region for consistency
                            uvArray[i] = uvX + (uvArray[i] * uvWidth)     // U coordinate
                            uvArray[i + 1] = uvY + (uvArray[i + 1] * uvHeight) // V coordinate
                        }
                        
                        // Mark the attribute as needing update
                        uvAttribute.needsUpdate = true
                        
                        const mesh = new THREE.Mesh(geometry, this.material);
    
                        mesh.position.x = (x - (this.gridSize - 1) / 2) * this.spacing;
                        mesh.position.y = (y - (this.gridSize - 1) / 2) * this.spacing;
                        mesh.position.z = 0;
    
                        this.group.add(mesh);
                    }
                }
            }
            ...
        }
        ...
    }

    Further steps

    • UV mapping is the process of mapping 2D video pixels onto 3D geometry.
    • Each cube gets its own unique UV coordinates corresponding to its position in the grid.
    • uvWidth and uvHeight are how much of the video texture each cube covers.
    • Modifies the cube’s uv attribute so all faces display the exact same portion of the video.

    Here is the result with the mask applied:

    Adding Some Depth and Motion to the Grid

    Adding subtle motion along the Z-axis brings the otherwise static grid to life, making the projection feel more dynamic and dimensional.

    update() {
        if (this.is_ready) {
            this.group.children.forEach((model, index) => {
                model.position.z = Math.sin(Date.now() * 0.005 + index * 0.1) * 0.6
            })
        }
    }
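
    In this snippet, Date.now() * 0.005 advances the wave over time, index * 0.1 phase-shifts each cube so neighbouring cubes peak at slightly different moments, and the factor of 0.6 caps the displacement at ±0.6 world units.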

    It’s Time for Multiple Grids

    Up until now we’ve been working with a single mask and a single video, but the real fun begins when we start layering multiple projections together. By combining different mask images with their own video sources, we can create a collection of independent grids that coexist in the same scene. Each grid can carry its own identity and motion, opening the door to richer compositions, transitions, and storytelling effects.

    1. A Playlist of Masks and Videos

    export default class Models {
    	constructor(gl_app) {
            ...
            this.grids_config = [
                {
                    id: 'heart',
                    mask: `heart.jpg`,
                    video: `fruits_trail_squared-transcode.mp4`
                },
                {
                    id: 'codrops',
                    mask: `codrops.jpg`,
                    video: `KinectCube_1350-transcode.mp4`
                },
                {
                    id: 'smile',
                    mask: `smile.jpg`,
                    video: `infinte-grid_squared-transcode.mp4`
                },
            ]
            this.grids = []
            this.grids_config.forEach((config, index) => this.createMask(config, index))
        }
    ...
    }

    Instead of one mask and one video, we now have a list of mask-video pairs.

    Each object defines:

    • id → name/id for each grid.
    • mask → the black/white image that controls which cubes appear.
    • video → the texture that will be mapped onto those cubes.

    This allows you to have multiple different projections in the same scene.

    2. Looping Over All Grids

    Once we have our playlist of mask–video pairs defined, the next step is to go through each item and prepare it for rendering.

    For every configuration in the list we call createMask(config, index), which takes care of loading the mask image, reading its pixels, and then passing the data along to build the corresponding grid.

    At the same time, we keep track of all the grids by storing them in a this.grids array, so later on we can animate them, show or hide them, and switch between them interactively.

    3. createMask(config, index)

    createMask(config, index) {
        ...
        maskImage.onload = () => {
            ...
            this.createGrid(config, index)
        }
        maskImage.src = `../images/${config.mask}`
    }
    • Loads the mask image for the current grid.
    • When the image is loaded, runs the mask pixel-reading logic (as explained before) and then calls createGrid() with the same config and index.
    • The mask determines which cubes are visible for this specific grid.

    4. createVideoTexture(config, index)

    createVideoTexture(config, index) {
        this.video = document.createElement('video')
        this.video.src = `../videos/${config.video}`
        ...
    }
    • Creates a <video> element using the specific video file for this grid.
    • The video is then converted to a THREE.VideoTexture and assigned as the material for the cubes in this grid.
    • Each grid can have its own independent video playing.
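
    Note that the elided code keeps storing the element and texture on this.video / this.material, which works because each grid’s meshes grab the material reference while their grid is being built. If you prefer per-grid ownership to be explicit, a variant could return the material instead; the sketch below is an assumption, not the demo’s code:

    // Hypothetical variant: return a dedicated material per grid
    // instead of storing it on shared instance properties.
    createVideoTexture(config) {
        const video = document.createElement('video')
        video.src = `../videos/${config.video}`
        video.crossOrigin = 'anonymous'
        video.loop = true
        video.muted = true
        video.play()

        const texture = new THREE.VideoTexture(video)
        texture.minFilter = THREE.LinearFilter
        texture.magFilter = THREE.LinearFilter
        texture.colorSpace = THREE.SRGBColorSpace

        return new THREE.MeshBasicMaterial({ map: texture })
    }

    createGrid(config, index) would then do const material = this.createVideoTexture(config) and use that material for each mesh in the grid.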

    5. createGrid(config, index)

    createGrid(config, index) {
            this.createVideoTexture(config, index)
            const grid_group = new THREE.Group()
            this.group.add(grid_group)
    
            for (let x = 0; x < this.gridSize; x++) {
                for (let y = 0; y < this.gridSize; y++) {
                        ...
                        grid_group.add(mesh);
                }
            }
            grid_group.name = config.id
            this.grids.push(grid_group);
            grid_group.position.z = - 2 * index 
            ...
        }
    • Creates a new THREE.Group for this grid so all its cubes can be moved together.
    • This keeps each mask/video projection isolated.
    • grid_group.name: Assigns the config.id as the group’s name so the grid can be looked up later by ID.
    • this.grids.push(grid_group): Stores this grid in an array so you can control it later (e.g., show/hide, animate, change videos).
    • grid_group.position.z: Offsets each grid further back in Z-space so they don’t overlap visually.

    And here is the result for the multiple grids:

    And finally: Interaction & Animations

    Let’s start by creating a simple UI with some buttons on our HTML:

    <ul class="btns">
    	<li class="btns__item">
    		<button class="active" data-id="heart">
    			...
    		</button>
    	</li>
    	<li class="btns__item">
    		<button data-id="codrops">
    			...
    		</button>
    	</li>
    	<li class="btns__item">
    		<button data-id="smile">
    			...
    		</button>
    	</li>
    </ul>

    We’ll also add a data-current="heart" attribute to our canvas element; it will be used to change the canvas’s background-color depending on which button was clicked.

    <canvas id="sketch" data-current="heart"></canvas>

    Let’s now create some colors for each grid using CSS:

    [data-current="heart"] {
    	background-color: #e19800;
    }
    
    [data-current="codrops"] {
    	background-color: #00a00b;
    }
    
    [data-current="smile"] {
    	background-color: #b90000;
    }
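
    The attribute swap changes the background color instantly; if you want it to ease along with the grid transition, you could optionally add a CSS transition on the canvas (for example, transition: background-color 0.6s).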

    Time to create the interactions:

    createGrid(config, index) {
        ...
        this.initInteractions()
    }

    1. this.initInteractions()

    initInteractions() {
        this.current = 'heart'
        this.old = null
        this.is_animating = false
        this.duration = 1
    
        this.DOM = {
            $btns: document.querySelectorAll('.btns__item button'),
            $canvas: document.querySelector('canvas')
        }
        this.grids.forEach(grid => {
            if(grid.name != this.current) {
                grid.children.forEach(mesh => mesh.scale.setScalar(0))
            }
        })
        this.bindEvents()
    }
    • this.current → The currently active grid ID. Starts as "heart" so the "heart" grid will be visible by default.
    • this.old → Used to store the previous grid ID when switching between grids.
    • this.is_animating → Boolean flag to prevent triggering a new transition while one is still running.
    • this.duration → How long the animation takes (in seconds).
    • $btns → Selects all the buttons inside .btns__item. Each button corresponds to a grid you can switch to.
    • $canvas → Selects the main <canvas> element where the Three.js scene is rendered.

    Loops through all the grids in the scene.

    • If the grid is not the current one (grid.name != this.current),
    • → It sets all of that grid’s cubes (mesh) to scale = 0 so they are invisible at the start.
    • This means only the "heart" grid will be visible when the scene first loads.

    2. bindEvents()

    bindEvents() {
        this.DOM.$btns.forEach(($btn, index) => {
            $btn.addEventListener('click', () => {
                if (this.is_animating) return
                this.is_animating = true
                this.DOM.$btns.forEach(($btn, btnIndex) => {
                    btnIndex === index ? $btn.classList.add('active') : $btn.classList.remove('active')
                })
                this.old = this.current
                this.current = `${$btn.dataset.id}`
                this.revealGrid()
                this.hideGrid()
            })
        })
    }

    This bindEvents() method wires up the UI buttons so that clicking one will trigger switching between grids in the 3D scene.

    • For each button, attach a click event handler.
    • If an animation is already running, do nothing — this prevents starting multiple transitions at the same time.
    • Sets is_animating to true so no other clicks are processed until the current switch finishes.

    Loops through all buttons again:

    • If this is the clicked button → add the active CSS class (highlight it).
    • Otherwise → remove the active class (unhighlight).
    • this.old → keeps track of which grid was visible before the click.
    • this.current → updates to the new grid’s ID based on the button’s data-id attribute.
      • Example: if the button has data-id="heart", this.current becomes "heart".

    Calls two separate methods:

    • revealGrid() → makes the newly selected grid appear (by scaling its cubes from 0 to full size).
    • hideGrid() → hides the previous grid (by scaling its cubes back down to 0).

    3. revealGrid() & hideGrid()

    revealGrid() {
        // Filter the current grid based on this.current value
        const grid = this.grids.find(item => item.name === this.current);
        
        this.DOM.$canvas.dataset.current = `${this.current}` 
        const tl = gsap.timeline({ delay: this.duration * 0.25, defaults: { ease: 'power1.out', duration: this.duration } })
        grid.children.forEach((child, index) => {
            tl
                .to(child.scale, { x: 1, y: 1, z: 1, ease: 'power3.inOut' }, index * 0.001)
                .to(child.position, { z: 0 }, '<')
        })
    }
    
    hideGrid() {
        // Filter the current grid based on this.old value
        const grid = this.grids.find(item => item.name === this.old);
        const tl = gsap.timeline({
            defaults: { ease: 'power1.out', duration: this.duration },
            onComplete: () => { this.is_animating = false }
        })
        grid.children.forEach((child, index) => {
            tl
                .to(child.scale, { x: 0, y: 0, z: 0, ease: 'power3.inOut' }, index * 0.001)
                .to(child.position, {
                    z: 6, onComplete: () => {
                        gsap.set(child.scale, { x: 0, y: 0, z: 0 })
                        gsap.set(child.position, { z: - 6 })
                    }
                }, '<')
        })
    }

    And that is it! A fully animated and interactive Video Projection Slider, made of hundreds of small cubes (meshes).

    ⚠️ Performance considerations

    The approach used in this tutorial is the simplest and most digestible way to apply the projection concept; however, it can generate a lot of draw calls. 100–1,000 cubes are usually fine, but tens of thousands can get slow. If you need a more detailed grid or more meshes in it, consider InstancedMesh and shaders.
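
    If you do outgrow the per-mesh approach, here is a rough InstancedMesh sketch (assuming a videoTexture and scene already exist; this is not the demo’s code) that places all cubes in a single draw call. Per-cube UV windows would then need an instanced attribute plus a small shader tweak (e.g. via material.onBeforeCompile), which is beyond this snippet:

    // Hypothetical sketch: one InstancedMesh instead of hundreds of Meshes.
    const gridSize = 10
    const spacing = 0.75
    const geometry = new THREE.BoxGeometry(0.5, 0.5, 0.5)
    const material = new THREE.MeshBasicMaterial({ map: videoTexture }) // videoTexture assumed to exist
    const instanced = new THREE.InstancedMesh(geometry, material, gridSize * gridSize)

    const dummy = new THREE.Object3D()
    let i = 0
    for (let x = 0; x < gridSize; x++) {
        for (let y = 0; y < gridSize; y++) {
            dummy.position.set(
                (x - (gridSize - 1) / 2) * spacing,
                (y - (gridSize - 1) / 2) * spacing,
                0
            )
            dummy.updateMatrix()
            instanced.setMatrixAt(i++, dummy.matrix)
        }
    }
    instanced.instanceMatrix.needsUpdate = true
    scene.add(instanced) // scene assumed to exist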

    Going further

    This is a fully functional and versatile concept, so it opens up many possibilities. It can be applied in some really cool ways: scrollable storytelling, exhibition simulations, intro animations, portfolio showcases, and more.

    Here are some links for you to get inspired:

    Final Words

    I hope you’ve enjoyed this tutorial. Give it a try in your own projects, or simply explore the possibilities by changing the grid parameters, masks, and videos.

    Speaking of the videos, the ones used in this example are screen recordings of the Creative Code lessons on my web animations platform, vwlab.io, where you can learn how to create more interactions and animations like this one.

    Come join us, you will be more than welcome! ☺️❤️



    Source link

  • Exporting a Cloth Simulation from Blender to an Interactive Three.js Scene

    Exporting a Cloth Simulation from Blender to an Interactive Three.js Scene



    This tutorial walks through creating an interactive animation: starting in Blender by designing a button and simulating a cloth-like object that drops onto a surface and settles with a soft bounce.

    After baking the cloth simulation, the animation is exported and brought into a Three.js project, where it becomes an interactive scene that can be replayed on click.

    By the end, you’ll have a user-triggered animation that blends Blender’s physics simulations with Three.js rendering and interactivity.

    Let’s dive in!

    Step 1: Create a Cube and Add Subdivisions

    1. Start a New Project: Open Blender and delete the default cube (select it and press X, then confirm).
    2. Add a Cube: Press Shift + A > Mesh > Cube to create a new cube.
    3. Enter Edit Mode: Select the cube, then press Tab to switch to Edit Mode.
    4. Subdivide the Cube: Press Ctrl + R to add a loop cut, hover over the cube, and scroll your mouse wheel to increase the number of cuts.
    5. Apply Subdivision: With the cube still selected in Object Mode, go to the Modifiers panel (wrench icon), and click Add Modifier > Subdivision Surface. Set the Levels to 2 or 3 for a smoother result, then click Apply.

    Step 2: Add Cloth Physics and Adjust Settings

    1. Select the Cube: Ensure your subdivided cube is selected in Object Mode.
    2. Add Cloth Physics: Go to the Physics tab in the Properties panel. Click Cloth to enable cloth simulation.
    3. Pin the Edges (Optional): If you want parts of the cube to stay fixed (e.g., the top), switch to Edit Mode, select the vertices you want to pin, go back to the Physics tab, and under Cloth > Shape, click Pin to assign those vertices to a vertex group.
    4. Adjust Key Parameters:
      • Quality Steps: Set to 10-15 for smoother simulation (higher values increase accuracy but slow down computation).
      • Mass: Set to around 0.2-0.5 kg for a lighter, more flexible cloth.
      • Pressure: Under Cloth > Pressure, enable it and set a positive value (e.g., 2-5) to simulate inflation. This will make the cloth expand as if air is pushing it outward.
      • Stiffness: Adjust Tension and Compression (e.g., 10-15) to control how stiff or loose the cloth feels.
    5. Test the Simulation: Press the Spacebar to play the animation and see the cloth inflate. Tweak settings as needed.

    Step 3: Add a Ground Plane with a Collision

    1. Create a Ground Plane: Press Shift + A > Mesh > Plane. Scale it up by pressing S and dragging (e.g., scale it to 5-10x) so it’s large enough for the cloth to interact with.
    2. Position the Plane: Move the plane below the cube by pressing G > Z > -5 (or adjust as needed).
    3. Enable Collision: Select the plane, go to the Physics tab, and click Collision. Leave the default settings.
    4. Run the Simulation: Press the Spacebar again to see the cloth inflate and settle onto the ground plane.

    Step 4: Adjust Materials and Textures

    1. Select the Cube: In Object Mode, select the cloth (cube) object.
    2. Add a Material: Go to the Material tab, click New to create a material, and name it.
    3. Set Base Color/UV Map: In the Base Color slot, choose a fabric-like color (e.g., red or blue) or connect an image texture by clicking the yellow dot next to Base Color and selecting Image Texture. Load a texture file if you have one.
    4. Adjust Roughness and Specular: Set Roughness to 0.1-0.3 for a soft fabric look.
    5. Apply to Ground (Optional): Repeat the process for the plane, using a simple gray or textured material for contrast.

    Step 5: Export as MDD and Generate Shape Keys for Three.js

    To use the cloth animation in a Three.js project, we’ll export the physics simulation as an MDD file using the NewTek MDD plugin, then re-import it to create Shape Keys. Follow these steps:

    1. Enable the NewTek MDD Plugin:
      1. Go to Edit > Preferences > Add-ons.
      2. Search for “NewTek” or “MDD” and enable the “Import-Export: NewTek MDD format” add-on by checking the box. Close the Preferences window.
    2. Apply All Modifiers and All Transform:
      1. In Object Mode, select the cloth object.
      2. Go to the Modifiers panel (wrench icon). For each modifier (e.g., Subdivision Surface, Cloth), click the dropdown and select Apply. This “freezes” the mesh with its current shape and physics data.
      3. Ensure no unapplied deformations (e.g., scale) remain: Press Ctrl + A > All Transforms to apply location, rotation, and scale.
    3. Export as MDD:
      1. With the cloth object selected, go to File > Export > Lightwave Point Cache (.mdd).
      2. In the export settings (bottom left):
        • Set FPS (frames per second) to match your project (e.g., 24, 30, or 60).
        • Set the Start/End Frame of your animation.
      3. Choose a save location (e.g., “inflation.mdd”) and click Export MDD.
    4. Import the MDD:
      1. Go to File > Import > Lightwave Point Cache (.mdd), and load the “inflation.mdd” file.
      2. In the Physics and Modifiers panel, remove any cloth simulation-related options, as we now have shape keys.

    Step 6: Export the Cloth Simulation Object as GLB

    After importing the MDD, select the cube with the animation data.

    1. Export as glTF 2.0 (.glb/.gltf): Go to File > Export > glTF 2.0 (.glb/.gltf).
    2. Check Shape Keys and Animation
      1. Under the Data section, check Shape Keys to include the morph targets generated from the animation.
      2. Check Animations to export the animation data tied to the Shape Keys.
    3. Export: Choose a save location (e.g., “inflation.glb”) and click Export glTF 2.0. This file is now ready for use in Three.js.

    Step 7: Implement the Cloth Animation in Three.js

    In this step, we’ll use Three.js with React (via @react-three/fiber) to load and animate the cloth inflation effect from the inflation.glb file exported in Step 6. Below is the code with explanations:

    1. Set Up Imports and File Path:
      1. Import necessary libraries: THREE for core Three.js functionality, useRef, useState, useEffect from React for state and lifecycle management, and utilities from @react-three/fiber and @react-three/drei for rendering and controls.
      2. Import GLTFLoader from Three.js to load the .glb file.
      3. Define the model path: const modelPath = ‘/inflation.glb’; points to the exported file (adjust the path based on your project structure).
    2. Create the Model Component:
      1. Define the Model component to handle loading and animating the .glb file.
      2. Use state variables: model for the loaded 3D object, loading to track progress, and error for handling issues.
      3. Use useRef to store the AnimationMixer (mixerRef) and animation actions (actionsRef) for controlling playback.
    3. Load the Model with Animations:
      1. In a useEffect hook, instantiate GLTFLoader and load inflation.glb.
      2. On success (gltf callback):
        • Extract the scene (gltf.scene) and create an AnimationMixer to manage animations.
        • For each animation clip in gltf.animations:
          • Set duration to 6 seconds (clip.duration = 6).
          • Create an AnimationAction (mixer.clipAction(clip)).
          • Configure the action: clampWhenFinished = true stops at the last frame, loop = THREE.LoopOnce plays once, and setDuration(6) enforces the 6-second duration.
          • Reset and play the action immediately, storing it in actionsRef.current.
        • Update state with the loaded model and set loading to false.
      3. Log loading progress with the xhr callback.
      4. Handle errors in the error callback, updating error state.
      5. Clean up the mixer on component unmount.
    4. Animate the Model:
      1. Use useFrame to update the mixer each frame with mixerRef.current.update(delta), advancing the animation based on time.
      2. Add interactivity:
        • handleClick: Resets and replays all animations on click.
        • onPointerOver/onPointerOut: Changes the cursor to indicate clickability.
    5. Render the Model:
      1. Return null if still loading, an error occurs, or no model is loaded.
      2. Return a <primitive> element with the loaded model, enabling shadows and attaching event handlers.
    6. Create a Reflective Ground:
      1. Define MetalGround as a mesh with a plane geometry (args={[100, 100]}).
      2. Apply MeshReflectorMaterial with properties like metalness=0.5, roughness=0.2, and color="#151515" for a metallic, reflective look. Adjust blur, strength, and resolution as needed.
    7. Set Up the Scene:
      1. In the App component, create a <Canvas> with a camera positioned at [0, 35, 15] and a 25-degree FOV.
      2. Add a directionalLight at [0, 15, 0] with shadows enabled.
      3. Include an Environment preset (“studio”) for lighting, a Model at [0, 5, 0], ContactShadows for realism, and the MetalGround rotated and positioned below.
      4. Add OrbitControls for interactive camera movement.
    import * as THREE from 'three';
    import { useRef, useState, useEffect } from 'react';
    import { Canvas, useFrame } from '@react-three/fiber';
    import { OrbitControls, Environment, MeshReflectorMaterial, ContactShadows } from '@react-three/drei';
    import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';
    import { DRACOLoader } from 'three/examples/jsm/loaders/DRACOLoader.js';
    
    const modelPath = '/inflation.glb';
    
    function Model({ ...props }) {
      const [model, setModel] = useState<THREE.Group | null>(null);
      const [loading, setLoading] = useState(true);
      const [error, setError] = useState<unknown>(null);
      const mixerRef = useRef<THREE.AnimationMixer | null>(null);
      const actionsRef = useRef<THREE.AnimationAction[]>([]);
    
      const handleClick = () => {
        actionsRef.current.forEach((action) => {
          action.reset();
          action.play();
        });
      };
    
      const onPointerOver = () => {
        document.body.style.cursor = 'pointer';
      };
    
      const onPointerOut = () => {
        document.body.style.cursor = 'auto';
      };
    
      useEffect(() => {
        const loader = new GLTFLoader();
        const dracoLoader = new DRACOLoader();
        dracoLoader.setDecoderPath('https://www.gstatic.com/draco/v1/decoders/');
        loader.setDRACOLoader(dracoLoader);
    
        loader.load(
          modelPath,
          (gltf) => {
            const mesh = gltf.scene;
            const mixer = new THREE.AnimationMixer(mesh);
            mixerRef.current = mixer;
    
            if (gltf.animations && gltf.animations.length) {
              gltf.animations.forEach((clip) => {
                clip.duration = 6;
                const action = mixer.clipAction(clip);
                action.clampWhenFinished = true;
                action.loop = THREE.LoopOnce;
                action.setDuration(6);
                action.reset();
                action.play();
                actionsRef.current.push(action);
              });
            }
    
            setModel(mesh);
            setLoading(false);
          },
          (xhr) => {
            console.log(`Loading: ${(xhr.loaded / xhr.total) * 100}%`);
          },
          (error) => {
            console.error('An error happened loading the model:', error);
            setError(error);
            setLoading(false);
          }
        );
    
        return () => {
          if (mixerRef.current) {
            mixerRef.current.stopAllAction();
          }
        };
      }, []);
    
      useFrame((_, delta) => {
        if (mixerRef.current) {
          mixerRef.current.update(delta);
        }
      });
    
      if (loading || error || !model) {
        return null;
      }
    
      return (
        <primitive
          {...props}
          object={model}
          castShadow
          receiveShadow
          onClick={handleClick}
          onPointerOver={onPointerOver}
          onPointerOut={onPointerOut}
        />
      );
    }
    
    function MetalGround({ ...props }) {
      return (
        <mesh {...props} receiveShadow>
          <planeGeometry args={[100, 100]} />
          <MeshReflectorMaterial
            color="#151515"
            metalness={0.5}
            roughness={0.2}
            blur={[0, 0]}
            resolution={2048}
            mirror={0}
          />
        </mesh>
      );
    }
    
    export default function App() {
      return (
        <div id="content">
          <Canvas shadows camera={{ position: [0, 35, 15], fov: 25 }}>
            <directionalLight castShadow position={[0, 15, 0]} intensity={1} shadow-mapSize={1024} />
    
            <Environment preset="studio" background={false} environmentRotation={[0, Math.PI / -2, 0]} />
            <Model position={[0, 5, 0]} />
            <ContactShadows opacity={0.5} scale={10} blur={5} far={10} resolution={512} color="#000000" />
            <MetalGround rotation-x={Math.PI / -2} position={[0, -0.01, 0]} />
    
            <OrbitControls
              enableZoom={false}
              enablePan={false}
              enableRotate={true}
              enableDamping={true}
              dampingFactor={0.05}
            />
          </Canvas>
        </div>
      );
    }

    And that’s it! Starting from a cloth simulation in Blender, we turned it into a button that drops into place and reacts with a bit of bounce inside a Three.js scene.

    This workflow shows how Blender’s physics simulations can be exported and combined with Three.js to create interactive, real-time experiences on the web.



    Source link

  • Interactive WebGL Backgrounds: A Quick Guide to Bayer Dithering

    Interactive WebGL Backgrounds: A Quick Guide to Bayer Dithering



    User experience relies on small, thoughtful details that fit well into the overall design without overpowering the user. This balance can be tricky, especially with technologies like WebGL. While they can create amazing visuals, they can also become too complicated and distracting if not handled carefully.

    One subtle but effective technique is the Bayer Dithering Pattern. For example, JetBrains’ recent Junie campaign page uses this approach to craft an immersive and engaging atmosphere that remains visually balanced and accessible.

    In this tutorial, I’ll introduce you to the Bayer Dithering Pattern. I’ll explain what it is, how it works, and how you can apply it to your own web projects to enhance visual depth without overpowering the user experience.

    Bayer Dithering

    The Bayer pattern is a type of ordered dithering, which lets you simulate gradients and depth using a fixed matrix.
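
    For reference, these are the classic ordered-dithering threshold matrices; dividing each entry by the number of cells gives the thresholds that the code below reproduces (0, 0.5, 0.75 and 0.25 for the 2×2 case):

    // 2×2 Bayer matrix (÷4)      4×4 Bayer matrix (÷16)
    //  0  2                       0   8   2  10
    //  3  1                      12   4  14   6
    //                             3  11   1   9
    //                            15   7  13   5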

    If we scale this matrix appropriately, we can target specific values and create basic patterns.

    Here’s a simple example:

    // 2×2 Bayer matrix pattern: returns a value in [0, 1)
    float Bayer2(vec2 a)
    {
        a = floor(a);                // Use integer cell coordinates
        return fract(a.x / 2.0 + a.y * a.y * 0.75);
        // Equivalent lookup table:
        // (0,0) → 0.0,  (1,0) → 0.5
        // (0,1) → 0.75, (1,1) → 0.25
    }

    Let’s walk through an example of how this can be used:

    // 1. Base mask: left half is a black-to-white gradient 
    float mask = uv.y;
    
    // 2. Right half: apply ordered dithering
    if (uv.x > 0.5) {
        float dither = Bayer2(fragCoord);
        mask += dither - 0.5;
        mask  = step(0.5, mask); // binary threshold
    }
    
    // 3. Output the result
    fragColor = vec4(vec3(mask), 1.0);

    So with just a small matrix, we get four distinct dithering values—essentially for free.

    See the Pen
    Bayer2x2 by zavalit (@zavalit)
    on CodePen.

    Creating a Background Effect

    This is still pretty basic—nothing too exciting UX-wise yet. Let’s take it further by creating a grid on our UV map. We’ll define the size of a “pixel” and the size of the matrix that determines whether each “pixel” is on or off using Bayer ordering.

    const float PIXEL_SIZE = 10.0; // Size of each pixel in the Bayer matrix
    const float CELL_PIXEL_SIZE = 5.0 * PIXEL_SIZE; // 5x5 matrix
    
     
    float aspectRatio = uResolution.x / uResolution.y;
       
    vec2 pixelId = floor(fragCoord / PIXEL_SIZE); 
    vec2 cellId = floor(fragCoord / CELL_PIXEL_SIZE); 
    vec2 cellCoord = cellId * CELL_PIXEL_SIZE;
    
    vec2 uv = cellCoord/uResolution * vec2(aspectRatio, 1.0);
    
    vec3 baseColor = vec3(uv, 0.0);       

    You’ll see a rendered UV grid where the blue dots mark the individual “pixels”, and the white block (along with the subsequent blocks of the same size) marks the cells of the Bayer matrix.

    See the Pen
    Pixel & Cell UV by zavalit (@zavalit)
    on CodePen.

    Recursive Bayer Matrices

    Bayer’s genius was a recursively generated mask that keeps noise high-frequency and code low-complexity. So let’s try it out and apply larger dithering matrices as well:

    float Bayer2(vec2 a) { a = floor(a); return fract(a.x / 2. + a.y * a.y * .75); }
    #define Bayer4(a)   (Bayer2(0.5 * (a)) * 0.25 + Bayer2(a))
    #define Bayer8(a)   (Bayer4(0.5 * (a)) * 0.25 + Bayer2(a))
    #define Bayer16(a)   (Bayer8(0.5 * (a)) * 0.25 + Bayer2(a))
    
    ...
      if(uv.x > .2) dither = Bayer2 (pixelId);   
      if(uv.x > .4) dither = Bayer4 (pixelId);
      if(uv.x > .6) dither = Bayer8 (pixelId);
      if(uv.x > .8) dither = Bayer16(pixelId);
    ...

    This gives us a nice visual transition from a basic UV grid to Bayer matrices of increasing complexity (2×2, 4×4, 8×8, 16×16).

    See the Pen
    Bayer Ranges Animation by zavalit (@zavalit)
    on CodePen.

    As you can see, the 8×8 and 16×16 patterns are quite similar—beyond 8×8, the perceptual gain becomes minimal. So we’ll stick with Bayer8 for the next step.

    Now, we’ll apply Bayer8 to a UV map modulated by fbm noise to make the result feel more organic—just as we promised.
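
    The full shader lives in the CodePen below, but conceptually the combination boils down to something like the following sketch (here fbm() stands for whatever fractal-Brownian-motion noise function you bring along — it isn’t defined in the snippets above, and uTime is assumed to be a time uniform):

    // Sketch: threshold an fbm field with the Bayer8 pattern
    float noiseField = fbm(uv * 3.0 + uTime * 0.05); // smooth value in [0, 1]
    float dither     = Bayer8(pixelId);              // per-"pixel" threshold
    float mask       = step(0.5, noiseField + dither - 0.5);
    
    vec3 color = mix(vec3(0.0), baseColor, mask);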

    See the Pen
    Bayer fbm noise by zavalit (@zavalit)
    on CodePen.

    Adding Interactivity

    Here’s where things get exciting: real-time interactivity that background videos can’t replicate. Let’s run a ripple effect around clicked points using the dithering pattern. We’ll iterate over all active clicks and compute a wave:

     for (int i = 0; i < MAX_CLICKS; ++i) {
    
        // convert this click to square‑unit UV
        vec2 pos = uClickPos[i];
        if(pos.x < 0.0 && pos.y < 0.0) continue; // skip empty clicks
            
        vec2 cuv = (((pos - uResolution * .5 - cellPixelSize * .5) / (uResolution) )) * vec2(aspectRatio, 1.0);
    
        float t = max(uTime - uClickTimes[i], 0.0);
        float r = distance(uv, cuv);
    
        float waveR = speed * t;
        float ring  = exp(-pow((r - waveR) / thickness, 2.0));
        float atten = exp(-dampT * t) * exp(-dampR * r);
    
        feed = max(feed, ring * atten);           // brightest wins
    }
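
    The shader above expects a few uniforms (uClickPos, uClickTimes, uTime, uResolution and the MAX_CLICKS constant) that have to be fed from JavaScript. Here is a minimal, hypothetical sketch of that wiring with Three.js — the material setup and the remaining uniforms are assumed, and the exact approach in the demo may differ:

    const MAX_CLICKS = 10;
    
    const uniforms = {
      uTime:       { value: 0 },
      uResolution: { value: new THREE.Vector2(window.innerWidth, window.innerHeight) },
      // negative coordinates mark "empty" slots, which the shader skips
      uClickPos:   { value: Array.from({ length: MAX_CLICKS }, () => new THREE.Vector2(-1, -1)) },
      uClickTimes: { value: new Float32Array(MAX_CLICKS) },
    };
    
    let clickIndex = 0;
    window.addEventListener('pointerdown', (e) => {
      // store the click in pixel coordinates, flipping Y to match gl_FragCoord
      uniforms.uClickPos.value[clickIndex].set(e.clientX, window.innerHeight - e.clientY);
      uniforms.uClickTimes.value[clickIndex] = performance.now() / 1000;
      clickIndex = (clickIndex + 1) % MAX_CLICKS;
    });
    
    // in the render loop, keep uTime on the same clock:
    // uniforms.uTime.value = performance.now() / 1000;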

    Try clicking on the CodePen below:

    See the Pen
    Untitled by zavalit (@zavalit)
    on CodePen.

    Final Thoughts

    Because the entire Bayer-dither background is generated in a single GPU pass, it renders in under 0.2 ms even at 4K, ships in ~3 KB (+ Three.js in this case), and consumes zero network bandwidth after load. SVG can’t touch that once you have thousands of nodes, and autoplay video is two orders of magnitude heavier on bandwidth, CPU and battery. In short: this is probably one of the lightest fully interactive background effects you can build on the open web today.



    Source link

  • Interactive Text Destruction with Three.js, WebGPU, and TSL

    Interactive Text Destruction with Three.js, WebGPU, and TSL



    When Flash was taken from us all those years ago, it felt like losing a creative home — suddenly, there were no tools left for building truly interactive experiences on the web. In its place, the web flattened into a static world of HTML and CSS.

    But those days are finally behind us. We’re picking up where we left off nearly two decades ago, and the web is alive again with rich, immersive experiences — thanks in large part to powerful tools like Three.js.

    I’ve been working with images, video, and interactive projects for 15 years, using things like Processing, p5.js, OpenFrameworks, and TouchDesigner. Last year, I added Three.js to the mix as a creative tool, and I’ve been loving the learning process. That ongoing exploration leads to little experiments like the one I’m sharing in this tutorial.

    Project Structure

    The structure of our script is going to be simple: one function to preload assets, and another one to build the scene.

    Since we’ll be working with 3D text, the first thing we need to do is load a font in .json format — the kind that works with Three.js.

    To convert a .ttf font into that format, you can use the Facetype.js tool, which generates a .typeface.json file.

    const Resources = {
    	font: null
    };
    
    function preload() {
    
    	const _font_loader = new FontLoader();
    	_font_loader.load( "../static/font/Times New Roman_Regular.json", ( font ) => {
    
    		Resources.font = font;
    		init();
    
    	} );
    
    }
    
    function init() {
    
    }
    
    window.onload = preload;

    Scene setup & Environment

    A classic Three.js scene — the only thing to keep in mind is that we’re working with the Three.js Shading Language (TSL), which means our renderer needs to be a WebGPURenderer.

    const scene = new THREE.Scene();
    const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
    const renderer = new THREE.WebGPURenderer({ antialias: true });
    
    document.body.appendChild(renderer.domElement);
    
    renderer.setSize(window.innerWidth, window.innerHeight);
    camera.position.z = 5;
    
    scene.add(camera);

    Next, we’ll set up the scene environment to get some lighting going.

    To keep things simple and avoid loading more assets, we’ll use the default RoomEnvironment that “comes” with Three.js. We’ll also add a DirectionalLight to the scene.

    const environment = new RoomEnvironment();
    const pmremGenerator = new THREE.PMREMGenerator(renderer);
    scene.environment = pmremGenerator.fromSceneAsync(environment).texture;
    
    scene.environmentIntensity = 0.8;
    
    const   light = new THREE.DirectionalLight("#e7e2ca",5);
    light.position.x = 0.0;
    light.position.y = 1.2;
    light.position.z = 3.86;
    
    scene.add(light);

    TextGeometry

    We’ll use TextGeometry, which lets us create 3D text in Three.js.

    It uses a JSON font file (which we loaded earlier with FontLoader) and is configured with parameters like size, depth, and letter spacing.

    const text_geo = new TextGeometry("NUEVOS",{
        font:Resources.font,
        size:1.0,
        depth:0.2,
        bevelEnabled: true,
        bevelThickness: 0.1,
        bevelSize: 0.01,
        bevelOffset: 0,
        bevelSegments: 1
    }); 
    
    const mesh = new THREE.Mesh(
        text_geo,
        new THREE.MeshStandardMaterial({ 
            color: "#656565",
            metalness: 0.4, 
            roughness: 0.3
        })
    );
    
    scene.add(mesh);

    By default, the origin of the text sits at (0, 0), but we want it centered.
    To do that, we need to compute its BoundingBox and manually apply a translation to the geometry:

    text_geo.computeBoundingBox();
    const centerOffset = - 0.5 * ( text_geo.boundingBox.max.x - text_geo.boundingBox.min.x );
    const centerOffsety = - 0.5 * ( text_geo.boundingBox.max.y - text_geo.boundingBox.min.y );
    text_geo.translate( centerOffset, centerOffsety, 0 );

    Now that we have the mesh and material ready, we can move on to the function that lets us blow everything up 💥

    Three.js Shading Language

    I really love TSL — it’s closed the gap between ideas and execution, in a context that’s not always the friendliest… shaders.

    The effect we’re going to implement deforms the geometry’s vertices based on the pointer’s position, and uses spring physics to animate those deformations in a dynamic way.

    But before we get to that, let’s grab a few attributes we’ll need to make everything work properly:

    //  Number of vertices in the geometry (declared first, since the storage() calls below need it)
    const count = text_geo.attributes.position.count;
    
    //  Original position of each vertex — we’ll use it as a reference
    //  so unaffected vertices can "return" to their original spot
    const initial_position = storage( text_geo.attributes.position, "vec3", count );
    
    //  Normal of each vertex — we’ll use this to know which direction to "push" in
    const normal_at = storage( text_geo.attributes.normal, "vec3", count );

    Next, we’ll create a storage buffer to hold the simulation data — and we’ll also write a function.
    But not a regular JavaScript function — this one’s a compute function, written in the context of TSL.

    It runs on the GPU and we’ll use it to set up the initial values for our buffers, getting everything ready for the simulation.

    // In this buffer we’ll store the modified positions of each vertex —
    // in other words, their current state in the simulation.
    const   position_storage_at = storage(new THREE.StorageBufferAttribute(count,3),"vec3",count);   
    
    const compute_init = Fn( ()=>{
    
    	position_storage_at.element( instanceIndex ).assign( initial_position.element( instanceIndex ) );
    
    } )().compute( count );
    
    // Run the function on the GPU. This runs compute_init once per vertex.
    renderer.computeAsync( compute_init );

    Now we’re going to create another one of these functions — but unlike the previous one, this one will run inside the animation loop, since it’s responsible for updating the simulation on every frame.

    This function runs on the GPU and needs to receive values from the outside — like the pointer position, for example.

    To send that kind of data to the GPU, we use what’s called uniforms. They work like bridges between our “regular” code and the code that runs inside the GPU shader.

    They’re defined like this:

    const u_input_pos = uniform(new THREE.Vector3(0,0,0));
    const u_input_pos_press = uniform(0.0);
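
    The article doesn’t show how these uniforms get updated, so here is a minimal, hypothetical sketch: project the pointer onto the plane the text sits on (z = 0 here) with a raycaster and copy the result into the uniforms. The approach in the original demo may differ.

    // Hypothetical pointer wiring (not part of the original snippet)
    const raycaster = new THREE.Raycaster();
    const pointer_ndc = new THREE.Vector2();
    const text_plane = new THREE.Plane(new THREE.Vector3(0, 0, 1), 0); // the z = 0 plane
    const hit_point = new THREE.Vector3();
    
    window.addEventListener("pointermove", (e) => {
    	pointer_ndc.set((e.clientX / window.innerWidth) * 2 - 1, -(e.clientY / window.innerHeight) * 2 + 1);
    	raycaster.setFromCamera(pointer_ndc, camera);
    	if (raycaster.ray.intersectPlane(text_plane, hit_point)) {
    		u_input_pos.value.copy(hit_point); // uniform nodes expose their data through .value
    	}
    });
    
    window.addEventListener("pointerdown", () => { u_input_pos_press.value = 1.0; });
    window.addEventListener("pointerup", () => { u_input_pos_press.value = 0.0; });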

    With this, we can calculate the distance between the pointer position and each vertex of the geometry.

    Then we clamp that value so the deformation only affects vertices within a certain radius.
    To do that, we use the step function — it acts like a threshold, and lets us apply the effect only when the distance is below a defined value.

    Finally, we use the vertex normal as a direction to push it outward.

    const compute_update = Fn(() => {
    
        // Original position of the vertex — also its resting position
        const base_position = initial_position.element(instanceIndex);
    
        // The vertex normal tells us which direction to push
        const normal = normal_at.element(instanceIndex);
    
        // Current position of the vertex — we’ll update this every frame
        const current_position = position_storage_at.element(instanceIndex);
    
        // Calculate distance between the pointer and the base position of the vertex
        const distance = length(u_input_pos.sub(base_position));
    
        // Limit the effect's range: it only applies if distance is less than 0.5
        const pointer_influence = step(distance, 0.5).mul(1.0);
    
        // Compute the new displaced position along the normal.
        // Where pointer_influence is 0, there’ll be no deformation.
        const disorted_pos = base_position.add(normal.mul(pointer_influence));
    
        // Assign the new position to update the vertex
        current_position.assign(disorted_pos);
    
    })().compute(count);
    

    To make this work, we’re missing two key steps: we need to assign the buffer with the modified positions to the material, and we need to make sure the renderer runs the compute function on every frame inside the animation loop.

    // Assign the buffer with the modified positions to the material
    mesh.material.positionNode = position_storage_at.toAttribute();
    
    // Animation loop
    function animate() {
    	// Run the compute function
    	renderer.computeAsync(compute_update);
    
    	// Render the scene
    	renderer.renderAsync(scene, camera);
    }
    
    // Register the loop with the renderer so it runs every frame
    renderer.setAnimationLoop(animate);

    Right now the function doesn’t produce anything too exciting — the geometry moves around in a kinda clunky way. We’re about to bring in springs, and things will get much better.

    // Spring — how much force we apply to reach the target value
    velocity += (target_value - current_value) * spring;
    
    // Friction controls the damping, so the movement doesn’t oscillate endlessly
    velocity *= friction;
    
    current_value += velocity;

    But before that, we need to store one more value per vertex, the velocity, so let’s create another storage buffer.

    const position_storage_at = storage(new THREE.StorageBufferAttribute(count, 3), "vec3", count);
    
    // New buffer for velocity
    const velocity_storage_at = storage(new THREE.StorageBufferAttribute(count, 3), "vec3", count);
    
    const compute_init = Fn(() => {
    
        position_storage_at.element(instanceIndex).assign(initial_position.element(instanceIndex));
        
        // We initialize it too
        velocity_storage_at.element(instanceIndex).assign(vec3(0.0, 0.0, 0.0));
    
    })().compute(count);

    We’ll also add two uniforms: spring and friction.

    const u_spring = uniform(0.05);
    const u_friction = uniform(0.9);

    Now let’s implement the springs in the update:

    const compute_update = Fn(() => {
    
        const base_position = initial_position.element(instanceIndex);
        const current_position = position_storage_at.element(instanceIndex);
    
        // Get current velocity
        const current_velocity = velocity_storage_at.element(instanceIndex);
    
        const normal = normal_at.element(instanceIndex);
    
        const   distance =  length(u_input_pos.sub(base_position));
        const   pointer_influence = step(distance,0.5).mul(1.5);
    
        const disorted_pos = base_position.add(normal.mul(pointer_influence));
        disorted_pos.assign((mix(base_position, disorted_pos, u_input_pos_press)));
      
        // Spring implementation
        // velocity += (target_value - current_value) * spring;
        current_velocity.addAssign(disorted_pos.sub(current_position).mul(u_spring));
        // velocity *= friction;
        current_velocity.assign(current_velocity.mul(u_friction));
        // value += velocity
        current_position.addAssign(current_velocity);
    
    
    })().compute(count);

    Now we’ve got everything we need — time to start fine-tuning.

    We’re going to add two things. First, we’ll use the TSL function mx_noise_vec3 to generate some noise for each vertex. That way, we can tweak the direction a bit so things don’t feel so stiff.

    We’re also going to rotate the vertices using another TSL function — surprise, it’s called rotate.

    Here’s what our updated compute_update function looks like:

    const compute_update = Fn(() => {
    
        const base_position = initial_position.element(instanceIndex);
        const current_position = position_storage_at.element(instanceIndex);
        const current_velocity = velocity_storage_at.element(instanceIndex);
    
        const normal = normal_at.element(instanceIndex);
    
        // NEW: Add noise so the direction in which the vertices "explode" isn’t too perfectly aligned with the normal
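    // (u_noise_amp is assumed to be another uniform, e.g. const u_noise_amp = uniform(0.5), defined alongside u_spring and u_friction)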
        const noise = mx_noise_vec3(current_position.mul(0.5).add(vec3(0.0, time, 0.0)), 1.0).mul(u_noise_amp);
    
        const distance = length(u_input_pos.sub(base_position));
        const pointer_influence = step(distance, 0.5).mul(1.5);
    
        const disorted_pos = base_position.add(noise.mul(normal.mul(pointer_influence)));
    
        // NEW: Rotate the vertices to give the animation a more chaotic feel
        disorted_pos.assign(rotate(disorted_pos, vec3(normal.mul(distance)).mul(pointer_influence)));
    
        disorted_pos.assign(mix(base_position, disorted_pos, u_input_pos_press));
    
        current_velocity.addAssign(disorted_pos.sub(current_position).mul(u_spring));
        current_position.addAssign(current_velocity);
        current_velocity.assign(current_velocity.mul(u_friction));
    
    })().compute(count);
    

    Now that the motion feels right, it’s time to tweak the material colors a bit and add some post-processing to the scene.

    We’re going to work on the emissive color — meaning it won’t be affected by lights, and it’ll always look bright and explosive. Especially once we throw some bloom on top. (Yes, bloom everything.)

    We’ll start from a base color (whichever you like), passed in as a uniform. To make sure each vertex gets a slightly different color, we’ll offset its hue a bit using values from the buffers — in this case, the velocity buffer.

    The hue function takes a color and a value to shift its hue, kind of like how offsetHSL works in THREE.Color.

    // Base emissive color
    const emissive_color = color(new THREE.Color("#0000ff"));
    
    const vel_at = velocity_storage_at.toAttribute();
    const hue_rotated = vel_at.mul(Math.PI*10.0);
    
    // Multiply by the length of the velocity buffer — this means the more movement,
    // the more the vertex color will shift
    const emission_factor = length(vel_at).mul(10.0);
    
    // Assign the color to the emissive node and boost it as much as you want
    mesh.material.emissiveNode = hue(emissive_color, hue_rotated).mul(emission_factor).mul(5.0);

    Finally! Let’s change the scene background color and add fog:

    scene.fog = new THREE.Fog(new THREE.Color("#41444c"),0.0,8.5);
    scene.background = scene.fog.color;

    Now, let’s spice up the scene with a bit of post-processing — one of those things that got way easier to implement thanks to TSL.

    We’re going to include three effects: ambient occlusion, bloom, and noise. I always like adding some noise to what I do — it helps break up the flatness of the pixels a bit.

    I won’t go too deep into this part — I grabbed the AO setup from the Three.js examples.

    const   composer = new THREE.PostProcessing(renderer);
    const   scene_pass = pass(scene,camera);
    
    scene_pass.setMRT(mrt({
        output:output,
        normal:normalView
    }));
    
    const   scene_color = scene_pass.getTextureNode("output");
    const   scene_depth = scene_pass.getTextureNode("depth");
    const   scene_normal = scene_pass.getTextureNode("normal");
    
    const ao_pass = ao( scene_depth, scene_normal, camera);
    ao_pass.resolutionScale = 1.0;
    
    const   ao_denoise = denoise(ao_pass.getTextureNode(), scene_depth, scene_normal, camera ).mul(scene_color);
    const   bloom_pass = bloom(ao_denoise,0.3,0.2,0.1);
    const   post_noise = (mx_noise_float(vec3(uv(),time.mul(0.1)).mul(sizes.width),0.03)).mul(1.0);
    
    composer.outputNode = ao_denoise.add(bloom_pass).add(post_noise);

    Alright, that’s it amigas — thanks so much for reading, and I hope it was useful!



    Source link

  • Invisible Forces: The Making of Phantom.land’s Interactive Grid and 3D Face Particle System

    Invisible Forces: The Making of Phantom.land’s Interactive Grid and 3D Face Particle System


    From the outset, we knew we wanted something that subverted any conventional agency website formulas. Instead, inspired by the unseen energy that drives creativity, connection and transformation, we arrived at the idea of invisible forces. Could we take the powerful yet intangible elements that shape our world—motion, emotion, intuition, and inspiration—and manifest them in a digital space?

    We were excited about creating something that included many custom interactions and a very experiential feel. However,
    our concern was picking a set of tools that would allow most of our developers to contribute to and maintain the site
    after launch.

    We chose to start from a Next / React base, as we often do at Phantom. React also has the advantage of being compatible with the excellent React Three Fiber library, which we used to seamlessly bridge the gap between our DOM components and the WebGL contexts used across the site. For styles, we are using our very own CSS components as well as SASS.

    For interactive behaviours and animation, we chose to use GSAP for two main reasons. Firstly, it contains a lot of
    plugins we know and love, such as SplitText, CustomEase and ScrollTrigger. Secondly, GSAP allows us to use a single
    animation framework across DOM and WebGL components.

    We could go on and on talking about the details behind every single animation and micro-interaction on the site, but
    for this piece we have chosen to focus our attention on two of the most unique components of our site: the homepage
    grid and the scrollable employee face particle carousel.

    The Homepage Grid

    It took us a very long time to get this view to perform and feel just how we wanted it to. In this article, we will focus on the interactive part. For more info on how we made things performant, head to our previous article: Welcome back to Phantomland

    Grid View

    The project’s grid view is integrated into the homepage by incorporating a primitive Three.js object into a React
    Three Fiber scene.

    //GridView.tsx
    const GridView = () => {
      return (
        <Canvas>
          ...
          <ProjectsGrid />
          <Postprocessing />
        </Canvas>
      );
    }
    
    //ProjectsGrid.tsx
    const ProjectsGrid = ({atlases, tiles}: Props) => {
      const {canvas, camera} = useThree();
      
      const grid = useMemo(() => {
        return new Grid(canvas, camera, atlases, tiles);
      }, [canvas, camera, atlases, tiles]);
    
      if(!grid) return null;
      return (
        <primitive object={grid} />
      );
    }

    We initially wanted to write all the code for the grid using React Three Fiber but realised that, due to the complexity of our grid component, a vanilla Three.js class would be easier to maintain.

    One of the key elements that gives our grid its iconic feel is our post-processing distortion effect. We implemented
    this feature by creating a custom shader pass within our post-processing pipeline:

    // Postprocessing.tsx
    const Postprocessing = () => {
      const {gl, scene, camera} = useThree();
      
      // Create Effect composer
      const {effectComposer, distortionShader} = useMemo(() => {
        const renderPass = new RenderPass(scene, camera);
        const distortionShader = new DistortionShader();
        const distortionPass = new ShaderPass(distortionShader);
        const outputPass = new OutputPass();
    
        const effectComposer = new EffectComposer(gl);
        effectComposer.addPass(renderPass);
        effectComposer.addPass(distortionPass);
        effectComposer.addPass(outputPass);
    
        return {effectComposer, distortionShader};
      }, []);
      
      // Update distortion intensity
      useEffect(() => {
        if (workgridState === WorkgridState.INTRO) {
          distortionShader.setDistortion(CONFIG.distortion.flat);
        } else {
          distortionShader.setDistortion(CONFIG.distortion.curved);
        }
      }, [workgridState, distortionShader]);
      
      // Render with the effect composer every frame
      useFrame(() => {
        effectComposer.render();
      }, 1);
     
      return null;
    }

    When the grid transitions in and out on the site, the distortion intensity changes to make the transition feel natural. This animation is done through a simple tween in our DistortionShader class:

    class DistortionShader extends ShaderMaterial {
      private distortionIntensity = 0;
    
      constructor() {
        super({
          name: 'DistortionShader',
          uniforms: {
            distortionIntensity: {value: new Vector2()},
            ...
          },
          vertexShader,
          fragmentShader,
        });
      }
    
      update() {
        const ratio = window.innerWidth / window.innerHeight;
        this.uniforms[DistortionShaderUniforms.DISTORTION].value.set(
          this.distortionIntensity * ratio,
          this.distortionIntensity * ratio,
        );
      }
    
      setDistortion(value: number) {
        gsap.to(this, {
          distortionIntensity: value,
          duration: 1,
          ease: 'power2.out',
          onUpdate: () => this.update(),
        });
      }
    }

    Then the distortion is applied through our custom shader:

    // fragment.ts
    export const fragmentShader = /* glsl */ `
      uniform sampler2D tDiffuse;
      uniform vec2 distortion;
      uniform float vignetteOffset;
      uniform float vignetteDarkness;
    
      varying vec2 vUv;
      
      // convert uv range from 0 -> 1 to -1 -> 1
      vec2 getShiftedUv(vec2 uv) {
        return 2. * (uv - .5);
      }
      
      // convert uv range from -1 -> 1 to 0 -> 1
      vec2 getUnshiftedUv(vec2 shiftedUv) {
        return shiftedUv * 0.5 + 0.5;
      }
    
    
      void main() {
        vec2 shiftedUv = getShiftedUv(vUv);
        float distanceToCenter = length(shiftedUv);
        
        // Lens distortion effect
        shiftedUv *= (0.88 + distortion * dot(shiftedUv, shiftedUv));
        vec2 transformedUv = getUnshiftedUv(shiftedUv);
        
        // Vignette effect
        float vignetteIntensity = smoothstep(0.8, vignetteOffset * 0.799, (vignetteDarkness + vignetteOffset) * distanceToCenter);
        
        // Sample render texture and output fragment
        vec3 color = texture2D( tDiffuse, transformedUv ).rgb * vignetteIntensity;
        gl_FragColor = vec4(color, 1.);
      }
    `;

    We also added a vignette effect to our post-processing shader to darken the corners of the viewport, focusing the
    user’s attention toward the center of the screen.

    In order to make our home view as smooth as possible, we also spent a fair amount of time crafting the
    micro-interactions and transitions of the grid.

    Ambient mouse offset

    When the user moves their cursor around the grid, the grid moves slightly in the opposite direction, creating a very
    subtle ambient floating effect. This was simply achieved by calculating the mouse position on the grid and moving the
    grid mesh accordingly:

    getAmbientCursorOffset() {
      // Get the pointer coordinates in UV space ( 0 - 1 ) range
      const uv = this.navigation.pointerUv;
      // Clone so we don't mutate the stored pointer UV
      const offset = uv.clone().subScalar(0.5).multiplyScalar(0.2);
      return offset;
    }
    
    update() {
      ...
      // Apply cursor offset to grid position
      const cursorOffset = this.getAmbientCursorOffset();
      this.mesh.position.x += cursorOffset.x;
      this.mesh.position.y += cursorOffset.y;
    }

    Drag Zoom

    When the grid is dragged around, a zoom-out effect occurs and the camera seems to pan away from the grid. We created
    this effect by detecting when the user starts and stops dragging their cursor, then using that to trigger a GSAP
    animation with a custom ease for extra control.

    onPressStart = () => {
      this.animateCameraZ(0.5, 1);
    }
    
    onPressEnd = (isDrag: boolean) => {
      if(isDrag) {
        this.animateCameraZ(0, 1);
      }
    }
    
    animateCameraZ(distance: number, duration: number) {
      gsap.to(this.camera.position, {
        z: distance,
        duration,
        ease: CustomEase.create('cameraZoom', '.23,1,0.32,1'),
      });
    }

    Drag Movement

    Last but not least, when the user drags across the grid and releases their cursor, the grid slides through with a
    certain amount of inertia.

    drag(offset: Vector2) {
      this.dragAction = offset;
    
      // Gradually increase velocity with drag time and distance
      this.velocity.lerp(offset, 0.8);
    }
    
    // Every frame
    update() {
      // positionOffset is later used to move the grid mesh
      if(this.isDragAction) {
        // if the user is dragging their cursor, add the drag value to offset
        this.positionOffset.add(this.dragAction.clone());
      } else {
        // if the user is not dragging, add the velocity to the offset
        this.positionOffset.add(this.velocity);
      }
    
      this.dragAction.set(0, 0);
      // Attenuate velocity with time
      this.velocity.lerp(new Vector2(), 0.1);
    }

    Face Particles

    The second major component we want to highlight is our employee face carousel, which presents team members through a dynamic 3D particle system. Built with React Three Fiber’s BufferGeometry and custom GLSL shaders, this implementation leverages custom shader materials for lightweight performance and flexibility, allowing us to generate entire 3D face representations using only a 2D colour photograph and its corresponding depth map—no 3D models required.

    Core Concept: Depth-Driven Particle Generation

    The foundation of our face particle system lies in converting 2D imagery into volumetric 3D representations. We’ve
    kept things efficient, with each face using only two optimized 256×256 WebP images (under 15KB each).

    To capture the images, each member of the Phantom team was 3D scanned using RealityScan from Unreal Engine on iPhone, creating a 3D model of their face.

    These scans were cleaned up and then rendered from Cinema4D with a position and colour pass.

    The position pass was converted into a greyscale depth map in Photoshop, and this—along with the colour pass—was
    retouched where needed, cropped, and then exported from Photoshop to share with the dev team.

    Each face is constructed from approximately 78,400 particles (280×280 grid), where each particle’s position and
    appearance is determined by sampling data from our two source textures.

    /* generate positions attributes array */
    const POINT_AMOUNT = 280;
    
    const points = useMemo(() => {
      const length = POINT_AMOUNT * POINT_AMOUNT;
      const vPositions = new Float32Array(length * 3);
      const vIndex = new Float32Array(length * 2);
      const vRandom = new Float32Array(length * 4);
    
      for (let i = 0; i < length; i++) {
          const i2 = i * 2;
          vIndex[i2] = (i % POINT_AMOUNT) / POINT_AMOUNT;
          vIndex[i2 + 1] = i / POINT_AMOUNT / POINT_AMOUNT;
    
          const i3 = i * 3;
          const theta = Math.random() * 360;
          const phi = Math.random() * 360;
          vPositions[i3] = 1 * Math.sin(theta) * Math.cos(phi);
          vPositions[i3 + 1] = 1 * Math.sin(theta) * Math.sin(phi);
          vPositions[i3 + 2] = 1 * Math.cos(theta);
    
          const i4 = i * 4;
          vRandom.set(
            Array(4)
              .fill(0)
              .map(() => Math.random()),
            i4,
          );
      }
    
      return {vPositions, vRandom, vIndex};
    }, []);
    // React Three Fiber component structure 
    const FaceParticleSystem = ({ particlesData, currentDataIndex }) => {
      return (
        <points ref={pointsRef} position={pointsPosition}>
          <bufferGeometry>
            <bufferAttribute attach="attributes-vIndex" 
                 args={[points.vIndex, 2]} />
            <bufferAttribute attach="attributes-position"
                 args={[points.vPositions, 3]} />
            <bufferAttribute attach="attributes-vRandom"
                 args={[points.vRandom, 4]} />
          </bufferGeometry>
          
          <shaderMaterial
            blending={NormalBlending}
            transparent={true}
            fragmentShader={faceFrag}
            vertexShader={faceVert}
            uniforms={uniforms}
          />
        </points>
      );
    };

    The depth map provides normalized values (0–1) that directly translate to Z-depth positioning. A value of 0 represents
    the furthest point (background), while 1 represents the closest point (typically the nose tip).

    /* vertex shader */ 
    
    // sample depth and color data for each particle
    vec3 depthTexture1 = texture2D(depthMap1, vIndex.xy).xyz;
    
    // convert depth to Z-position
    float zDepth = (1. - depthValue.z);
    pos.z = (zDepth * 2.0 - 1.0) * zScale;

    Dynamic Particle Scaling Through Colour Analysis

    One of the key methods that brings our faces to life is utilizing colour data to influence particle scale. In our
    vertex shader, rather than using uniform particle sizes, we analyze the colour density of each pixel so that brighter,
    more colourful areas of the face (like eyes, lips, or well-lit cheeks) generate larger, more prominent particles,
    while darker areas (shadows, hair) create smaller, subtler particles. The result is a more organic, lifelike
    representation that emphasizes facial features naturally.

    /* vertex shader */ 
    
    vec3 colorTexture1 = texture2D(colorMap1, vIndex.xy).xyz;
    
    // calculate color density
    float density = (mainColorTexture.x + mainColorTexture.y + mainColorTexture.z) / 3.;
    
    // map density to particle scale
    float pScale = mix(pScaleMin, pScaleMax, density);

    The calibration below demonstrates the influence of colour (contrast, brightness, etc.) on the final 3D particle formation.

    Ambient Noise Animation

    To prevent static appearances and maintain visual interest, we apply continuous noise-based animation to all
    particles. This ambient animation system uses curl noise to create subtle, flowing movement across the entire
    face structure.

    /* vertex shader */ 
    
    // primary curl noise for overall movement 
    pos += curlNoise(pos * curlFreq1 + time) * noiseScale * 0.1;
    // animation updates in React Three Fiber
    
    useFrame((state, delta) => {
      if (!materialRef.current) return;
      
      materialRef.current.uniforms.time.value = state.clock.elapsedTime * NOISE_SPEED;
      
      // update rotation based on mouse interaction
      easing.damp(pointsRef.current.rotation, 'y', state.mouse.x * 0.12 * Math.PI, 0.25, delta);
      easing.damp(pointsRef.current.rotation, 'x', -state.pointer.y * 0.05 * Math.PI, 0.25, delta);
    
    });

    Face Transition Animation

    When transitioning between different team members, we combine timeline-based interpolation with visual effects written
    in shader materials.

    GSAP-Driven Lerp Method

    The transition foundation uses GSAP timelines to animate multiple shader parameters simultaneously:

    timelineRef.current = gsap
      .timeline()
      .fromTo(uniforms.transition, {value: 0}, {value: 1.3, duration: 1.6})
      .to(uniforms.posZ, {value: particlesParams.offset_z, duration: 1.6}, 0)
      .to(uniforms.zScale, {value: particlesParams.face_scale_z, duration: 1.6}, 0);

    And the shader handles the visual blending between two face states:

    /* vertex shader */ 
    
    // smooth transition curve
    float speed = clamp(transition * mix(0.8, .9, transition), 0., 1.0); 
    speed = smoothstep(0.0, 1.0, speed); 
    
    // blend textures 
    vec3 mainColorTexture = mix(colorTexture1, colorTexture2, speed); 
    vec3 depthValue = mix(depthTexture1, depthTexture2, speed);

    To add visual interest during transitions, we further inject additional noise that’s strongest at the midpoint of the
    transition. This creates a subtle “disturbance” effect where particles temporarily deviate from their target
    positions, making transitions feel more dynamic and organic.

    /* vertex shader */ 
    
    // secondary noise movement applied for transition
    float randomZ = vRandom.y + cnoise(pos * curlFreq2 + t2) * noiseScale2;
    
    float smoothTransition = abs(sin(speed * PI)); 
    pos.x += nxScale * randomZ * 0.1 * smoothTransition; 
    pos.y += nyScale * randomZ * 0.1 * smoothTransition; 
    pos.z += nzScale * randomZ * 0.1 * smoothTransition;

    Custom Depth of Field Effect

    To enhance the three-dimensional perception, we implemented a custom depth of field effect directly in our shader
    material. It calculates view-space distance for each particle and modulates both opacity and size based on proximity
    to a configurable focus plane.

    /* vertex shader - calculate view distance */
    
    vec4 viewPosition = viewMatrix * modelPosition;
    vDistance = abs(focus + viewPosition.z); 
    
    // apply distance to point size for blur effect 
    gl_PointSize = pointSize * pScale * vDistance * blur * totalScale;
    /* fragment shader - calculate distance-based alpha for DOF */
    
    
    float alpha = (1.04 - clamp(vDistance * 1.5, 0.0, 1.0));
    gl_FragColor = vec4(color, alpha);

    Challenges: Unifying Face Scales

    One of the challenges we faced was achieving visual consistency across different team members’ photos. Each photograph
    was captured under slightly different conditions—varying lighting, camera distances, and facial proportions.
    Therefore, we went through each face to calibrate multiple scaling factors:

    • Depth scale calibration to ensure no nose protrudes too aggressively
    • Colour density balancing to maintain consistent particle size relationships
    • Focus plane optimization to prevent excessive blur on any individual face
    // individual face parameters requiring manual tuning 
    
    particle_params: { 
      offset_z: 0,           // overall Z-position
      z_depth_scale: 0,      // depth map scaling factor
      face_size: 0,          // overall face scale 
    }

    Final Words

    Our face particle system demonstrates how simple yet careful technical implementation can create fun visual
    experiences from minimal assets. By combining lightweight WebP textures, custom shader materials, and animations,
    we’ve created a system that transforms simple 2D portraits into interactive 3D figures.

    Check out the full site.

    Curious about what we’re up to in the Phantom studio? Or have a project you think we’d be interested in? Get in touch.



    Source link

  • How to Create Interactive, Droplet-like Metaballs with Three.js and GLSL

    How to Create Interactive, Droplet-like Metaballs with Three.js and GLSL


    Fragment shaders allow us to create smooth, organic visuals that are difficult to achieve with standard polygon-based rendering in WebGL. One powerful example is the metaball effect, where multiple objects blend and deform seamlessly. This can be implemented using a technique called ray marching, directly within a fragment shader.

    In this tutorial, we’ll walk you through how to create droplet-like, bubble spheres using Three.js and GLSL—an effect that responds interactively to your mouse movements. But first, take a look at the demo video below to see the final result in action.

    Overview

    Let’s take a look at the overall structure of the demo and review the steps we’ll follow to build it.

    1. Setting Up the Fullscreen Plane

    We create a fullscreen plane that covers the entire viewport.

    2. Rendering Spheres with Ray Marching

    We’ll render spheres using ray marching in the fragment shader.

    3. From Spheres to Metaballs

    We blend multiple spheres smoothly to create a metaball effect.

    4. Adding Noise for a Droplet-like Appearance

    By adding noise to the surface, we create a realistic droplet-like texture.

    5. Simulating Stretchy Droplets with Mouse Movement

    We arrange spheres along the mouse trail to create a stretchy, elastic motion.

    Let’s get started!

    1. Setup

    We render a single fullscreen plane that covers the entire viewport.

    // Output.ts
    
    const planeGeometry = new THREE.PlaneGeometry(2.0, 2.0);
    const planeMaterial = new THREE.RawShaderMaterial({
        vertexShader: base_vert,
        fragmentShader: output_frag,
        uniforms: this.uniforms,
    });
    const plane = new THREE.Mesh(planeGeometry, planeMaterial);
    this.scene.add(plane);

    We define a uniform variable named uResolution to pass the canvas size to the shader, where Common.width and Common.height represent the width and height of the canvas in pixels. This uniform will be used to normalize coordinates based on the screen resolution.

    // Output.ts
    
    this.uniforms = {
        uResolution: {
            value: new THREE.Vector2(Common.width, Common.height),
        },
    };

    When using RawShaderMaterial, you need to provide your own shaders. Therefore, we prepare both a vertex shader and a fragment shader.

    // base.vert
    
    attribute vec3 position;
    varying vec2 vTexCoord;
    
    void main() {
        vTexCoord = position.xy * 0.5 + 0.5;
        gl_Position = vec4(position, 1.0);
    }

    The vertex shader receives the position attribute.

    Since the xy components of position originally range from -1 to 1, we convert them to a range from 0 to 1 and output them as a texture coordinate called vTexCoord. This is passed to the fragment shader and used to calculate colors or effects based on the position on the screen.

    // output.frag
    
    precision mediump float;
    
    uniform vec2 uResolution;
    varying vec2 vTexCoord;
    
    void main() {
        gl_FragColor = vec4(vTexCoord, 1.0, 1.0);
    }

    The fragment shader receives the interpolated texture coordinate vTexCoord and the uniform variable uResolution representing the canvas size. Here, we temporarily use vTexCoord to output color for testing.

    Now we’re all set to start drawing in the fragment shader!
    Next, let’s move on to actually rendering the spheres.

    2. Ray Marching

    2.1. What is Ray Marching?

    As mentioned at the beginning, we will use a method called ray marching to render spheres. Ray marching proceeds in the following steps:

    1. Define the scene
    2. Set the camera (viewing) direction
    3. Cast rays
    4. Evaluate the distance from the current ray position to the nearest object in the scene.
    5. Move the ray forward by that distance
    6. Check for a hit

    For example, let’s consider a scene with three spheres. These spheres are expressed using SDFs (Signed Distance Functions), which will be explained in detail later.

    First, we determine the camera direction. Once the direction is set, we cast a ray in that direction.

    Next, we evaluate the distance to all objects from the current ray position, and take the minimum of these distances.

    After obtaining this distance, we move the ray forward by that amount.

    We repeat this process until either the ray gets close enough to an object—closer than a small threshold—or the maximum number of steps is reached.
    If the distance is below the threshold, we consider it a “hit” and shade the corresponding pixel.

    For example, in the figure above, a hit is detected on the 8th ray marching step.

    If the maximum number of steps were set to 7, the 7th step would not have hit anything yet. But since the limit is reached, the loop ends and no hit is detected.

    Therefore, nothing would be rendered at that position. If parts of an object appear to be missing in the final image, it may be due to an insufficient number of steps. However, be aware that increasing the step count will also increase the computational load.

    To better understand this process, try running this demo to see how it works in practice.

    2.2. Signed Distance Function

    In the previous section, we briefly mentioned the SDF (Signed Distance Function).
    Let’s take a moment to understand what it is.

    An SDF is a function that returns the distance from a point to a particular shape. The key characteristic is that it returns a positive or negative value depending on whether the point is outside or inside the shape.

    For example, here is the distance function for a sphere:

    float sdSphere(vec3 p, float s)
    {
        return length(p) - s;
    }

    Here, p is a vector representing the position relative to the origin, and s is the radius of the sphere.

    This function calculates how far the point p is from the surface of a sphere centered at the origin with radius s.

    • If the result is positive, the point is outside the sphere.
    • If negative, it is inside the sphere.
    • If the result is zero, the point is on the surface—this is considered a hit point (in practice, we detect a hit when the distance is less than a small threshold).

    In this demo, we use a sphere’s distance function, but many other shapes have their own distance functions as well.
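
    For example, a box also has a simple distance function. The version below follows the well-known formulation from Inigo Quilez's catalogue; it isn't used in this demo and is shown only for illustration:

    float sdBox(vec3 p, vec3 b)
    {
        vec3 q = abs(p) - b; // b holds the box's half-extents along each axis
        return length(max(q, 0.0)) + min(max(q.x, max(q.y, q.z)), 0.0);
    }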

    If you’re interested, here’s a great article on distance functions.

    2.3. Rendering Spheres

    Let’s try rendering spheres.
    In this demo, we’ll render two slightly overlapping spheres.

    // output.frag
    
    precision mediump float;
    
    const float EPS = 1e-4;
    const int ITR = 16;
    
    uniform vec2 uResolution;
    
    varying vec2 vTexCoord;
    
    // Camera Params
    vec3 origin = vec3(0.0, 0.0, 1.0);
    vec3 lookAt = vec3(0.0, 0.0, 0.0);
    vec3 cDir = normalize(lookAt - origin);
    vec3 cUp = vec3(0.0, 1.0, 0.0);
    vec3 cSide = cross(cDir, cUp);
    
    vec3 translate(vec3 p, vec3 t) {
        return p - t;
    }
    
    float sdSphere(vec3 p, float s)
    {
        return length(p) - s;
    }
    
    float map(vec3 p) {
        float radius = 0.5;
        float d = 1e5;
    
        float sphere0 = sdSphere(translate(p, vec3(0.4, 0.0, 0.0)), radius);
        float sphere1 = sdSphere(translate(p, vec3(-0.4, 0.0, 0.0)), radius);
        d = min(sphere0, sphere1);
    
        return d;
    }
    
    void main() {
        vec2 p = (gl_FragCoord.xy * 2.0 - uResolution) / min(uResolution.x, uResolution.y);
    
        // Orthographic Camera
        vec3 ray = origin + cSide * p.x + cUp * p.y;
        vec3 rayDirection = cDir;
    
        float dist = 0.0;
    
        for (int i = 0; i < ITR; ++i) {
            dist = map(ray);
            ray += rayDirection * dist;
            if (dist < EPS) break;
        }
    
        vec3 color = vec3(0.0);
    
        if (dist < EPS) {
            color = vec3(1.0, 1.0, 1.0);
        }
    
        gl_FragColor = vec4(color, 1.0);
    }

    First, we normalize the screen coordinates:

    vec2 p = (gl_FragCoord.xy * 2.0 - uResolution) / min(uResolution.x, uResolution.y);
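
    This maps the shorter side of the canvas to the range -1.0 to 1.0 while preserving the aspect ratio. As a quick check (canvas size chosen purely for illustration):

    // For a 1280 x 720 canvas, i.e. uResolution = vec2(1280.0, 720.0):
    // gl_FragCoord.xy = ( 640.0, 360.0)  =>  p = ( 0.0,   0.0)   (center of the screen)
    // gl_FragCoord.xy = (1280.0, 720.0)  =>  p = ( 1.78,  1.0)   (top-right; x exceeds 1.0 on a wide canvas)
    // gl_FragCoord.xy = (   0.0,   0.0)  =>  p = (-1.78, -1.0)   (bottom-left)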

    Next, we set up the camera. This demo uses an orthographic camera (parallel projection):

    // Camera Params
    vec3 origin = vec3(0.0, 0.0, 1.0);
    vec3 lookAt = vec3(0.0, 0.0, 0.0);
    vec3 cDir = normalize(lookAt - origin);
    vec3 cUp = vec3(0.0, 1.0, 0.0);
    vec3 cSide = cross(cDir, cUp);
    
    // Orthographic Camera
    vec3 ray = origin + cSide * p.x + cUp * p.y;
    vec3 rayDirection = cDir;
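
    As an aside, switching to a perspective camera is a small change: the ray origin stays fixed and the ray direction is tilted per pixel instead. Here is a minimal sketch that could replace the two lines above; the field-of-view value is an arbitrary choice for illustration:

    // Perspective Camera (not used in this demo)
    float fov = 60.0;
    float focalLength = 1.0 / tan(radians(fov) * 0.5);
    vec3 ray = origin;
    vec3 rayDirection = normalize(cSide * p.x + cUp * p.y + cDir * focalLength);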

    After that, inside the map function, two spheres are defined and their distances calculated using sdSphere. The variable d is initially set to a large value and updated with the min function to keep track of the shortest distance to the surface.

    float map(vec3 p) {
        float radius = 0.5;
        float d = 1e5;
    
        float sphere0 = sdSphere(translate(p, vec3(0.4, 0.0, 0.0)), radius);
        float sphere1 = sdSphere(translate(p, vec3(-0.4, 0.0, 0.0)), radius);
        d = min(sphere0, sphere1);
    
        return d;
    }

    Then we run a ray marching loop, which updates the ray position by computing the distance to the nearest object at each step. The loop ends either after a fixed number of iterations or when the distance becomes smaller than a threshold (dist < EPS):

    for ( int i = 0; i < ITR; ++i ) {
    	dist = map(ray);
    	ray += rayDirection * dist;
    	if ( dist < EPS ) break;
    }

    Finally, we determine the output color. We use black as the default color (background), and render a white pixel only if a hit is detected:

    vec3 color = vec3(0.0);
    
    if ( dist < EPS ) {
    	color = vec3(1.0);
    }

    We’ve successfully rendered two overlapping spheres using ray marching!

    2.4. Normals

    Although we successfully rendered spheres in the previous section, the scene still looks flat and lacks depth. This is because we haven’t applied any shading or visual effects that respond to surface orientation.

    While we won’t implement full shading in this demo, we’ll still compute surface normals, as they’re essential for adding surface detail and other visual effects.

    Let’s look at the code first:

    vec3 generateNormal(vec3 p) {
        return normalize(vec3(
                map(p + vec3(EPS, 0.0, 0.0)) - map(p + vec3(-EPS, 0.0, 0.0)),
                map(p + vec3(0.0, EPS, 0.0)) - map(p + vec3(0.0, -EPS, 0.0)),
                map(p + vec3(0.0, 0.0, EPS)) - map(p + vec3(0.0, 0.0, -EPS))
            ));
    }

    At first glance, this may seem hard to understand. Put simply, this computes the gradient of the distance function, which corresponds to the normal vector.

    If you’ve studied vector calculus, this might be easy to understand. For many others, though, it may seem a bit difficult.

    That’s totally fine—a full understanding of the details isn’t necessary to use the result. If you just want to move on, feel free to skip ahead to the section where we debug normals by visualizing them with color.

    However, for those who are interested in how it works, we’ll now walk through the explanation in more detail.

    The gradient of a scalar function 𝑓(𝑥,𝑦,𝑧) is simply a vector composed of its partial derivatives. It points in the direction of the greatest rate of increase of the function:

    ∇𝑓 = ( ∂𝑓/∂𝑥, ∂𝑓/∂𝑦, ∂𝑓/∂𝑧 )

    To compute this gradient numerically, we can use the central difference method. For example, for the 𝑥 component:

    ∂𝑓/∂𝑥 ≈ ( 𝑓(𝑥+𝜀, 𝑦, 𝑧) − 𝑓(𝑥−𝜀, 𝑦, 𝑧) ) / 2𝜀

    We apply the same idea for the 𝑦 and 𝑧 components.
    Note: The factor 2𝜀 is omitted in the code since we normalize the result using normalize().

    Next, let us consider a signed distance function 𝑓(𝑥,𝑦,𝑧), which returns the shortest distance from any point in space to the surface of an object. By definition, 𝑓(𝑥,𝑦,𝑧)=0 on the surface of the object.

    Assume that 𝑓 is smooth (i.e., differentiable) in the region of interest. When the point (𝑥,𝑦,𝑧) undergoes a small displacement Δ𝒓 = (Δ𝑥, Δ𝑦, Δ𝑧), the change in the function value Δ𝑓 can be approximated using the first-order Taylor expansion:

    Δ𝑓 ≈ ∇𝑓 ⋅ Δ𝒓

    Here, ∇𝑓 is the gradient vector of 𝑓, and Δ𝒓 is an arbitrary small displacement vector.

    Now, since 𝑓 = 0 on the surface and remains constant as we move along the surface (i.e., tangentially), the function value does not change, so Δ𝑓 = 0. Therefore:

    ∇𝑓 ⋅ Δ𝒓 = 0

    This means that the gradient vector is perpendicular to any tangent vector Δ𝒓 on the surface. In other words, the gradient vector ∇𝑓 points in the direction of the surface normal.

    Thus, the gradient of a signed distance function gives the surface normal direction at any point on the surface.
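
    For completeness, there is also a four-sample variant of this central-difference approach, often called the tetrahedron technique and popularized by Inigo Quilez. It is not used in this demo, but a sketch using the same map and EPS as above looks like this, in case you want to save a couple of distance-field evaluations:

    vec3 generateNormalTetra(vec3 p) {
        const vec2 k = vec2(1.0, -1.0);
        return normalize(
                k.xyy * map(p + k.xyy * EPS) +
                k.yyx * map(p + k.yyx * EPS) +
                k.yxy * map(p + k.yxy * EPS) +
                k.xxx * map(p + k.xxx * EPS)
            );
    }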

    2.5. Visualizing Normals with Color

    To verify that the surface normals are being calculated correctly, we can visualize them using color.

    if ( dist < EPS ) {
    	vec3 normal = generateNormal(ray);
    	color = normal;
    }

    Note that within the if block, ray refers to a point on the surface of the object. So by passing ray to generateNormal, we can obtain the surface normal at the point of intersection.

    When we render the scene, you’ll notice that the surface of the sphere is shaded in red, green, and blue based on the orientation of the normal vectors. This is because we’re mapping the 𝑥, 𝑦, and 𝑧 components of the normal vector to the RGB color channels respectively.

    This is a common and intuitive way to debug normal vectors visually, helping us ensure they are computed correctly.
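
    One detail to keep in mind: a normal's components range from -1 to 1, while output colors are clamped to 0 to 1, so faces pointing in negative directions appear black with the code above. If you want to see those sides as well, a common variant is to remap the normal before displaying it:

    if ( dist < EPS ) {
    	vec3 normal = generateNormal(ray);
    	color = normal * 0.5 + 0.5; // remap from [-1, 1] to [0, 1] for debugging
    }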

    3. Blending Spheres with smoothMin

    When combining two spheres with the standard min() function, a hard edge forms where the shapes intersect, resulting in an unnatural boundary. To avoid this, we can use a blending function called smoothMin, which softens the transition by merging the distance values smoothly.

    // added
    float smoothMin(float d1, float d2, float k) {
        float h = exp(-k * d1) + exp(-k * d2);
        return -log(h) / k;
    }
    
    float map(vec3 p) {
        float radius = 0.5;
        float k = 7.; // added: smoothing factor for metaball effect
        float d = 1e5;
    
        float sphere0 = sdSphere(translate(p, vec3(.4, 0.0, 0.0)), radius);
        float sphere1 = sdSphere(translate(p, vec3(-.4, 0.0, 0.0)), radius);
        d = smoothMin(d, sphere0, k); // modified: blend with smoothing
        d = smoothMin(d, sphere1, k); // modified
    
        return d;
    }

    This function creates a smooth, continuous connection between shapes—producing a metaball-like effect where the forms appear to merge organically.

    The parameter k controls the smoothness of the blend. A higher k value results in a sharper transition (closer to min()), while a lower k produces smoother, more gradual merging.
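
    For reference, the second article linked below also describes a polynomial version of the same idea that avoids exp() and log(). It isn't used in this demo; note that in this form a larger k gives a wider, smoother blend, which is the opposite convention of the exponential version above:

    float smoothMinPolynomial(float d1, float d2, float k) {
        float h = clamp(0.5 + 0.5 * (d2 - d1) / k, 0.0, 1.0);
        return mix(d2, d1, h) - k * h * (1.0 - h);
    }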

    For more details, please refer to the following two articles:

    1. wgld.org | GLSL: オブジェクト同士を補間して結合する (Interpolating and blending objects together)
    2. Inigo Quilez :: computer graphics, mathematics, shaders, fractals, demoscene and more

    4. Adding Noise for a Droplet-like Appearance

    So far, we’ve covered how to calculate normals and how to smoothly blend objects.

    Next, let’s tune the surface appearance to make things feel more realistic.

    In this demo, we're aiming to create droplet-like metaballs. So how can we achieve that kind of look? The key idea is to use noise, sampled along the reflection vector, so the shading makes the surface appear distorted.

    Let’s jump right into the code:

    // output.frag
    
    uniform float uTime;
    
    // ...
    
    float rnd3D(vec3 p) {
        return fract(sin(dot(p, vec3(12.9898, 78.233, 37.719))) * 43758.5453123);
    }
    
    float noise3D(vec3 p) {
        vec3 i = floor(p);
        vec3 f = fract(p);
    
        float a000 = rnd3D(i); // (0,0,0)
        float a100 = rnd3D(i + vec3(1.0, 0.0, 0.0)); // (1,0,0)
        float a010 = rnd3D(i + vec3(0.0, 1.0, 0.0)); // (0,1,0)
        float a110 = rnd3D(i + vec3(1.0, 1.0, 0.0)); // (1,1,0)
        float a001 = rnd3D(i + vec3(0.0, 0.0, 1.0)); // (0,0,1)
        float a101 = rnd3D(i + vec3(1.0, 0.0, 1.0)); // (1,0,1)
        float a011 = rnd3D(i + vec3(0.0, 1.0, 1.0)); // (0,1,1)
        float a111 = rnd3D(i + vec3(1.0, 1.0, 1.0)); // (1,1,1)
    
        vec3 u = f * f * (3.0 - 2.0 * f);
        // vec3 u = f*f*f*(f*(f*6.0-15.0)+10.0);
    
        float k0 = a000;
        float k1 = a100 - a000;
        float k2 = a010 - a000;
        float k3 = a001 - a000;
        float k4 = a000 - a100 - a010 + a110;
        float k5 = a000 - a010 - a001 + a011;
        float k6 = a000 - a100 - a001 + a101;
        float k7 = -a000 + a100 + a010 - a110 + a001 - a101 - a011 + a111;
    
        return k0 + k1 * u.x + k2 * u.y + k3 * u.z + k4 * u.x * u.y + k5 * u.y * u.z + k6 * u.z * u.x + k7 * u.x * u.y * u.z;
    }
    
    vec3 dropletColor(vec3 normal, vec3 rayDir) {
        vec3 reflectDir = reflect(rayDir, normal);
    
        float noisePosTime = noise3D(reflectDir * 2.0 + uTime);
        float noiseNegTime = noise3D(reflectDir * 2.0 - uTime);
    
        vec3 _color0 = vec3(0.1765, 0.1255, 0.2275) * noisePosTime;
        vec3 _color1 = vec3(0.4118, 0.4118, 0.4157) * noiseNegTime;
    
        float intensity = 2.3;
        vec3 color = (_color0 + _color1) * intensity;
    
        return color;
    }
    
    // ...
    
    void main() {
    	// ...
    
    	if ( dist < EPS ) {
    		vec3 normal = generateNormal(ray);
    		color = dropletColor(normal, rayDirection);
    	}
    	
    	gl_FragColor = vec4(color, 1.0);
    }

    To create the droplet-like texture, we're using value noise. If you're unfamiliar with this technique, the short summary below covers the parts we need.

    3D value noise is generated by interpolating random values placed at the eight vertices of a cube. The process involves three stages of linear interpolation:

    1. Bottom face interpolation: First, we interpolate between the four corner values on the bottom face of the cube
    2. Top face interpolation: Similarly, we interpolate between the four corner values on the top face
    3. Final z-axis interpolation: Finally, we interpolate between the results from the bottom and top faces along the z-axis

    This triple interpolation process is called trilinear interpolation.

    The following code demonstrates the trilinear interpolation process for 3D value noise:

    float n = mix(
    	mix( mix( a000, a100, u.x ), mix( a010, a110, u.x ), u.y ),
    	mix( mix( a001, a101, u.x ), mix( a011, a111, u.x ), u.y ),
    	u.z
    );

    The nested mix() functions above can be converted into an explicit polynomial form for better performance:

    float k0 = a000;
    float k1 = a100 - a000;
    float k2 = a010 - a000;
    float k3 = a001 - a000;
    float k4 = a000 - a100 - a010 + a110;
    float k5 = a000 - a010 - a001 + a011;
    float k6 = a000 - a100 - a001 + a101;
    float k7 = -a000 + a100 + a010 - a110 + a001 - a101 - a011 + a111;
    
    float n = k0 + k1 * u.x + k2 * u.y + k3 *u.z + k4 * u.x * u.y + k5 * u.y * u.z + k6 * u.z * u.x + k7 * u.x * u.y * u.z;
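
    As a quick sanity check that the two forms agree (corner values of u chosen for illustration):

    // u = vec3(0.0)  =>  n = k0 = a000                         (the corner value itself)
    // u = vec3(1.0)  =>  n = k0 + k1 + ... + k7 = a111         (all other corner terms cancel)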

    By sampling this noise using the reflection vector as coordinates, we can create a realistic water droplet-like texture. Note that we are using the surface normal obtained earlier to compute this reflection vector. To add time-based variation, we generate noise at positions offset by uTime:

    vec3 reflectDir = reflect(rayDir, normal);
    
    float noisePosTime = noise3D(reflectDir * 2.0 + uTime);
    float noiseNegTime = noise3D(reflectDir * 2.0 - uTime);

    Finally, we blend two noise-influenced colors and scale the result:

    vec3 _color0 = vec3(0.1765, 0.1255, 0.2275) * noisePosTime;
    vec3 _color1 = vec3(0.4118, 0.4118, 0.4157) * noiseNegTime;
    
    float intensity = 2.3;
    vec3 color = (_color0 + _color1) * intensity;

    It’s starting to look quite like a water droplet! However, it still appears a bit murky.
    To improve this, let’s add the following post-processing step:

    // output.frag
    
    if ( dist < EPS ) {
    	vec3 normal = generateNormal(ray);
    	color = dropletColor(normal, rayDirection);
    }
    
    vec3 finalColor = pow(color, vec3(7.0)); // added
    
    gl_FragColor = vec4(finalColor, 1.0); // modified

    Using pow(), darker regions are suppressed, allowing the highlights to pop and creating a more glass-like, translucent surface.
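
    To get a feel for why this works, here are two representative values (chosen purely for illustration):

    // pow(0.9, 7.0) ≈ 0.48    (bright highlights keep most of their energy)
    // pow(0.5, 7.0) ≈ 0.008   (mid-tones are crushed toward black)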

    5. Simulating Stretchy Droplets with Mouse Movement

    Finally, let’s make the droplet stretch and follow the mouse movement, giving it a soft and elastic feel.

    We’ll achieve this by placing multiple spheres along the mouse trail.

    // Output.ts
    
    constructor() {
    	// ...
    	this.trailLength = 15;
    	this.pointerTrail = Array.from({ length: this.trailLength }, () => new THREE.Vector2(0, 0));
    	
    	this.uniforms = {
    	    uTime: { value: Common.time },
    	    uResolution: {
    	        value: new THREE.Vector2(Common.width, Common.height),
    	    },
    	    uPointerTrail: { value: this.pointerTrail },
    	};
    }
    
    // ...
    
    /**
     * # rAF update
     */
    update() {
      this.updatePointerTrail();
      this.render();
    }
    
    /**
     * # Update the pointer trail
     */
    updatePointerTrail() {
      for (let i = this.trailLength - 1; i > 0; i--) {
         this.pointerTrail[i].copy(this.pointerTrail[i - 1]);
      }
      this.pointerTrail[0].copy(Pointer.coords);
    }

    On the shader side, the pointer trail arrives as a uniform array, and map() places a sphere at each trail position:

    // output.frag
    
    const int TRAIL_LENGTH = 15; // added
    uniform vec2 uPointerTrail[TRAIL_LENGTH]; // added
    
    // ...
    
    // modified
    float map(vec3 p) {
        float baseRadius = 8e-3;
        float radius = baseRadius * float(TRAIL_LENGTH);
        float k = 7.;
        float d = 1e5;
    
        for (int i = 0; i < TRAIL_LENGTH; i++) {
            float fi = float(i);
            vec2 pointerTrail = uPointerTrail[i] * uResolution / min(uResolution.x, uResolution.y);
    
            float sphere = sdSphere(
                    translate(p, vec3(pointerTrail, .0)),
                    radius - baseRadius * fi
                );
    
            d = smoothMin(d, sphere, k);
        }
    
        float sphere = sdSphere(translate(p, vec3(1.0, -0.25, 0.0)), 0.55);
        d = smoothMin(d, sphere, k);
    
        return d;
    }
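
    Two details of this map() are worth calling out, summarized here as comments on the code above:

    // radius - baseRadius * fi: each older trail point gets a slightly smaller sphere,
    // so the chain tapers off and reads as a stretchy, elastic tail.
    // The extra sdSphere at vec3(1.0, -0.25, 0.0) adds a larger static blob that the
    // moving trail merges into via smoothMin.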

    Conclusion

    In this tutorial, we explored how to create a dynamic, droplet-like effect using ray marching and shading techniques. Here’s what we covered:

    1. Used ray marching to render spheres in 3D space.
    2. Applied smoothMin to blend the spheres into seamless metaballs.
    3. Added surface noise to give the spheres a more organic appearance.
    4. Simulated stretchy motion by arranging spheres along the mouse trail.

    By combining these techniques, we achieved a soft, fluid visual that responds to user interaction.

    Thanks for following along—I hope you find these techniques useful in your own projects!


