In this post, we’ll take a closer look at the dithering-shader project: a minimal, real-time ordered dithering effect built with GLSL and the postprocessing library.
Rather than just creating a one-off visual effect, the goal was to build something clean, composable, and extendable: a drop-in shader pass that brings pixel-based texture into modern WebGL pipelines.
What It Does
This shader applies ordered dithering as a postprocessing effect. It transforms smooth gradients into stylized, binary (or quantized) pixel patterns, simulating the visual language of early bitmap displays, dot matrix printers, and 8-bit games.
It supports:
Dynamic resolution via pixelSize
Optional grayscale mode
Composability with bloom, blur, or other passes
Easy integration via postprocessing's Effect class
Fragment Shader
Our dithering shader implementation consists of two main components:
1. The Core Shader
The heart of the effect lies in the GLSL fragment shader that implements ordered dithering:
bool getValue(float brightness, vec2 pos) {
// Early return for extreme values
if (brightness > 16.0 / 17.0) return false;
if (brightness < 1.0 / 17.0) return true;
// Calculate position in 4x4 dither matrix
vec2 pixel = floor(mod(pos.xy / gridSize, 4.0));
int x = int(pixel.x);
int y = int(pixel.y);
// 4x4 Bayer matrix threshold map
// ... threshold comparisons based on matrix position
}
The getValue function is the core of the dithering algorithm. It:
Takes brightness and position: Uses the pixel’s luminance value and screen position
Maps to dither matrix: Calculates which cell of the 4×4 Bayer matrix the pixel belongs to
Applies threshold: Compares the brightness against a predetermined threshold for that matrix position
Returns binary decision: Whether the pixel should be black or colored
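To make the threshold step concrete, here is a small CPU-side JavaScript reference of the same decision. It assumes the standard 4×4 Bayer matrix; the exact constants in the project's GLSL may differ.
// Reference implementation of the ordered dithering decision, assuming the
// standard 4x4 Bayer matrix (values 0-15 mapped to thresholds 1/17 .. 16/17).
const BAYER_4X4 = [
  [ 0,  8,  2, 10],
  [12,  4, 14,  6],
  [ 3, 11,  1,  9],
  [15,  7, 13,  5],
];
// Returns true when the pixel should be drawn dark, false when it stays lit.
function getValue(brightness, x, y, gridSize = 1) {
  const col = Math.floor(x / gridSize) % 4;
  const row = Math.floor(y / gridSize) % 4;
  const threshold = (BAYER_4X4[row][col] + 1) / 17;
  return brightness < threshold;
}
Running this per pixel over a gradient reproduces the familiar crosshatch pattern; the fragment shader performs the same comparison on the GPU.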
Key Shader Features
gridSize: Controls the size of the dithering pattern
pixelSizeRatio: Adds pixelation effect for enhanced retro feel
grayscaleOnly: Converts the image to grayscale before dithering
invertColor: Inverts the final colors for different aesthetic effects
The shader combines dithering with optional pixelation, creating a compound retro effect that’s perfect for game-like visuals.
2. Creating a Custom Postprocessing Effect
The shader is wrapped using the Effect base class from the postprocessing library. This abstracts away the boilerplate of managing framebuffers and passes, allowing the shader to be dropped into a scene with minimal setup.
export class DitheringEffect extends Effect {
uniforms: Map<string, THREE.Uniform<number | THREE.Vector2>>;
constructor({
time = 0,
resolution = new THREE.Vector2(1, 1),
gridSize = 4.0,
luminanceMethod = 0,
invertColor = false,
pixelSizeRatio = 1,
grayscaleOnly = false
}: DitheringEffectOptions = {}) {
const uniforms = new Map<string, THREE.Uniform<number | THREE.Vector2>>([
["time", new THREE.Uniform(time)],
["resolution", new THREE.Uniform(resolution)],
["gridSize", new THREE.Uniform(gridSize)],
["luminanceMethod", new THREE.Uniform(luminanceMethod)],
["invertColor", new THREE.Uniform(invertColor ? 1 : 0)],
["ditheringEnabled", new THREE.Uniform(1)],
["pixelSizeRatio", new THREE.Uniform(pixelSizeRatio)],
["grayscaleOnly", new THREE.Uniform(grayscaleOnly ? 1 : 0)]
]);
super("DitheringEffect", ditheringShader, { uniforms });
this.uniforms = uniforms;
}
...
}
Optional: Integrating with React Three Fiber
Once defined, the effect is registered and applied using @react-three/postprocessing. Here’s a minimal usage example with bloom and dithering:
<Canvas>
{/* ... your scene ... */}
<EffectComposer>
<Bloom intensity={0.5} />
<Dithering pixelSize={2} grayscale />
</EffectComposer>
</Canvas>
You can also tweak pixelSize dynamically to scale the effect with resolution, or toggle grayscale mode based on UI controls or scene context.
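Because the effect keeps a reference to its uniforms map, those values can be nudged at runtime. A minimal sketch, assuming you hold a reference to the DitheringEffect instance (for example when composing the passes manually); the uniform names follow the constructor above:
// `effect` is assumed to be the DitheringEffect instance added to your composer.
function setDitheringOptions(effect, { gridSize, grayscaleOnly } = {}) {
  if (gridSize !== undefined) {
    effect.uniforms.get("gridSize").value = gridSize;
  }
  if (grayscaleOnly !== undefined) {
    effect.uniforms.get("grayscaleOnly").value = grayscaleOnly ? 1 : 0;
  }
}
// e.g. scale the pattern with the device pixel ratio, or toggle grayscale from UI state
setDitheringOptions(ditheringEffect, { gridSize: 4 * window.devicePixelRatio, grayscaleOnly: true });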
Extending the Shader
This shader is intentionally kept simple: a foundation rather than a full system. It’s easy to customize or extend. Here are some ideas you can try:
Add color quantization: convert color.rgb to indexed palettes
Pack depth-based dither layers for fake shadows
Animate the pattern for VHS-like shimmer
Interactive pixelation: use mouse proximity to affect u_pixelSize
Why Not Use a Texture?
Some dithering shaders rely on threshold maps or pre-baked noise textures. This one doesn’t. The matrix pattern is deterministic and screen-space based, which means:
No texture loading required
Fully procedural
Clean pixel alignment
It’s not meant for photorealism. It’s for styling and flattening. Think more zine than render farm.
Final Thoughts
This project started as a side experiment to explore what it would look like to bring tactile, stylized “non-photorealism” back into postprocessing workflows. But I found it had broader use cases, especially where the design direction favors abstraction or controlled distortion.
If you’re building UIs, games, or interactive 3D scenes where “perfect” isn’t the goal, maybe a little pixel grit is exactly what you need.
You’ve probably seen this kind of scroll effect before, even if it doesn’t have a name yet. (Honestly, we need a dictionary for all these weird and wonderful web interactions. If you’ve got a talent for naming things…do it. Seriously. The internet is waiting.)
Imagine a grid of images. As you scroll, the columns don’t move uniformly; instead, the center columns react faster, while those on the edges trail slightly behind. It feels soft, elastic, and physical, almost as if the scroll itself had weight.
You can see this amazing effect on sites like yzavoku.com (and I’m sure there’s a lot more!).
So what better excuse to use the now-free GSAP ScrollSmoother? We can recreate it easily, with great performance and full control. Let’s have a look!
What We’re Building
We’ll take a CSS grid-based layout and add some magic:
Inertia-based scrolling using ScrollSmoother
Per-column lag, calculated dynamically based on distance from the center
A layout that adapts to column changes
HTML Structure
Let’s set up the markup with figures in a grid:
<div class="grid">
<figure class="grid__item">
<div class="grid__item-img" style="background-image: url(assets/1.webp)"></div>
<figcaption class="grid__item-caption">Zorith - L91</figcaption>
</figure>
<!-- Repeat for more items -->
</div>
Inside the grid, we have many .grid__item figures, each with a background image and a label. These will be dynamically grouped into columns by JavaScript, based on how many columns CSS defines.
In our JavaScript, we’ll then change the DOM structure by inserting .grid__column wrappers around groups of items, one per column, so we can control their motion individually. Why are we doing this? It’s a bit lighter to move whole columns rather than each individual item.
The grouping step splits your grid items into arrays, one for each visual column, using the actual number of columns computed from the CSS.
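The earlier setup steps aren't shown above, so here is a rough sketch of how they might look. The wrapper and content selectors, baseLag, and lagScale values are assumptions for illustration; they feed straight into buildGrid() below.
import gsap from 'gsap';
import { ScrollTrigger } from 'gsap/ScrollTrigger';
import { ScrollSmoother } from 'gsap/ScrollSmoother';
gsap.registerPlugin(ScrollTrigger, ScrollSmoother);
// 1. Create the smoother (assumes the usual #smooth-wrapper / #smooth-content markup)
const smoother = ScrollSmoother.create({
  wrapper: '#smooth-wrapper',
  content: '#smooth-content',
  smooth: 1,     // amount of smoothing/inertia (assumed value)
  effects: true  // required for the per-element lag effects used below
});
const grid = document.querySelector('.grid');
const baseLag = 0.2;   // assumed lag for the center column
const lagScale = 0.15; // assumed extra lag per column of distance from the center
// Read the real column count from the computed CSS grid template
const getColumnCount = () => {
  const styles = window.getComputedStyle(grid);
  return styles.gridTemplateColumns.split(' ').filter(Boolean).length;
};
// 2. Group the flat list of .grid__item elements into one array per column
const groupItemsByColumn = (numColumns) => {
  const items = Array.from(grid.querySelectorAll('.grid__item'));
  const columns = Array.from({ length: numColumns }, () => []);
  items.forEach((item, i) => columns[i % numColumns].push(item));
  return columns;
};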
3. Create Column Wrappers and Assign Lag
const buildGrid = (columns, numColumns) => {
const fragment = document.createDocumentFragment(); // Efficient DOM batch insertion
const mid = (numColumns - 1) / 2; // Center index (can be fractional)
const columnContainers = [];
// Loop over each column
columns.forEach((column, i) => {
const distance = Math.abs(i - mid); // Distance from center column
const lag = baseLag + distance * lagScale; // Lag based on distance from center
const columnContainer = document.createElement('div'); // New column wrapper
columnContainer.className = 'grid__column';
// Append items to column container
column.forEach((item) => columnContainer.appendChild(item));
fragment.appendChild(columnContainer); // Add to fragment
columnContainers.push({ element: columnContainer, lag }); // Save for lag effect setup
});
grid.appendChild(fragment); // Add all columns to DOM at once
return columnContainers;
};
The lag value increases the further a column is from the center, creating that elastic “catch up” feel during scroll.
4. Apply Lag Effects to Each Column
const applyLagEffects = (columnContainers) => {
columnContainers.forEach(({ element, lag }) => {
smoother.effects(element, { speed: 1, lag }); // Apply individual lag per column
});
};
ScrollSmoother handles all the heavy lifting; we just pass the desired lag.
5. Handle Layout on Resize
// Rebuild the layout only if the number of columns has changed on window resize
window.addEventListener('resize', () => {
const newColumnCount = getColumnCount();
if (newColumnCount !== currentColumnCount) {
init();
}
});
This ensures our layout stays correct across breakpoints and column count changes (handled via CSS).
Now, there are lots of ways to build on this and add more jazz!
For example, you could:
add scroll-triggered opacity or scale animations
use scroll velocity to control effects (see demo 2 and the sketch after this list)
adapt this pattern for horizontal scroll layouts
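For the scroll-velocity idea, ScrollSmoother exposes getVelocity(), which you can map to a subtle transform. A quick sketch that builds on the setup above; the clamp range and divisor are arbitrary:
// Skew the columns slightly based on how fast the user is scrolling (values are illustrative).
gsap.ticker.add(() => {
  const velocity = smoother.getVelocity(); // pixels per second, positive when scrolling down
  const skew = gsap.utils.clamp(-5, 5, velocity / 400);
  gsap.set('.grid__column', { skewY: skew });
});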
Exploring Variations
Once you have the core concept in place, there are four demo variations you can explore. Each one shows how different lag values and scroll-based interactions can influence the experience.
You can adjust which columns respond faster, or play with subtle scaling and transforms based on scroll velocity. Even small changes can shift the rhythm and tone of the layout in interesting ways. And don’t forget: changing the look of the grid itself, like the image ratio or gaps, will give this a whole different feel!
Now it’s your turn. Tweak it, break it, rebuild it, and make something cool.
I really hope you enjoy this effect! Thanks for stopping by 🙂
I hadn’t planned on creating a fashion interface. I just needed a reboot. At the time, I was leading art direction at the studio, juggling multiple projects, and emotionally, I was simply exhausted. I joined an Awwwards Masterclass to rediscover the joy of playing with design. I wanted to learn Webflow. I wanted to explore GSAP. But more than that, I wanted to create something unapologetically weird and beautiful.
That seed grew into DICH™, Design Independent Creative House. What started as a design playground became a statement.
We began with light. Not darkness. No glitch, no noise. Just a breath of fresh air.
Designing the Unfuturistic Future
We made a conscious decision: no dark mode. No glitch filters. Most futuristic UIs feel cold. We wanted warmth, softness, a vision of the future that is poetic, not synthetic.
Each section had its own visual temperature. Soft gradients, air, pastel dust. Typography was crucial. The T-12 font had those strange numeric ligatures that felt alien but elegant. Video, color, typography — all speaking the same language.
Each color palette tells a chapter of DICH — sunlit, strange, alive.
We built moodboards, UX pillars, and rhythm plans. That process, taught in the Masterclass, changed how we approached layout. It wasn’t about grids. It was about flow.
Building the Entry Ritual (Preloader)
The preloader wasn’t just an aesthetic flex. It solved three key problems:
Our media-heavy site needed time to load
Browsers block autoplaying audio without user interaction
We wanted to introduce mood and rhythm before the scroll even began
It was animated in After Effects and exported to Lottie, then embedded into Webflow and animated using GSAP.
The Enter button also triggered sound. It was our “permission point” for browser playback.
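Here is a rough sketch of how such a "permission point" can be wired up. The selectors, audio element, and timeline targets are illustrative; the real site wires this through Webflow embeds:
import gsap from 'gsap';
const preloader = document.querySelector('.preloader');        // assumed wrapper around the Lottie
const enterButton = document.querySelector('.preloader__enter');
const ambientAudio = document.querySelector('#ambient-audio');  // assumed <audio> element
enterButton.addEventListener('click', () => {
  // Audio can only start after a user gesture, so this click is the permission point.
  ambientAudio.play().catch(() => { /* ignore if the browser still refuses playback */ });
  gsap.timeline()
    .to(preloader, { autoAlpha: 0, duration: 0.8, ease: 'power2.out' })
    .from('.hero > *', { y: 40, autoAlpha: 0, stagger: 0.08 }, '-=0.2');
});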
A live coordinate HUD under the cursor felt perfectly suited to our site’s theme, so we decided to include it.
document.addEventListener('DOMContentLoaded', function () {
if (window.innerWidth <= 768) return;
const xCoord = document.getElementById('x-coordinate');
const yCoord = document.getElementById('y-coordinate');
let mouseX = 0;
let mouseY = 0;
let lastX = -1;
let lastY = -1;
let ticking = false;
function formatNumber(num) {
return num.toString().padStart(4, '0');
}
function updateCoordinates() {
if (mouseX !== lastX || mouseY !== lastY) {
xCoord.textContent = formatNumber(mouseX % 10000);
yCoord.textContent = formatNumber(mouseY % 10000);
lastX = mouseX;
lastY = mouseY;
}
ticking = false;
}
document.addEventListener('mousemove', (event) => {
mouseX = event.clientX;
mouseY = event.clientY;
if (!ticking) {
ticking = true;
requestAnimationFrame(updateCoordinates);
}
});
});
Stones That Scroll
We placed a 3D stone (also from Blender) into Spline, gave it orbital motion, and connected it to scroll using Webflow Interactions.
It felt like motion with gravity — guided, yet organic.
Pixel Tracer
With coordinate tracking already in place, we easily applied it to our section and later enhanced it with a pixel tracer inspired by Jean Mazouni’s displacement effect.
Unicorn Everywhere
The cursor wasn’t just a pointer, it became a vibe.
We used Unicorn Studio to create custom cursor trails and animations that followed the user like echoes of intent. Three variations in total:
One for the landing screen — minimal, hypnotic.
One for the project case study — denser, electric.
One for transitions — barely-there glimmer, like a memory.
Each version added tension and curiosity. It wasn’t flashy for the sake of it — it gave rhythm to hovering, a pulse to the interaction. Suddenly, the cursor wasn’t just a tool. It was part of the interface’s voice.
Footer Letters with Physics
Our footer was a personal moment. We wanted the word “DICH” to be hidden inside animated lines and revealed on hover using canvas and brightness sampling.
This one took the longest. We tried Perlin noise, sine curves, and springs, but none worked as we’d hoped or produced results that were sufficiently readable — until we found an old Domestika course that showed getImageData() logic.
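The core of that getImageData() trick is small enough to sketch: draw the word to an offscreen canvas, read the pixels back, and only spawn line points where the text is bright enough. Sizes, font, and threshold below are assumptions:
// Draw the hidden word offscreen and sample which pixels belong to the letters.
function sampleWordPoints(word, width, height, step = 6, threshold = 128) {
  const canvas = document.createElement('canvas');
  canvas.width = width;
  canvas.height = height;
  const ctx = canvas.getContext('2d');
  ctx.fillStyle = '#fff';
  ctx.font = `bold ${Math.floor(height * 0.8)}px sans-serif`;
  ctx.textAlign = 'center';
  ctx.textBaseline = 'middle';
  ctx.fillText(word, width / 2, height / 2);
  const { data } = ctx.getImageData(0, 0, width, height);
  const points = [];
  for (let y = 0; y < height; y += step) {
    for (let x = 0; x < width; x += step) {
      const brightness = data[(y * width + x) * 4]; // red channel is enough for white-on-black text
      if (brightness > threshold) points.push({ x, y });
    }
  }
  return points; // these points drive the animated lines and the hover reveal
}
const dichPoints = sampleWordPoints('DICH', 900, 240);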
Every visual block was lazy-loaded using IntersectionObserver
Safari compatibility issues: we reworked unsupported animations for Safari and added fallbacks for AVIF images (even lighter than WebP) to maximize optimization
Heavy sections only rendered after the preloader finished
We treated the error page like a runway — not a dead-end, but an invitation. Instead of a sad emoji or a bland “page not found,” you get a full-screen glitch-dream: warped typography, soft scans, and a single message that flickers like a memory.
Technically, it was simple — a standalone Webflow page. But visually, it extended the DICH world: same typographic tension, same surreal softness. We even debated adding background audio, but silence won — it made the page feel like a moment suspended in time.
What We Learned
File formats matter more than you think
Glitches aren’t as magical as thoughtful motion
GSAP is our best friend
Webflow is powerful when paired with code
You don’t need a big plan to make something that matters
Closing
I almost gave up. More than once. But every time the team cracked a bug, designed a transition, or made a visual more strange — it reminded me why we build.
DICH™ was a challenge, a love letter, and a reset. And now it’s yours to explore.
I’ve always been interested in finding simple ways to bring more depth into web interfaces, not just through visuals, but through interaction and space.
In this demo, I explored how flat UI cards can become interactive 3D scenes using GLB models, Three.js, and Webflow. Each card starts as a basic layout but reveals a small, self-contained environment built with real-time rendering and subtle motion.
It’s a lightweight approach to adding spatial storytelling to familiar components, using tools many designers already work with.
Welcome to My Creative World
I’m always drawn to visuals that mix the futuristic with the familiar — space-inspired forms, minimal layouts, and everyday elements seen from a different angle.
Most of my projects start this way: by reimagining ordinary ideas through a more immersive or atmospheric lens.
It All Started with a Moodboard
This one began with a simple inspiration board:
From that board, I picked a few of my favorite visuals and ran them through an AI tool that converts images into GLB 3D models.
The results were surprisingly good! Abstract, textured, and full of character.
The Concept: Flat to Deep
When I saw the output from the AI-generated GLB models, I started thinking about how we perceive depth in UI design, not just visually, but interactively.
That led to a simple idea: what if flat cards could reveal a hidden spatial layer? Not through animation alone, but through actual 3D geometry, lighting, and camera movement.
I designed three UI cards, each styled with minimal HTML and CSS in Webflow. On interaction, they load a unique GLB model into a Three.js scene directly within the card container. Each model is lit, framed, and animated to create the feeling of a self-contained 3D space.
Building the Web Experience
The layout was built in Webflow using a simple flexbox structure with three cards inside a wrapper. Each card contains a div that serves as the mounting point for a 3D object.
The GLB models are rendered using Three.js, which is integrated into the project with custom JavaScript. Each scene is initialized and handled separately, giving each card its own interactive 3D space while keeping the layout lightweight and modular.
Scene Design with Blender
Each GLB model was prepared in Blender, where I added a surrounding sphere to create a sense of depth and atmosphere. This simple shape helps simulate background contrast and encloses the object in a self-contained space.
Lighting played an important role, especially with reflective materials like glass or metal. Highlights and soft shadows were used to create that subtle, futuristic glow.
The result is that each 3D model feels like it lives inside its own ambient environment, even when rendered in a small card.
Bringing It Together with Three.js
Once the models were exported from Blender as .glb files, I used Three.js to render them inside each card. Each card container acts as its own 3D scene, initialized through a custom JavaScript function.
The setup involves creating a basic scene with a perspective camera, ambient and directional lighting, and a WebGL renderer. I used GLTFLoader to load each .glb file and OrbitControls to enable subtle rotation. Zooming and panning are disabled to keep the interaction focused and controlled.
Each model is loaded into a separate container, making it modular and easy to manage. The camera is offset slightly for a more dynamic starting view, and the background is kept dark to help the lighting pop.
Here’s the full JavaScript used to load and render the models:
// Import required libraries
import * as THREE from 'three';
import { OrbitControls } from 'three/addons/controls/OrbitControls.js';
import { GLTFLoader } from 'three/addons/loaders/GLTFLoader.js';
import gsap from 'gsap';
/**
* This function initializes a Three.js scene inside a given container
* and loads a .glb model into it.
*/
function createScene(containerSelector, glbPath) {
const container = document.querySelector(containerSelector);
// 1. Create a scene
const scene = new THREE.Scene();
scene.background = new THREE.Color(0x202020); // dark background
// 2. Set up the camera with perspective
const camera = new THREE.PerspectiveCamera(
45, // Field of view
container.clientWidth / container.clientHeight, // Aspect ratio
0.1, // Near clipping plane
100 // Far clipping plane
);
camera.position.set(2, 0, 0); // Offset to the side for better viewing
// 3. Create a renderer and append it to the container
const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(container.clientWidth, container.clientHeight);
container.appendChild(renderer.domElement);
// 4. Add lighting
const light = new THREE.DirectionalLight(0xffffff, 4);
light.position.set(30, -10, 20);
scene.add(light);
const ambientLight = new THREE.AmbientLight(0x404040); // soft light
scene.add(ambientLight);
// 5. Set up OrbitControls to allow rotation
const controls = new OrbitControls(camera, renderer.domElement);
controls.enableZoom = false; // no zooming
controls.enablePan = false; // no dragging
controls.minPolarAngle = Math.PI / 2; // lock vertical angle
controls.maxPolarAngle = Math.PI / 2;
controls.enableDamping = true; // smooth movement
// 6. Load the GLB model
const loader = new GLTFLoader();
loader.load(
glbPath,
(gltf) => {
scene.add(gltf.scene); // Add model to the scene
},
(xhr) => {
console.log(`${containerSelector}: ${(xhr.loaded / xhr.total) * 100}% loaded`);
},
(error) => {
console.error(`Error loading ${glbPath}`, error);
}
);
// 7. Make it responsive
window.addEventListener("resize", () => {
camera.aspect = container.clientWidth / container.clientHeight;
camera.updateProjectionMatrix();
renderer.setSize(container.clientWidth, container.clientHeight);
});
// 8. Animate the scene
function animate() {
requestAnimationFrame(animate);
controls.update(); // updates rotation smoothly
renderer.render(scene, camera);
}
animate(); // start the animation loop
}
// 9. Initialize scenes for each card (replace with your URLs)
createScene(".div", "https://yourdomain.com/models/yourmodel.glb");
createScene(".div2", "https://yourdomain.com/models/yourmodel2.glb");
createScene(".div3", "https://yourdomain.com/models/yourmodel3.glb");
This script is added via a <script type="module"> tag, either in the Webflow page settings or as an embedded code block. Each call to createScene() initializes a new card, linking it to its corresponding .glb file.
How This Works in Practice
In Webflow, create three containers with the classes .div, .div2, and .div3. Each one will act as a canvas for a different 3D scene.
Embed the JavaScript module shown above by placing it just before the closing </body> tag in your Webflow project, or by using an Embed block with <script type="module">.
Once the page loads, each container initializes its own Three.js scene and loads the corresponding GLB model. The result: flat UI cards become interactive, scrollable 3D objects — all directly inside Webflow.
This approach is lightweight, clean, and performance-conscious, while still giving you the flexibility to work with real 3D content.
Important Note for Webflow Users
This setup works in Webflow, but only if you structure it correctly.
To make it work, you’ll need to:
Host your Three.js code externally using a bundler like Vite, Parcel, or Webpack
Or bundle the JavaScript manually and embed it as a <script type="module"> in your exported site
Keep in mind: Webflow’s Designer does not support ES module imports (import) directly. Pasting the code into an Embed block won’t work unless it’s already built and hosted elsewhere.
You’ll need to export your Webflow project or host the script externally, then link it via your project settings.
Final Thoughts
Thanks for following along with this project. What started as a simple moodboard turned into a small experiment in mixing UI design with real-time 3D.
Taking flat cards and turning them into interactive scenes was a fun way to explore how much depth you can add with just a few tools: Webflow, Three.js, and GLB models.
If this gave you an idea or made you want to try something similar, that’s what matters most. Keep experimenting, keep learning, and keep building.
This project primarily serves as a technical demo and learning material. It began when I decided to start learning Blender. I followed a few tutorials, then decided to do a small project using it—so I chose to create the Canon F-1 camera!
After that, I decided to export the project to Three.js to add some cool post-processing shader effects. I wanted to create a sketch effect similar to what I had seen in some repair guides.
After spending a few hours experimenting with it, I decided to integrate it into a fully functional website featuring some cool shaders and 3D effects!
In this article, I’m going to walk through some of the key features of the site and provide a technical breakdown, assuming you already have a basic or beginner-level understanding of Three.js and shaders.
1. The Edge Detection Shader
Three.js includes a built-in edge detection shader called SobelOperatorShader. Basically, it detects edges based on color contrast—it draws a line between two areas with a strong enough difference in color.
To make my effect work the way I want, I need to assign a unique color to each area I want to highlight on my model. This way, Three.js will draw a line around those areas.
Here’s my model with all the materials applied:
This way, Three.js can accurately detect each area I want to highlight!
As you can see, the lines are not all the same intensity—some are white, while others are light gray. This is because, by default, line intensity depends on contrast: edges with lower contrast appear with lighter lines. To fix this, I manually modified the post-processing shader to make all lines fully white, regardless of contrast.
What I’m doing here is moving all the edge detection logic into the Sobel function. Then, I pass the tDiffuse texture—which is the composer’s render—to this function.
This way, I can modify the output of the edge detection shader before passing it back to the composer:
float G = sobel(t, texel);
G = G > 0.001 ? 1.0 : 0.0;
G represents the intensity of the edge detection. It’s a single value because the lines are monochrome. G ranges from 0 to 1, where 0 means full black (no edge detected) and 1 means full white (strong contrast detected).
As mentioned earlier, this value depends on the contrast. What I’m doing in the second line is forcing G to be 1 if it’s above a certain threshold (I chose 0.001, but you could pick a smaller value if you want).
This way I can get all the edges to have the same intensity.
Here’s how I’m applying the custom fragment shader to the Sobel Operator shader pass:
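The original snippet isn't included here, but the idea can be sketched as follows: reuse the built-in SobelOperatorShader (and its tDiffuse / resolution uniforms) in a ShaderPass, then replace its fragment shader with one that runs the detection inside a sobel() function and clamps the result. The kernel below is the textbook Sobel convolution rather than the project's exact code, and composer is assumed to be your EffectComposer:
import { ShaderPass } from 'three/addons/postprocessing/ShaderPass.js';
import { SobelOperatorShader } from 'three/addons/shaders/SobelOperatorShader.js';
// Reuse the built-in pass (and its tDiffuse / resolution uniforms),
// but swap in a fragment shader that forces every edge to full white.
const effectSobel = new ShaderPass(SobelOperatorShader);
effectSobel.material.fragmentShader = /* glsl */ `
  uniform sampler2D tDiffuse;
  uniform vec2 resolution;
  varying vec2 vUv;
  // Simple luminance; Three.js also ships one via the <common> chunk.
  float lum(vec3 c) { return dot(c, vec3(0.2126, 0.7152, 0.0722)); }
  // All of the edge detection lives in this function.
  float sobel(sampler2D t, vec2 texel) {
    float tl = lum(texture2D(t, vUv + texel * vec2(-1.0,  1.0)).rgb);
    float tc = lum(texture2D(t, vUv + texel * vec2( 0.0,  1.0)).rgb);
    float tr = lum(texture2D(t, vUv + texel * vec2( 1.0,  1.0)).rgb);
    float ml = lum(texture2D(t, vUv + texel * vec2(-1.0,  0.0)).rgb);
    float mr = lum(texture2D(t, vUv + texel * vec2( 1.0,  0.0)).rgb);
    float bl = lum(texture2D(t, vUv + texel * vec2(-1.0, -1.0)).rgb);
    float bc = lum(texture2D(t, vUv + texel * vec2( 0.0, -1.0)).rgb);
    float br = lum(texture2D(t, vUv + texel * vec2( 1.0, -1.0)).rgb);
    float gx = -tl - 2.0 * ml - bl + tr + 2.0 * mr + br;
    float gy = -tl - 2.0 * tc - tr + bl + 2.0 * bc + br;
    return sqrt(gx * gx + gy * gy);
  }
  void main() {
    vec2 texel = 1.0 / resolution;
    float G = sobel(tDiffuse, texel);
    G = G > 0.001 ? 1.0 : 0.0; // same intensity for every detected edge
    gl_FragColor = vec4(vec3(G), 1.0);
  }
`;
effectSobel.material.needsUpdate = true; // recompile with the replaced shader code
composer.addPass(effectSobel);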
Next, let’s take a look at the lens parts section.
This is mainly achieved using a Three.js utility called WebGLRenderTarget.
A render target is a buffer where the GPU draws pixels for a scene being rendered off-screen. It’s commonly used in effects like post-processing, where the rendered image is processed before being displayed on the screen.
Basically, this allows me to render my scene twice per frame: once with only the highlighted mesh, and once without it.
In the onSelectMesh method, I set the value of this.selectedMeshName to the name of the mesh group that contains the target mesh from the Raycaster (I’m using names to refer to groups of meshes).
This way, in my render loop, I can create two distinct renders:
One render (renderTargetA) with all the meshes except the hovered mesh
Another render (renderTargetB) with only the hovered mesh
As you can see, I’m sending both renders as texture uniforms to the effectSobel shader. The post-processing shader then “merges” these two renders into a single output.
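That part of the code isn't reproduced here, but the render loop can be sketched roughly like this: draw the scene twice into two WebGLRenderTargets while toggling the hovered group's visibility, then hand both textures to the Sobel pass. The uTextureA / uTextureB uniform names (and the assumption that the pass was extended with them) are illustrative:
import * as THREE from 'three';
// Two off-screen buffers, one per render (resized elsewhere to match the canvas).
const renderTargetA = new THREE.WebGLRenderTarget(window.innerWidth, window.innerHeight);
const renderTargetB = new THREE.WebGLRenderTarget(window.innerWidth, window.innerHeight);
// Called every frame before the composer renders. `hoveredGroup` is the group
// resolved from the raycaster (null when nothing is hovered).
function renderSelectiveTargets(renderer, scene, camera, hoveredGroup) {
  // Render A: everything except the hovered group
  if (hoveredGroup) hoveredGroup.visible = false;
  renderer.setRenderTarget(renderTargetA);
  renderer.render(scene, camera);
  // Render B: only the hovered group
  if (hoveredGroup) {
    const others = scene.children.filter((child) => child !== hoveredGroup);
    others.forEach((child) => (child.visible = false));
    hoveredGroup.visible = true;
    renderer.setRenderTarget(renderTargetB);
    renderer.render(scene, camera);
    others.forEach((child) => (child.visible = true)); // restore visibility
  }
  renderer.setRenderTarget(null);
  // The Sobel pass is assumed to have been extended with two extra texture uniforms.
  effectSobel.uniforms.uTextureA.value = renderTargetA.texture;
  effectSobel.uniforms.uTextureB.value = renderTargetB.texture;
}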
At this point, we have two renders of the scene, and the post-processing shader needs to decide which one to display. Initially, I thought of simply combining them by adding the two textures together, but that didn’t produce the correct result:
What I needed was a way to hide the pixels of one render when they are “covered” by pixels from another render.
To achieve this, I used the distance of each vertex from the camera. This meant I had to go through all the meshes in the model and modify their materials. However, since the mesh colors are important for the edge detection effect, I couldn’t change their colors.
Instead, I used the alpha channel of each individual vertex to set the distance from the camera.
First, the luminance function is a built-in Three.js shader utility included via the <common> shader chunk. It’s recommended to use this function with the Sobel effect to improve edge detection results.
The uColor value represents the initial color of the mesh.
The dist value calculates the distance between the vertex position (passed from the vertex shader via a varying) and the camera, using the built-in cameraPosition variable in Three.js shaders.
Finally, I pass this distance through the alpha channel. Since the alpha value can’t exceed 1, I use a normalized version of the distance.
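The material code being described isn't included above, so here is a minimal sketch of the idea: keep the flat color needed for edge detection, and write a normalized camera distance into the alpha channel. The uColor and uMaxDistance names and the normalization are assumptions:
// Sketch of a per-area material: flat color out, normalized camera distance in alpha.
const areaMaterial = new THREE.ShaderMaterial({
  uniforms: {
    uColor: new THREE.Uniform(new THREE.Color('#ff0055')), // the area's unique color
    uMaxDistance: new THREE.Uniform(20.0), // used to normalize the distance into 0..1
  },
  vertexShader: /* glsl */ `
    varying vec3 vWorldPosition;
    void main() {
      vWorldPosition = (modelMatrix * vec4(position, 1.0)).xyz;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
  `,
  fragmentShader: /* glsl */ `
    uniform vec3 uColor;
    uniform float uMaxDistance;
    varying vec3 vWorldPosition;
    void main() {
      // cameraPosition is provided by Three.js; normalize so alpha stays in 0..1
      float dist = distance(vWorldPosition, cameraPosition) / uMaxDistance;
      gl_FragColor = vec4(uColor, clamp(dist, 0.0, 1.0));
    }
  `,
  // left opaque (the default) so the alpha value reaches the render target unblended
});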
And here is the updated logic for the postprocessing shader:
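The updated shader isn't reproduced in this excerpt, so here is a sketch of the merge it performs, using the assumed uTextureA / uTextureB uniforms from the render-loop sketch above. Smaller alpha means closer to the camera, so that sample wins:
// The relevant chunk of the post-processing fragment shader (GLSL), shown as the
// template-literal fragment it would live in. Uniform names are assumptions.
const mergeChunk = /* glsl */ `
  vec4 sceneSample   = texture2D(uTextureA, vUv); // everything except the hovered group
  vec4 hoveredSample = texture2D(uTextureB, vUv); // hovered group only
  // Alpha stores the normalized camera distance, so the smaller value is in front.
  // Ignore empty pixels (alpha == 0.0) in the hovered-only render so its cleared
  // background never wins the comparison.
  bool hoveredIsCloser = hoveredSample.a > 0.0 && hoveredSample.a < sceneSample.a;
  vec4 front = hoveredIsCloser ? hoveredSample : sceneSample;
  // ...the edge detection from part 1 then runs on front.rgb...
`;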
Now that the alpha channel of the textures contains the distance to the camera, I can simply compare them and display whichever render has its vertices closer to the camera.
3. The Film Roll Effect
Next is this film roll component that moves and twists on scroll.
This effect is achieved using only shaders; the component is a single plane with a shader material.
All the data is sent to the shader through uniforms:
export default class Film {
constructor() {
//...code
}
createGeometry() {
this.geometry = new THREE.PlaneGeometry(
60,
2,
100,
10
)
}
createMaterial() {
this.material = new THREE.ShaderMaterial({
vertexShader,
fragmentShader,
side: THREE.DoubleSide,
transparent: true,
depthWrite: false,
blending: THREE.CustomBlending,
blendEquation: THREE.MaxEquation,
blendSrc: THREE.SrcAlphaFactor,
blendDst: THREE.OneMinusSrcAlphaFactor,
uniforms: {
uPlaneWidth: new THREE.Uniform(this.geometry.parameters.width),
uRadius: new THREE.Uniform(2),
uXZfreq: new THREE.Uniform(3.525),
uYfreq: new THREE.Uniform(2.155),
uOffset: new THREE.Uniform(0),
uAlphaMap: new THREE.Uniform(
window.preloader.loadTexture(
"./alpha-map.jpg",
"film-alpha-map",
(texture) => {
texture.wrapS = THREE.RepeatWrapping
const { width, height } = texture.image
this.material.uniforms.uAlphaMapResolution.value =
new THREE.Vector2(width, height)
}
)
),
//uImages: new THREE.Uniform(new THREE.Vector4()),
uImages: new THREE.Uniform(
window.preloader.loadTexture(
"/film-texture.png",
"film-image-texture",
(tex) => {
tex.wrapS = THREE.RepeatWrapping
}
)
),
uRepeatFactor: new THREE.Uniform(this.repeatFactor),
uImagesCount: new THREE.Uniform(this.images.length * this.repeatFactor),
uAlphaMapResolution: new THREE.Uniform(new THREE.Vector2()),
uFilmColor: new THREE.Uniform(window.colors.orange1),
},
})
}
createMesh() {
this.mesh = new THREE.Mesh(this.geometry, this.material)
this.scene.add(this.mesh)
}
}
The main vertex shader uniforms are:
uRadius is the radius of the cylinder shape
uXZfreq is the frequency of the twists on the (X,Z) plane
uYfreq is a cylinder height factor
uOffset is the vertical offset of the roll when you scroll up and down
As you can see, they are used to modify the initial position attribute to give the plane the shape of a cylinder. The modified position’s X, Y, and Z components use uOffset in their frequency terms; this uniform is linked to a ScrollTrigger timeline, which creates the twist-on-scroll effect.
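The vertex shader itself isn't shown above, but the wrapping can be sketched roughly like this using the uniforms listed. The exact math in the project certainly differs; this just illustrates how uOffset ends up inside the trigonometric terms:
const vertexShader = /* glsl */ `
  uniform float uPlaneWidth;
  uniform float uRadius;
  uniform float uXZfreq;
  uniform float uYfreq;
  uniform float uOffset;
  varying vec2 vUv;
  void main() {
    vUv = uv;
    vec3 pos = position;
    // Angle along the strip: one full turn across the plane width
    float angle = (pos.x / uPlaneWidth + 0.5) * 6.28318530718;
    // Wrap the flat strip around a twisting cylinder. uOffset shifts the phase,
    // and since it is driven by a ScrollTrigger timeline, scrolling twists the roll.
    pos.x = uRadius * cos(angle * uXZfreq + uOffset);
    pos.z = uRadius * sin(angle * uXZfreq + uOffset);
    pos.y += sin(angle * uYfreq + uOffset) * 0.5;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
  }
`;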
That’s it for the most part! Don’t feel frustrated if you don’t understand everything right away—I often got stuck for days on certain parts and didn’t know every technical detail before I started building.
I learned so much from this project, and I hope you’ll find it just as useful!
Thank you for reading, and thanks to Codrops for featuring me again!
Hi, my alias is “MisterPrada,” and I’d like to share a short story about how I came to the decision to become a creative developer.
Background
I’m just about to turn 30, and over the years I’ve come to many realizations that I’d like to share as echoes of my journey. I’ve been consciously programming for about 14 years, and I’ve been using Windows since childhood—battling the infamous “blue screen of death.”
From a young age, I knew who I wanted to be—a programmer. In my childhood, nothing was more exciting than a computer. However, my academic skills weren’t strong enough to get into university easily. I was never particularly gifted in any subject; my grades were average or worse.
Somehow, I managed to get accepted into a university for an engineering program related to programming. I tried hard, but nothing worked—I ended up copying others just to pass exams. After some time, I realized it was time to get serious. I had no special talents, no head start—just the need for hard work. I wrote my first function, my first loop over a two-dimensional array, my first structure, my first doubly linked list—and I realized I liked it. I really, really liked the fact that I was starting to make progress.
I didn’t stop copying completely, but I began writing my own programs. We studied C++, C#, Assembly, databases, and lots of things I couldn’t yet apply in real life. So I bought a book on PHP, JS, and MySQL and realized I could build websites using WordPress and other popular CMS platforms at the time like Joomla, Drupal, etc. And you know what? That made money—and it was insanely cool. I just took on any work I could find. Since I had spent all of university copying code, I found it really easy to understand and adapt other people’s code.
Years passed, and I was building simple websites—tweaking templates downloaded from torrents, grabbing CSS styles from random websites, and so on. Something like these:
Eventually, I realized that my growth had stalled and I needed to act fast. I started reading various books, trying to improve my skills and learn new, trending technologies. This mostly broadened my technical horizons—I understood more, copied more, and tried harder to boost my self-esteem.
At one point, I felt confident, thinking I was pretty good and could handle anything. But then something happened during the final year of university. A classmate told me he had gone for an interview at a major company, and they asked him to implement a binary tree. I was shocked—I had no idea what a binary tree was, how to build one, or why I was even supposed to know it.
Honestly, it hit me hard. I started questioning everything—was I even a real programmer? Maybe I was third, fourth, or even fifth-rate at best, especially with my modest PHP/JS skill set…
No matter how tough things got, I never felt like this wasn’t for me. I never thought of quitting or doing something else. I just accepted that I wasn’t the best, not the smartest, and unlikely to be in Steve Jobs’ dream dev team. And you know what? Something strange happened.
One day, while playing my favorite game, World of Warcraft, I decided I wanted to become a cheater. And it wasn’t just a casual thought or curiosity—it became a full-blown obsession. I was just a regular programmer with average web development knowledge, yet I decided to write a cheat, dive into hacking, and understand how it all worked.
For a whole year, I obsessively studied the C++ source code of the game—despite not really using C++ at all. I explored how the server worked, dug into Assembly, network traffic, data packets, and hex code. I read books on cybersecurity and anything even remotely related. It felt like an endless world of discovery. I could spend months trying to understand things that didn’t make sense to me at first—occasionally achieving small victories, but victories nonetheless.
I started building a toolkit of tools like IDA Pro, xDbg, and even something as simple as https://hexed.it/, which let me quickly modify binary files.
After achieving real success—writing my first memory manipulation programs for protected software—I realized that what really makes a difference is a mix of luck, hard work, and a genuine passion for what you’re doing. And I had both of those things.
That became a kind of guiding principle for my further development. Sure, I’m not the most talented or naturally gifted, but I began to understand that even without full knowledge, with persistence and effort, you can achieve goals that seem impossible at first—or even at second or third glance.
Getting to Work
I got a job at an outsourcing company, and honestly, I felt confident thanks to my freelance commercial experience. At work, I handled whatever tasks the client needed—it didn’t matter whether I already knew how to do it or not. My goals were simple: learn more and earn money. What did I work on? Pretty much everything, except I always thought of myself as more of a logic guy, and frontend wasn’t really my thing. It was easier for me to deploy and configure a server than to write 10 lines of CSS.
So I focused mostly on backend logic, building systems, and I’d often hand off frontend tasks to others. Still, I was always afraid of losing touch with those skills, so I made an effort to study Vue, React, Angular, and various frontend libraries—just to understand the logic behind it.
I read a lot of books, mostly on JavaScript, DevOps, and hacking. At work, I grew horizontally, gaining experience based on the clients’ needs. In my personal time, I was deeply interested in hacking and reverse engineering—not because of any grand ambition, but simply because I loved it. I saw myself in it, because I was good at it. I definitely had some luck—I could click randomly through code and somehow land on exactly what I needed. It’s comforting to know that not everything is hopeless.
Years went by, and as backend developers and DevOps engineers, we often felt invisible. Over time, the huge amount of backend code I wrote stopped bringing the same satisfaction. There were more systems, more interfaces, and less recognition—because no one really sees what you do behind the scenes. So why not switch to frontend? Well, I just hate CSS. And building simple landing pages or generic websites with nothing unique? That’s just not interesting. I need something bold and impressive—something that grabs me the way watching Dune does. Game development? Too complex, and I never had the desire to make games.
But then, at work, I was given a task to create a WebAR experience for a client. It required at least some basic 3D knowledge, which I didn’t have. So I dove in blindly and started building the app using 8thWall. That’s when I discovered A-Frame, which was super easy and incredibly fun—seeing results so different from anything I had done before. When A-Frame became limiting, I started using Three.js directly on commercial projects. I had zero understanding of vector math, zero 3D modeling experience (like in Blender), but I still managed to build something. Some things worked, some didn’t—but in the end, the client was happy.
WebAR experience
After creating dozens of such projects and nearly a hundred backend projects, I eventually grew tired of both. Out of boredom, I started reading books on Linux Bash, Kubernetes, WebAssembly, Security, and code quality—good and bad.
All of this only expanded my technical perspective. I didn’t become a hero or some programming guru, but I felt like I was standing alone at the summit of my own mountain. There was this strange emptiness—an aimless desire to keep learning, and yet I kept doing it day after day. Some topics I studied still haven’t revealed their meaning to me, while others only made sense years later, or proved useful when I passed that knowledge on to others.
Over the years, I became a team lead—not because I was naturally suited for it, but because there was simply no one else. I took on responsibility, began teaching others what to do, even though I wasn’t always sure what was right or wrong—I just shared my logic and experience.
Alongside trends, I had to learn CI/CD and Docker to solve tasks more efficiently—tasks that used to be handled differently. And you know what? I really learned something from this period: that most tools are quite similar, and you don’t need to master all of them to solve real business problems. In my mind, they became just that—tools.
All you need is to read the documentation, run a few basic examples, and you’re good to go. I’m simply not one of those people who wants to stick to one technology for life and squeeze value out of it forever. That’s not me. For over 5 years, I built 70–80 websites using just WordPress and Laravel—covering everything from custom themes and templating systems to multisites and even deep dives into the WordPress core. I worked with some truly awful legacy code that I wouldn’t wish on anyone.
Eventually, I decided to move on. The developers I worked with came and went, and that cycle never ended—it’s still ongoing to this day. Then came my “day X.” I was given a project I couldn’t turn down. It involved GLSL shaders. I had to create a WebAR scene with a glass beverage placed on a table. The challenge was that it was a glass cup, and around version 130 of Three.js, this couldn’t be done using a simple material. The client provided ready-made code written in Three.js with custom shaders. I looked at it and saw nothing but math—math I couldn’t understand. It was way too complex. The developer who created it had written a shader for glass, water, ice, and other elements. My task was to integrate this scene into WebAR. I was lucky enough to get a call with the developer who built it, and I asked what seemed like a straightforward question at the time:
(Me) – How did you manage to create such effects using pure math? Can you actually visualize it all in your head?
(Shader Developer) – Yeah, it looks complicated, but if you start writing shaders, borrowing small snippets from elsewhere and understanding how different effects work, eventually you start to look at that mathematical code and visualize those parts in your head.
His answer blew me away. I realized—this guy is brilliant. And I honestly hadn’t seen anyone cooler. I barely understood anything about what he’d done—it was all incredibly hard to grasp. Back then, I didn’t have ChatGPT or anything like it to help. I started searching for books on the subject, but there were barely any. It was like this secret world where everyone knew everything but never shared. And if they did, it was in dry, unreadable math-heavy documentation that someone like me just couldn’t digest. At that point, I thought maybe I was simply too weak to write anything like that, and I went back to what I was doing before.
The Beginning of the Creative Developer Journey
About a year later, I came across this website, which struck me with its minimalistic and stylish design—totally my vibe. Without hesitation, I bought the course by Bruno Simon, not even digging into the details. If he said he’d teach shaders, I was all in. My obsession was so intense that I completed the course in just two weeks, diving into every single detail. Thanks to my background, most of the lessons were just a pleasant refresher—but the shader sections truly changed my life.
So, I finished the course. What now? I didn’t yet have real-world projects that matched the new skills I had gained, so I decided to just start coding and releasing my own work. I spent a long time thinking about what my first project should be. Being a huge fan of the Naruto universe, I chose to dedicate my first creative project to my favorite character—Itachi.
I already had some very basic skills in Blender, and of course, there was no way I could create a model like that myself. Luckily, I stumbled upon one on Sketchfab and managed to download it (haha). I built the project almost the way I envisioned it, though I lacked the experience for some finer details. Still, I did everything I could at the time. God rays were already available in the Three.js examples, so creating a project like that was pretty straightforward. And man, it was so cool—the feeling of being able to build something immersive was just amazing.
Next, I decided to create something in honor of my all-time favorite game, which I’ve been playing for over 15 years—World of Warcraft.
In this project, the real challenge for me was linking the portal shader to sound, as well as creating particle motion along Bézier curves. But by this point, I already had ChatGPT—and my capabilities skyrocketed. This is my favorite non-commercial project. Still, copying and modifying something isn’t the same as creating it from scratch.
The shaders I used here were pieced together from different sources—I borrowed some from Bruno Simon’s projects, and in other cases, I reverse-engineered other projects just to figure out what I could replicate instead of truly engaging my own thinking. It was like always taking the path of least resistance. Ironically, reverse engineering a Webpack-compiled site often takes more time than simply understanding the problem yourself. But that was my default mode—copy, modify, move on.
For this particular project, it wasn’t a big deal, but I’ve had projects in the past that got flagged for copyright issues. I knew everything lived on the frontend and could be broken down and analyzed bit by bit—especially shaders. You might not know this, but in Safari on a MacBook, you can use developer tools to view all the shaders used on a site and even modify them in real time. Naturally, I used every trick I knew to reach my goals.
That shader developer’s comment—about being able to read math and visualize it—kept echoing in my mind. After Bruno’s course, I started to believe he might have been right. I was beginning to understand fragments of shader code, even if not all of it. I ended up watching every single video on the YouTube channel “The Art Of Code”.
After watching those videos, I started to notice my growth in writing shaders. I began to see, understand, and even visualize what I was writing. So I decided to create a fragment shader based on my own experience:
Along my shader-writing journey, I came across someone everyone in the shader world knows—Inigo Quilez. Man, what an absolute legend. There’s this overwhelming feeling that you’ll never reach his level. His understanding of mathematics and computer graphics is just on another planet compared to mine. For a long time, that thought really got to me—20 years ago, he was creating things I still can’t do today, despite programming for so long. But looking back, I realized something: some of the people I once admired, I’ve actually surpassed in some ways—not because I aimed to, but simply by moving forward every day. And I came to believe that if I keep going, maybe I’ll reach my own peak—one where my ideas can be truly useful to others.
So here I am, moving forward, and creating what I believe is a beautiful shader of the aurora.
I realized that I could now create shaders based on models made in Blender—and do it with a full understanding of what’s going on. I was finally capable of building something entirely on my own.
So what’s next? I dove back into Three.js and began trying to apply everything I had learned to create something new. You can find a list of those projects here.
Public Projects
I bought and completed all the courses by Simon Dev. By then, the shader course wasn’t anything groundbreaking for me anymore, but the math course was something I really needed. I wanted to deepen my understanding of how to apply math in practice. I also played through this game, which demonstrates how vector math works—highly recommended for anyone struggling with the concept. It really opened my eyes to things I hadn’t understood before.
I became obsessed with making sure I didn’t miss anything shared by the people who helped shape my knowledge. I watched 100% of the videos on his YouTube channel and those of other creators who were important to me in this field. And to this day, I keep learning, studying other developers’ techniques, and growing in the field of computer graphics.
Interesting Projects
I really enjoy working with particles—and I also love motion blur. I came up with an approach where each particle blurs in the direction of its movement based on its velocity. I left some empty space on the plane where the particle is drawn so the blur effect wouldn’t get cut off.
Using particles and distance-based blur effects in commercial projects.
After watching Dune, I decided to play around with sound.
I really enjoy playing with light sources.
Or even creating custom light sources using TSL.
I consider this project my most underrated one. I’m a huge fan of the Predator and Alien universes. I did borrow the plasma shader from CodePen, but honestly, that’s not the most important detail here. At the time I made this project, Three.js had just introduced a new material property called AlphaHash, which allowed me to create an awesome laser effect. It really looks great. Maybe no one notices such small details, but for me, it was an achievement to come up with that solution right as the new version of Three.js was released. That’s where my luck comes in—I had no idea how I’d implement the laser at the start of the project and thought, “Oh well, I’ll figure something out.” And luckily, the engine developers delivered exactly what I needed just in time.
You may have already noticed that I don’t build full frontend solutions with lots of interfaces and traditional layout work—that just doesn’t interest me, so I don’t do it. In commercial development, I focus on solving niche problems—problems other developers won’t spend hours watching videos to figure out. I create concepts that later get integrated into projects. You might have already seen some 3D scenes or visual effects I’ve built—without even knowing it. A lot of development happens through two, three, or even four layers of hands. That’s why, sometimes, creating something for Coca-Cola is more realistic than making a simple online store for a local business.
And what have I learned from this journey?
Never give up. Be like Naruto—better to fail 100 times than never try at all.
I’m not a saint of a developer—I forget things just like you, I use ChatGPT, I get lazy, and sometimes, in trying to do more than I’m capable of, I give in to the temptation of borrowing code. And yes, that has sometimes ended badly for me.
I assure you, even top developers—the ones who seem untouchably brilliant—also borrow or adapt code. I’ve reverse-engineered projects and clearly seen others use code they didn’t write, even while they rake in thousands of views and win awwwards. Meanwhile, the original authors stay invisible. That’s why I now try to focus more on creating things that are truly mine, to grow the ability to create rather than just consume. And to you, I say—do whatever helps you get better. The takeaway for me is this: share what you’ve made today, because tomorrow it might be irrelevant. And believe me, if someone really wants what you’ve built, they’ll take it anyway—and you won’t even know.
Even if your job makes you build projects that don’t excite you, don’t assume it’s someone else’s job to teach you. You have to sit down, start learning on your own, and work toward what truly inspires you.
Don’t be afraid to forget things—remembering something isn’t the same as learning it from scratch, especially with ChatGPT around.
See new technologies as tools to reach your goals. Don’t fear them—use everything, including AI, as long as it helps you move forward. Making mistakes is the most normal thing that can happen to you.
Nothing is impossible—it’s just a matter of time you personally need to spend to understand something that currently feels incomprehensible.
When using ChatGPT, think critically and read what it outputs. Don’t blindly copy and paste code—I’ve done that, and it cost me a lot of time. If I had just thought it through, I could’ve solved it in five minutes.
If new technologies seem absurd to you, maybe you’re starting to age—or refusing to accept change. Try to shake yourself up and think critically. If you don’t do it, someone else will—and they’ll leave you behind.
Hard work and determination beat talent (Inigo Quilez is still out of reach for now), but the price is your time.
In the pursuit of your own achievements, don’t forget about your family, loved ones, and friends—otherwise your 30s will fly by even faster than mine did.
The more techniques you learn in digital art, the more you’ll want to understand math and physics—and many things you once found boring may suddenly gain new meaning and purpose.
Ideas that you create yourself may become more valuable to you than everything you’ve ever studied.
Programming books are often so huge that you don’t even want to buy them—but you don’t have to read them cover to cover. Learn to filter information. Don’t worry about skipping something—if you miss it, GPT can explain it later. So feel free to skip the chapters you don’t need right now or won’t retain anyway.
In the past, it was important to know what a certain technology could do and how to use it by memory or with references. Today, it’s enough to simply know what’s possible—documentation and ChatGPT can help you figure out the rest. Don’t memorize things that will be irrelevant or replaced by new tech in a few days.
Start gradually learning TSL—the node-based system will make it easier to create materials designed by artists in Blender. (Year 2025)
Don’t be afraid to dig into the core to read or even modify something. The people who build the tools you use are just people too, and they write readable code. Take Three.js, for example—when you dive into the material declarations, the hierarchy becomes much clearer, something that wasn’t obvious to me when I first started learning Three.js. Or with TSL—even though the documentation is still weak, looking at function declarations often reveals helpful comments that make it easier to understand how to use different features.
To be honest, I didn’t really want to write about myself—but Manoela pushed me, so I decided to help. And you know, helping people often comes back around as luck 🍀—and that always comes in handy later!
Alright, I won’t bore you any longer—just take a look at my cat ♥️
Two weeks. No 3D Visuals. No panic. We built the OXI ONE MKII website using nothing but structure and type. All to meet the deadline for the product launch and its debut in Berlin.
Creating a website for the launch of a new flagship product is already a high-stakes task; doing it in under 14 days, with no flawless renders, raises the bar even higher. When OXI Instruments approached us, the ONE MKII was entering its final development stage. The product was set to premiere in Berlin, and the website had to be live by that time, no extensions, no room for delay. At the same time, there was no finalized imagery, no video, and no product renders ready for use.
We had to:
Build a bold, functional website without relying on visual assets
Reflect the character and philosophy of the ONE MKII — modular, live, expressive
Craft a structure that would be clear to musicians and intuitive across devices
Work in parallel with the OXI team, adjusting to changes and updates in real time
This wasn’t just about speed. It was about designing clarity under pressure, with a strict editorial mindset, where every word, margin, and interaction had to work harder than usual. These are the kinds of things you’d never guess as an outside observer or a potential customer. But constraints like these are truly a test of resilience.
The Approach
If you’ve seen other websites we’ve launched with various teams, you’ll notice they often include 3D graphics or other rich visual layers. This project, however, was a rare exception.
It was crucial to make the right call early on and to hit expectations spot-on during the concept stage. A couple of wrong turns wouldn’t be fatal, but too many missteps could easily lead to missing the deadline and delivering an underwhelming result.
We focused on typography, photography, and rhythm. Fortunately, we were able to shape the art direction for the photos in parallel with the design process. Big thanks to Candace Janee (OXI project manager) who coordinated between me, the photographers, and everyone involved to quickly arrange compositions, lighting setups, and other details for the shoot.
Another layer of complexity was planning the broader interface and future platform in tandem with this launch. While we were only releasing two core pages at this stage, we knew the site would eventually evolve into a full eCommerce platform. Every design choice had to consider the long game, from homepage and support pages to product detail layouts and checkout flows. That also meant thinking ahead about how systems like Webflow, WordPress, WooCommerce, and email automation would integrate down the line.
Typography
With no graphics to lean on, typography had to carry more weight than usual, not just in terms of legibility but in how it communicates tone, energy, and brand attitude. We opted for a bold, editorial rhythm. Headlines drive momentum across the layout, while smaller supporting text helps guide the eye without clutter.
Both typefaces come from the same designer, Wei Huang, a type designer based in Australia: Work Sans for headlines and body copy, and Fragment Mono for supporting labels and detailed descriptions. The two fonts complement each other well and are completely free to use, which allowed us to rely on Google Fonts without worrying about file formats or load sizes.
CMS System
Even though we were only launching two pages initially, the CMS was built with a full content ecosystem in mind. Product specs, updates, videos, and future campaigns all had a place in the structure. Instead of hardcoding static blocks, we built flexible content types that could evolve alongside the product line.
The idea was simple: avoid rework later. The CMS wasn’t just a backend; it was the foundation of a scalable platform. Whether we were thinking of Webflow’s CMS collections or potential integrations with WordPress and WooCommerce, the goal was to create a system that was clean, extensible, and future-ready.
Sketches. Early explorations.
I really enjoy the concept phase. It’s the moment where different directions emerge and key patterns begin to form, whether that’s alignment, a unique sense of ornamentation, asymmetry, or something else entirely. This stage is where the visual language starts to take shape.
Here’s a look at some of the early concepts we explored. The OXI website could’ve turned out very differently.
OXI Instruments Concept #1
OXI Instruments Concept #2
We settled on a dark version of the design partly due to the founder’s preference, and partly because the brand’s core colors (which were off-limits for changes) worked well with it. Additionally, cutting out the device from photos made it easier to integrate visuals into the layout and mask any imperfections.
Rhythm & Layout
When planning the rhythm and design, it’s important not to go overboard with creativity. As designers, we often want to add that “wow” factor, but sometimes the business just doesn’t need it.
The target audience, people in the music world, already get their visual overload during performances by their favorite artists. But when they’re shopping for a new device, they’re not looking for spectacle. They want to see the product. The details. The specs. Everything that matters.
All of it needs to be delivered clearly and accessibly. We chose the simplest approach: alternating between center-aligned and left-aligned sections, giving us the flexibility to structure the layout intuitively. Photography helps break up the technical content, and icons quickly draw attention to key features. People don’t read, they scan. We designed with that in mind.
A few shots highlighting some of my favorite sections.
Result
The results were genuinely rewarding. The team felt a boost in motivation, and the brand’s audience and fans immediately noticed the shift, pointing out how the update pushed OXI in a more professional direction.
From what I was told, pre-orders for the device sold out in less than a week. It’s always a great feeling when you’re proud of the outcome, the team is happy, and the audience responds positively. That’s what matters most.
Looking Ahead / Part Two
This was just the beginning. The second part of the project (a full eCommerce experience) is currently in the works. The core will expand, but the principles will remain the same.
I hope you’ll find the full relaunch of OXI Instruments just as exciting. Stay tuned for updates.
Creating a third-person character controller involves more than just moving an object around a 3D scene. Realistic movement, grounded physics, responsive jumping, and animation blending are essential for a polished feel. This article explores how these elements can be assembled — not through traditional manual coding, but via AI-assisted development with Bolt.new, a browser-based tool that generates web code from natural language prompts, backed by Claude 3.7 Sonnet and Claude 3.5 Sonnet LLMs. It provides a lightweight environment where developers can focus on describing functionality rather than writing boilerplate.
For this character controller, Bolt handled tasks like setting up physics, integrating animations, and managing input systems, making it easier to test ideas and iterate quickly without switching between tools or writing everything from scratch.
If you’re curious to learn more, check out this article on Codrops, which also explores the platform’s capabilities and showcases another real-world project built entirely with AI.
The final project is powered by React Three Fiber, Three.js, and Rapier, and showcases how a designer or developer can create complex, interactive 3D experiences by guiding AI — focusing on behavior and structure rather than syntax.
Step 1: Setting Up Physics with a Capsule and Ground
The character controller begins with a simple setup: a capsule collider for the player and a ground plane to interact with. Rapier, a fast and lightweight physics engine built in WebAssembly, handles gravity, rigid body dynamics, and collisions. This forms the foundation for player movement and world interaction.
The capsule shape was chosen for its stability when sliding across surfaces and climbing over small obstacles — a common pattern in real-time games.
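As a rough idea of what that foundation can look like with @react-three/rapier, here is a minimal sketch; the component names, dimensions, and positions are illustrative assumptions, not the exact code Bolt generated:
import { Canvas } from "@react-three/fiber";
import { Physics, RigidBody, CapsuleCollider, CuboidCollider } from "@react-three/rapier";
function Player() {
  return (
    // Dynamic body for the player; rotations are locked so the capsule stays upright
    <RigidBody colliders={false} lockRotations position={[0, 2, 0]}>
      {/* args = [halfHeight, radius] */}
      <CapsuleCollider args={[0.5, 0.35]} />
    </RigidBody>
  );
}
export function Scene() {
  return (
    <Canvas>
      <Physics gravity={[0, -9.81, 0]}>
        <Player />
        {/* Static ground the capsule can land on */}
        <RigidBody type="fixed">
          <CuboidCollider args={[20, 0.1, 20]} position={[0, -0.1, 0]} />
        </RigidBody>
      </Physics>
    </Canvas>
  );
}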
Step 2: Real-Time Tuning with a GUI
To enable rapid iteration and balance gameplay feel, a visual GUI was introduced (using Leva.js). This panel exposes parameters such as:
Player movement speed
Jump force
Gravity scale
Follow camera offset
Debug toggles
By integrating this directly into the experience, developers can tune the controller live without needing to edit or recompile code, speeding up testing and design decisions.
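With Leva, exposing those values is close to declarative. A minimal sketch (the parameter names, defaults, and ranges below are placeholders, not the project's exact values):
import { useControls } from "leva";
// Inside the player component: these values update live as the panel sliders move
const { moveSpeed, jumpForce, gravityScale, cameraOffset, debug } = useControls("Player", {
  moveSpeed: { value: 5, min: 1, max: 15 },
  jumpForce: { value: 6, min: 1, max: 20 },
  gravityScale: { value: 1, min: 0.1, max: 3 },
  cameraOffset: { value: 4, min: 2, max: 10 },
  debug: false
});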
Step 3: Ground Detection with Raycasting
A raycast is used to detect whether the player is grounded. This simple yet effective check prevents the character from jumping mid-air or triggering multiple jumps in sequence.
The logic is executed on every frame, casting a ray downward from the base of the capsule collider. When contact is confirmed, the jump input is enabled. This technique also allows smooth transitions between grounded and falling states in the animation system.
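With @react-three/rapier, that per-frame check can be sketched roughly as follows; the playerBody and isGrounded refs, the offsets, and the ray length are assumptions:
import { useFrame } from "@react-three/fiber";
import { useRapier } from "@react-three/rapier";
// inside the player component; playerBody is the capsule's RigidBody ref, isGrounded a plain ref
const { world, rapier } = useRapier();
useFrame(() => {
  const origin = playerBody.current.translation();
  origin.y -= 0.85; // start the ray just below the capsule's base
  const ray = new rapier.Ray(origin, { x: 0, y: -1, z: 0 });
  const hit = world.castRay(ray, 0.2, true); // short downward ray, colliders treated as solid
  isGrounded.current = hit !== null; // jumping is only allowed while this is true
});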
Step 4: Integrating a Rigged Character with Animation States
The visual character uses a GLB model rigged via Mixamo, with three key animations: Idle, Run, and Fall. These are integrated as follows:
The GLB character is attached as a child of the capsule collider
The animation state switches dynamically based on velocity and grounded status
Transitions are handled via animation blending for a natural feel
This setup keeps the visuals in sync with physics, while preserving modular control over the physical capsule.
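A rough sketch of that state switching with drei's useAnimations; the clip names, speed threshold, and the playerBody / isGrounded refs carried over from the physics step are assumptions:
import { useRef } from "react";
import { useFrame } from "@react-three/fiber";
import { useGLTF, useAnimations } from "@react-three/drei";
// inside the character component; playerBody and isGrounded come from the physics step
const { scene, animations } = useGLTF("/character.glb");
const { actions } = useAnimations(animations, scene);
const currentAction = useRef("Idle");
useFrame(() => {
  const velocity = playerBody.current.linvel();
  const speed = Math.hypot(velocity.x, velocity.z);
  // pick the target clip from the physics state, then cross-fade to it
  const next = !isGrounded.current ? "Fall" : speed > 0.5 ? "Run" : "Idle";
  if (next !== currentAction.current) {
    actions[currentAction.current]?.fadeOut(0.2);
    actions[next]?.reset().fadeIn(0.2).play();
    currentAction.current = next;
  }
});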
Step 5: World Building and Asset Integration
The environment was arranged in Blender, then exported as a single .glb file and imported into the Bolt.new project scene. This approach allows for efficient scene composition while keeping asset management simple.
For the web, using .glb keeps geometry and textures bundled together. To maintain performance, it’s recommended to keep textures at 1024×1024 resolution or other square power-of-two sizes (e.g. 256, 512, 2048). This ensures optimal GPU memory usage and faster load times across devices.
Special thanks to KayLousberg for the low-poly 3D kit used for prototyping.
Step 6: Cross-Platform Input Support
The controller was designed to work seamlessly across desktop, mobile, and gamepad platforms — all built using AI-generated logic through Bolt.
Gamepad support was added using the Gamepad API, allowing players to plug in a controller and play with analog input.
On desktop, the controller uses standard keyboard input (WASD or arrow keys) and mouse movement for camera control.
On mobile, AI-generated code enabled an on-screen joystick and jump button, making the game fully touch-compatible.
All input types control the same physics-driven character, ensuring consistent behavior across devices — whether you’re playing on a laptop, touchscreen, or game controller.
This cross-platform support was implemented entirely through natural language prompts, showcasing how AI can translate high-level intent into working input systems.
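To give a sense of what the gamepad path boils down to, here is a small sketch that polls the standard Gamepad API each frame; the axis/button mapping and the shared input object are assumptions rather than the project's exact code:
function readGamepad(input) {
  const pad = navigator.getGamepads()[0];
  if (!pad) return; // no controller connected
  // left stick drives movement, button 0 (usually "A" / "Cross") jumps
  const deadzone = 0.15;
  input.x = Math.abs(pad.axes[0]) > deadzone ? pad.axes[0] : 0;
  input.y = Math.abs(pad.axes[1]) > deadzone ? pad.axes[1] : 0;
  input.jump = pad.buttons[0].pressed;
}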
The Role of AI in the Workflow
What makes this controller unique isn’t the mechanics — it’s the process. Every system was generated by AI through descriptive prompts, allowing the developer to work more like a creative director than a traditional engineer.
AI handled the boilerplate, the physics setup, the animation switching logic — all based on clear creative goals. This opens new doors for prototyping and interactive design, where iteration speed matters more than syntax.
This character controller demo includes:
Capsule collider with physics
Grounded detection via raycast
State-driven animation blending
GUI controls for tuning
Environment interaction with static/dynamic objects
Cross-platform input support
It’s a strong starting point for creating browser-based games, interactive experiences, or prototyping new ideas — all with the help of AI.
Check out the full game built using this setup as a base: 🎮 Demo Game
My (design) partner, Gaetan Ferhah, likes to send me his design and motion experiments throughout the week. It’s always fun to see what he’s working on, and it often sparks ideas for my own projects. One day, he sent over a quick concept for making a product grid feel a bit more creative and interactive. 💬 The idea for this tutorial came from that message.
We’ll explore a “grid to preview” hover interaction that transforms product cards into a full preview. As with many animations and interactions, there are usually several ways to approach the implementation—ranging in complexity. It can feel intimidating (or almost impossible) to recreate a designer’s vision from scratch. But I’m a huge fan of simplifying wherever possible and leaning on optical illusions (✨ fake it ’til you make it ✨).
All images are generated with DALL-E (sad, I know.. I wanted everything to be real too 💔)
For this tutorial, I knew I wanted to keep things straightforward and recreate the effect of puzzle pieces shifting into place using a combination of clip-path animation and an image overlay.
Let’s break it down in a few steps:
Layout and Overlay (HTML, CSS): Set up the initial layout and carefully match the position of the preview overlay to the grid.
Build JavaScript Structure (JavaScript): Create some classes to keep us organised and add some interactivity (event listeners).
Clip-Path Creation and Animation (CSS, JS, GSAP): Add and animate the clip-path, including some calculations on resize—this forms a key part of the puzzle effect.
Moving Product Cards (JS, GSAP): Set up animations to move the product cards towards each other on hover.
Preview Image Scaling (JS, GSAP): Slightly scale down the preview overlay in response to the inward movement of the other elements.
Adding Images (HTML, JS, GSAP): Enough with the solid colours; let’s add some images and a gallery animation.
Debouncing Events (JS): Debounce the mouse-enter event to prevent excessive triggering and reduce jitter.
Final Tweaks: Crossing the t’s and dotting the i’s—small clean-ups and improvements.
Layout and Overlay
At the foundation of every good tutorial is a solid HTML structure. In this step, we’ll create two key elements: the product grid and the overlay for the preview cards. Since both need a similar layout, we’ll place them inside the same container (.products).
Our grid will consist of 8 products (4 columns by 2 rows) with a gutter of 5vw. To keep things simple, I’m only adding the corresponding li elements for the products, but not yet adding any other elements. In the HTML, you’ll notice there are two preview containers: one for the left side and one for the right. If you want to see the preview overlays right away, head to the CodePen and set the opacity of .product-preview to 1.
Why I Opted for Two Containers
At first, I planned to use just one preview container and move it to the opposite side of the hovered card by updating the grid-column-start. That approach worked fine—until I got to testing.
When I hovered over a product card on the left and quickly switched to one on the right, I realised the problem: with only one container, I also had just one timeline controlling everything inside it. That made it basically impossible to manage the “in/out” transition between sides smoothly.
So, I decided to go with two containers—one for the left side and one for the right. This way, I could animate both sides independently and avoid timeline conflicts when switching between them.
See the Pen
Untitled by Gwen Bogaert (@gwen-bo)
on CodePen.
JavaScript Set-up
In this step, we’ll add some classes to keep things structured before adding our event listeners and initiating our timelines. To keep things organised, let’s split it into two classes: ProductGrid and ProductPreview.
ProductGrid will be fairly basic, responsible for handling the split between left and right, and managing top-level event listeners (such as mouseenter and mouseleave on the product cards, and a general resize).
ProductPreview is where the magic happens. ✨ This is where we’ll control everything that happens once a mouse event is triggered (enter or leave). To pass the ‘active’ product, we’ll define a setProduct method, which, in later steps, will act as the starting point for controlling our GSAP animation(s).
Splitting Products (Left – Right)
In the ProductGrid class, we will split all the products into left and right groups based on their column position. We have 8 products arranged in 4 columns and 2 rows, so each row contains 4 items.
this.ui.products.filter((_, i) => i % 4 === 2 || i % 4 === 3)
The logic relies on the modulo or remainder operator. The line above groups the product cards on the right. We use the index (i) to check if it’s in the 3rd (i % 4 === 2) or 4th (i % 4 === 3) position of the row (remember, indexing starts at 0). The remaining products (with i % 4 === 0 or i % 4 === 1) will be grouped on the left.
Now that we know which products belong to the left and right sides, we will initiate a ProductPreview for both sides and pass along the products array. This will allow us to define productPreviewRight and productPreviewLeft.
To finalize this step, we will define event listeners. For each product, we’ll listen for mouseenter and mouseleave events, and either set or unset the active product (both internally and in the corresponding ProductPreview class). Additionally, we’ll add a resize event listener, which is unused for now but will come into play in a later step.
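Condensed into a sketch, that wiring could look like this; the selectors and property names are assumptions, and the getProductSide helper and the debounced productMouseEnter / productMouseLeave handlers are covered in later steps:
class ProductGrid {
  constructor() {
    this.ui = { products: [...document.querySelectorAll('.product')] }
    this.hoverDelay = null
    this.activeProduct = null
    // 3rd and 4th card of each row go right, the rest go left
    const right = this.ui.products.filter((_, i) => i % 4 === 2 || i % 4 === 3)
    const left = this.ui.products.filter((_, i) => i % 4 === 0 || i % 4 === 1)
    this.productPreviewRight = new ProductPreview(right)
    this.productPreviewLeft = new ProductPreview(left)
    this.ui.products.forEach((product) => {
      // getProductSide(product) returns the ProductPreview that should respond to this card
      product.addEventListener('mouseenter', () =>
        this.productMouseEnter(product, this.getProductSide(product))
      )
      product.addEventListener('mouseleave', () => this.productMouseLeave())
    })
    window.addEventListener('resize', () => {}) // unused for now, needed in a later step
  }
}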
This is where we’re at so far (only changes in JavaScript):
See the Pen
Tutorial – step 2 (JavaScript structure) by Gwen Bogaert (@gwen-bo)
on CodePen.
Clip-path
At the base of our effect lies the clip-path property and the ability to animate it with GSAP. If you’re not familiar with using clip-path to clip content, I highly recommend this article by Sarah Soueidan.
Even though I’ve used clip-path in many of my projects, I often struggle to remember exactly how to define the shape I’m looking for. As before, I’ve once again turned to the wonderful tool Clippy, to get a head start on defining (or exploring) clip-path shapes. For me, it helps demystify which value influences which part of the shape.
Let’s start with the cross (from Clippy) and modify the points to create a more mathematical-looking cross (✚) instead of the religious version (✟).
Feel free to experiment with some of the values, and soon you’ll notice that with small adjustments we can get much closer to the desired shape! For example, we can stretch the horizontal arms completely to the sides (previously set to 10% and 90%) and shift everything more evenly towards the center (with a 10% difference from the center, so either 40% or 60%).
And bada bing, bada boom! This clip-path almost immediately creates the illusion that our single preview container is split into four parts — exactly the effect we want to achieve! Now, let’s move on to animating the clip-path to get one step closer to our final result:
Animating Clip-paths
The concept of animating clip-paths is relatively simple, but there are a few key things to keep in mind to ensure a smooth transition. One important consideration is that it’s best to define an equal number of points for both the start and end shapes.
The idea is fairly straightforward: we begin with the clipped parts hidden, and by the end of the animation, we want the clip-path to disappear, revealing the entire preview container (by making the arms of the cross so thin that they’re barely visible or not visible at all). This can be achieved easily with a fromTo animation in GSAP (though it’s also supported in CSS animations).
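The GSAP side of that can be as small as a single fromTo. The sketch below uses a simple inset instead of the demo's actual cross polygon, purely to show the mechanics; the selector, duration, and easing are placeholders:
// generic example: both keyframes must share the same clip-path structure
// (for the cross shape, that means the same number of polygon points)
gsap.fromTo('.product-preview',
  { clipPath: 'inset(50% 50% 50% 50%)' }, // fully clipped
  { clipPath: 'inset(0% 0% 0% 0%)', duration: 0.8, ease: 'power2.inOut' } // fully revealed
)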
The Catch
You might think, “That’s it, we’re done!” — but alas, there’s a catch when it comes to using this as our puzzle effect. To make it look realistic, we need to ensure that the shape of the cross aligns with the underlying product grid. And that’s where a bit of JavaScript comes in!
We need to factor in the gutter of our grid (5vw) to calculate the width of the arms of our cross shape. It could’ve been as simple as adding or subtracting (half!) of the gutter to/from the 50%, but… there’s a catch in the catch!
We’re not working with a square, but with a rectangle. Since our values are percentages, subtracting 2.5vw (half of the gutter) from the center wouldn’t give us equal-sized arms. This is because there would still be a difference between the x and y dimensions, even when using the same percentage value. So, let’s take a look at how to fix that:
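Concretely, that fix boils down to something like the following resize handler; this is a sketch, and the element reference and property names are assumptions:
onResize() {
  const { width, height } = this.ui.container.getBoundingClientRect()
  const gutter = window.innerWidth * 0.05 // 5vw in pixels
  // what 5vw represents as a percentage of the container's width and height
  this.gutterPercentX = (gutter / width) * 100
  this.gutterPercentY = (gutter / height) * 100
}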
In the code above (triggered on each resize), we get the width and height of the preview container (which spans 4 product cards — 2 columns and 2 rows). We then calculate what percentage 5vw would be, relative to both the width and height.
To conclude this step, we would have something like:
See the Pen
Tutorial – step 3 (clip path) by Gwen Bogaert (@gwen-bo)
on CodePen.
Moving Product Cards
Another step in the puzzle effect is moving the visible product cards together so they appear to form one piece. This step is fairly simple — we already know how much they need to move (again, gutter divided by 2 = 2.5vw). The only thing we need to figure out is whether a card needs to move up, down, left, or right. And that’s where GSAP comes to the rescue!
We need to define both the vertical (y) and horizontal (x) movement for each element based on its index in the list. Since we only have 4 items, and they need to move inward, we can check whether the index is odd or even to determine the horizontal movement. For the vertical movement, we can check whether the card sits in the top or bottom row.
In GSAP, many properties (like x, y, scale, etc.) can accept a function instead of a fixed value. When you pass a function, GSAP calls it for each target element individually.
Horizontal (x): cards with an even index (0, 2) sit in the left column, so they shift right by 2.5vw; the other two move to the left. Vertical (y): cards with an index lower than 2 (0, 1) sit in the top row, so they move down; the other two move up.
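In practice that can be as compact as the snippet below; the element list, duration, and easing are assumptions:
// GSAP calls these functions once per card, passing the card's index as the first argument
gsap.to(this.ui.cards, {
  x: (index) => (index % 2 === 0 ? '2.5vw' : '-2.5vw'), // left column right, right column left
  y: (index) => (index < 2 ? '2.5vw' : '-2.5vw'),       // top row down, bottom row up
  duration: 0.6,
  ease: 'power2.inOut'
})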
See the Pen
Tutorial – step 3 (clip path) by Gwen Bogaert (@gwen-bo)
on CodePen.
Preview Image (Scaling)
Cool, we’re slowly getting there! We have our clip-path animating in and out on hover, and the cards are moving inward as well. However, you might notice that the cards and the image no longer have an exact overlap once the cards have been moved. To fix that and make everything more seamless, we’ll apply a slight scale to the preview container.
This is where a bit of extra calculation comes in, because we want it to scale relative to the gutter. So we take into account the height and width of the container.
This calculation determines a scale factor to shrink our preview container inward, matching the cards coming together. First, the rectangle’s width/height (in pixels) is converted into viewport width units (vw) by dividing it by the pixel value of 1vw. Next, the shrink amount (5vw) is subtracted from that width/height. Finally, the result is divided by the original width in vw to calculate the scale factor (which will be slightly below 1). Since we’re working with a rectangle, the scale factor for the x and y axes will be slightly different.
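Written out, the calculation reads like this; the variable names are assumptions, and width and height are the container's pixel dimensions from the resize step:
const vwInPx = window.innerWidth / 100 // pixel value of 1vw
const widthInVw = width / vwInPx
const heightInVw = height / vwInPx
// subtract the 5vw gutter, then divide by the original size to get a factor just below 1
const scaleX = (widthInVw - 5) / widthInVw
const scaleY = (heightInVw - 5) / heightInVw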
In the codepen below, you’ll see the puzzle effect coming along nicely on each container. Pink are the product cards (not moving), red and blue are the preview containers.
See the Pen
Tutorial – step 4 (moving cards) by Gwen Bogaert (@gwen-bo)
on CodePen.
Adding Pictures
Let’s make our grid a little more fun to look at!
In this step, we’re going to add the product images to our grid, and the product preview images inside the preview container. Once that’s done, we’ll start our image gallery on hover.
The HTML changes are relatively simple. We’ll add an image to each product li element and… not do anything with it. We’ll just leave the image as is.
The rest of the magic will happen inside the preview container. Each container will hold the preview images of the products from the other side (those that will be visible). So, the left container will contain the images of the 4 products on the right, and the right container will contain the images of the 4 products on the left. Here’s an example of one of these:
Once that’s done, we can initialise by querying those images in the constructor of the ProductPreview, sorting them by their dataset.id. This will allow us to easily access the images later via the data-index attribute that each product has. To sum up, at the end of our animate-in timeline, we can call startPreviewGallery, which will handle our gallery effect.
startPreviewGallery(id) {
const images = this.ui.previewImagesPerID[id]
const timeline = gsap.timeline({ repeat: -1 })
// first image is already visible (do not hide)
gsap.set([...images].slice(1), { opacity: 0 })
images.forEach((image) => {
timeline
.set(images, { opacity: 0 }) // Hide all images
.set(image, { opacity: 1 }) // Show only this one
.to(image, { duration: 0, opacity: 1 }, '+=0.5')
})
this.galleryTimeline = timeline
}
Debouncing
One thing I’d like to do is debounce hover effects, especially if they are more complex or take longer to complete. To achieve this, we’ll use a simple (and vanilla) JavaScript approach with setTimeout. Each time a hover event is triggered, we’ll set a very short timer that acts as a debouncer, preventing the effect from firing if someone is just “passing by” on their way to the product card on the other side of the grid.
I ended up using a 100ms “cooldown” before triggering the animation, which helped reduce unnecessary animation starts and minimise jitter when interacting with the cards.
productMouseEnter(product, preview) {
// If another timer (aka hover) was running, cancel it
if (this.hoverDelay) {
clearTimeout(this.hoverDelay)
this.hoverDelay = null
}
// Start a new timer
this.hoverDelay = setTimeout(() => {
this.activeProduct = product
preview.setProduct(product)
this.hoverDelay = null // clear reference
}, 100)
}
productMouseLeave() {
// If user leaves before debounce completes
if (this.hoverDelay) {
clearTimeout(this.hoverDelay)
this.hoverDelay = null
}
if (this.activeProduct) {
const preview = this.getProductSide(this.activeProduct)
preview.setProduct(null)
this.activeProduct = null
}
}
Final Tweaks
I can’t believe we’re almost there! Next up, it’s time to piece everything together and add some small tweaks, like experimenting with easings. The final timeline I ended up with (which plays or reverses depending on mouseenter or mouseleave) ties together the clip-path animation, the card movement, the preview scaling, and the gallery start.
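Sketched out, the combined timeline looks roughly like this; the shape strings, durations, easings, and property names are placeholders rather than the exact demo values:
buildTimeline() {
  this.timeline = gsap.timeline({
    paused: true,
    onComplete: () => this.startPreviewGallery(this.activeProduct.dataset.index)
  })
    // cards slide together (moving product cards step)
    .to(this.ui.cards, {
      x: (i) => (i % 2 === 0 ? '2.5vw' : '-2.5vw'),
      y: (i) => (i < 2 ? '2.5vw' : '-2.5vw'),
      duration: 0.6,
      ease: 'power3.inOut'
    }, 0)
    // cross-shaped clip animates out (clip-path step)
    // CROSS_START and CROSS_END stand in for the start and end polygon() strings
    .fromTo(this.ui.container, { clipPath: CROSS_START }, {
      clipPath: CROSS_END, duration: 0.6, ease: 'power3.inOut'
    }, 0)
    // preview scales down to stay aligned with the moved cards (scaling step)
    .to(this.ui.container, {
      scaleX: this.scaleX, scaleY: this.scaleY, duration: 0.6, ease: 'power3.inOut'
    }, 0)
}
// played on mouseenter, reversed on mouseleave:
// this.timeline.play() / this.timeline.reverse()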
While this interaction may look cool and visually engaging, it’s important to be mindful of usability and accessibility. In its current form, this effect relies quite heavily on motion and hover interactions, which may not be ideal for all users. Here are a few things to consider if you’re planning to implement a similar effect:
Motion sensitivity: Be sure to respect the user’s prefers-reduced-motion setting. You can easily check this with a media query and provide a simplified or static alternative for users who prefer minimal motion.
Keyboard navigation: Since this interaction is hover-based, it’s not currently accessible via keyboard. If you’d like to make it more inclusive, consider adding support for focus events and ensuring that all interactive elements can be reached and triggered using a keyboard.
Think of this as a playful, exploratory layer — not a foundation. Use it thoughtfully, and prioritise accessibility where it counts. 💛
Acknowledgements
I am aware that this tutorial assumes an ideal scenario of only 8 products. What happens if you have more? I didn’t test it out myself, but the important part is that the preview containers feel like an exact overlay of the product grid. If more cards are present, you could try ‘mapping’ the coordinates of the preview container to the 8 products that are completely in view. Or go crazy with your own approach if you have another idea. That’s the beauty of it: there are always many approaches that lead to the same (visual) outcome. 🪄
Thank you so much for following along! A big thanks to Codrops for giving me the opportunity to contribute. I’m excited to see what you’ll create when inspired by this tutorial! If you have any questions, feel free to drop me a line!