This tutorial walks through creating an interactive animation: starting in Blender by designing a button and simulating a cloth-like object that drops onto a surface and settles with a soft bounce.
After baking the cloth simulation, the animation is exported and brought into a Three.js project, where it becomes an interactive scene that can be replayed on click.
By the end, you’ll have a user-triggered animation that blends Blender’s physics simulations with Three.js rendering and interactivity.
Let’s dive in!
Step 1: Create a Cube and Add Subdivisions
Start a New Project: Open Blender and delete the default cube (select it and press X, then confirm).
Add a Cube: Press Shift + A > Mesh > Cube to create a new cube.
Enter Edit Mode: Select the cube, then press Tab to switch to Edit Mode.
Subdivide the Cube: Press Ctrl + R to add a loop cut, hover over the cube, and scroll your mouse wheel to increase the number of cuts.
Apply Subdivision: Press Tab to return to Object Mode, then go to the Modifiers panel (wrench icon) and click Add Modifier > Subdivision Surface. Set the Levels to 2 or 3 for a smoother result, then click Apply.
Step 2: Add Cloth Physics and Adjust Settings
Select the Cube: Ensure your subdivided cube is selected in Object Mode.
Add Cloth Physics: Go to the Physics tab in the Properties panel. Click Cloth to enable cloth simulation.
Pin the Edges (Optional): If you want parts of the cube to stay fixed (e.g., the top), switch to Edit Mode, select the vertices you want to pin, go back to the Physics tab, and under Cloth > Shape, click Pin to assign those vertices to a vertex group.
Adjust Key Parameters:
Quality Steps: Set to 10-15 for smoother simulation (higher values increase accuracy but slow down computation).
Mass: Set to around 0.2-0.5 kg for a lighter, more flexible cloth.
Pressure: Under Cloth > Pressure, enable it and set a positive value (e.g., 2-5) to simulate inflation. This will make the cloth expand as if air is pushing it outward.
Stiffness: Adjust Tension and Compression (e.g., 10-15) to control how stiff or loose the cloth feels.
Test the Simulation: Press the Spacebar to play the animation and see the cloth inflate. Tweak settings as needed.
Step 3: Add a Ground Plane with Collision
Create a Ground Plane: Press Shift + A > Mesh > Plane. Scale it up by pressing S and dragging (e.g., scale it to 5-10x) so it’s large enough for the cloth to interact with.
Position the Plane: Move the plane below the cube by pressing G > Z > -5 (or adjust as needed).
Enable Collision: Select the plane, go to the Physics tab, and click Collision. Leave the default settings.
Run the Simulation: Press the Spacebar again to see the cloth inflate and settle onto the ground plane.
Step 4: Adjust Materials and Textures
Select the Cube: In Object Mode, select the cloth (cube) object.
Add a Material: Go to the Material tab, click New to create a material, and name it.
Set Base Color/UV Map: In the Base Color slot, choose a fabric-like color (e.g., red or blue) or connect an image texture by clicking the yellow dot next to Base Color and selecting Image Texture. Load a texture file if you have one.
Adjust Roughness and Specular: Set Roughness to 0.1-0.3 for a soft fabric look.
Apply to Ground (Optional): Repeat the process for the plane, using a simple gray or textured material for contrast.
Step 5: Export as MDD and Generate Shape Keys for Three.js
To use the cloth animation in a Three.js project, we’ll export the physics simulation as an MDD file using the NewTek MDD plugin, then re-import it to create Shape Keys. Follow these steps:
Enable the NewTek MDD Plugin:
Go to Edit > Preferences > Add-ons.
Search for “NewTek” or “MDD” and enable the “Import-Export: NewTek MDD format” add-on by checking the box. Close the Preferences window.
Apply All Modifiers and All Transform:
In Object Mode, select the cloth object.
Go to the Modifiers panel (wrench icon). For each modifier (e.g., Subdivision Surface, Cloth), click the dropdown and select Apply. This “freezes” the mesh with its current shape and physics data.
Ensure no unapplied deformations (e.g., scale) remain: Press Ctrl + A > All Transforms to apply location, rotation, and scale.
Export as MDD:
With the cloth object selected, go to File > Export > Lightwave Point Cache (.mdd).
In the export settings (bottom left):
Set FPS (frames per second) to match your project (e.g., 24, 30, or 60).
Set the Start/End Frame of your animation.
Choose a save location (e.g., “inflation.mdd”) and click Export MDD.
Import the MDD:
Go to File > Import > Lightwave Point Cache (.mdd), and load the “inflation.mdd” file.
In the Physics tab and the Modifiers panel, remove the Cloth simulation and any related settings, since the animation is now stored as shape keys.
Step 6: Export the Cloth Simulation Object as GLB
After importing the MDD, select the cube with the animation data.
Export as glTF 2.0 (.glb/.gltf): Go to File > Export > glTF 2.0 (.glb/.gltf).
Check Shape Keys and Animation
Under the Data section, check Shape Keys to include the morph targets generated from the animation.
Check Animations to export the animation data tied to the Shape Keys.
Export: Choose a save location (e.g., “inflation.glb”) and click Export glTF 2.0. This file is now ready for use in Three.js.
Step 7: Implement the Cloth Animation in Three.js
In this step, we’ll use Three.js with React (via @react-three/fiber) to load and animate the cloth inflation effect from the inflation.glb file exported in Step 6. Below is the code with explanations:
Set Up Imports and File Path:
Import necessary libraries: THREE for core Three.js functionality, useRef, useState, useEffect from React for state and lifecycle management, and utilities from @react-three/fiber and @react-three/drei for rendering and controls.
Import GLTFLoader from Three.js to load the .glb file.
Define the model path: const modelPath = '/inflation.glb'; points to the exported file (adjust the path based on your project structure).
Create the Model Component:
Define the Model component to handle loading and animating the .glb file.
Use state variables: model for the loaded 3D object, loading to track progress, and error for handling issues.
Use useRef to store the AnimationMixer (mixerRef) and animation actions (actionsRef) for controlling playback.
Load the Model with Animations:
In a useEffect hook, instantiate GLTFLoader and load inflation.glb.
On success (gltf callback):
Extract the scene (gltf.scene) and create an AnimationMixer to manage animations.
For each animation clip in gltf.animations:
Set duration to 6 seconds (clip.duration = 6).
Create an AnimationAction (mixer.clipAction(clip)).
Configure the action: clampWhenFinished = true stops at the last frame, loop = THREE.LoopOnce plays once, and setDuration(6) enforces the 6-second duration.
Reset and play the action immediately, storing it in actionsRef.current.
Update state with the loaded model and set loading to false.
Log loading progress with the xhr callback.
Handle errors in the error callback, updating error state.
Clean up the mixer on component unmount.
Animate the Model:
Use useFrame to update the mixer each frame with mixerRef.current.update(delta), advancing the animation based on time.
Add interactivity:
handleClick: Resets and replays all animations on click.
onPointerOver/onPointerOut: Changes the cursor to indicate clickability.
Render the Model:
Return null if still loading, an error occurs, or no model is loaded.
Return a <primitive> element with the loaded model, enabling shadows and attaching event handlers.
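Putting those pieces together, a condensed sketch of the Model component could look like the following. It mirrors the description above, but the exact names and paths are assumptions rather than the article's final code:
// Model.jsx – condensed sketch (names and paths are illustrative)
import * as THREE from "three";
import { useRef, useState, useEffect } from "react";
import { useFrame } from "@react-three/fiber";
import { GLTFLoader } from "three/examples/jsm/loaders/GLTFLoader.js";

const modelPath = "/inflation.glb";

function Model(props) {
  const [model, setModel] = useState(null);
  const [loading, setLoading] = useState(true);
  const [error, setError] = useState(null);
  const mixerRef = useRef(null);
  const actionsRef = useRef([]);

  useEffect(() => {
    new GLTFLoader().load(
      modelPath,
      (gltf) => {
        // One mixer drives every clip baked into the file
        const mixer = new THREE.AnimationMixer(gltf.scene);
        gltf.animations.forEach((clip) => {
          clip.duration = 6;
          const action = mixer.clipAction(clip);
          action.clampWhenFinished = true;
          action.loop = THREE.LoopOnce;
          action.setDuration(6);
          action.reset().play();
          actionsRef.current.push(action);
        });
        mixerRef.current = mixer;
        setModel(gltf.scene);
        setLoading(false);
      },
      (xhr) => console.log(`${((xhr.loaded / xhr.total) * 100).toFixed(0)}% loaded`),
      (err) => setError(err)
    );
    return () => mixerRef.current?.stopAllAction(); // clean up on unmount
  }, []);

  // Advance the animation mixer every rendered frame
  useFrame((_, delta) => mixerRef.current?.update(delta));

  const handleClick = () =>
    actionsRef.current.forEach((action) => action.reset().play());

  if (loading || error || !model) return null;

  return (
    <primitive
      object={model}
      castShadow
      onClick={handleClick}
      onPointerOver={() => (document.body.style.cursor = "pointer")}
      onPointerOut={() => (document.body.style.cursor = "default")}
      {...props}
    />
  );
}

export default Model;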
Create a Reflective Ground:
Define MetalGround as a mesh with a plane geometry (args={[100, 100]}).
Apply MeshReflectorMaterial with properties like metalness=0.5, roughness=0.2, and color="#202020" for a metallic, reflective look. Adjust blur, strength, and resolution as needed.
Set Up the Scene:
In the App component, create a <Canvas> with a camera positioned at [0, 15, 15] and a 50-degree FOV.
Add a directionalLight at [0, 15, 0] with shadows enabled.
Include an Environment preset (“studio”) for lighting, a Model at [0, 5, 0], ContactShadows for realism, and the MetalGround rotated and positioned below.
Add OrbitControls for interactive camera movement.
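A matching sketch of the reflective ground and scene setup, using the values from the description above (reflector settings such as blur and mixStrength are placeholders to tweak):
// App.jsx – scene setup sketch (values are illustrative)
import { Canvas } from "@react-three/fiber";
import {
  OrbitControls,
  Environment,
  ContactShadows,
  MeshReflectorMaterial,
} from "@react-three/drei";
import Model from "./Model";

function MetalGround(props) {
  return (
    <mesh receiveShadow {...props}>
      <planeGeometry args={[100, 100]} />
      <MeshReflectorMaterial
        color="#202020"
        metalness={0.5}
        roughness={0.2}
        mirror={0.5}
        blur={[300, 100]}
        mixStrength={10}
        resolution={1024}
      />
    </mesh>
  );
}

export default function App() {
  return (
    <Canvas shadows camera={{ position: [0, 15, 15], fov: 50 }}>
      <directionalLight position={[0, 15, 0]} castShadow intensity={1.5} />
      <Environment preset="studio" />
      <Model position={[0, 5, 0]} />
      <ContactShadows position={[0, 0.01, 0]} opacity={0.5} blur={2} />
      <MetalGround rotation={[-Math.PI / 2, 0, 0]} />
      <OrbitControls />
    </Canvas>
  );
}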
And that’s it! Starting from a cloth simulation in Blender, we turned it into a button that drops into place and reacts with a bit of bounce inside a Three.js scene.
This workflow shows how Blender’s physics simulations can be exported and combined with Three.js to create interactive, real-time experiences on the web.
SOAP (Simple Object Access Protocol) might sound intimidating (or funny), but it is actually a straightforward way for systems to exchange structured messages using XML. In this article, I introduce SOAP through a YouTube video, exploring it from two different angles – first in the Chrome browser console, then with Python and a Jupyter Notebook.
The SOAP exchange mechanism is based on request and response messages.
Part 1 – SOAP in the Chrome Browser Console
We start by sending SOAP requests directly from the browser’s JavaScript console. This is a quick way to see the raw XML SOAP envelopes in action. Using a public integer calculator web service, we perform basic operations – addition, subtraction, multiplication, division – and observe how the requests and responses happen in real time!
For the browser, the entire SOAP journey looks like this:
Chrome Browser -> HTTP POST -> SOAP XML -> Server (http://www.dneonline.com/calculator.asmx?WSDL) -> SOAP XML -> Chrome Browser
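For instance, an addition can be requested straight from the console with fetch. This is a minimal sketch: the tempuri.org namespace and the Add/AddResult element names come from the service’s WSDL, and running it from an arbitrary page may be blocked by CORS or mixed-content rules.
// Run in the Chrome DevTools console (subject to CORS/mixed-content restrictions)
const envelope = `<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <Add xmlns="http://tempuri.org/">
      <intA>5</intA>
      <intB>3</intB>
    </Add>
  </soap:Body>
</soap:Envelope>`;

fetch("http://www.dneonline.com/calculator.asmx", {
  method: "POST",
  headers: {
    "Content-Type": "text/xml; charset=utf-8",
    SOAPAction: "http://tempuri.org/Add",
  },
  body: envelope,
})
  .then((response) => response.text())
  .then((xml) => {
    // The response is a SOAP envelope too: parse it and read <AddResult>
    const doc = new DOMParser().parseFromString(xml, "text/xml");
    console.log(doc.getElementsByTagName("AddResult")[0]?.textContent); // "8"
  });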
Part 2 – SOAP with Python and Jupyter Notebook
Here we jump into Python. With the help of a few libraries, we load the WSDL (Web Services Description Language) file, inspect the available operations, and call the same calculator service programmatically.
Say that you have an array of N items and you need to access an element counting from the end of the collection.
Usually, we tend to use the Length property of the array, and then subtract the number corresponding to the position we want to reach:
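For example (a minimal sketch; the array name values is just illustrative), compare the classic Length-based indexing with the index-from-end operator ^ introduced in C# 8:
int[] values = { 10, 20, 30, 40, 50 };

// Classic approach: compute the index from Length
int lastClassic = values[values.Length - 1];         // 50
int thirdFromEndClassic = values[values.Length - 3]; // 30

// Index-from-end operator: ^n means "the n-th element from the end"
int last = values[^1];         // 50
int thirdFromEnd = values[^3]; // 30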
Yes, that’s just syntactic sugar, but it can help make your code more readable. In fact, if you have a look at the IL code generated by both examples, they are perfectly identical. IL is quite difficult to read and understand, but you can confirm that both syntaxes are equivalent by looking at the decompiled C# code: the compiler lowers the ^ version back to the same Length-based indexing.
Performance is not affected by this operator, so it’s just a matter of readability.
Clearly, you still have to take care of array bounds – if you access values[^55] on an array with fewer than 55 items, you’ll get an IndexOutOfRangeException.
Using ^ is a nice trick that many C# developers don’t know. There are some special characters that can help us but are often not used. Like the @ operator!
Interactive web animations have become essential for modern websites, but choosing the right implementation approach can be challenging. CSS, Video and JavaScript are the familiar methods and each certainly has its place in a developer’s toolkit. When you need your site to have unique custom interactions (while remaining light and performant, of course), that’s where Rive shines.
Rive animations, whether vector or raster, look crisp at any size, are lightweight (often smaller than equivalent Lottie files), and can respond to user interactions and real-time data through a straightforward JavaScript API.
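To give a sense of that API, here is a minimal embedding sketch using the official @rive-app/canvas web runtime; the file path, canvas id, and state machine name are placeholders.
import { Rive } from "@rive-app/canvas";

const hero = new Rive({
  src: "/animations/hero.riv",                     // placeholder path
  canvas: document.getElementById("hero-canvas"),  // a <canvas> element on the page
  stateMachines: "Hero State Machine",             // placeholder state machine name
  autoplay: true,
  onLoad: () => hero.resizeDrawingSurfaceToCanvas(),
});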
This tutorial will walk you through Rive’s workflow and implementation process using three practical examples. We’ll build them step-by-step using a fictional smart plant care company called “TapRoot” as our case study, so you can see exactly how Rive fits into a real development process and decide if it’s right for your next project.
There are countless ways to use Rive, but we’ll focus on these three patterns:
Animated Hero Images create an immediate emotional connection and brand personality
Interactive CTAs increase conversion rates by providing clear, satisfying feedback
Flexible Layouts combine elements into an experience that works at any size
Each pattern builds on the previous one, teaching you progressively more sophisticated Rive techniques while solving real-world UX challenges.
Pattern 1: The Living Hero Image
The Static Starting Point
A static hero section for TapRoot could feature a photo of their smart plant pot with overlay text. It shows the product, but we can do better.
Creating the Rive Animation
Let’s create an animated version that transforms this simple scene into a revealing experience that literally shows what makes TapRoot “smarter than it looks.” The animation features:
Gently swaying leaves: Constant, subtle motion brings a sense of life to the page.
Interior-reveal effect: Hovering over the pot reveals the hidden root system and embedded sensors
Product Feature Callouts: Key features are highlighted with interactive callouts
Although Rive is vector-based, you can also import JPG, PNG, and PSD files. With an embedded image, a mesh can be constructed and a series of bones can be bound to it. Animating the bones gives the subtle motion of the leaves moving. We’ll loop it at a slow speed so the motion is noticeable, but not distracting.
Adding Interactivity
Next we’ll add a hover animation that reveals the inside of the pot. By clipping the image of the front of the pot to a rectangle, we can resize the shape to reveal the layers underneath. Using a joystick allows us to have an animation follow the cursor when it’s within the hit area of the pot and snap back to normal when the cursor leaves the area.
Feature Callouts
With a nested artboard, it is easy to build a single layout to create multiple versions of an element. In this case, a feature callout has an updated icon, title, and short description for three separate features.
The Result
What was once a simple product photo is now an interactive revelation of TapRoot’s hidden intelligence. The animation embodies the brand message—”smarter than it looks”—by literally revealing the sophisticated technology beneath a beautifully minimal exterior.
Pattern 2: The Conversion-Boosting Interactive CTA
Beyond the Basic Button
Most CTAs are afterthoughts—a colored rectangle with text. But your CTA is often the most important element on your page. Let’s make it irresistible. The TapRoot CTA moves through three states:
Idle State: Clean, minimal button with an occasional “shine” animation
Hover State: Fingerprint icon begins to follow the cursor
Click State: An animated “tap” of the button
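If you want to drive any of these states from page code rather than from Rive’s built-in listeners, the runtime exposes state machine inputs. A hedged sketch with placeholder names (a state machine called "CTA" with a trigger input called "tap"):
import { Rive } from "@rive-app/canvas";

const cta = new Rive({
  src: "/animations/cta.riv",                     // placeholder path
  canvas: document.getElementById("cta-canvas"),
  stateMachines: "CTA",
  autoplay: true,
  onLoad: () => {
    const inputs = cta.stateMachineInputs("CTA");
    const tap = inputs.find((input) => input.name === "tap");
    // Fire the trigger when the surrounding button is clicked
    document
      .getElementById("cta-button")
      ?.addEventListener("click", () => tap?.fire());
  },
});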
Pattern 3: Flexible Layout
Next we can combine the elements into a responsive animated layout that works on any device size. Rive’s layout features familiar row and column arrangements and lets you determine how your animated elements fit within areas as they resize.
The web is becoming more interactive and alive. By understanding how to implement Rive animations—from X-ray reveals to root network interactions—you’re adding tools that create experiences users remember and share.
The difference between a good website and a great one often comes down to these subtle details: the satisfying feedback of a button click, the smooth transition between themes, the curiosity sparked by hidden technology. These micro-interactions connect with users on an emotional level while providing genuine functional value.
“Aurel’s Grand Theater” is an experimental, unconventional solo portfolio project that invites users to read case
studies, solve mysteries to unlock secret pages, or freely explore the theater – jumping around and even smashing
things!
I had an absolute blast working on it, even though it took much longer than I anticipated. Once I finally settled on a
creative direction, the project took about a year to complete – but reaching that direction took nearly two years on
its own. Throughout the journey, I balanced a full-time job as a lead web developer, freelance gigs, and an unexpected
relocation to the other side of the world. The cherry on top? I went through way
too many artistic iterations. It’s my longest solo project to date, but also one of the most fun and creatively
rewarding. It gave me the chance to dive deep into creative coding and design.
This article takes you behind the scenes of the project – covering everything from design to code, including tools,
inspiration, project architecture, design patterns, and even feature breakdowns with code snippets you can adapt for
your own work.
The Creative Process: Behind the Curtain
Genesis
After eight years, my portfolio no longer reflected my skills or creativity. I wanted to create something unconventional – an experience where visitors become active participants rather than passive observers. Most importantly, I wanted it to be something I’d genuinely enjoy building. I was wrapping up “Leap for Mankind” at the time and had a blast working on it, blending storytelling with game and interactive elements. I wanted to create another experimental website that combines game mechanics with a narrative experience.
From the beginning, I envisioned a small character that could freely explore its environment – smashing objects, interacting with surrounding elements, and navigating not just the floor but also vertical spaces by jumping onto tables and chairs. The goal was to transform the portfolio from a passive viewing experience into a fun, interactive one. At the same time, I recognized that some content demands clarity over creativity. For example, case studies require a more traditional format that emphasizes readability.
One of the key challenges, then, was designing a portfolio that could seamlessly transition between an immersive 3D game world and more conventional documentation pages – without disrupting the overall experience.
Building the Foundation
I had a general concept of the website in mind, so I started coding a proof of concept (POC) for the game back in
2022. In this early version, the player could move around, bump into objects, and jump – laying the foundation for the
interactive world I envisioned. Interestingly, much of the core code structure from that POC made it into the final
product. While the technical side was coming together, I still hadn’t figured out the artistic direction at that
point.
Early Proof Of Concept
Trials and Errors
As a full-time web developer, I rarely find myself wrestling with artistic direction. Until now, every freelance and
side project I took on began with a clear creative vision that simply needed technical execution.
This time was different. At first, I leaned toward a cartoonish aesthetic with bold outlines, thinking it would
emphasize my creativity. I tried to convince myself it worked, but something felt off – especially when pairing the
visual style with the user interface. The disconnect between my vision and its execution was unfamiliar territory, and
it led me down a long and winding path of creative exploration.
Early artistic direction
I experimented with other styles too, like painterly visuals, which held promise but proved too time-consuming. Each
artistic direction felt either not suitable for me or beyond my practical capabilities as a developer moonlighting as
a designer.
The theater concept – which ultimately became central to the portfolio’s identity – arrived surprisingly late. It
wasn’t part of the original vision but surfaced only after countless iterations and discarded ideas. In total,
finding an artistic direction that truly resonated took nearly two years – a journey further complicated by a major
relocation across continents, ongoing work and freelance commitments, and personal responsibilities.
The extended timeline wasn’t due to technical complexity, but to an unexpected battle with creative identity. What
began as a straightforward portfolio refresh evolved into a deeper exploration of how to merge professional
presentation with personal expression – pushing me far beyond code and into the world of creative direction.
Tools & Inspiration: The Heart of Creation
After numerous iterations and abandoned concepts, I finally arrived at a creative direction that resonated with my
vision. Rather than detailing every artistic detour, I’ll focus on the tools and direction that ultimately led to the
final product.
Design Stack
Below is the stack I use to design my 3D projects:
UI/UX & Visual Design
Figma
: When I first started, everything was laid out in a Photoshop file. Over the years, I tried various design tools,
but I’ve been using Figma consistently since 2018 – and I’ve been really satisfied with it ever since.
Miro
: Great for moodboarding and early ideation. It helps me visually organize thoughts and explore concepts during the
initial phase.
3D Modeling & Texturing
Blender
: My favorite tool for 3D modeling. It’s incredibly powerful and flexible, though it does have a steep learning
curve at first. Still, it’s well worth the effort for the level of creative control it offers.
Adobe Substance 3D Painter
: The gold standard in my workflow for texture painting. It’s expensive, but the quality and precision it delivers
make it indispensable.
Image Editing
Krita
: I only need light photo editing, and Krita handles that perfectly without locking me into Adobe’s ecosystem – a
practical and efficient alternative.
Drawing Inspiration from Storytellers
While I drew inspiration from many sources, the most influential were Studio Ghibli and the mystical world of Harry
Potter. Ghibli’s meticulous attention to environmental detail shaped my understanding of atmosphere, while the
enchanting realism of the Harry Potter universe helped define the mood I wanted to evoke. I also browsed platforms
like ArtStation and Pinterest for broader visual inspiration, while sites like Behance, FWA, and Awwwards influenced
the more granular aspects of UX/UI design.
Initially, I organized these references on an InVision board. However, when the platform shut down mid-project, I had
to migrate everything to Miro – an unexpected transition and symbolic disruption that echoed the broader delays in the
project.
Mood board of Aurel’s Grand Theater
Designing the Theater
The theater concept emerged as the perfect metaphor for a portfolio: a space where different works could be presented
as “performances,” while maintaining a cohesive environment. It also aligned beautifully with the nostalgic,
pre-digital vibe inspired by many of my visual references.
Environment design is a specialized discipline I wasn’t very familiar with initially. To create a theater that felt
visually engaging and believable, I studied techniques from the FZD School
. These approaches were invaluable in conceptualizing spaces that truly feel alive: places where you can sense people
living their lives, working, and interacting with the environment.
To make the environment feel genuinely inhabited, I incorporated details that suggest human presence: scattered props,
tools, theater posters, food items, pamphlets, and even bits of miscellaneous junk throughout the space. These
seemingly minor elements were crucial in transforming the static 3D model into a setting rich with history, mood, and
character.
The 3D Modeling Process
Optimizing for Web Performance
Creating 3D environments for the web comes with unique challenges that differ significantly from modeling for film or video. When
scenes need to be rendered in real-time by a browser, every polygon matters.
To address this, I adopted a strict low-poly approach and focused heavily on building reusable modular components.
These elements could be instantiated throughout the environment without duplicating unnecessary geometry or textures.
While the final result is still relatively heavy, this modular system allowed me to construct more complex and
detailed scenes while maintaining reasonable download sizes and rendering performance, which wouldn’t have been
possible without this approach.
Scaffold models
Scaffold models merged with the tower, hanok house, and wall props
Texture Over Geometry
Rather than modeling intricate details that would increase polygon counts, I leveraged textures to suggest complexity.
Adobe Substance 3D became my primary tool for creating rich material surfaces that could convey detail without
overloading the renderer. This approach was particularly effective for elements like the traditional Hanok windows
with their intricate wooden lattice patterns. Instead of modeling each panel, which would have been
performance-prohibitive, I painted the details into textures and applied them to simple geometric forms.
Hanok model’s vertices / Hanok model painted using Adobe Substance 3D Painter
Frameworks & Patterns: Behind the Scenes of Development
Tech Stack
This is a comprehensive overview of the technology stack I used for Aurel’s Grand Theater website, leveraging my
existing expertise while incorporating specialized tools for animation and 3D effects.
Core Framework
Vue.js
: While I previously worked with React, Vue has been my primary framework since 2018. Beyond simply enjoying and
loving this framework, it makes sense for me to maintain consistency between the tools I use at work and on my side
projects. I also use Vite and Pinia.
Animation & Interaction
GSAP
: A cornerstone of my development toolkit for nearly a decade, primarily utilized on this project for:
ScrollTrigger functionality
MotionPath animations
Timeline and tweens
As a personal challenge, I created my own text-splitting functionality for this project (since it wasn’t client
work), but I highly recommend GSAP’s SplitText for most use cases.
Lenis
: My go-to library for smooth scrolling. It integrates beautifully with scroll animations, especially when working
with Three.js.
3D Graphics & Physics
Three.js
: My favorite 3D framework and a key part of my toolkit since 2015. I enjoy using it to bring interactive 3D
elements to the web.
Cannon.js
: Powers the site’s physics simulations. While I considered alternatives like Rapier, I stuck with Cannon.js since
it was already integrated into my 2022 proof-of-concept. Replacing it would have introduced unnecessary delays.
Styling
Queso
: A headless CSS framework developed at MamboMambo (my workplace). I chose it for its comprehensive starter
components and seamless integration with my workflow. Despite being in beta, it’s already reliable and flexible.
This tech stack strikes a balance between familiar tools and specialized libraries that enable the visual and
interactive elements that define the site’s experience.
Architecture
I follow Clean Code principles and other industry best practices, including aiming to keep my files small,
independent, reusable, concise, and testable.
I’ve also adopted the component folder architecture developed at my workplace. Instead of placing Vue
files directly inside the ./components
directory, each component resides in its own folder. This folder contains the Vue
file along with related types, unit tests, supporting files, and any child components.
Although initially designed for Vue components, I’ve found this structure works equally well for organizing logic with TypeScript files, utilities, directives, and more. It’s a clean, consistent system that improves code readability, maintainability, and scalability.
This structured approach helps me manage the code base efficiently and maintain clear separation of concerns
throughout the codebase, making both development and future maintenance significantly more straightforward.
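As an illustration, a single component folder under this convention might look like this (the file names are hypothetical):
components/
  BookGallery/
    BookGallery.vue        // the component itself
    BookGallery.spec.ts    // unit tests
    types.ts               // component-specific types
    components/            // child components used only by BookGallery
      BookGalleryItem.vue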
Design Patterns
Singleton
Singletons play a key role in this type of project architecture, enabling efficient code reuse without incurring
performance penalties.
import Experience from "@/three/Experience/Experience";
import type { Scene } from "@/types/three.types";
let instance: SingletonExample | null = null;
export default class SingletonExample {
private scene: Scene;
private experience: Experience;
constructor() {
if (instance) {
return instance;
}
instance = this;
this.experience = new Experience();
this.scene = this.experience.scene;
}
init() {
// initialize the singleton
}
someMethod() {
// some method
}
update() {
// update the singleton
}
update10fps() {
// Optional: update methods capped at 10FPS
}
destroySingleton() {
// clean up three.js + destroy the singleton
}
}
Split Responsibility Architecture
As shown earlier in the project architecture section, I deliberately separated physics management from model handling
to produce smaller, more maintainable files.
World Management Files:
These files are responsible for initializing factories and managing meshes within the main loop. They may also include
functions specific to individual world items.
Here’s an example of one such file:
// src/three/Experience/Theater/mockFileModel/mockFileModel.ts
import Experience from "@/three/Experience/Theater/Experience/Experience";
import type {
List,
LoadModel
} from "@/types/experience/experience.types";
import type { Scene } from "@/types/three.types";
import type Physics from "@/three/Experience/Theater/Physics/Physics";
import type { Resources } from "@/three/Experience/Utils/Ressources/Resources";
import type { MaterialGenerator } from "@/types/experience/materialGeneratorType";
let instance: mockWorldFile | null = null;
export default class mockWorldFile {
private experience: Experience;
private list: List;
private physics: Physics;
private resources: Resources;
private scene: Scene;
private materialGenerator: MaterialGenerator;
public loadModel: LoadModel;
constructor() {
// Singleton
if (instance) {
return instance;
}
instance = this;
this.experience = new Experience();
this.scene = this.experience.scene;
this.resources = this.experience.resources;
this.physics = this.experience.physics;
// factories
this.materialGenerator = this.experience.materialGenerator;
this.loadModel = this.experience.loadModel;
// Most of the materials are initialized in a file called sharedMaterials
const bakedMaterial = this.experience.world.sharedMaterials.bakedMaterial;
// physics infos such as position, rotation, scale, weight etc.
const paintBucketPhysics = this.physics.items.paintBucket;
// Array of objects of models. This will be used to update its position, rotation, scale, etc.
this.list = {
paintBucket: [],
...
};
// get the resource file
const resourcePaintBucket = this.resources.items.paintBucketWhite;
//Reusable code to add models with physics to the scene. I will talk about that later.
this.loadModel.setModels(
resourcePaintBucket.scene,
paintBucketPhysics,
"paintBucketWhite",
bakedMaterial,
true,
true,
false,
false,
false,
this.list.paintBucket,
this.physics.mock,
"metalBowlFalling",
);
}
otherMethod() {
...
}
destroySingleton() {
...
}
}
Physics Management Files
These files trigger the factories to apply physics to meshes, store the resulting physics bodies, and update mesh
positions on each frame.
// src/three/Experience/Theater/pathTo/mockFilePhysics
import Experience from "@/three/Experience/Theater/Experience/Experience";
import additionalShape from "./additionalShape.json";
import type {
PhysicsResources,
TrackName,
List,
modelsList
} from "@/types/experience/experience.types";
import type { cannonObject } from "@/types/three.types";
import type PhysicsGenerator from "../Factories/PhysicsGenerator/PhysicsGenerator";
import type UpdateLocation from "../Utils/UpdateLocation/UpdateLocation";
import type UpdatePositionMesh from "../Utils/UpdatePositionMesh/UpdatePositionMesh";
import type AudioGenerator from "../Utils/AudioGenerator/AudioGenerator";
let instance: MockFilePhysics | null = null;
export default class MockFilePhysics {
private experience: Experience;
private list: List;
private physicsGenerator: PhysicsGenerator;
private updateLocation: UpdateLocation;
private modelsList: modelsList;
private updatePositionMesh: UpdatePositionMesh;
private audioGenerator: AudioGenerator;
constructor() {
// Singleton
if (instance) {
return instance;
}
instance = this;
this.experience = new Experience();
this.debug = this.experience.debug;
this.physicsGenerator = this.experience.physicsGenerator;
this.updateLocation = this.experience.updateLocation;
this.updatePositionMesh = this.experience.updatePositionMesh;
this.audioGenerator = this.experience.audioGenerator;
// Array of objects of physics. This will be used to update the model's position, rotation, scale etc.
this.list = {
paintBucket: [],
};
}
setModelsList() {
//When the load progress reaches a certain percentage, we can set the models list, avoiding some potential bugs or unnecessary conditional logic. Please note that the method update is never run until the scene is fully ready.
this.modelsList = this.experience.world.constructionToolsModel.list;
}
addNewItem(
element: PhysicsResources,
listName: string,
trackName: TrackName,
sleepSpeedLimit: number | null = null,
) {
// factory to add physics, I will talk about that later
const itemWithPhysics = this.physicsGenerator.createItemPhysics(
element,
null,
true,
true,
trackName,
sleepSpeedLimit,
);
// Additional optional shapes to the item if needed
switch (listName) {
case "broom":
this.physicsGenerator.addMultipleAdditionalShapesToItem(
itemWithPhysics,
additionalShape.broomHandle,
);
break;
}
this.list[listName].push(itemWithPhysics);
}
// This method is called every frame.
update() {
// reusable code to update the position of the mesh
this.updatePositionMesh.updatePositionMesh(
this.modelsList["paintBucket"],
this.list["paintBucket"],
);
}
destroySingleton() {
...
}
}
Since the logic for updating mesh positions is consistent across the project, I created reusable code that can be
applied in nearly all physics-related files.
// src/three/Experience/Utils/UpdatePositionMesh/UpdatePositionMesh.ts
export default class UpdatePositionMesh {
updatePositionMesh(meshList: MeshList, physicList: PhysicList) {
for (let index = 0; index < physicList.length; index++) {
const physic = physicList[index];
const model = meshList[index].model;
model.position.set(
physic.position.x,
physic.position.y,
physic.position.z
);
model.quaternion.set(
physic.quaternion.x,
physic.quaternion.y,
physic.quaternion.z,
physic.quaternion.w
);
}
}
}
Factory Patterns
To avoid redundant code, I built a system around reusable code. While the project includes multiple factories, these
two are the most essential:
Model Factory
: LoadModel
With few exceptions, all models—whether instanced or regular, with or without physics—are added through this factory.
// src/three/Experience/factories/LoadModel/LoadModel.ts
import * as THREE from "three";
import Experience from "@/three/Experience/Theater/Experience/Experience";
import type {
PhysicsResources,
TrackName,
List,
modelListPath,
PhysicsListPath
} from "@/types/experience/experience.type";
import type { loadModelMaterial } from "./types";
import type { Material, Scene, Mesh } from "@/types/Three.types";
import type Progress from "@/three/Experience/Utils/Progress/Progress";
import type AddPhysicsToModel from "@/three/Experience/factories/AddPhysicsToModel/AddPhysicsToModel";
let instance: LoadModel | null = null;
export default class LoadModel {
public experience: Experience;
public progress: Progress;
public mesh: Mesh;
public addPhysicsToModel: AddPhysicsToModel;
public scene: Scene;
constructor() {
if (instance) {
return instance;
}
instance = this;
this.experience = new Experience();
this.scene = this.experience.scene;
this.progress = this.experience.progress;
this.addPhysicsToModel = this.experience.addPhysicsToModel;
}
async setModels(
model: Model,
list: PhysicsResources[],
physicsList: string,
bakedMaterial: LoadModelMaterial,
isCastShadow: boolean = false,
isReceiveShadow: boolean = false,
isIntancedModel: boolean = false,
isDoubleSided: boolean = false,
modelListPath: ModelListPath,
physicsListPath: PhysicsListPath,
trackName: TrackName = null,
sleepSpeedLimit: number | null = null,
) {
const loadedModel = isIntancedModel
? await this.addInstancedModel(
model,
bakedMaterial,
true,
true,
isDoubleSided,
isCastShadow,
isReceiveShadow,
list.length,
)
: await this.addModel(
model,
bakedMaterial,
true,
true,
isDoubleSided,
isCastShadow,
isReceiveShadow,
);
this.addPhysicsToModel.loopListThenAddModelToSceneThenToPhysics(
list,
modelListPath,
physicsListPath,
physicsList,
loadedModel,
isIntancedModel,
trackName,
sleepSpeedLimit,
);
}
addModel = (
model: Model,
material: Material,
isTransparent: boolean = false,
isFrustumCulled: boolean = true,
isDoubleSided: boolean = false,
isCastShadow: boolean = false,
isReceiveShadow: boolean = false,
isClone: boolean = true,
) => {
model.traverse((child: THREE.Object3D) => {
!isFrustumCulled ? (child.frustumCulled = false) : null;
if (child instanceof THREE.Mesh) {
child.castShadow = isCastShadow;
child.receiveShadow = isReceiveShadow;
material
&& (child.material = this.setMaterialOrCloneMaterial(
isClone,
material,
));
child.material.transparent = isTransparent;
isDoubleSided ? (child.material.side = THREE.DoubleSide) : null;
isReceiveShadow ? child.geometry.computeVertexNormals() : null; // https://discourse.threejs.org/t/gltf-model-shadows-not-receiving-with-gltfmeshstandardsgmaterial/24112/9
}
});
this.progress.addLoadedModel(); // Update the number of items loaded
return { model: model };
};
setMaterialOrCloneMaterial(isClone: boolean, material: Material) {
return isClone ? material.clone() : material;
}
addInstancedModel = () => {
...
};
// other methods
destroySingleton() {
...
}
}
Physics Factory: PhysicsGenerator
This factory has a single responsibility: creating physics bodies for meshes.
// src/three/Experience/Utils/PhysicsGenerator/PhysicsGenerator.ts
import Experience from "@/three/Experience/Theater/Experience/Experience";
import * as CANNON from "cannon-es";
import CannonUtils from "@/utils/cannonUtils.js";
import type {
Quaternion,
PhysicsItemPosition,
PhysicsItemType,
PhysicsResources,
TrackName,
CannonObject,
} from "@/types/experience/experience.types";
import type { Scene, ConvexGeometry } from "@/types/three.types";
import type Progress from "@/three/Experience/Utils/Progress/Progress";
import type AudioGenerator from "@/three/Experience/Utils/AudioGenerator/AudioGenerator";
import type Physics from "@/three/Experience/Theater/Physics/Physics";
import type { physicsShape } from "./PhysicsGenerator.types"
let instance: PhysicsGenerator | null = null;
export default class PhysicsGenerator {
public experience: Experience;
public physics: Physics;
public currentScene: string | null = null;
public progress: Progress;
public audioGenerator: AudioGenerator;
constructor() {
// Singleton
if (instance) {
return instance;
}
instance = this;
this.experience = new Experience();
this.resources = this.experience.resources;
this.audioGenerator = this.experience.audioGenerator;
this.physics = this.experience.physics;
this.progress = this.experience.progress;
this.currentScene = this.experience.currentScene;
}
//#region add physics to an object
createItemPhysics(
source: PhysicsResources, // object containing physics info such as mass, shape, position....
convex: ConvexGeometry | null = null,
allowSleep: boolean = true,
isBodyToAdd: boolean = true,
trackName: TrackName = null,
sleepSpeedLimit: number | null = null
) {
const setSpeedLimit = sleepSpeedLimit ?? 0.15;
// For this project I needed to detect if the user was in the Mont-Saint-Michel, Leap For Mankind, About or Archives scene.
const localCurrentScene = source.locations[this.currentScene]
? this.currentScene
: "about";
switch (source.type as physicsShape) {
case "box": {
const boxShape = new CANNON.Box(new CANNON.Vec3(...source.shape));
const boxBody = new CANNON.Body({
mass: source.mass,
position: new CANNON.Vec3(
source.locations[localCurrentScene].position.x,
source.locations[localCurrentScene].position.y,
source.locations[localCurrentScene].position.z
),
allowSleep: allowSleep,
shape: boxShape,
material: source.material
? source.material
: this.physics.physics.defaultMaterial,
sleepSpeedLimit: setSpeedLimit,
});
source.locations[localCurrentScene].quaternion
&& (boxBody.quaternion.y =
source.locations[localCurrentScene].quaternion.y);
this.physics.physics.addBody(boxBody);
this.updatedLoadedItem();
// Add optional SFX that will be played if the item collides with another physics item
trackName
&& this.audioGenerator.addEventListenersToObject(boxBody, trackName);
return boxBody;
}
// Then it's basically the same logic for all other cases
case "sphere": {
...
}
case "cylinder": {
...
}
case "plane": {
...
}
case "trigger": {
...
}
case "torus": {
...
}
case "trimesh": {
...
}
case "polyhedron": {
...
}
default:
...
break;
}
}
updatedLoadedItem() {
this.progress.addLoadedPhysicsItem(); // Update the number of item loaded (physics only)
}
//#endregion add physics to an object
// other
destroySingleton() {
...
}
}
FPS Capping
With over 100 models and approximately 150 physics items loaded in the main scene, Aurel’s Grand Theater required
performance-driven coding from the outset.
If I were to rebuild the project today, I would leverage GPU computing much more intensively. However, when I started the
proof of concept in 2022, GPU computing for the web was still relatively new and not fully mature—at least, that was
my perception at the time. Rather than recoding everything, I worked with what I had, which also presented a great
personal challenge. In addition to using low-poly models and employing classic optimization techniques, I extensively
used instanced meshes for all small, reusable items—even those with physics. I also relied on many other
under-the-hood techniques to keep the performance as smooth as possible on this CPU-intensive website.
One particularly helpful approach I implemented was adaptive frame rates. By capping the FPS to different levels (60,
30, or 10), depending on whether the logic required rendering at those rates, I optimized performance. After all, some
logic doesn’t require rendering every frame. This is a simple yet effective technique that can easily be incorporated
into your own project.
Now, let’s take a look at the file responsible for managing time in the project.
// src/three/Experience/Utils/Time/Time.ts
import * as THREE from "three";
import EventEmitter from "@/three/Experience/Utils/EventEmitter/EventEmitter";
let instance: Time | null = null;
let animationFrameId: number | null = null;
const clock = new THREE.Clock();
export default class Time extends EventEmitter {
private lastTick60FPS: number = 0;
private lastTick30FPS: number = 0;
private lastTick10FPS: number = 0;
private accumulator60FPS: number = 0;
private accumulator30FPS: number = 0;
private accumulator10FPS: number = 0;
public start: number = 0;
public current: number = 0;
public elapsed: number = 0;
public delta: number = 0;
public delta60FPS: number = 0;
public delta30FPS: number = 0;
public delta10FPS: number = 0;
constructor() {
if (instance) {
return instance;
}
super();
instance = this;
}
tick() {
const currentTime: number = clock.getElapsedTime() * 1000;
this.delta = currentTime - this.current;
this.current = currentTime;
// Accumulate the time that has passed
this.accumulator60FPS += this.delta;
this.accumulator30FPS += this.delta;
this.accumulator10FPS += this.delta;
// Trigger uncapped tick event using the project's EventEmitter class
this.trigger("tick");
// Trigger 60FPS tick event
if (this.accumulator60FPS >= 1000 / 60) {
this.delta60FPS = currentTime - this.lastTick60FPS;
this.lastTick60FPS = currentTime;
// Same logic as "this.trigger("tick")" but for 60FPS
this.trigger("tick60FPS");
this.accumulator60FPS -= 1000 / 60;
}
// Trigger 30FPS tick event
if (this.accumulator30FPS >= 1000 / 30) {
this.delta30FPS = currentTime - this.lastTick30FPS;
this.lastTick30FPS = currentTime;
this.trigger("tick30FPS");
this.accumulator30FPS -= 1000 / 30;
}
// Trigger 10FPS tick event
if (this.accumulator10FPS >= 1000 / 10) {
this.delta10FPS = currentTime - this.lastTick10FPS;
this.lastTick10FPS = currentTime;
this.trigger("tick10FPS");
this.accumulator10FPS -= 1000 / 10;
}
animationFrameId = window.requestAnimationFrame(() => {
this.tick();
});
}
}
Then, in the Experience.ts
file, we simply place the methods according to the required FPS.
constructor() {
if (instance) {
return instance;
}
...
this.time = new Time();
...
// The game loops (here called tick) are updated when the EventEmitter class is triggered.
this.time.on("tick", () => {
this.update();
});
this.time.on("tick60FPS", () => {
this.update60();
});
this.time.on("tick30FPS", () => {
this.update30();
});
this.time.on("tick10FPS", () => {
this.update10();
});
}
update() {
this.renderer.update();
}
update60() {
this.camera.update60FPS();
this.world.update60FPS();
this.physics.update60FPS();
}
update30() {
this.physics.update30FPS();
this.world.update30FPS();
}
update10() {
this.physics.update10FPS();
this.world.update10FPS();
}
Inspired by techniques from the film industry, the transitions between the 3D game and the more traditionally
structured pages, such as the Case Studies, About, and Credits pages, were carefully designed to feel seamless and
cinematic.
The first-time visit animation provides context and immerses users into the website experience. Meanwhile, the other
page transitions play a crucial role in ensuring a smooth shift between the game and the more conventional layout of
the Case Studies and About page, preserving immersion while naturally guiding users from one experience to the next.
Without these transitions, it would feel like abruptly jumping between two entirely different worlds.
I’ll do a deep dive into the code for the animation when the user returns from the basement level. It’s a bit simpler
than the other cinematic transitions but the underlying logic is the same, which makes it easier for you to adapt it
to another project.
The init
method, called from another file, initiates the creation of the animation. At first, we set the path for the
animation, then the timeline.
init() {
this.camera = this.experience.camera.instance;
this.initPath();
}
initPath() {
// create the path for the camera
const pathPoints = new CatmullRomCurve3([
new Vector3(CAMERA_POSITION_SEAT[0], CAMERA_POSITION_SEAT[1], 15),
new Vector3(5.12, 4, 8.18),
new Vector3(...RETURNING_PLAYER_CAMERA_FINAL_POSITION),
]);
// init the timeline
this.initTimeline(pathPoints);
}
initTimeline(path: CatmullRomCurve3) {
...
}
The timeline animation is split into two: a) The camera moves vertically from the basement to the theater, above the
seats.
...
initTimeline(path: CatmullRomCurve3) {
// get the points
const pathPoints = path.getPoints(30);
// create the gsap timeline
this.timelineAnimation
// set the initial position
.set(this.camera.position, {
x: CAMERA_POSITION_SEAT[0],
y: CAMERA_POSITION_SEAT[1] - 3,
z: 15,
})
.add(() => {
this.camera.lookAt(3.5, 1, 0);
})
// Start the animation! In this case the camera is moving from the basement to above the seat
.to(this.camera.position, {
x: CAMERA_POSITION_SEAT[0],
y: CAMERA_POSITION_SEAT[1],
z: 15,
duration: 3,
ease: "elastic.out(0.1,0.1)",
})
.to(
this.camera.position,
{
...
},
)
...
}
b) The camera follows a path while smoothly transitioning its view to the final location.
.to(
this.camera.position,
{
// then we use motion path to move the camera to the player behind the raccoon
motionPath: {
path: pathPoints,
curviness: 0,
autoRotate: false,
},
ease: "power1.inOut",
duration: DURATION_RETURNING_FORWARD,
onUpdate: function () {
const progress = this.progress();
// wait until progress reaches a certain point to rotate to the camera at the player LookAt
if (
progress >=
1 -
DURATION_LOOKAT_RETURNING_FORWARD /
DURATION_RETURNING_FORWARD &&
!this.lookAtTransitionStarted
) {
this.lookAtTransitionStarted = true;
// Create a new Vector3 to store the current look direction
const currentLookAt = new Vector3();
// Get the current camera's forward direction (where it's looking)
instance!.camera.getWorldDirection(currentLookAt);
// Extend the look direction by 100 units and add the camera's position
// This creates a point in space that the camera is currently looking at
currentLookAt.multiplyScalar(100).add(instance!.camera.position);
// smooth lookAt animation
createSmoothLookAtTransition(
currentLookAt,
new Vector3(...RETURNING_PLAYER_CAMERA_FINAL_LOOKAT),
DURATION_LOOKAT_RETURNING_FORWARD,
this.camera
);
}
},
},
)
.add(() => {
// animation is completed, you can add some code here
});
As you noticed, I used a utility function called createSmoothLookAtTransition, since I needed this functionality in multiple places.
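That helper isn’t shown in the snippets here, but the core idea (tween an intermediate target point and re-aim the camera on every tick) can be sketched like this; the signature and easing are assumptions:
// Hypothetical sketch of the helper – location and signature may differ in the real project
import gsap from "gsap";
import { Vector3 } from "three";
import type { PerspectiveCamera } from "three";

export const createSmoothLookAtTransition = (
  from: Vector3,
  to: Vector3,
  duration: number,
  camera: PerspectiveCamera,
) => {
  const target = from.clone();
  gsap.to(target, {
    x: to.x,
    y: to.y,
    z: to.z,
    duration,
    ease: "power1.inOut",
    // Re-aim the camera at the interpolated point on every tick
    onUpdate: () => camera.lookAt(target),
  });
};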
With everything ready, the animation sequence is run when playAnimation()
is triggered.
playAnimation() {
// first set the position of the player
this.setPositionPlayer();
// then play the animation
this.timelineAnimation.play();
}
setPositionPlayer() {
// A simple util to update the position of the player when the user lands in the scene, returns, or switches scenes.
setPlayerPosition(this.experience, {
position: PLAYER_POSITION_RETURNING,
quaternion: RETURNING_PLAYER_QUATERNION,
rotation: RETURNING_PLAYER_ROTATION,
});
}
Scroll-Triggered Animations: Showcasing Books on About Pages
While the game is fun and filled with details, the case studies and about pages are crucial to the overall experience,
even though they follow a more standardized format. These pages still have their own unique appeal. They are filled
with subtle details and animations, particularly scroll-triggered effects such as split text animations when
paragraphs enter the viewport, along with fade-out effects on SVGs and other assets. These animations create a vibe
that mirrors the mysterious yet intriguing atmosphere of the game, inviting visitors to keep scrolling and exploring.
While I can’t cover every animation in detail, I’d like to share the technical approach behind the book animations
featured on the about page. This effect blends DOM scroll event tracking with a Three.js scene, creating a seamless
interaction between the user’s scrolling behavior and the 3D-rendered books. As visitors scroll down the page, the
books transition elegantly and respond dynamically to their movement.
Before we dive into the Three.js
file, let’s look into the Vue
component.
//src/components/BookGallery/BookGallery.vue
<template>
<!-- the ID is used in the three.js file -->
<div class="book-gallery" id="bookGallery" ref="bookGallery"></div>
</template>
<script setup lang="ts">
import { onBeforeUnmount, onMounted, onUnmounted, ref } from "vue";
import gsap from "gsap";
import { ScrollTrigger } from "gsap/ScrollTrigger";
import type { BookGalleryProps } from "./types";
gsap.registerPlugin(ScrollTrigger);
const props = withDefaults(defineProps<BookGalleryProps>(), {});
const bookGallery = ref<HTMLBaseElement | null>(null);
const setupScrollTriggers = () => {
...
};
const triggerAnimation = (index: number) => {
...
};
onMounted(() => {
setupScrollTriggers();
});
onUnmounted(() => {
...
});
</script>
<style lang="scss" scoped>
.book-gallery {
position: relative;
height: 400svh; // 100svh * 4 books
}
</style>
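The bodies of setupScrollTriggers and triggerAnimation are omitted above. The general idea is to register one ScrollTrigger per book over the gallery’s scroll distance and notify the Three.js side whenever a new section is entered; a sketch under those assumptions (the real bindings to the Books class may differ):
// Hypothetical implementation of the two methods above
const NB_OF_BOOKS = 4;

const setupScrollTriggers = () => {
  if (!bookGallery.value) return;

  for (let index = 0; index < NB_OF_BOOKS; index++) {
    ScrollTrigger.create({
      trigger: bookGallery.value,
      // Each book owns one quarter of the gallery's scroll distance
      start: `${(index / NB_OF_BOOKS) * 100}% center`,
      end: `${((index + 1) / NB_OF_BOOKS) * 100}% center`,
      onEnter: () => triggerAnimation(index),
      onEnterBack: () => triggerAnimation(index),
    });
  }
};

const triggerAnimation = (index: number) => {
  // Hand the active index over to the Three.js Books singleton
  // (in the real project this likely goes through the Experience instance)
};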
Thresholds are defined for each book to determine which one will be active – that is, the book that will face the
camera.
// src/three/Experience/Basement/World/Books/Books.ts
import * as THREE from "three";
import Experience from "@/three/Experience/Basement/Experience/Experience";
import { SCROLL_RATIO } from "@/constant/scroll";
import { gsap } from "gsap";
import type { Book } from "./books.types";
import type { Material, Scene, Texture, ThreeGroup } from "@/types/three.types";
import type { Sizes } from "@/three/Experience/Utils/Sizes/types";
import type LoadModel from "@/three/Experience/factories/LoadModel/LoadModel";
import type MaterialGenerator from "@/three/Experience/factories/MaterialGenerator/BasicMaterialGenerator";
import type Resources from "@/three/Experience/Utils/Ressources/Resources";
const GSAP_EASE = "power2.out";
const GSAP_DURATION = 1;
const NB_OF_VIEWPORTS_BOOK_SECTION = 5;
let instance: Books | null = null;
export default class Books {
public scene: Scene;
public experience: Experience;
public resources: Resources;
public loadModel: LoadModel;
public sizes: Sizes;
public materialGenerator: MaterialGenerator;
public resourceDiffuse: Texture;
public resourceNormal: Texture;
public bakedMaterial: Material;
public startingPostionY: number;
public originalPosition: Book[];
public activeIndex: number = 0;
public isAnimationRunning: boolean = false;
public bookGalleryElement: HTMLElement | null = null;
public bookSectionHeight: number;
public booksGroup: ThreeGroup;
constructor() {
if (instance) {
return instance;
}
instance = this;
this.experience = new Experience();
this.scene = this.experience.sceneSecondary; // I am using a second scene for the books, so it's not affected by the primary scene (basement in the background)
this.sizes = this.experience.sizes;
this.resources = this.experience.resources;
this.materialGenerator = this.experience.materialGenerator;
this.init();
}
init() {
...
}
initModels() {
...
}
findPosition() {
...
}
setBookSectionHeight() {
...
}
initBooks() {
...
}
initBook() {
...
}
createAnimation() {
...
}
toggleIsAnimationRunning() {
...
}
...
destroySingleton() {
...
}
}
When the file is initialized, we set up the textures and positions of the books.
init() {
this.initModels();
this.findPosition();
this.setBookSectionHeight();
this.initBooks();
}
initModels() {
this.originalPosition = [
{
name: "book1",
meshName: null, // the name of the mesh from Blender will dynamically be written here
position: { x: 0, y: -0, z: 20 },
rotation: { x: 0, y: Math.PI / 2.2, z: 0 }, // some rotation on the y axis so it looks more natural when the books are piled
},
{
name: "book2",
meshName: null,
position: { x: 0, y: -0.25, z: 20 },
rotation: { x: 0, y: Math.PI / 1.8, z: 0 },
},
{
name: "book3",
meshName: null,
position: { x: 0, y: -0.52, z: 20 },
rotation: { x: 0, y: Math.PI / 2, z: 0 },
},
{
name: "book4",
meshName: null,
position: { x: 0, y: -0.73, z: 20 },
rotation: { x: 0, y: Math.PI / 2.3, z: 0 },
},
];
this.resourceDiffuse = this.resources.items.bookDiffuse;
this.resourceNormal = this.resources.items.bookNormal;
// a reusable class to set the material and normal map
this.bakedMaterial = this.materialGenerator.setStandardMaterialAndNormal(
this.resourceDiffuse,
this.resourceNormal
);
}
//#region position of the books
// Finds the initial position of the book gallery in the DOM
findPosition() {
this.bookGalleryElement = document.getElementById("bookGallery");
if (this.bookGalleryElement) {
const rect = this.bookGalleryElement.getBoundingClientRect();
this.startingPostionY = (rect.top + window.scrollY) / 200;
}
}
// Sets the height of the book section based on viewport and scroll ratio
setBookSectionHeight() {
this.bookSectionHeight =
this.sizes.height * NB_OF_VIEWPORTS_BOOK_SECTION * SCROLL_RATIO;
}
//#endregion position of the books
Each book mesh is created and added to the scene as a THREE.Group. Each time a book enters or re-enters its threshold, the ScrollTrigger callbacks in the Vue component call createAnimation in this file, which rotates the active book to face the camera and stacks the other books into a pile.
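createAnimation itself is elided in the class above. Conceptually it tweens the newly active book up to face the camera and sends the previously active one back to its spot in the pile. A rough sketch, with offsets, target rotations, and eases as placeholders:
// Hypothetical sketch of createAnimation – the real values and structure may differ
createAnimation(newIndex: number) {
  const previous = this.booksGroup.children[this.activeIndex];
  const next = this.booksGroup.children[newIndex];

  this.isAnimationRunning = true;

  // Send the previously active book back to its original place in the pile
  gsap.to(previous.position, {
    ...this.originalPosition[this.activeIndex].position,
    duration: GSAP_DURATION,
    ease: GSAP_EASE,
  });
  gsap.to(previous.rotation, {
    ...this.originalPosition[this.activeIndex].rotation,
    duration: GSAP_DURATION,
    ease: GSAP_EASE,
  });

  // Lift the new book out of the pile and rotate it to face the camera
  gsap.to(next.position, {
    y: this.originalPosition[newIndex].position.y + 1, // placeholder offset
    duration: GSAP_DURATION,
    ease: GSAP_EASE,
  });
  gsap.to(next.rotation, {
    y: 0, // placeholder "facing the camera" rotation
    duration: GSAP_DURATION,
    ease: GSAP_EASE,
    onComplete: () => (this.isAnimationRunning = false),
  });

  this.activeIndex = newIndex;
}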
The game is the main attraction of the website. The entire concept began back in 2022, when I set out to build a small
mini-game where you could jump on tables and smash things – and it was my favorite part to work on.
Beyond being fun to develop, the interactive physics elements make the experience more engaging, adding a whole new
layer of excitement and exploration that simply isn’t possible in a flat, static environment.
While I can’t possibly cover all the physics-related elements, one of my favorites is the rope system near the menu.
It’s a subtle detail, but it was one of the first things I coded when I started leaning into a more theatrical,
artistic direction.
The ropes were also built with performance in mind—optimized to look and behave convincingly without dragging down the
framerate.
This is the base file for the meshes:
// src/three/Experience/Theater/World/Theater/Rope/RopeModel.ts
import * as THREE from "three";
import Experience from "@/three/Experience/Theater/Experience/Experience";
import RopeMaterialGenerator from "@/three/Experience/Factories/MaterialGenerator/RopeMaterialGenerator";
import ropesLocation from "./ropesLocation.json";
import type { Location, List } from "@/types/experience/experience.types";
import type { Scene, Resources, Physics, Material, RopeMesh, CurveQuad } from "@/types/three.types";
let instance: RopeModel | null = null;
export default class RopeModel {
public scene: Scene;
public experience: Experience;
public resources: Resources;
public physics: Physics;
public material: THREE.Material;
public list: List;
public ropeMaterialGenerator: RopeMaterialGenerator;
public ropeLength: number = 20;
public ropeRadius: number = 0.02;
public ropeRadiusSegments: number = 8;
constructor() {
// Singleton
if (instance) {
return instance;
}
instance = this;
this.experience = new Experience();
this.scene = this.experience.scene;
this.resources = this.experience.resources;
this.physics = this.experience.physics;
this.ropeMaterialGenerator = new RopeMaterialGenerator();
this.ropeLength = this.experience.physics.rope.numberOfSpheres || 20;
this.ropeRadius = 0.02;
this.ropeRadiusSegments = 8;
this.list = {
rope: [],
};
this.initRope();
}
initRope() {
...
}
createRope() {
...
}
setArrayOfVertor3() {
...
}
setYValues() {
...
}
setMaterial() {
...
}
addRopeToScene() {
...
}
//#region update at 60FPS
update() {
...
}
updateLineGeometry() {
...
}
//#endregion update at 60FPS
destroySingleton() {
...
}
}
Mesh creation is initiated inside the constructor.
// src/three/Experience/Theater/World/Theater/Rope/RopeModel.ts
constructor() {
...
this.initRope();
}
initRope() {
// Generate the material that will be used for all ropes
this.setMaterial();
// Create a rope at each location specified in the ropesLocation configuration
ropesLocation.forEach((location) => {
this.createRope(location);
});
}
createRope(location: Location) {
// Generate the curve that defines the rope's path
const curveQuad = this.setArrayOfVertor3();
this.setYValues(curveQuad);
const tube = new THREE.TubeGeometry(
curveQuad,
this.ropeLength,
this.ropeRadius,
this.ropeRadiusSegments,
false
);
const rope = new THREE.Mesh(tube, this.material);
rope.geometry.attributes.position.needsUpdate = true;
// Add the rope to the scene and set up its physics. I'll explain it later.
this.addRopeToScene(rope, location);
}
setArrayOfVertor3() {
const arrayLimit = this.ropeLength;
const setArrayOfVertor3 = [];
// Create points in a vertical line, spaced 1 unit apart
for (let index = 0; index < arrayLimit; index++) {
setArrayOfVertor3.push(new THREE.Vector3(10, 9 - index, 0));
if (index + 1 === arrayLimit) {
return new THREE.CatmullRomCurve3(
setArrayOfVertor3,
false,
"catmullrom",
0.1
);
}
}
}
setYValues(curve: CurveQuad) {
// Set each point's Y value to its index, creating a vertical line
for (let i = 0; i < curve.points.length; i++) {
curve.points[i].y = i;
}
}
setMaterial(){
...
}
Since the rope texture is used in multiple places, I use a factory pattern for efficiency.
...
setMaterial() {
this.material = this.ropeMaterialGenerator.generateRopeMaterial(
"rope",
0x3a301d, // Brown color
1.68, // Normal Repeat
0.902, // Normal Intensity
21.718, // Noise Strength
1.57, // UV Rotation
9.14, // UV Height
this.resources.items.ropeDiffuse, // Diffuse texture map
this.resources.items.ropeNormal // Normal map for surface detail
);
}
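The generateRopeMaterial factory itself isn’t shown here. Conceptually, it builds a THREE.ShaderMaterial whose uniforms feed the rope shaders; the sketch below is a hedged reconstruction based on the uniform names in the vertex shader that follows, so the default values and the texture uniform names are assumptions.
// Hedged sketch of RopeMaterialGenerator.generateRopeMaterial(); defaults and texture uniform names are assumed
generateRopeMaterial(name, color, normalRepeat, normalIntensity, noiseStrength, uvRotate, uvHeight, diffuse, normal) {
  return new THREE.ShaderMaterial({
    name,
    vertexShader: ropeVertexShader,     // the GLSL shown below, imported as a string
    fragmentShader: ropeFragmentShader, // not shown in this article
    uniforms: {
      uNoiseStrength: { value: noiseStrength },
      uNormalIntensity: { value: normalIntensity },
      uNormalRepeat: { value: normalRepeat },
      uvRotate: { value: uvRotate },
      uvHeight: { value: uvHeight },
      uLightColor: { value: new THREE.Color(color) },
      uLightPosition: { value: new THREE.Vector3(0, 10, 0) }, // assumed default
      uShadowStrength: { value: 0.5 },                        // assumed default
      isShadowBothSides: { value: false },
      uDiffuse: { value: diffuse }, // assumed texture uniform, consumed by the fragment shader
      uNormal: { value: normal },   // assumed texture uniform, consumed by the fragment shader
    },
  });
}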
// src/three/Experience/Shaders/Rope/vertex.glsl
uniform float uNoiseStrength; // Controls the intensity of noise effect
uniform float uNormalIntensity; // Controls the strength of normal mapping
uniform float uNormalRepeat; // Controls the tiling of normal map
uniform vec3 uLightColor; // Color of the light source
uniform float uShadowStrength; // Intensity of shadow effect
uniform vec3 uLightPosition; // Position of the light source
uniform float uvRotate; // Rotation angle for UV coordinates
uniform float uvHeight; // Height scaling for UV coordinates
uniform bool isShadowBothSides; // Flag for double-sided shadow rendering
varying float vNoiseStrength; // Passes noise strength to fragment shader
varying float vNormalIntensity; // Passes normal intensity to fragment shader
varying float vNormalRepeat; // Passes normal repeat to fragment shader
varying vec2 vUv; // UV coordinates for texture mapping
varying vec3 vColorPrimary; // Primary color for the material
varying vec3 viewPos; // Position in view space
varying vec3 vLightColor; // Light color passed to fragment shader
varying vec3 worldPos; // Position in world space
varying float vShadowStrength; // Shadow strength passed to fragment shader
varying vec3 vLightPosition; // Light position passed to fragment shader
// Helper function to create a 2D rotation matrix
mat2 rotate(float angle) {
return mat2(cos(angle), -sin(angle), sin(angle), cos(angle));
}
void main() {
// Calculate rotation angle and its sine/cosine components
float angle = 1.0 * uvRotate;
float s = sin(angle);
float c = cos(angle);
// Create rotation matrix for UV coordinates
mat2 rotationMatrix = mat2(c, s, -s, c);
// Define pivot point for UV rotation
vec2 pivot = vec2(0.5, 0.5);
// Transform vertex position to clip space
gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(position, 1.0);
// Apply rotation and height scaling to UV coordinates
vUv = rotationMatrix * (uv - pivot) + pivot;
vUv.y *= uvHeight;
// Pass various parameters to fragment shader
vNormalRepeat = uNormalRepeat;
vNormalIntensity = uNormalIntensity;
viewPos = vec3(0.0, 0.0, 0.0); // Initialize view position
vNoiseStrength = uNoiseStrength;
vLightColor = uLightColor;
vShadowStrength = uShadowStrength;
vLightPosition = uLightPosition;
}
Once the material is created and added to the mesh, the addRopeToScene function adds the rope to the scene, then calls the addPhysicsToRope function from the physics file.
// src/three/Experience/Theater/World/Theater/Rope/RopeModel.ts
addRopeToScene(mesh: RopeMesh, location: Location) {
this.list.rope.push(mesh); //Add the rope to an array, which will be used by the physics file to update the mesh
this.scene.add(mesh);
this.physics.rope.addPhysicsToRope(location); // same as src/three/Experience/Theater/Physics/Theater/Rope/Rope.addPhysicsToRope(location)
}
Let’s now focus on the physics file.
// src/three/Experience/Theater/Physics/Theater/Rope/Rope.ts
import * as CANNON from "cannon-es";
import Experience from "@/three/Experience/Theater/Experience/Experience";
import type { Location } from "@/types/experience.types";
import type Physics from "@/three/Experience/Theater/Physics/Physics";
import type { Scene, SphereBody } from "@/types/three.types";
let instance: Rope | null = null;
const SIZE_SPHERE = 0.05;
const ANGULAR_DAMPING = 1;
const DISTANCE_BETWEEN_SPHERES = SIZE_SPHERE * 5;
const DISTANCE_BETWEEN_SPHERES_BOTTOM = 2.3;
const DISTANCE_BETWEEN_SPHERES_TOP = 6;
const LINEAR_DAMPING = 0.5;
const NUMBER_OF_SPHERES = 20;
export default class Rope {
public experience: Experience;
public physics: Physics;
public scene: Scene;
public list: { rope: CANNON.Body[][] };
constructor() {
// Singleton
if (instance) {
return instance;
}
instance = this;
this.experience = new Experience();
this.scene = this.experience.scene;
this.physics = this.experience.physics;
this.list = {
rope: [],
};
}
//#region add physics
addPhysicsToRope() {
...
}
setRopePhysics() {
...
}
setMassRope() {
...
}
setDistanceBetweenSpheres() {
...
}
setDistanceBetweenConstraints() {
...
}
addConstraints() {
...
}
//#endregion add physics
//#region update at 60FPS
update() {
...
}
loopRopeWithPhysics() {
...
}
updatePoints() {
...
}
//#endregion update at 60FPS
destroySingleton() {
...
}
}
The rope’s physics is created from the mesh file via the addPhysicsToRope method, called with this.physics.rope.addPhysicsToRope(location);.
addPhysicsToRope(location: Location) {
this.setRopePhysics(location);
}
setRopePhysics(location: Location) {
const sphereShape = new CANNON.Sphere(SIZE_SPHERE);
const rope = [];
let lastBody = null;
for (let index = 0; index < NUMBER_OF_SPHERES; index++) {
// Create physics body for each sphere in the rope. The spheres will be what collide with the player
const spherebody = new CANNON.Body({ mass: this.setMassRope(index) });
spherebody.addShape(sphereShape);
spherebody.position.set(
location.x,
location.y - index * DISTANCE_BETWEEN_SPHERES,
location.z
);
this.physics.physics.addBody(spherebody);
rope.push(spherebody);
spherebody.linearDamping = LINEAR_DAMPING;
spherebody.angularDamping = ANGULAR_DAMPING;
// Create constraints between consecutive spheres
if (lastBody !== null) {
  this.addConstraints(spherebody, lastBody, index);
}
lastBody = spherebody;
if (index + 1 === NUMBER_OF_SPHERES) {
this.list.rope.push(rope);
}
}
}
setMassRope(index: number) {
return index === 0 ? 0 : 2; // first sphere is fixed (mass 0)
}
setDistanceBetweenSpheres(index: number, locationY: number) {
return locationY - DISTANCE_BETWEEN_SPHERES * index;
}
setDistanceBetweenConstraints(index: number) {
// Since the user only interacts with the spheres at the bottom of the rope, the distance between spheres gradually increases from the bottom to the top
if (index <= 2) {
return DISTANCE_BETWEEN_SPHERES * DISTANCE_BETWEEN_SPHERES_TOP;
}
if (index > 2 && index <= 8) {
return DISTANCE_BETWEEN_SPHERES * DISTANCE_BETWEEN_SPHERES_BOTTOM;
}
return DISTANCE_BETWEEN_SPHERES;
}
addConstraints(
sphereBody: CANNON.Body,
lastBody: CANNON.Body,
index: number
) {
this.physics.physics.addConstraint(
new CANNON.DistanceConstraint(
sphereBody,
lastBody,
this.setDistanceBetweenConstraints(index)
)
);
}
When configuring physics parameters, strategy is key. Although users won’t consciously notice it during gameplay, they can only interact with the lower portion of the rope. Therefore, I concentrated the physics detail where it matters: the density of physics spheres is higher at the bottom of the rope than at the top.
Rope meshes are then updated every frame from the physics file.
//#region update at 60FPS
update() {
this.loopRopeWithPhysics();
}
loopRopeWithPhysics() {
for (let index = 0; index < this.list.rope.length; index++) {
this.updatePoints(this.list.rope[index], index);
}
}
updatePoints(element: CANNON.Body[], indexParent: number) {
element.forEach((item: CANNON.Body, index: number) => {
// Update the mesh with the location of each of the physics spheres
this.experience.world.rope.list.rope[
indexParent
].geometry.parameters.path.points[index].copy(item.position);
});
}
//#endregion update at 60FPS
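On the mesh side, updateLineGeometry (elided in the RopeModel skeleton above) then has to rebuild each tube from the freshly moved curve points. A minimal sketch of that idea, reusing the curve stored in geometry.parameters.path, could look like this; it is an assumption about the implementation, not the original code.
// Hedged sketch of RopeModel.updateLineGeometry(), called every frame after the physics update
updateLineGeometry() {
  this.list.rope.forEach((rope) => {
    // The physics file has already copied the sphere positions into the curve's points,
    // so we rebuild the tube along the updated curve and swap the geometry
    const curve = rope.geometry.parameters.path;
    const newGeometry = new THREE.TubeGeometry(
      curve,
      this.ropeLength,
      this.ropeRadius,
      this.ropeRadiusSegments,
      false
    );
    rope.geometry.dispose();
    rope.geometry = newGeometry;
  });
}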
Animations in the DOM – ticket tearing particles
While the website heavily relies on Three.js to create an immersive experience, many elements remain DOM-based. One of my goals for this portfolio was to combine both worlds: the rich, interactive 3D environments and the efficiency of traditional DOM elements. Furthermore, I genuinely enjoy coding DOM-based micro-interactions, so skipping out on them wasn’t an option!
One of my favorite DOM animations is the ticket-tearing effect, especially the particles flying away. It’s subtle, but it adds a bit of charm. The effect is not only fun to watch but also relatively easy to adapt to other projects.
First, let’s look at the structure of the components.
TicketBase.vue is a fairly simple file with minimal styling. It handles the tearing animation and a few basic functions. Everything else related to the ticket, such as its styling, is handled by other components passed in through slots.
To make things clearer, I’ve cleaned up my TicketBase.vue file a bit to highlight how the particle effect works.
The createParticles function creates a few new <div> elements, which act as the little particles. These divs are then appended to either the main part of the ticket or the torn part.
const createParticles = (containerSelector: HTMLElement, direction: string) => {
const numParticles = 5;
for (let i = 0; i < numParticles; i++) {
const particle = document.createElement("div");
particle.className = "particle";
// Calculate left position based on index and add small random offset
const baseLeft = (i / numParticles) * 100;
const randomOffset = (Math.random() - 0.5) * 10;
particle.style.left = `calc(${baseLeft}% + ${randomOffset}%)`;
// Assign unique animation properties
const duration = Math.random() * 0.3 + 0.1;
const translateY = (i / numParticles) * -20 - 2;
const scale = Math.random() * 0.5 + 0.5;
const delay = ((numParticles - i - 1) / numParticles) * 0;
particle.style.animation = `flyAway ${duration}s ${delay}s ease-in forwards`;
particle.style.setProperty("--translateY", `${translateY}px`);
particle.style.setProperty("--scale", scale.toString());
if (direction === "bottom") {
particle.style.animation = `flyAwayBottom ${duration}s ${delay}s ease-in forwards`;
}
containerSelector.appendChild(particle);
// Remove particle after animation ends
particle.addEventListener("animationend", () => {
particle.remove();
});
}
};
The particles are animated using a CSS keyframes animation called flyAway or flyAwayBottom.
There are so many features, details, easter eggs, and animations I wanted to cover in this article, but it’s simply not possible to go through everything; it would be too much, and many of them deserve their own tutorial.
That said, here are some of my favorites to code. They definitely deserve a spot in this article.
Some features I had a blast working on: radial blur, the cursor trail, particles, the 404 page, the paws/bird animation, the navigation animation, and the collision animation.
Reflections on Aurel’s Grand Theater
Even though it took longer than I originally anticipated, Aurel’s Grand Theater was an incredibly fun and rewarding project to work on. Because it wasn’t a client project, it offered a rare opportunity to freely experiment, explore new ideas, and push myself outside my comfort zone, without the usual constraints of budgets or deadlines.
Looking back, there are definitely things I’d approach differently if I were to start again. I’d spend more time defining the art direction upfront, lean more heavily into the GPU, and perhaps implement Rapier. But despite these reflections, I had an amazing time building this project, and I’m satisfied with the final result.
While recognition was never the goal, I’m deeply honored that the site was acknowledged. It received FWA of the Day, Awwwards Site of the Day and Developer Award, as well as GSAP’s Site of the Week and Site of the Month.
I’m truly grateful for the recognition, and I hope this behind-the-scenes look and the shared code snippets inspire you in your own creative coding journey.
We assume that by now you’ve all read the wonderful news about GSAP now becoming 100% free, for everyone. Thanks to Webflow’s support, all of the previously paid plugins in GSAP are now accessible to everyone. That’s why today, Osmo, Codrops and GSAP are teaming up to bring you 5 demos, available both as a Webflow cloneable and CodePen. We hope these will provide a fun intro to some cool plugins and spark a few ideas!
What you’ll learn:
SplitText basics: Break text into lines, words, or letters—with the new automatic resizing and built-in masking options!
DrawSVG scribbles: Add a playful, randomized underline to links (or anything) on hover using DrawSVG.
Physics2D text smash: Combine SplitText + Physics2D so your headline shatters into letters that tumble off the top of the viewport like a roof.
Inertia dot grid: Create an interactive, glowing dot matrix that springs and flows with your cursor for a dynamic background effect.
MorphSVG toggle: Build a seamless play/pause button that morphs one SVG into another in a single tween.
Before we dive in, let’s make sure you have the GSAP core included in your project. I will let you know the exact plugins you need per demo! You can use the official GSAP Install Helper if you need the correct npm commands or CDN links. If you’re following this as a Webflow user and you want to build from scratch, Webflow has made it super easy to integrate GSAP into your project. If you want, you can read more here. When using this approach, just make sure to add your custom code somewhere in the before </body> section of the page or project settings.
Perfect, with that set, let’s start building an interactive SplitText demo!
Interactive SplitText Demo
Before we dive into code, a couple notes:
Plugins needed: GSAP core, SplitText, and (optionally) CustomEase.
The CustomEase plugin isn’t required—feel free to swap in any ease or omit it entirely—but we’ll use it here to give our animation a distinctive feel.
Demo purpose: We’re building an interactive demo here, with buttons to trigger different reveal styles. If you just want a one-off split-text reveal (e.g. on scroll or on load), you can skip the buttons and wire your tween directly into ScrollTrigger, Click handlers, etc.
HTML and CSS Setup
<div class="text-demo-wrap">
<h1 data-split="heading" class="text-demo-h">
We’re using GSAP’s SplitText to break this content into lines, words, and individual characters. Experiment with staggered tweens, custom ease functions, and dynamic transforms to bring your headlines to life.
</h1>
<div class="text-demo-buttons">
<button data-split="button" data-split-type="lines" class="text-demo-button"><span>Lines</span></button>
<button data-split="button" data-split-type="words" class="text-demo-button"><span>Words</span></button>
<button data-split="button" data-split-type="letters" class="text-demo-button"><span>Letters</span></button>
</div>
</div>
This single call does the heavy lifting: it splits your <h1> into three levels of granularity, wraps each line in a masked container, and keeps everything in sync on resize.
const heading = document.querySelector('[data-split="heading"]');
SplitText.create(heading, {
type: "lines, words, chars", // split by lines, words & characters
mask: "lines", // optional: wraps each line in an overflow-clip <div> for a mask effect later
linesClass: "line",
wordsClass: "word",
charsClass: "letter"
});
mask: "lines" wraps each line in its own container so you can do masked reveals without extra markup.
3. Hook up the buttons
Since this is a showcase, we’ve added three buttons. One each for “Lines”, “Words” and “Letters”—to let users trigger each style on demand. In a real project you might fire these tweens on scroll, on page load, or when another interaction occurs.
To keep our code a bit cleaner, we define a config object that maps each split type to its ideal duration and stagger. Because lines, words, and letters have vastly different counts, matching your timing to the number of elements ensures each animation feels tight and responsive.
If you used the same stagger for letters as you do for lines, animating dozens (or hundreds) of chars would take forever. Tailoring the stagger to the element count keeps the reveal snappy.
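The config object itself isn’t shown above, so here’s a hedged example of what it could look like, together with the CustomEase registration that the "osmo-ease" string in the tween refers to. The exact durations, staggers, and curve values are placeholders to tune to taste.
gsap.registerPlugin(SplitText, CustomEase);

// "osmo-ease" is a custom curve; swap in any built-in ease if you skip CustomEase
CustomEase.create("osmo-ease", "0.625, 0.05, 0, 1");

// Example timing map: fewer elements get longer durations and bigger staggers
const config = {
  lines:   { duration: 0.8, stagger: 0.08 },
  words:   { duration: 0.6, stagger: 0.06 },
  letters: { duration: 0.4, stagger: 0.008 }
};

let currentTween = null;   // the active reveal tween
let currentTargets = null; // the elements it animates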
function animate(type) {
// 1) Clean up any running tween so clicks “restart” cleanly
if (currentTween) {
currentTween.kill();
gsap.set(currentTargets, { yPercent: 0 });
}
// 2) Pull the right timing from our config
const { duration, stagger } = config[type];
// 3) Match the button’s data-split-type to the CSS class
// Our SplitText call used linesClass="line", wordsClass="word", charsClass="letter"
const selector = type === "lines" ? ".line"
: type === "words" ? ".word"
: ".letter";
// 4) Query the correct elements and animate
currentTargets = heading.querySelectorAll(selector);
currentTween = gsap.fromTo(
currentTargets,
{ yPercent: 110 },
{ yPercent: 0, duration, stagger, ease: "osmo-ease" }
);
}
Notice how type (the button’s data-split-type) directly aligns with our config keys and the class names we set on each slice. This tidy mapping means you can add new types (or swap class names) without rewriting your logic—just update config (and your SplitText options) and the function auto-adapts.
Finally, tie it all together with event listeners:
Let’s put all of our JS together in one neat function, and call it as soon as our fonts are loaded. This way we avoid splitting text while a fallback font is visible, and with that, we avoid any unexpected line breaks.
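That final wiring isn’t included above, so here’s a small sketch of how it could look; the initSplitTextDemo name is just for illustration.
function initSplitTextDemo() {
  // split the heading, build the config, define animate() ... (everything from the steps above)
  document.querySelectorAll('[data-split="button"]').forEach((button) => {
    button.addEventListener("click", () => animate(button.dataset.splitType));
  });
}

// Wait for fonts so the split happens on the final layout, avoiding unexpected line breaks
document.fonts.ready.then(initSplitTextDemo);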
Give it a spin yourself! Find this demo on CodePen and grab the Webflow cloneable below. For a deep dive into every available option, check out the official SplitText docs, and head over to the CustomEase documentation to learn how to craft your own easing curves.
We’ll continue next with the Physics2D Text Smash demo—combining SplitText with another GSAP plugin for a totally different effect.
Physics2D Text Smash Demo
If you weren’t aware already, with the recent Webflow × GSAP announcements, SplitText received a major overhaul—packed with powerful new options, accessibility improvements, and a dramatically smaller bundle size. Check out the SplitText docs for all the details.
Unlike our previous demo (which was more of an interactive playground with buttons), this effect is a lot closer to a real-world application; as you scroll, each heading “breaks” into characters and falls off of your viewport like it’s hit a roof—thanks to ScrollTrigger and Physics2DPlugin.
Before we dive into code, a couple notes:
Plugins needed: GSAP core, SplitText, ScrollTrigger, and Physics2DPlugin.
Assets used: We’re using some squiggly, fun, 3D objects from a free pack on wannathis.one. Definitely check out their stuff, they have more fun things!
Demo purpose: We’re combining SplitText + Physics2D on scroll so your headings shatter into characters and “fall” off the top of the viewport, as if they hit a ‘roof’.
HTML & CSS Setup
<div class="drop-wrapper">
<div class="drop-section">
<h1 data-drop-text="" class="drop-heading">
This is just a
<span data-drop-img="" class="drop-heading-img is--first"><img loading="lazy" src="https://cdn.prod.website-files.com/681a615bf5a0f1ba3cb1ca38/681a62d0bb34b74d3514ecab_shape-squigle-1.png" alt=""></span>
random quote
<span data-drop-img="" class="drop-heading-img is--second"><img loading="lazy" src="https://cdn.prod.website-files.com/681a615bf5a0f1ba3cb1ca38/681a62d0bb34b74d3514ecad_shape-squigle-2.png" alt=""></span>
we used
</h1>
</div>
<div class="drop-section">
<h1 data-drop-text="" class="drop-heading">
See how our window acts like
<span data-drop-img="" class="drop-heading-img is--third"><img loading="lazy" src="https://cdn.prod.website-files.com/681a615bf5a0f1ba3cb1ca38/681a62d0bb34b74d3514ecaf_shape-squigle-3.png" alt=""></span>
a roof?
</h1>
</div>
<div class="drop-section">
<h1 data-drop-text="" class="drop-heading">So much fun!</h1>
</div>
</div>
We’re using aria: true here to automatically add an aria-label on the wrapper and hide split spans from screen readers. Since the latest update, aria: true is the default, so you don’t necessarily have to add it here—but we’re highlighting it for the article.
We split the text as soon as the code runs, so that we can attach a callback to the new onSplit function, but more on that in step 3.
new SplitText("[data-drop-text]", {
type: "lines, chars",
autoSplit: true, // re-split if the element resizes and it's split by lines
aria: true, // default now, but worth highlighting!
linesClass: "line",
});
With the recent SplitText update, there’s also a new option called autoSplit—which takes care of resize events, and re-splitting your text.
An important caveat for the autoSplit option; you should always create your animations in the (also new!) onSplit() callback so that if your text re-splits (when the container resizes or a font loads in), the resulting animations affect the freshly-created line/word/character elements instead of the ones from the previous split. If you’re planning on using a non-responsive font-size or just want to learn more about this (awesome) new feature that takes care of responsive line splitting, check out the documentation here.
3. Trigger on scroll
In our onSplit callback, we loop over each line in the heading, inside of a context. This context, which we return at the end, makes sure GSAP can clean up this animation whenever the text re-splits.
In our loop, we create a ScrollTrigger for each line, and we set once: true, so our animation only fires once. In step 4 we’ll add our animation!
It’s worth playing around with the start values to really nail the moment where your text visually ‘touches’ the top of the window. For our font, size, and line-height combo, an offset of 10px worked great.
new SplitText("[data-drop-text]", {
type: "lines, chars",
autoSplit: true,
aria: true,
linesClass: "line",
onSplit(self) {
// use a context to collect up all the animations
let ctx = gsap.context(() => {
self.lines.forEach((line) => { // loop around the lines
gsap.timeline({
scrollTrigger: {
once: true, // only fire once
trigger: line, // use the line as a trigger
start: "top top-=10" // adjust the trigger point to your liking
}
})
});
});
return ctx; // return our animations so GSAP can clean them up when onSplit fires
}
});
4. Drop the letters with Physics2D
Now, let’s add 2 tweens to our timeline. The first one, using the Physics2D plugin, sends each child element of the line, flying straight down with randomized velocity, angle, and gravity. A second tween makes sure the elements are faded out towards the end.
new SplitText("[data-drop-text]", {
type: "lines, chars",
autoSplit: true,
aria: true,
linesClass: "line",
onSplit(self) {
// use a context to collect up all the animations
let ctx = gsap.context(() => {
self.lines.forEach((line) => { // loop around the lines
gsap.timeline({
scrollTrigger: {
once: true, // only fire once
trigger: line, // use the line as a trigger
start: "top top-=10" // adjust the trigger point to your liking
}
})
.to(line.children, { // target the children
duration: "random(1.5, 3)", // Use randomized values for a more dynamic animation
physics2D: {
velocity: "random(500, 1000)",
angle: 90,
gravity: 3000
},
rotation: "random(-90, 90)",
ease: "none"
})
.to(line.children,{ // Start fading them out
autoAlpha: 0,
duration: 0.2
}, "-=.2");
});
});
return ctx; // return our animations so GSAP can clean them up when onSplit fires
}
});
Tip: use gsap.utils.random()! Giving each char and image a slightly different speed and spin creates a joyful, more natural feel.
5. Putting it all together
gsap.registerPlugin(ScrollTrigger, SplitText, Physics2DPlugin);
function initDroppingText() {
new SplitText("[data-drop-text]", {
type: "lines, chars",
autoSplit: true,
aria: true,
linesClass: "line",
onSplit(self) {
// use a context to collect up all the animations
let ctx = gsap.context(() => {
self.lines.forEach((line) => {
gsap
.timeline({
scrollTrigger: {
once: true,
trigger: line,
start: "top top-=10"
}
})
.to(line.children, { // target the children
duration: "random(1.5, 3)", // Use randomized values for a more dynamic animation
physics2D: {
velocity: "random(500, 1000)",
angle: 90,
gravity: 3000
},
rotation: "random(-90, 90)",
ease: "none"
})
.to(
line.children,
{
autoAlpha: 0,
duration: 0.2
},
"-=.2"
);
});
});
return ctx; // return our animations so GSAP can clean them up when onSplit fires
}
});
}
document.addEventListener("DOMContentLoaded", initDroppingText);
Next up: an interactive Inertia Dot Grid that springs and flows with your cursor!
Glowing Interactive Dot Grid
InertiaPlugin (formerly ThrowPropsPlugin) allows you to smoothly glide any property to a stop, honoring an initial velocity as well as applying optional restrictions on the end value. It brings real-world momentum to your elements, letting them move with an initial velocity and smoothly slow under configurable resistance. You simply specify a starting velocity and resistance value, and the plugin handles the physics.
In this demo, we’re using a quick-to-prototype grid of <div> dots that glow as your cursor approaches, spring away on rapid mouse movements, and ripple outward on clicks. While a Canvas or WebGL approach would scale more efficiently for thousands of particles and deliver higher frame-rates, our div-based solution keeps the code simple and accessible—perfect for spotlighting InertiaPlugin’s capabilities.
Before we dive in:
Plugins needed: GSAP core and InertiaPlugin.
Demo purpose: Build a responsive grid of dots that glow with proximity and spring away on fast mouse moves or clicks—showcasing how the InertiaPlugin can add playful, physics-based reactions to a layout.
First, wrap everything in an initGlowingInteractiveDotsGrid() function and declare your tweakable parameters—colors, glow distance, speed thresholds, shockwave settings, max pointer speed, and whether to carve out a center hole for a logo. We also set up two arrays, dots and dotCenters, to track the elements and their positions.
With those in place, buildGrid() figures out how many columns and rows fit based on your container’s em sizing, then optionally carves out a perfectly centered block of 4 or 5 columns/rows (depending on whether the grid dimensions are even or odd) if centerHole is true. That hole gives space for your logo; set centerHole = false to fill every cell.
Inside buildGrid(), we:
Clear out any existing dots and reset our arrays.
Read the container’s fontSize to get dotPx (in px) and derive gapPx.
Calculate how many columns and rows fit, plus the total cells.
Compute a centered “hole” of 4 or 5 columns/rows if centerHole is true, so you can place a logo or focal element.
Now loop over every cell index. Inside that loop, we hide any dot in the hole region and initialize the visible ones with GSAP’s set(). Each dot is appended to the container and pushed into our dots array for tracking.
For each dot:
If it falls in the hole region, we hide it.
Otherwise, we position it at { x: 0, y: 0 } with the base color and mark it as not yet sprung.
Append it to the container and track it in dots.
// ... add this to the buildGrid() function
for (let i = 0; i < total; i++) {
const row = Math.floor(i / cols);
const col = i % cols;
const isHole =
centerHole &&
row >= startRow &&
row < startRow + holeRows &&
col >= startCol &&
col < startCol + holeCols;
const d = document.createElement("div");
d.classList.add("dot");
if (isHole) {
d.style.visibility = "hidden";
d._isHole = true;
} else {
gsap.set(d, { x: 0, y: 0, backgroundColor: colors.base });
d._inertiaApplied = false;
}
container.appendChild(d);
dots.push(d);
}
// ... more code added below
Finally, once the DOM is updated, measure each visible dot’s center coordinate—including any scroll offset—so we can calculate distances later. Wrapping in requestAnimationFrame ensures the layout is settled.
// ... add this to the buildGrid() function
requestAnimationFrame(() => {
dotCenters = dots
.filter(d => !d._isHole)
.map(d => {
const r = d.getBoundingClientRect();
return {
el: d,
x: r.left + window.scrollX + r.width / 2,
y: r.top + window.scrollY + r.height / 2
};
});
});
// this is the end of the buildGrid() function
By now, the complete buildGrid() function will look like the following:
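(The demo’s full function isn’t reproduced in this article, so the version below is a hedged reconstruction assembled from the pieces above; in particular, the dot/gap sizing math is an assumption.)
// Hedged reconstruction of buildGrid(); the sizing math is assumed, the rest mirrors the snippets above
function buildGrid() {
  // 1) Clear out any existing dots and reset our arrays
  container.innerHTML = "";
  dots = [];
  dotCenters = [];

  // 2) Read the container's font size to derive the dot size and gap (both set in em in the CSS)
  const dotPx = parseFloat(getComputedStyle(container).fontSize);
  const gapPx = dotPx * 2; // assumed gap ratio

  // 3) Work out how many columns and rows fit, plus the total cell count
  const cols = Math.floor((container.clientWidth + gapPx) / (dotPx + gapPx));
  const rows = Math.floor((container.clientHeight + gapPx) / (dotPx + gapPx));
  const total = cols * rows;

  // 4) Optionally carve out a centered hole of 4 or 5 columns/rows for a logo
  const holeCols = centerHole ? (cols % 2 === 0 ? 4 : 5) : 0;
  const holeRows = centerHole ? (rows % 2 === 0 ? 4 : 5) : 0;
  const startCol = (cols - holeCols) / 2;
  const startRow = (rows - holeRows) / 2;

  // 5) Create every dot (hidden if it sits inside the hole)
  for (let i = 0; i < total; i++) {
    const row = Math.floor(i / cols);
    const col = i % cols;
    const isHole =
      centerHole &&
      row >= startRow && row < startRow + holeRows &&
      col >= startCol && col < startCol + holeCols;
    const d = document.createElement("div");
    d.classList.add("dot");
    if (isHole) {
      d.style.visibility = "hidden";
      d._isHole = true;
    } else {
      gsap.set(d, { x: 0, y: 0, backgroundColor: colors.base });
      d._inertiaApplied = false;
    }
    container.appendChild(d);
    dots.push(d);
  }

  // 6) Once layout settles, record every visible dot's center for the distance checks later
  requestAnimationFrame(() => {
    dotCenters = dots
      .filter(d => !d._isHole)
      .map(d => {
        const r = d.getBoundingClientRect();
        return {
          el: d,
          x: r.left + window.scrollX + r.width / 2,
          y: r.top + window.scrollY + r.height / 2
        };
      });
  });
}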
As the user moves their cursor, we calculate its velocity by comparing the current e.pageX/e.pageY to the last recorded position over time (dt). We clamp that speed to maxSpeed to avoid runaway values. Then, on the next animation frame, we loop through each dot’s center:
Compute its distance to the cursor and derive t = Math.max(0, 1 - dist / threshold).
Interpolate its color from colors.base to colors.active.
If speed > speedThreshold and the dot is within threshold, mark it _inertiaApplied and fire an inertia tween to push it away before it springs back.
All this still goes inside of our initGlowingInteractiveDotsGrid() function:
let lastTime = 0
let lastX = 0
let lastY = 0
window.addEventListener("mousemove", e => {
const now = performance.now()
const dt = now - lastTime || 16
let dx = e.pageX - lastX
let dy = e.pageY - lastY
let vx = (dx / dt) * 1000
let vy = (dy / dt) * 1000
let speed = Math.hypot(vx, vy)
if (speed > maxSpeed) {
const scale = maxSpeed / speed
vx = vx * scale
vy = vy * scale
speed = maxSpeed
}
lastTime = now
lastX = e.pageX
lastY = e.pageY
requestAnimationFrame(() => {
dotCenters.forEach(({ el, x, y }) => {
const dist = Math.hypot(x - e.pageX, y - e.pageY)
const t = Math.max(0, 1 - dist / threshold)
const col = gsap.utils.interpolate(colors.base, colors.active, t)
gsap.set(el, { backgroundColor: col })
if (speed > speedThreshold && dist < threshold && !el._inertiaApplied) {
el._inertiaApplied = true
const pushX = (x - e.pageX) + vx * 0.005
const pushY = (y - e.pageY) + vy * 0.005
gsap.to(el, {
inertia: { x: pushX, y: pushY, resistance: 750 },
onComplete() {
gsap.to(el, {
x: 0,
y: 0,
duration: 1.5,
ease: "elastic.out(1, 0.75)"
})
el._inertiaApplied = false
}
})
}
})
})
})
4. Handle click ‘shockwave’ effect
On each click, we send a radial ‘shockwave’ through the grid. We reuse the same inertia + elastic return logic, but scale the push by a distance-based falloff so that dots closer to the click move further, then all spring back in unison.
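The click handler isn’t shown here, so below is a hedged sketch of that shockwave; shockRadius and shockPower are assumed names standing in for the tweakable parameters declared at the top of initGlowingInteractiveDotsGrid().
// Hypothetical click handler: push every dot away from the click point with a distance-based falloff
window.addEventListener("click", (e) => {
  dotCenters.forEach(({ el, x, y }) => {
    const dist = Math.hypot(x - e.pageX, y - e.pageY);
    if (dist > shockRadius || el._inertiaApplied) return;

    el._inertiaApplied = true;
    const falloff = 1 - dist / shockRadius; // closer dots get pushed further
    const angle = Math.atan2(y - e.pageY, x - e.pageX);
    const push = shockPower * falloff;

    gsap.to(el, {
      inertia: {
        x: Math.cos(angle) * push,
        y: Math.sin(angle) * push,
        resistance: 750
      },
      onComplete() {
        // same elastic return as the mousemove interaction
        gsap.to(el, { x: 0, y: 0, duration: 1.5, ease: "elastic.out(1, 0.75)" });
        el._inertiaApplied = false;
      }
    });
  });
});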
Next up: DrawSVG Scribbles Demo — let’s draw some playful, randomized underlines on hover!
DrawSVG Scribbles Demo
GSAP’s DrawSVGPlugin animates the stroke of an SVG path by tweening its stroke-dasharray and stroke-dashoffset, creating a ‘drawing’ effect. You can control start/end percentages, duration, easing, and even stagger multiple paths. In this demo, we’ll attach a randomized scribble underline to each link on hover—perfect for adding a playful touch to your navigation or call-to-actions.
We define an array of SVG scribbles. Each string is a standalone <svg> with its <path>. When we inject it, we run decorateSVG() to ensure it scales to its container and uses currentColor for theming.
We’ve drawn these scribbles ourselves in Figma using the pencil tool. We recommend drawing them (and thus creating the path coordinates) in the order in which you want them to be drawn.
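Neither the svgVariants array nor decorateSVG() is reproduced in this article, so here’s a trimmed, hedged sketch of the hover logic; the placeholder path data and the data-draw-line / data-draw-line-box hooks are assumptions.
gsap.registerPlugin(DrawSVGPlugin);

// One placeholder scribble; the real array holds several hand-drawn variants
const svgVariants = [
  `<svg viewBox="0 0 300 20" fill="none" xmlns="http://www.w3.org/2000/svg"><path d="M5 15 C 60 5, 120 20, 295 8" stroke="currentColor" stroke-width="4" stroke-linecap="round"/></svg>`
];

function decorateSVG(svgEl) {
  // Make the injected SVG scale with its container and inherit the link color
  svgEl.setAttribute("preserveAspectRatio", "none");
  svgEl.querySelectorAll("path").forEach(p => p.setAttribute("stroke", "currentColor"));
}

document.querySelectorAll("[data-draw-line]").forEach((link) => {
  const box = link.querySelector("[data-draw-line-box]"); // assumed container for the scribble
  link.addEventListener("mouseenter", () => {
    // Drop in a random scribble and draw it from 0% to 100%
    box.innerHTML = svgVariants[Math.floor(Math.random() * svgVariants.length)];
    const svg = box.querySelector("svg");
    decorateSVG(svg);
    gsap.fromTo(svg.querySelector("path"),
      { drawSVG: "0%" },
      { drawSVG: "100%", duration: 0.5, ease: "power2.inOut" }
    );
  });
  link.addEventListener("mouseleave", () => {
    const path = box.querySelector("path");
    if (path) {
      // Erase from the start, then clean up the injected SVG
      gsap.to(path, { drawSVG: "100% 100%", duration: 0.3, ease: "power2.in", onComplete: () => (box.innerHTML = "") });
    }
  });
});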
Wrap the above setup in your initDrawRandomUnderline() function and call it once the DOM is ready:
function initDrawRandomUnderline() {
// svgVariants, decorateSVG, and all event listeners…
}
document.addEventListener('DOMContentLoaded', initDrawRandomUnderline);
And now on to the final demo: MorphSVG Toggle Demo—see how to morph one icon into another in a single tween!
MorphSVG Toggle Demo
MorphSVGPlugin lets you fluidly morph one SVG shape into another—even when they have different numbers of points—by intelligently mapping anchor points. You can choose the morphing algorithm (size, position or complexity), control easing, duration, and even add rotation to make the transition feel extra smooth. In this demo, we’re toggling between a play ► and pause ❚❚ icon on button click, then flipping back. Perfect for video players, music apps, or any interactive control.
We highly recommend diving into the docs for this plugin, as there are a whole bunch of options and possibilities.
Plugins needed: GSAP core and MorphSVGPlugin
Demo purpose: Build a play/pause button that seamlessly morphs its SVG path on each click.
We store two path definitions: playPath and pausePath, then grab our button and the <path> element inside it. A simple isPlaying boolean tracks state. On each click, we call gsap.to() on the SVG path, passing morphSVG options:
type: “rotational” to smoothly rotate points into place
map: “complexity” to match by number of anchors for speed
shape set to the opposite icon’s path
Finally, we flip isPlaying so the next click morphs back.
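The demo code isn’t embedded in this article, so here’s a hedged sketch of that toggle; the selector and the path data are placeholders rather than the original values.
gsap.registerPlugin(MorphSVGPlugin);

// Placeholder path data; the real demo stores the actual play/pause shapes here
const playPath = "M5 3 L19 12 L5 21 Z";
const pausePath = "M6 4 H10 V20 H6 Z M14 4 H18 V20 H14 Z";

const button = document.querySelector("[data-play-button]"); // assumed hook
const iconPath = button.querySelector("path");
let isPlaying = false;

button.addEventListener("click", () => {
  gsap.to(iconPath, {
    duration: 0.5,
    ease: "power2.inOut",
    morphSVG: {
      type: "rotational",  // rotate points smoothly into place
      map: "complexity",   // match anchors by count for speed
      shape: isPlaying ? playPath : pausePath
    }
  });
  isPlaying = !isPlaying; // next click morphs back
});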
Thank you for making it this far down the page! We know it’s a rather long read, so we hope there’s some inspiring stuff in here for you. Both Dennis and I are super stoked with all the GSAP Plugins being free now, and can’t wait to create more resources with them.
As a note, we’re fully aware that the HTML and markup in this article is rather concise, and definitely not up to standard with all accessibility best practices. To make these resources production-ready, definitely look for guidance on the standards at w3.org! Think of the demos above as your launch pad, ready to tweak and make your own.
Have a lovely rest of your day, or night, wherever you are. Happy animating!
Access a growing library of resources
Built by two award-winning creative developers Dennis Snellenberg and Ilja van Eck, our vault gives you access to the techniques, components, code, and tools behind our projects. All neatly packed in a custom-built dashboard. Build, tweak, and make them your own—for Webflow and non-Webflow users.
Become a member today to unlock our growing set of components and join a community of more than 850 creative developers worldwide!
Reading financial data from the internet is sometimes challenging. In this short article, with two Python snippets, I will show how to read it from Wikipedia and from an API that delivers it in JSON format.
This is what the financial JSON data from the API looks like.
Reading the data from the API is actually not hard if you have experience reading JSON with nested lists. If not, simply proceed by trial and error and you will eventually succeed.
Reading from Wikipedia is actually even easier: the site works flawlessly with pandas, and if you count the tables correctly, you get exactly what you want.
Many of you will know that every man and his dog is producing AI products or LLMs and integrating them with their products. Not surprisingly, AWS — the biggest cloud services provider — is also getting in on the act.
What is Bedrock?
Its AI offering is called Bedrock, and the following blurb from its website describes what Bedrock is.
Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon via a single API, along with a broad set of capabilities you need to build generative AI applications, simplifying development while maintaining privacy and security. With Amazon Bedrock’s comprehensive capabilities, you can easily experiment with a variety of top FMs, privately customize them with your data using techniques such as fine-tuning and retrieval augmented generation (RAG), and create managed agents that execute complex business tasks — from booking travel and processing insurance claims to creating ad campaigns and managing inventory — all without writing any code. Since Amazon Bedrock is serverless, you don’t have to manage any infrastructure, and you can securely integrate and deploy generative AI…
Carousels are a fairly common UI pattern (there are many excellent carousel and slider examples available on Codrops). While carousel designs vary depending on the use case, the following demos explore how the GreenSock Animation Platform (GSAP) can be used to achieve seamless looping, smooth animations, and ultimately, a better user experience.
This article is for frontend designers and developers interested in enhancing the functionality and visual appeal of a standard horizontal carousel. Familiarity with JavaScript and basic GSAP methods will be helpful, but anyone looking for inspiration and practical examples may find the following content useful.
What You’ll Learn
Basic carousel implementation using HTML and CSS
How to use gsap.utils.wrap() and horizontalLoop()
Advanced animation techniques, including image parallax and function-based values
Our Basic Carousel
Let’s start with a horizontally scrolling carousel using only HTML and CSS:
It uses scroll snapping and some custom styling on the scrollbar. Nothing fancy, but it works even when JavaScript is disabled.
Note that the HTML above is intentionally concise. However, in production, it’s important to follow accessibility best practices, including using alt text on images and descriptive ARIA attributes for screen reader users.
Building on the Foundation – GSAP Demo 1A
To see how GSAP can enhance a carousel, we’ll explore two different approaches—the first using gsap.utils.wrap(). Wrap is one of several handy utility methods included in gsap.js—no plugin required! Given a min/max range, it returns a value within that range:
gsap.utils.wrap(5, 10, 12); // min 5, max 10, value to wrap 12: returns 7
The example above returns 7 because 12 is 2 more than the maximum of 10, so it wraps around to the start and moves 2 steps forward from there. In a carousel, this can be used to loop infinitely through the slides.
Here’s a simple demo of how it can be applied:
In the HTML, a <nav> block has been added that contains previous/next buttons and progress text:
In the JS, we override the carousel’s scroll-snap-type and display the <nav> block. Since we no longer have a scrollable area, the buttons are necessary to maintain keyboard accessibility. Safari requires tabindex="0" to allow users to tab to them. Additionally, aria-labels are important since the buttons have no visible text content.
We apply the new class to each slide, which effectively stacks them all in the center. We also set the initial opacity: 1 for the first slide and 0 for the rest:
Next, we need a function that transitions to the previous or next slide. changeSlide() is passed a direction parameter of either positive or negative 1. Inside this function, we:
Fade out the current slide
Update the current slide index using gsap.utils.wrap()
Fade in the new current slide
Update the progress text
The different easing on the outro and intro tweens helps prevent excessive overlapping opacity during the crossfade.
Easing and staggers help smooth out and space the movement. The dir parameter modifies the rotationY, adding a subtly unique motion to previous and next actions.
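Putting those steps together, a hedged sketch of changeSlide() could look like the following; the selectors, eases, and rotation values are assumptions rather than the demo’s exact code.
// Hedged sketch of changeSlide(); assumes slides, prev, and next are already queried from the DOM
let currentIndex = 0;
const progressText = document.querySelector(".carousel-nav div"); // assumed progress element

function changeSlide(dir) {
  // 1) Fade out the current slide
  gsap.to(slides[currentIndex], { opacity: 0, rotationY: dir * -20, duration: 0.4, ease: "power2.in" });

  // 2) Wrap the index so it loops infinitely in either direction
  currentIndex = gsap.utils.wrap(0, slides.length, currentIndex + dir);

  // 3) Fade in the new current slide
  gsap.fromTo(slides[currentIndex],
    { opacity: 0, rotationY: dir * 20 },
    { opacity: 1, rotationY: 0, duration: 0.6, ease: "power2.out" }
  );

  // 4) Update the progress text
  progressText.innerText = `${currentIndex + 1}/${slides.length}`;
}

prev.addEventListener("click", () => changeSlide(-1));
next.addEventListener("click", () => changeSlide(1));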
This basic setup can be easily customized further. Animating a clip-path, applying a blur filter, or experimenting with additional 3D transforms could all produce interesting results.
A Different Approach – GSAP Demo 2A
Another way to create a seamless looping carousel with GSAP is to use the horizontalLoop() helper function. Although GSAP helper functions aren’t officially part of the core library, they’re a handy collection of code snippets and shortcuts. They also serve as great learning resources for writing more advanced GSAP code.
This specific helper function animates elements along their x-axis and repositions them once they’re out of view to create an infinite loop. Here’s a basic implementation:
Again, we override the CSS and display the <nav> element. Then we call horizontalLoop(), which takes two parameters: an array of the carousel slides and a config object for setting various options.
const loop = horizontalLoop(slides, {
paused: true, // no auto-scroll
paddingRight: 10, // match the 10px flex gap
center: true, // snap the active slide to the center
onChange: (slide, index) => { // called when the active slide changes
if (activeSlide) {
gsap.to(".active", { opacity: 0.3 });
activeSlide.classList.remove("active");
}
slide.classList.add("active");
activeSlide = slide;
gsap.to(".active", { opacity: 1, ease: "power2.inOut" });
gsap.set(".carousel-nav div", { innerText: `${index + 1}/${slides.length}` });
}
});
The most notable of these options is the onChange callback, where we can write code that executes each time the active slide changes. In this example, we’re removing and adding the “active” class name and tweening the opacity to draw more focus to the center slide.
The helper function returns a timeline with several useful added methods, including next(), previous(), and toIndex(). We’ll use these to add navigation functionality to our previous/next buttons, as well as to the individual slides:
next.addEventListener("click", () => loop.next({ duration: 1, ease: "expo" }));
prev.addEventListener("click", () => loop.previous({ duration: 1, ease: "expo" }));
// each slide can function as a button to activate itself
slides.forEach((slide, i) => {
slide.addEventListener("click", () => loop.toIndex(i, {duration: 1, ease: "expo"}))
});
Finally, we set the initial carousel state by adjusting the opacity of each slide and calling toIndex() with no tween duration, which centers the active slide.
If you’re unfamiliar with function-based values in GSAP, this is an amazing feature—definitely check out that link to learn how they work. Here, we’re iterating through each element with the class name “carousel-slide,” returning an opacity value of 1 for the first slide and 0.3 for the rest.
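As a concrete example, that initial state could be set like this (a small sketch; the 0.3 value matches the onChange tween above):
// Set the initial opacity per slide with a function-based value, then center the first slide instantly
gsap.set(".carousel-slide", {
  opacity: (i) => (i === 0 ? 1 : 0.3)
});
loop.toIndex(0, { duration: 0 });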
The remainder of the JS is just the helper function, copied and pasted from the GSAP docs demo. In most cases, you won’t need to modify anything inside it. (We’ll look at an exception in Demo 2C.)
Add Draggable & InertiaPlugin – GSAP Demo 2B
To make the carousel move on drag, we’ll need two plugins: Draggable and the Inertia Plugin. Once those scripts are included, you can set draggable: true in the config object.
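In practice, that means registering the extra plugins and flipping one flag in the config object; here’s a hedged snippet reusing the same options as Demo 2A.
gsap.registerPlugin(Draggable, InertiaPlugin);

const loop = horizontalLoop(slides, {
  paused: true,
  paddingRight: 10,
  center: true,
  draggable: true, // enables drag (and, with InertiaPlugin loaded, momentum on release)
  onChange: (slide, index) => { /* see the callback below */ }
});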
In addition to drag behavior, this iteration includes some text animation, with logic to prevent it from running on the first load (plus hover in/out animations on the nav buttons).
onChange: (slide, index) => { // called when the active slide changes
if (activeSlide) {
gsap.to(".carousel h2, .carousel h5", { overwrite: true, opacity: 0, ease: "power3" });
gsap.to(".active", { opacity: 0.3 });
activeSlide.classList.remove("active");
}
slide.classList.add("active");
activeSlide = slide;
// intro animation for new active slide
gsap.timeline({ defaults:{ ease:"power1.inOut" } })
// fade in the new active slide
.to(".active", { opacity: 1, ease: "power2.inOut" }, 0)
// fade out the progress text, change its value, fade it back in
.to(".carousel-nav div", { duration: 0.2, opacity: 0, ease: "power1.in" }, 0)
.set(".carousel-nav div", { innerText: `${index + 1}/${slides.length}` }, 0.2)
.to(".carousel-nav div", { duration: 0.4, opacity: 0.5, ease: "power1.inOut" }, 0.2)
// fade in the text elements and translate them vertically
.to(".active h2, .active h5", { opacity: 1, ease: "power1.inOut" }, 0.3)
.fromTo(".active h2, .active h5", { y:(i)=>[40,60][i] },{ duration: 1.5, y: 0, ease: "expo" }, 0.3)
// skip active slide animation on first run
.progress( firstRun? 1: 0 )
}
Adding Parallax – GSAP Demo 2C
To make the movement more engaging, let’s calculate each slide’s horizontal progress and use it to create a parallax effect.
Until now, we haven’t modified the helper function. However, to calculate slide progress, this version includes one change inside horizontalLoop().
Now, every time the carousel timeline updates, slideImgUpdate() is called. This function sets each image’s xPercent based on the progress of its parent slide. Progress is 0 when the slide is offstage to the left, and 1 when it’s offstage to the right.
GSAP utility functions mapRange(), interpolate(), and clamp() make the progress calculation much easier. Note, in the CSS, the width of .carousel-slide img is increased to 150%, so there will be enough image for a 50% horizontal movement.
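The modified helper isn’t reproduced here, but the update function it calls could look roughly like this; the exact progress math is an assumption based on the description above.
// Hedged sketch: map each slide's on-screen position to 0–1 and offset its image for parallax
const clampProgress = gsap.utils.clamp(0, 1);

function slideImgUpdate() {
  slides.forEach((slide) => {
    const rect = slide.getBoundingClientRect();
    // 0 when the slide is fully offstage left, 1 when fully offstage right
    const progress = clampProgress(
      gsap.utils.mapRange(-rect.width, window.innerWidth, 0, 1, rect.left)
    );
    // The image is 150% wide, so a 0 to -33 xPercent sweep gives roughly 50% of slide-width travel
    gsap.set(slide.querySelector("img"), {
      xPercent: gsap.utils.interpolate(0, -33, progress)
    });
  });
}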
Taking It Further
There are endless ways you could build on these demos, customizing both appearance and functionality. A few ideas include:
Modify how many slides are shown at once—a single, full-frame version could be interesting, as could several smaller slides to create a cover flow effect. In both of those examples, the progress indicator also became a fun area for experimentation.
Additional details could be added by calling custom functions inside the helper function’s onPress, onRelease, or onThrowComplete callbacks. Here’s one more iteration on Demo 2, where the entire carousel shrinks while the pointer is held down.
The carousel could even serve as navigation for a separate animated page element, like on Nite Riot.
If you want the carousel to respond to mouse wheel movements, GSAP’s Observer plugin offers an easy way to handle those events.