Fragment shaders allow us to create smooth, organic visuals that are difficult to achieve with standard polygon-based rendering in WebGL. One powerful example is the metaball effect, where multiple objects blend and deform seamlessly. This can be implemented using a technique called ray marching, directly within a fragment shader.
In this tutorial, we’ll walk you through how to create droplet-like, bubble spheres using Three.js and GLSL—an effect that responds interactively to your mouse movements. But first, take a look at the demo video below to see the final result in action.
Overview
Let’s take a look at the overall structure of the demo and review the steps we’ll follow to build it.
We arrange spheres along the mouse trail to create a stretchy, elastic motion.
Let’s get started!
1. Setup
We render a single fullscreen plane that covers the entire viewport.
// Output.ts
const planeGeometry = new THREE.PlaneGeometry(2.0, 2.0);
const planeMaterial = new THREE.RawShaderMaterial({
vertexShader: base_vert,
fragmentShader: output_frag,
uniforms: this.uniforms,
});
const plane = new THREE.Mesh(planeGeometry, planeMaterial);
this.scene.add(plane);
We define a uniform variable named uResolution to pass the canvas size to the shader, where Common.width and Common.height represent the width and height of the canvas in pixels. This uniform will be used to normalize coordinates based on the screen resolution.
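A sketch of how that uniforms object could be set up (assuming Common exposes the canvas size as described):
// Output.ts
this.uniforms = {
  uResolution: {
    value: new THREE.Vector2(Common.width, Common.height),
  },
};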
The vertex shader receives the position attribute.
Since the xy components of position originally range from -1 to 1, we convert them to a range from 0 to 1 and output them as a texture coordinate called vTexCoord. This is passed to the fragment shader and used to calculate colors or effects based on the position on the screen.
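The demo’s exact shader files aren’t reproduced here, but a minimal vertex shader matching that description could look like this:
// base.vert
attribute vec3 position;
varying vec2 vTexCoord;

void main() {
  vTexCoord = position.xy * 0.5 + 0.5; // remap from [-1, 1] to [0, 1]
  gl_Position = vec4(position, 1.0);
}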
The fragment shader receives the interpolated texture coordinate vTexCoord and the uniform variable uResolution representing the canvas size. Here, we temporarily use vTexCoord to output color for testing.
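A corresponding test fragment shader (again, a sketch) simply visualizes vTexCoord:
// output.frag
precision highp float;

uniform vec2 uResolution;
varying vec2 vTexCoord;

void main() {
  gl_FragColor = vec4(vTexCoord, 0.0, 1.0);
}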
Now we’re all set to start drawing in the fragment shader! Next, let’s move on to actually rendering the spheres.
2. Ray Marching
2.1. What is Ray Marching?
As mentioned at the beginning, we will use a method called ray marching to render spheres. Ray marching proceeds in the following steps:
Define the scene
Set the camera (viewing) direction
Cast rays
Evaluate the distance from the current ray position to the nearest object in the scene.
Move the ray forward by that distance
Check for a hit
For example, let’s consider a scene with three spheres. These spheres are expressed using SDFs (Signed Distance Functions), which will be explained in detail later.
First, we determine the camera direction. Once the direction is set, we cast a ray in that direction.
Next, we evaluate the distance to all objects from the current ray position, and take the minimum of these distances.
After obtaining this distance, we move the ray forward by that amount.
We repeat this process until either the ray gets close enough to an object—closer than a small threshold—or the maximum number of steps is reached. If the distance is below the threshold, we consider it a “hit” and shade the corresponding pixel.
For example, in the figure above, a hit is detected on the 8th ray marching step.
If the maximum number of steps were set to 7, the 7th step would not have hit anything yet. But since the limit is reached, the loop ends and no hit is detected.
Therefore, nothing would be rendered at that position. If parts of an object appear to be missing in the final image, it may be due to an insufficient number of steps. However, be aware that increasing the step count will also increase the computational load.
To better understand this process, try running this demo to see how it works in practice.
2.2. Signed Distance Function
In the previous section, we briefly mentioned the SDF (Signed Distance Function). Let’s take a moment to understand what it is.
An SDF is a function that returns the distance from a point to a particular shape. The key characteristic is that it returns a positive or negative value depending on whether the point is outside or inside the shape.
For example, here is the distance function for a sphere:
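In GLSL it is commonly written as:
float sdSphere(vec3 p, float s) {
  return length(p) - s;
}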
Here, p is a vector representing the position relative to the origin, and s is the radius of the sphere.
This function calculates how far the point p is from the surface of a sphere centered at the origin with radius s.
If the result is positive, the point is outside the sphere.
If negative, it is inside the sphere.
If the result is zero, the point is on the surface—this is considered a hit point (in practice, we detect a hit when the distance is less than a small threshold).
In this demo, we use a sphere’s distance function, but many other shapes have their own distance functions as well.
After that, inside the map function, two spheres are defined and their distances calculated using sdSphere. The variable d is initially set to a large value and updated with the min function to keep track of the shortest distance to the surface.
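A sketch of such a map function (the sphere offsets and radii here are placeholders, not the demo’s exact values):
float map(vec3 p) {
  float d = 10000.0; // start with a large distance

  d = min(d, sdSphere(p - vec3(-0.3, 0.0, 0.0), 0.25));
  d = min(d, sdSphere(p - vec3(0.3, 0.0, 0.0), 0.25));

  return d;
}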
Then we run a ray marching loop, which updates the ray position by computing the distance to the nearest object at each step. The loop ends either after a fixed number of iterations or when the distance becomes smaller than a threshold (dist < EPS):
for ( int i = 0; i < ITR; ++ i ) {
dist = map(ray);
ray += rayDirection * dist;
if ( dist < EPS ) break ;
}
Finally, we determine the output color. We use black as the default color (background), and render a white pixel only if a hit is detected:
vec3 color = vec3(0.0);
if ( dist < EPS ) {
color = vec3(1.0);
}
We’ve successfully rendered two overlapping spheres using ray marching!
2.4. Normals
Although we successfully rendered spheres in the previous section, the scene still looks flat and lacks depth. This is because we haven’t applied any shading or visual effects that respond to surface orientation.
While we won’t implement full shading in this demo, we’ll still compute surface normals, as they’re essential for adding surface detail and other visual effects.
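The demo’s generateNormal function isn’t reproduced here, but a typical central-difference implementation looks like this:
vec3 generateNormal(vec3 p) {
  float eps = 0.001;
  return normalize(vec3(
    map(p + vec3(eps, 0.0, 0.0)) - map(p - vec3(eps, 0.0, 0.0)),
    map(p + vec3(0.0, eps, 0.0)) - map(p - vec3(0.0, eps, 0.0)),
    map(p + vec3(0.0, 0.0, eps)) - map(p - vec3(0.0, 0.0, eps))
  ));
}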
At first glance, this may seem hard to follow. Put simply, it computes the gradient of the distance function, which corresponds to the normal vector.
If you’ve studied vector calculus, this will look familiar; for everyone else, here’s a more detailed walkthrough of why it works.
The gradient of a scalar function 𝑓(𝑥,𝑦,𝑧) is simply a vector composed of its partial derivatives. It points in the direction of the greatest rate of increase of the function:
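∇𝑓 = ( ∂𝑓/∂𝑥, ∂𝑓/∂𝑦, ∂𝑓/∂𝑧 )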
To compute this gradient numerically, we can use the central difference method. For example:
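∂𝑓/∂𝑥 ≈ ( 𝑓(𝑥 + 𝜀, 𝑦, 𝑧) − 𝑓(𝑥 − 𝜀, 𝑦, 𝑧) ) / 2𝜀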
We apply the same idea for the 𝑦 and 𝑧 components. Note: The factor 2𝜀 is omitted in the code since we normalize the result using normalize().
Next, let us consider a signed distance function 𝑓(𝑥,𝑦,𝑧), which returns the shortest distance from any point in space to the surface of an object. By definition, 𝑓(𝑥,𝑦,𝑧)=0 on the surface of the object.
Assume that 𝑓 is smooth (i.e., differentiable) in the region of interest. When the point (𝑥,𝑦,𝑧) undergoes a small displacement Δ𝒓=(Δ𝑥,Δ𝑦,Δ𝑧), the change in the function value Δ𝑓 can be approximated using the first-order Taylor expansion:
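Δ𝑓 ≈ ∇𝑓 · Δ𝒓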
Here, ∇𝑓 is the gradient vector of 𝑓, and Δ𝒓 is an arbitrary small displacement vector.
Now, since 𝑓=0 on the surface and remains constant as we move along the surface (i.e., tangentially), the function value does not change, so Δ𝑓=0. Therefore:
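∇𝑓 · Δ𝒓 = 0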
This means that the gradient vector is perpendicular to any tangent vector Δ𝒓 on the surface. In other words, the gradient vector ∇𝑓 points in the direction of the surface normal.
Thus, the gradient of a signed distance function gives the surface normal direction at any point on the surface.
2.5. Visualizing Normals with Color
To verify that the surface normals are being calculated correctly, we can visualize them using color.
if ( dist < EPS ) {
vec3 normal = generateNormal(ray);
color = normal;
}
Note that within the if block, ray refers to a point on the surface of the object. So by passing ray to generateNormal, we can obtain the surface normal at the point of intersection.
When we render the scene, you’ll notice that the surface of the sphere is shaded in red, green, and blue based on the orientation of the normal vectors. This is because we’re mapping the 𝑥, 𝑦, and 𝑧 components of the normal vector to the RGB color channels respectively.
This is a common and intuitive way to debug normal vectors visually, helping us ensure they are computed correctly.
When combining two spheres with the standard min() function, a hard edge forms where the shapes intersect, resulting in an unnatural boundary. To avoid this, we can use a blending function called smoothMin, which softens the transition by merging the distance values smoothly.
This function creates a smooth, continuous connection between shapes—producing a metaball-like effect where the forms appear to merge organically.
The parameter k controls the smoothness of the blend. A higher k value results in a sharper transition (closer to min()), while a lower k produces smoother, more gradual merging.
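One common exponential formulation that behaves this way (a sketch; the demo’s exact variant may differ slightly):
float smoothMin(float d1, float d2, float k) {
  float h = exp(-k * d1) + exp(-k * d2);
  return -log(h) / k;
}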
For more details, please refer to the following two articles:
So far, we’ve covered how to calculate normals and how to smoothly blend objects.
Next, let’s tune the surface appearance to make things feel more realistic.
In this demo, we’re aiming to create droplet-like metaballs. So how can we achieve that kind of look? The key idea here is to use noise to distort the surface.
To create the droplet-like texture, we’re using value noise. If you’re unfamiliar with these noise techniques, the following articles provide helpful explanations:
3D value noise is generated by interpolating random values placed at the eight vertices of a cube. The process involves three stages of linear interpolation:
Bottom face interpolation: First, we interpolate between the four corner values on the bottom face of the cube
Top face interpolation: Similarly, we interpolate between the four corner values on the top face
Final z-axis interpolation: Finally, we interpolate between the results from the bottom and top faces along the z-axis
This triple interpolation process is called trilinear interpolation.
The following code demonstrates the trilinear interpolation process for 3D value noise:
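(A sketch; the hash function below is a common placeholder, not necessarily the demo’s exact one.)
float hash(vec3 p) {
  return fract(sin(dot(p, vec3(127.1, 311.7, 74.7))) * 43758.5453123);
}

float valueNoise(vec3 p) {
  vec3 i = floor(p);
  vec3 f = fract(p);
  vec3 u = f * f * (3.0 - 2.0 * f); // smoothstep-style fade

  // Random values at the eight cube corners
  float c000 = hash(i + vec3(0.0, 0.0, 0.0));
  float c100 = hash(i + vec3(1.0, 0.0, 0.0));
  float c010 = hash(i + vec3(0.0, 1.0, 0.0));
  float c110 = hash(i + vec3(1.0, 1.0, 0.0));
  float c001 = hash(i + vec3(0.0, 0.0, 1.0));
  float c101 = hash(i + vec3(1.0, 0.0, 1.0));
  float c011 = hash(i + vec3(0.0, 1.0, 1.0));
  float c111 = hash(i + vec3(1.0, 1.0, 1.0));

  // Bottom face, then top face, then blend along z (trilinear interpolation)
  float bottom = mix(mix(c000, c100, u.x), mix(c010, c110, u.x), u.y);
  float top = mix(mix(c001, c101, u.x), mix(c011, c111, u.x), u.y);
  return mix(bottom, top, u.z);
}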
By sampling this noise using the reflection vector as coordinates, we can create a realistic water droplet-like texture. Note that we are using the surface normal obtained earlier to compute this reflection vector. To add time-based variation, we generate noise at positions offset by uTime:
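In shader terms, that could look something like this (a sketch; the variable names and scale factors are assumptions):
vec3 refVec = reflect(rayDirection, normal);
float n = valueNoise(refVec * 2.0 + uTime);
color = vec3(n);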
It’s starting to look quite like a water droplet! However, it still appears a bit murky. To improve this, let’s add the following post-processing step:
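The demo’s exact adjustment isn’t shown here; a simple contrast-style tweak along these lines is one way to cut through the murkiness:
// hypothetical example: boost contrast so the mid-grays don't wash out the image
color = pow(color, vec3(3.0));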
Rendering text in WebGL opens up a whole new world of unique visual effects and creative possibilities that are often
impossible to achieve with traditional HTML, CSS and JavaScript alone. By the end of this tutorial, we’ll have created
WebGL-rendered text elements that perfectly mimic the underlying HTML structure. Since we’ll be taking an HTML-first
approach, it will be responsive, SEO-friendly and accessible. We’ll be doing every step manually so you’ll gain a
solid understanding of the principles behind merging HTML and WebGL, and text-specific stuff like how to translate CSS
styles into the 3D world.
We’ll be creating the below demo:
We’ll start off with a standard styled HTML setup. Then we’ll recreate the text elements we have inside a 3D world.
From there, we’ll position, scale and make the text responsive with the 3D space. Next, we’ll replicate the “mask
reveal effect” in WebGL. And finally, we’ll apply some scroll-driven post processing effects to the scene.
Below are the core steps we’ll follow to achieve the final result:
Create the text as a HTML element and style it regularly using CSS
Create a 3D world and recreate the text element within it
Merge the 3D and 2D world, so that we can style the 3D text by using our viewport’s dimensions
Sync the key properties like position, size and font — from the HTML element to the WebGL text element
Hide the original HTML element
Be left with only the 3D text, fully styled and positioned in sync with the hidden HTML structure
Apply animations and post-processing to enhance our 3D scene
Necessities and Prerequisites
We’ll be using the Three.js library to create the 3D world, so you should already be familiar with its basics. For the
creation of text meshes, we’ll be using the troika-three-text
library, but you don’t have to be familiar with the library beforehand. If you’ve used HTML, CSS and JavaScript and know the basics of Three.js, you’re good to go.
Let’s get started.
1. Creating the Regular HTML and Making it Responsive
Before diving into the WebGL and Three.js implementation, we first need to create the HTML structure that we’ll later
mimic in the 3D world. I’ve set up a very simple page with some quick responsive content — you can find the setup content
in the demo repository under index.html and styles.css.
HTML:
<div class="content">
<div class="container">
<section class="section__heading">
<h3 data-animation="webgl-text" class="text__2">THREE.JS</h3>
<h2 data-animation="webgl-text" class="text__1">
RESPONSIVE AND ACCESSIBLE TEXT
</h2>
</section>
<section class="section__main__content">
<p data-animation="webgl-text" class="text__2">
THIS TEXT IS STYLED TO LOOK LIKE A TYPICAL BLOCK OF TEXT ON A STANDARD
WEBSITE. BUT UNDER THE SURFACE, IT'S BEING RENDERED WITH WEBGL INSTEAD
OF TRADITIONAL HTML.
</p>
<p data-animation="webgl-text" class="text__2">
THIS OPENS THE DOOR TO CUSTOM SHADER EFFECTS AND INTERACTIONS THAT GO
BEYOND WHAT'S POSSIBLE WITH TRADITIONAL HTML.
</p>
<p data-animation="webgl-text" class="text__2">
WE KEEP THE UNDERLYING HTML STRUCTURE PRESENT IN THE DOM. RATHER THAN
CREATING MESHES DIRECTLY IN THREE.JS, THE SCENE IS BUILT BY READING FROM
THE EXISTING HTML CONTENT. THIS WAY, SCREEN READERS, SEARCH ENGINES, AND
OTHER TOOLS CAN STILL INTERPRET THE PAGE AS EXPECTED.
</p>
</section>
<section class="section__footer">
<p data-animation="webgl-text" class="text__3">
NOW GO CRAZY WITH THE SHADERS :)
</p>
</section>
</div>
</div>
The <canvas> element is set to cover the entire viewport, fixed in place behind the main content, so a fullscreen canvas sits behind the page at all times.
All text elements intended for WebGL rendering are marked with data-animation="webgl-text" for clarity and easy
selection when we begin scripting.
The purpose of this setup is to function as the “placeholder” that we can mimic in our 3D implementation. So, it’s
important to position and style your text at this stage
to ensure it matches the final sizing and positioning that you want to achieve. All text formatting properties like
font-size, letter-spacing, line-height etc. are the properties you want to focus on, because we’ll later read these
computed styles directly from the DOM during the WebGL phase. Color is optional here, as we can handle text coloring
later with shaders inside WebGL.
That’s it for the HTML and CSS setup! It’s all we need for the foundation to move onto our JavaScript and WebGL
implementation.
2. Initial 3D World Setup
Let’s move onto the JavaScript and WebGL implementation. I’ll be using TypeScript, but you can easily follow along
with vanilla JavaScript if you prefer. I’m assuming you’re already familiar with the basics of Three.js, so I’ll focus
on explaining the high-level setup rather than covering every detail.
Below is the starter TypeScript and Three.JS base that I’ll be using for this demo.
// main.ts
import Commons from "./classes/Commons";
import * as THREE from "three";
/**
* Main entry-point.
* Creates Commons and Scenes
* Starts the update loop
* Eventually creates Postprocessing and Texts.
*/
class App {
private commons!: Commons;
scene!: THREE.Scene;
constructor() {
document.addEventListener("DOMContentLoaded", async () => {
await document.fonts.ready; // Important to wait for fonts to load when animating any texts.
this.commons = Commons.getInstance();
this.commons.init();
this.createScene();
this.addEventListeners();
this.update();
});
}
private createScene() {
this.scene = new THREE.Scene();
}
/**
* The main loop handler of the App
* The update function to be called on each frame of the browser.
* Calls update on all other parts of the app
*/
private update() {
this.commons.update();
this.commons.renderer.render(this.scene, this.commons.camera);
window.requestAnimationFrame(this.update.bind(this));
}
private addEventListeners() {
window.addEventListener("resize", this.onResize.bind(this));
}
private onResize() {
this.commons.onResize();
}
}
export default new App();
// Commons.ts
import { PerspectiveCamera, WebGLRenderer, Clock } from "three";
import Lenis from "lenis";
export interface Screen {
width: number;
height: number;
aspect: number;
}
export interface Sizes {
screen: Screen;
pixelRatio: number
}
/**
* Singleton class for Common stuff.
* Camera
* Renderer
* Lenis
* Time
*/
export default class Commons {
private constructor() {}
private static instance: Commons;
lenis!: Lenis;
camera!: PerspectiveCamera;
renderer!: WebGLRenderer;
private time: Clock = new Clock();
elapsedTime!: number;
sizes: Sizes = {
screen: {
width: window.innerWidth,
height: window.innerHeight,
aspect: window.innerWidth / window.innerHeight,
},
pixelRatio: this.getPixelRatio(),
};
private distanceFromCamera: number = 1000;
/**
* Function to be called to either create Commons Singleton instance, or to return existing one.
* TODO AFTER: Call instances init() function.
* @returns Commons Singleton Instance.
*/
static getInstance() {
if (this.instance) return this.instance;
this.instance = new Commons();
return this.instance;
}
/**
* Initializes all-things Commons. To be called after instance is set.
*/
init() {
this.createLenis();
this.createCamera();
this.createRenderer();
}
/**
* Creating Lenis instance.
* Sets autoRaf to true so we don't have to manually update Lenis on every frame.
* Resets possible saved scroll position.
*/
private createLenis() {
this.lenis = new Lenis({ autoRaf: true, duration: 2 });
}
private createCamera() {
this.camera = new PerspectiveCamera(
70,
this.sizes.screen.aspect,
200,
2000
);
this.camera.position.z = this.distanceFromCamera;
this.camera.updateProjectionMatrix();
}
/**
* createRenderer(): Creates the common WebGLRenderer to be used.
*/
private createRenderer() {
this.renderer = new WebGLRenderer({
alpha: true, // Sets scene background to transparent, so our body background defines the background color
});
this.renderer.setSize(this.sizes.screen.width, this.sizes.screen.height);
this.renderer.setPixelRatio(this.sizes.pixelRatio);
// Creating canvas element and appending to body element.
document.body.appendChild(this.renderer.domElement);
}
/**
* Single source of truth to get pixelRatio.
*/
getPixelRatio() {
return Math.min(window.devicePixelRatio, 2);
}
/**
* Resize handler function is called from the entry-point (main.ts)
* Updates the Common screen dimensions.
* Updates the renderer.
* Updates the camera.
*/
onResize() {
this.sizes.screen = {
width: window.innerWidth,
height: window.innerHeight,
aspect: window.innerWidth / window.innerHeight,
};
this.sizes.pixelRatio = this.getPixelRatio();
this.renderer.setSize(this.sizes.screen.width, this.sizes.screen.height);
this.renderer.setPixelRatio(this.sizes.pixelRatio);
this.onResizeCamera();
}
/**
* Handler function that is called from onResize handler.
* Updates the perspective camera with the new adjusted screen dimensions
*/
private onResizeCamera() {
this.camera.aspect = this.sizes.screen.aspect;
this.camera.updateProjectionMatrix();
}
/**
* Update function to be called from entry-point (main.ts)
*/
update() {
this.elapsedTime = this.time.getElapsedTime();
}
}
A Note About Smooth Scroll
When syncing HTML and WebGL worlds, you should use a custom scroll. This is because the native browser scroll updates the scroll position at irregular intervals and thus doesn’t guarantee frame-perfect updates with our requestAnimationFrame loop and our WebGL world, causing jittery, unsynchronized movement.
By integrating a custom scroll (Lenis in this case), we ensure our scroll updates perfectly match the frame updates of
our WebGL world.
Right now we are seeing an empty 3D world, continuously being rendered.
We’re only missing one thing to get something visible in our 3D world: the creation of the text elements. So let’s
move onto creating our WebGLText class next.
3. Creating WebGLText Class and Texts Meshes
For the creation of the text meshes, we’ll be using troika-three-text
library.
npm i troika-three-text
We’ll now create a reusable WebGLText class. This will handle turning each HTML element into a 3D text mesh, using Troika and our Three.js scene.
Here’s the basic setup:
// WebGLText.ts
import Commons from "./Commons";
import * as THREE from "three";
// @ts-ignore
import { Text } from "troika-three-text";
interface Props {
scene: THREE.Scene;
element: HTMLElement;
}
export default class WebGLText {
commons: Commons;
scene: THREE.Scene;
element: HTMLElement;
computedStyle: CSSStyleDeclaration;
font!: string; // Path to our .ttf font file.
bounds!: DOMRect;
color!: THREE.Color;
material!: THREE.ShaderMaterial;
mesh!: Text;
// We assign the correct font based on our element's font weight from here
weightToFontMap: Record<string, string> = {
"900": "/fonts/Humane-Black.ttf",
"800": "/fonts/Humane-ExtraBold.ttf",
"700": "/fonts/Humane-Bold.ttf",
"600": "/fonts/Humane-SemiBold.ttf",
"500": "/fonts/Humane-Medium.ttf",
"400": "/fonts/Humane-Regular.ttf",
"300": "/fonts/Humane-Light.ttf",
"200": "/fonts/Humane-ExtraLight.ttf",
"100": "/fonts/Humane-Thin.ttf",
};
private y: number = 0; // Scroll-adjusted bounds.top
private isVisible: boolean = false;
constructor({ scene, element }: Props) {
this.commons = Commons.getInstance();
this.scene = scene;
this.element = element;
this.computedStyle = window.getComputedStyle(this.element); // Saving initial computed style.
}
}
We have access to the Text class from Troika, which allows us to create text meshes and apply styling to them using familiar properties like fontSize, letterSpacing, and font. I’ll cover everything you need to style your text responsively in this tutorial, but I implore you to take a look at the full documentation and its possibilities here.
Troika doesn’t ship with TypeScript definitions out of the box, so if you’re using TS, you can quickly get around this
by creating a type declaration file in the root of your project. It’s not pretty, but it gets the job done while
keeping TypeScript happy.
Let’s start by creating new methods called createFont(), createColor() and createMesh().
createFont()
: Selects the appropriate .ttf file based on the DOM element’s font-weight. If a match isn’t found, we fall back to
the regular weight. Adjust the mapping to match your own font files and multiple font families if needed.
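A sketch of createFont(), assuming the weight-to-file map above:
// WebGLText.ts
private createFont() {
  this.font =
    this.weightToFontMap[this.computedStyle.fontWeight] ||
    "/fonts/Humane-Regular.ttf";
}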
createColor()
: Converts the computed CSS color into a THREE.Color instance:
// WebGLText.ts
private createColor() {
this.color = new THREE.Color(this.computedStyle.color);
}
createMesh():
Instantiates the text mesh and sets some basic properties. Copies the text’s inner text and sets it onto the mesh.
Adds the mesh to our Three.JS scene. We anchor the text from the left-center to match typical HTML layout
expectations.
// WebGLText.ts
private createMesh() {
this.mesh = new Text();
this.mesh.text = this.element.innerText; // Copying HTML content over to the mesh
this.mesh.font = this.font;
// Anchor the text to the left-center (instead of center-center)
this.mesh.anchorX = "0%";
this.mesh.anchorY = "50%";
this.mesh.color = this.color;
this.scene.add(this.mesh);
}
⚠️ When copying text contents over to the Mesh, avoid using innerHTML or textContent and use innerText instead as it
gives the most layout-accurate and consistent results.
setStaticValues(): Let’s also create a small setStaticValues() method, which will set the critical properties of our text mesh based on the computedStyle.
For now, it sets values like the font size based on the computed CSS. We’ll expand this as we sync more styles down the line.
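A minimal starting point (sketch):
// WebGLText.ts
private setStaticValues() {
  const { fontSize } = this.computedStyle;
  this.mesh.fontSize = parseFloat(fontSize);
}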
We want to call all these methods in the constructor like this:
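// WebGLText.ts
// Inside the constructor (a sketch; bounds and the custom material are added in later sections)
this.createFont();
this.createColor();
this.createMesh();
this.setStaticValues();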
Finally, let’s update our App class (main.ts), and hook this all up by scanning for DOM elements with a
data-animation="webgl-text" attribute — creating a WebGLText instance for each one:
// main.ts
texts!: Array<WebGLText>;
// ...
private createWebGLTexts() {
const texts = document.querySelectorAll('[data-animation="webgl-text"]');
if (texts) {
this.texts = Array.from(texts).map((el) => {
const newEl = new WebGLText({
element: el as HTMLElement,
scene: this.scene,
});
return newEl;
});
}
}
Make sure to call this method in the constructor on initialization. This will populate our scene with styled text
meshes based on our DOM content.
That’s all we need to have our text meshes visible. It’s not the prettiest sight to behold, but at least we’ve got everything working:
Next Challenge: Screen vs. 3D Space Mismatch
Even though we copy the font size directly from the DOM, the scale looks different in 3D. That’s because WebGL units don’t map 1:1 with screen pixels
, and they operate in different coordinate systems. This mismatch will become even more obvious if we start
positioning and animating elements.
To get true visual parity between our DOM elements and WebGL text, we need to bridge the gap between screen space and
3D space. Let’s tackle that next so our text sizes and positions actually match with what we see in the browser.
4. Syncing Dimensions
The major problem when syncing HTML and WebGL dimensions is that things between them aren’t exactly pixel-perfect.
This is because the DOM and WebGL don’t “speak the same units” by default.
Web browsers work in screen pixels.
WebGL uses arbitrary units
Our goal is simple:
💡 Make one unit in the WebGL scene equal one pixel on the screen.
To achieve this, we’ll adjust the camera’s field of view (FOV) so that visible area through the camera exactly matches
the dimensions of the browser window in pixels.
So, we’ll create a syncDimensions()
function under our Commons class, which calculates our camera’s field of view such that 1 unit in the WebGL scene
corresponds to 1 pixel on the screen — at a given distance from the camera.
// Commons.ts
/**
* Helper function that is called upon creation and resize
* Updates the camera's fov according to the new dimensions such that the window's pixels match with that of WebGL scene
*/
private syncDimensions() {
this.camera.fov =
2 *
Math.atan(this.sizes.screen.height / 2 / this.distanceFromCamera) *
(180 / Math.PI);
}
This function will be called once when we create the camera, and every time that the screen is resized.
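As a sketch, the wiring could look like this, extending the createCamera() and onResizeCamera() methods shown earlier:
// Commons.ts
private createCamera() {
  this.camera = new PerspectiveCamera(70, this.sizes.screen.aspect, 200, 2000);
  this.camera.position.z = this.distanceFromCamera;
  this.syncDimensions(); // 1 WebGL unit = 1 screen pixel at this distance
  this.camera.updateProjectionMatrix();
}

private onResizeCamera() {
  this.camera.aspect = this.sizes.screen.aspect;
  this.syncDimensions();
  this.camera.updateProjectionMatrix();
}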
Let’s break down what’s actually going on here using the image below:
We know:
The height of the screen
The distance from camera (Z)
The FOV of the camera is the vertical angle (fov y in the image)
So our main goal is to set how wide (vertical angle) we see according to our screen height.
Because the distance from the camera (Z) and half of the screen height form a right triangle, we can solve for the angle using some basic trigonometry and compute the FOV using the inverse tangent (atan) of this triangle.
Step-by-step Breakdown of the Formula
this.sizes.screen.height / 2
→ This gives us half the screen’s pixel height — the opposite side of our triangle.
this.distanceFromCamera
→ This is the adjacent side of the triangle — the distance from the camera to the 3D scene.
Math.atan(opposite / adjacent)
→ Calculates half of the vertical FOV (in radians).
*2
→ Since atan only gives half of the angle, we multiply it by 2 to get the full FOV.
* (180 / Math.PI)
→ Converts the angle from radians to degrees (Three.js expects degrees for PerspectiveCamera’s fov)
That’s all we need to sync our dimensions, and this setup ensures that 1 unit in WebGL = 1 pixel on screen.
Let’s move back to the text implementation.
5. Setting Text Properties and Positioning
Now that we’ve synced the WebGL scene to our screen’s pixel dimensions, we can start mapping HTML styles to our 3D
text.
If everything’s wired up correctly, you should see that the font size of the WebGL-rendered text matches the size of
the underlying HTML, although the positioning is still off.
Let’s sync more styling properties and positioning.
Before we can position the 3D text, we need to get the DOM element’s position and size. We’ll create a new method in
the WebGLText class called createBounds(),
and use the browser’s built-in getBoundingClientRect() method:
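A sketch of createBounds(), storing both the bounding box and the scroll-adjusted top position (the y property declared earlier):
// WebGLText.ts
private createBounds() {
  this.bounds = this.element.getBoundingClientRect();
  this.y = this.bounds.top + window.scrollY; // document-space top, independent of the current scroll
}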
Next, we’ll pull important typographic properties from the DOM (computed style) and pass them to the 3D mesh, so that
it behaves like our native HTML text. (Again, you can see the full documentation and possible properties of troika here
). Below I’ve included the most important ones.
// WebGLText.ts
private setStaticValues() {
const { fontSize, letterSpacing, lineHeight, whiteSpace, textAlign } =
this.computedStyle;
const fontSizeNum = window.parseFloat(fontSize);
this.mesh.fontSize = fontSizeNum;
this.mesh.textAlign = textAlign;
// Troika defines letter spacing in em's, so we convert to them
this.mesh.letterSpacing = parseFloat(letterSpacing) / fontSizeNum;
// Same with line height
this.mesh.lineHeight = parseFloat(lineHeight) / fontSizeNum;
// Important to define maxWidth for the mesh, so that our text doesn't overflow
this.mesh.maxWidth = this.bounds.width;
// Match whiteSpace behavior (e.g., 'pre', 'nowrap')
this.mesh.whiteSpace = whiteSpace;
}
Troika accepts some of the properties in local em units, so we have to convert pixels into em’s by dividing the pixel
values by the font size.
Also, it’s important to set a maximum width (in pixels) to constrain the mesh’s layout — this prevents text from
overflowing and ensures proper text wrapping.
And finally, let’s create an update()
function to be called on each frame that consistently positions our mesh according to the underlying DOM position.
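A sketch of that update(), assuming 1 WebGL unit = 1 pixel and the scroll-adjusted y from the createBounds() sketch above (Lenis exposes the current scroll position as lenis.scroll):
// WebGLText.ts
update() {
  const scroll = this.commons.lenis.scroll;

  // Horizontal: convert the DOM's left edge into WebGL's centered coordinate system
  this.mesh.position.x = this.bounds.left - this.commons.sizes.screen.width / 2;

  // Vertical: flip the y-axis and account for the current scroll position
  this.mesh.position.y =
    scroll - this.y + this.commons.sizes.screen.height / 2 - this.bounds.height / 2;
}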
And now, the texts will perfectly follow DOM counterparts
, even as the user scrolls.
Let’s finalize our base text class implementation before diving into effects:
Resizing
We need to ensure that our WebGL text updates correctly on window resize events. This means recreating the computedStyle, bounds, and static values
whenever the window size changes.
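A sketch of that resize handler:
// WebGLText.ts
onResize() {
  this.computedStyle = window.getComputedStyle(this.element);
  this.createBounds();
  this.setStaticValues();
}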
Once everything is working responsively and perfectly synced with the DOM, we can finally hide the original HTML text by setting it transparent
— but we’ll keep it in place so it’s still selectable and accessible to the user.
// WebGLText.ts
this.createFont();
this.createColor();
this.createBounds();
this.createMesh();
this.setStaticValues();
this.element.style.color = "transparent"; // Hide DOM element
We should now have our perfectly responsive text meshes, and the user only sees the rendered WebGL text, while the DOM
element remains fully intact for accessibility.
Let’s add some effects!
6. Adding a Custom Shader and Replicating Mask Reveal Animations
Troika also lets us use custom shader materials for meshes, giving us the flexibility to create complex effects beyond
just setting colors.
The vertex shader passes the texture coordinates (uv) to the fragment shader for the text rendering.
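Before any animation is added, the vertex shader can be a minimal pass-through; something along these lines, matching the final version shown later in this section:
// text.vert
varying vec2 vUv;

void main() {
  vUv = uv;
  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}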
Shader File Imports using Vite
To handle shader files more easily, we can use the vite-plugin-glsl
plugin together with Vite to directly import shader files like .frag and .vert in code:
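A minimal config sketch:
// vite.config.ts
import { defineConfig } from "vite";
import glsl from "vite-plugin-glsl";

export default defineConfig({
  plugins: [glsl()],
});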
Let’s now create our custom ShaderMaterial and apply it to our mesh:
// WebGLText.ts
// Importing shaders
import fragmentShader from "../../shaders/text/text.frag";
import vertexShader from "../../shaders/text/text.vert";
//...
this.createFont();
this.createColor();
this.createBounds();
this.createMaterial(); // Creating material
this.createMesh();
this.setStaticValues();
//...
private createMaterial() {
this.material = new THREE.ShaderMaterial({
fragmentShader,
vertexShader,
uniforms: {
uColor: new THREE.Uniform(this.color), // Passing our color to the shader
},
});
}
In the createMaterial()
method, we define the ShaderMaterial
using the imported shaders and pass in the uColor uniform, which allows us to dynamically control the color of the
text based on our DOM-element.
And now, instead of setting the color directly on the default mesh material, we apply our new custom material:
// WebGLText.ts
private createMesh() {
this.mesh = new Text();
this.mesh.text = this.element.innerText; // Always use innerText (not innerHTML or textContent).
this.mesh.font = this.font;
this.mesh.anchorX = "0%";
this.mesh.anchorY = "50%";
this.mesh.material = this.material; //Using custom material instead of color
}
At this point, we are using our custom shader material, but so far, nothing in our output has changed. Let’s now setup
show and hide animations using our custom shader, and replicate the mask reveal effect.
Setting up Reveal Animations
We’ll create an animation that uses a progress uniform (uProgress) to control the visibility and reveal progress of
the text. The animation will be controlled using the motion library.
First, we must install motion
and import its animate
and inView
functions to our WebGLText class.
npm i motion
// WebGLText.ts
import { inView, animate } from "motion";
Now, let’s configure our class so that when the text steps into view, the show() function is called
, and when it steps away, the hide() function is called
. These methods also control the current visibility variable this.isVisible
. These functions will control the uProgress variable, and animate it between 0 and 1.
For this, we also must setup an addEventListeners() function:
// WebGLText.ts
/**
* Inits visibility tracking using motion's inView function.
* Show is called when the element steps into view, and hide is called when the element steps out of view
*/
private addEventListeners() {
inView(this.element, () => {
this.show();
return () => this.hide();
});
}
show() {
this.isVisible = true;
animate(
this.material.uniforms.uProgress,
{ value: 1 },
{ duration: 1.8, ease: [0.25, 1, 0.5, 1] }
);
}
hide() {
animate(
this.material.uniforms.uProgress,
{ value: 0 },
{ duration: 1.8, onComplete: () => (this.isVisible = false) }
);
}
Just make sure to call addEventListeners() in your constructor after setting up the class.
Updating the Shader Material for Animation
We’ll also add two additional uniform variables in our material for the animations:
uProgress
: Controls the reveal progress (from 0 to 1).
uHeight
: Used by the vertex shader to calculate vertical position offset.
Updated createMaterial()
method:
// WebGLText.ts
private createMaterial() {
this.material = new THREE.ShaderMaterial({
fragmentShader,
vertexShader,
uniforms: {
uProgress: new THREE.Uniform(0),
uHeight: new THREE.Uniform(this.bounds.height),
uColor: new THREE.Uniform(this.color),
},
});
}
Since the uHeight is dependent on bounds, we also want to update the uniform variable upon resizing:
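// WebGLText.ts
// Inside onResize(), after createBounds() has refreshed the bounds (sketch)
this.material.uniforms.uHeight.value = this.bounds.height;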
We now have the text class instance automatically calling show() and hide(), and animating the uProgress according to
the visibility of our underlying DOM-element.
For performance, you might want to update the update() method to only calculate a new position when the mesh is
visible:
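// WebGLText.ts
// Sketch: skip the position math entirely while the text is out of view
update() {
  if (!this.isVisible) return;

  // ...position syncing as before
}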
Creating a mask reveal effect with custom shaders in WebGL is surprisingly simple when we break it down into two
separate movements: one happening in the fragment shader and the other in the vertex shader. You might’ve seen this
effect happen in WebGL on the page of Zajno, for example.
Instead of overcomplicating the concept with complex masks or thinking about “lifting it up” behind a window (as we do
in traditional HTML), we can think of it as two distinct actions that work together.
Fragment Shader
: We clip the text vertically, revealing it gradually from top to bottom.
Vertex Shader
: We translate the text’s position from the bottom to the top by its height.
Together these two movements create the illusion of the text lifting itself up from behind a mask.
Let’s update our fragment shader code:
//text.frag
uniform float uProgress; // Our progress value between 0 and 1
uniform vec3 uColor;
varying vec2 vUv;
void main() {
// Calculate the reveal threshold (bottom to top reveal)
float reveal = 1.0 - vUv.y;
// Discard fragments above the reveal threshold based on progress
if (reveal > uProgress) discard;
// Apply the color to the visible parts of the text
gl_FragColor = vec4(uColor, 1.0);
}
When uProgress is 0, the mesh is fully clipped out, and nothing is visible
When uProgress increases towards 1, the mesh reveals itself from top to bottom.
For the vertex shader, we can simply pass the new uniform called uHeight, which stands for the height of our
DOM-element (this.bounds.height), and translate the output vertically according to it and uProgress.
//text.vert
uniform float uProgress;
uniform float uHeight; // Total height of the mesh passed in from JS
varying vec2 vUv;
void main() {
vUv = uv;
vec3 transformedPosition = position;
// Push the mesh upward as it reveals
transformedPosition.y -= uHeight * (1.0 - uProgress);
gl_Position = projectionMatrix * modelViewMatrix * vec4(transformedPosition, 1.0);
}
uHeight
: Total height of the DOM-element (and mesh), passed in from JS.
When uProgress
is 0
, the mesh is fully pushed down.
As uProgress
reaches 1
, it resolves to its natural position.
Now, we should have a beautifully on-scroll animating scene, where the texts reveal themselves as in regular HTML when
they scroll into view.
To spice things up, let’s add some scroll-velocity based post processing effects to our scene as the final step!
7. Adding Post-processing
Now that we’ve built our animated WebGL text with custom shaders and scroll-triggered reveals, we can push the visuals
further with post-processing.
Post-processing allows us to apply full-screen visual effects after the scene has been rendered. This is done by
passing the final image through a series of custom shader passes.
So, in this final section, we’ll:
Set up a PostProcessing class using Three.js’s EffectComposer
Add a custom RGB shift and wave distortion effect
Drive the distortion strength dynamically using the scroll velocity from our Lenis custom scroll instance
Creating a PostProcessing class with EffectComposer
Let’s create a PostProcessing class that will be intialized from our entry-point, and which will handle everything
regarding postprocessing using Three.JS’s EffectComposer. Read more about the EffectComposer class here from Three.js’s documentation
. We’ll also create new fragment and vertex shaders for the postprocessing class to use.
// PostProcessing.ts
import {
EffectComposer,
RenderPass,
ShaderPass,
} from "three/examples/jsm/Addons.js";
import Commons from "./Commons";
import * as THREE from "three";
// Importing postprocessing shaders
import fragmentShader from "../../shaders/postprocessing/postprocessing.frag";
import vertexShader from "../../shaders/postprocessing/postprocessing.vert";
interface Props {
scene: THREE.Scene;
}
export default class PostProcessing {
// Scene and utility references
private commons: Commons;
private scene: THREE.Scene;
private composer!: EffectComposer;
private renderPass!: RenderPass;
private shiftPass!: ShaderPass;
constructor({ scene }: Props) {
this.commons = Commons.getInstance();
this.scene = scene;
this.createComposer();
this.createPasses();
}
private createComposer() {
this.composer = new EffectComposer(this.commons.renderer);
this.composer.setPixelRatio(this.commons.sizes.pixelRatio);
this.composer.setSize(
this.commons.sizes.screen.width,
this.commons.sizes.screen.height
);
}
private createPasses() {
// Creating Render Pass (final output) first.
this.renderPass = new RenderPass(this.scene, this.commons.camera);
this.composer.addPass(this.renderPass);
// Creating Post-processing shader for wave and RGB-shift effect.
const shiftShader = {
uniforms: {
tDiffuse: { value: null }, // Default input from previous pass
uVelocity: { value: 0 }, // Scroll velocity input
uTime: { value: 0 }, // Elapsed time for animated distortion
},
vertexShader,
fragmentShader,
};
this.shiftPass = new ShaderPass(shiftShader);
this.composer.addPass(this.shiftPass);
}
/**
* Resize handler for EffectComposer, called from entry-point.
*/
onResize() {
this.composer.setPixelRatio(this.commons.sizes.pixelRatio);
this.composer.setSize(
this.commons.sizes.screen.width,
this.commons.sizes.screen.height
);
}
update() {
this.shiftPass.uniforms.uTime.value = this.commons.elapsedTime;
this.composer.render();
}
}
Since we don’t have our postprocessing shaders created yet, make sure you create placeholder postprocessing.frag and
postprocessing.vert shaders so the imports don’t fail.
Constructor:
Initializes the class by storing the provided scene, grabbing the shared Commons instance, and then calling createComposer()
and createPasses()
.
createComposer():
Sets up the EffectComposer with the correct pixel ratio and canvas size:
EffectComposer wraps the WebGL renderer and allows chaining of multiple render passes.
Sized according to current viewport dimensions and pixel ratio
createPasses():
This method sets up all rendering passes applied to the scene.
RenderPass
: The first pass that simply renders the scene with the main camera as regular.
ShaderPass (shiftPass)
: A custom full-screen shader pass that we’ll create and which will create the RGB shift and wavy distortion
effects.
update():
Method called on every frame. Updates the uTime uniform so we can animate effects over time, and renders the final
post-processed image using composer.render()
Initializing Post-processing
To wire the post-processing system into our existing app, we update our main.ts:
//main.ts
private postProcessing!: PostProcessing;
//....
constructor() {
document.addEventListener("DOMContentLoaded", async () => {
await document.fonts.ready;
this.commons = Commons.getInstance();
this.commons.init();
this.createScene();
this.createWebGLTexts();
this.createPostProcessing(); // Creating post-processing
this.addEventListeners();
this.update();
});
}
// ...
private createPostProcessing() {
this.postProcessing = new PostProcessing({ scene: this.scene });
}
// ...
private update() {
this.commons.update();
if (this.texts) {
this.texts.forEach((el) => el.update());
}
// Don't need line below as we're rendering everything using EffectComposer.
// this.commons.renderer.render(this.scene, this.commons.camera);
this.postProcessing.update(); // Post-processing class handles rendering of output from now on
window.requestAnimationFrame(this.update.bind(this));
}
private onResize() {
this.commons.onResize();
if (this.texts) {
this.texts.forEach((el) => el.onResize());
}
this.postProcessing.onResize(); // Resize post-processing
}
So in the new update() function, instead of rendering directly from there, we now hand off rendering responsibility to
the PostProcessing class.
Creating Post-processing Shader and Wiring Scroll Velocity
We want to modify the PostProcessing class further, so that we update the postprocessing fragment shader with the
current scroll velocity from Lenis.
For this, I’m adding two new properties, lerpedVelocity and lerpFactor, which control the smoothed-out velocity. The raw velocity values from Lenis can be spiky and sudden, especially with fast scrolling or scroll jumps. If we pass that raw value directly into a shader, it can cause a really jittery output.
private lerpedVelocity = 0; // Smoothed scroll velocity for post-processing.
private lerpFactor = 0.05; // Controls how quickly lerpedVelocity follows the real velocity
// ...
update() {
this.shiftPass.uniforms.uTime.value = this.commons.elapsedTime;
// Reading the current velocity from the Lenis instance.
const targetVelocity = this.commons.lenis.velocity;
// We use the lerped velocity as the actual velocity for the shader, just for a smoother experience.
this.lerpedVelocity +=
(targetVelocity - this.lerpedVelocity) * this.lerpFactor;
this.shiftPass.uniforms.uVelocity.value = this.lerpedVelocity;
this.composer.render();
}
Post-processing Shaders
For the vertex shader, we can keep things default: we simply pass the texture coordinates (uv) to the fragment shader.
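A standard pass-through works here (sketch):
// postprocessing.vert
varying vec2 vUv;

void main() {
  vUv = uv;
  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}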
In the fragment shader, we first distort the UV coordinates with a velocity- and time-driven wave, then offset the red channel slightly based on the velocity, creating the RGB shift effect.
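The exact frequencies and amplitudes below are assumptions, but the distortion boils down to bending the UVs before sampling; this sets up the waveUv coordinates that the RGB-shift lines that follow read from:
// postprocessing.frag (sketch)
uniform sampler2D tDiffuse; // rendered scene from the previous pass
uniform float uVelocity; // smoothed scroll velocity
uniform float uTime;
varying vec2 vUv;

// Inside main(): bend the UVs with a sine wave scaled by the scroll velocity
vec2 waveUv = vUv;
waveUv.x += sin(vUv.y * 10.0 + uTime) * uVelocity * 0.002;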
// Applying the RGB shift to the wave-distorted coordinates
float r = texture2D(tDiffuse, vec2(waveUv.x, waveUv.y + uVelocity * 0.0005)).r;
vec2 gb = texture2D(tDiffuse, waveUv).gb;
gl_FragColor = vec4(r, gb, r);
This will create a subtle color separation in the final image that shifts according to our scroll velocity.
Finally, we combine red, green, blue, and alpha into the output color.
8. Final Result
And there you have it! We’ve created a responsive text scene, with scroll-triggered mask reveal animations and
wavy/rgb shifted post-processing.
This setup provides a solid, modular foundation for building more advanced WebGL text effects. If you’re curious to explore further, consider adding particles, fluid simulations, audio reactivity, or more complex materials and shaders. If you’re interested in breakdowns of any of these, feel free to reach out on X.
I’ve always been interested in finding simple ways to bring more depth into web interfaces, not just through visuals, but through interaction and space.
In this demo, I explored how flat UI cards can become interactive 3D scenes using GLB models, Three.js, and Webflow. Each card starts as a basic layout but reveals a small, self-contained environment built with real-time rendering and subtle motion.
It’s a lightweight approach to adding spatial storytelling to familiar components, using tools many designers already work with.
Welcome to My Creative World
I’m always drawn to visuals that mix the futuristic with the familiar — space-inspired forms, minimal layouts, and everyday elements seen from a different angle.
Most of my projects start this way: by reimagining ordinary ideas through a more immersive or atmospheric lens.
It All Started with a Moodboard
This one began with a simple inspiration board:
From that board, I picked a few of my favorite visuals and ran them through an AI tool that converts images into GLB 3D models.
The results were surprisingly good! Abstract, textured, and full of character.
The Concept: Flat to Deep
When I saw the output from the AI-generated GLB models, I started thinking about how we perceive depth in UI design, not just visually, but interactively.
That led to a simple idea: what if flat cards could reveal a hidden spatial layer? Not through animation alone, but through actual 3D geometry, lighting, and camera movement.
I designed three UI cards, each styled with minimal HTML and CSS in Webflow. On interaction, they load a unique GLB model into a Three.js scene directly within the card container. Each model is lit, framed, and animated to create the feeling of a self-contained 3D space.
Building the Web Experience
The layout was built in Webflow using a simple flexbox structure with three cards inside a wrapper. Each card contains a div that serves as the mounting point for a 3D object.
The GLB models are rendered using Three.js, which is integrated into the project with custom JavaScript. Each scene is initialized and handled separately, giving each card its own interactive 3D space while keeping the layout lightweight and modular.
Scene Design with Blender
Each GLB model was prepared in Blender, where I added a surrounding sphere to create a sense of depth and atmosphere. This simple shape helps simulate background contrast and encloses the object in a self-contained space.
Lighting played an important role, especially with reflective materials like glass or metal. Highlights and soft shadows were used to create that subtle, futuristic glow.
The result is that each 3D model feels like it lives inside its own ambient environment, even when rendered in a small card.
Bringing It Together with Three.js
Once the models were exported from Blender as .glb files, I used Three.js to render them inside each card. Each card container acts as its own 3D scene, initialized through a custom JavaScript function.
The setup involves creating a basic scene with a perspective camera, ambient and directional lighting, and a WebGL renderer. I used GLTFLoader to load each .glb file and OrbitControls to enable subtle rotation. Zooming and panning are disabled to keep the interaction focused and controlled.
Each model is loaded into a separate container, making it modular and easy to manage. The camera is offset slightly for a more dynamic starting view, and the background is kept dark to help the lighting pop.
Here’s the full JavaScript used to load and render the models:
// Import required libraries
import * as THREE from 'three';
import { OrbitControls } from 'three/addons/controls/OrbitControls.js';
import { GLTFLoader } from 'three/addons/loaders/GLTFLoader.js';
import gsap from 'gsap';

/**
 * This function initializes a Three.js scene inside a given container
 * and loads a .glb model into it.
 */
function createScene(containerSelector, glbPath) {
  const container = document.querySelector(containerSelector);

  // 1. Create a scene
  const scene = new THREE.Scene();
  scene.background = new THREE.Color(0x202020); // dark background

  // 2. Set up the camera with perspective
  const camera = new THREE.PerspectiveCamera(
    45, // Field of view
    container.clientWidth / container.clientHeight, // Aspect ratio
    0.1, // Near clipping plane
    100 // Far clipping plane
  );
  camera.position.set(2, 0, 0); // Offset to the side for better viewing

  // 3. Create a renderer and append it to the container
  const renderer = new THREE.WebGLRenderer({ antialias: true });
  renderer.setSize(container.clientWidth, container.clientHeight);
  container.appendChild(renderer.domElement);

  // 4. Add lighting
  const light = new THREE.DirectionalLight(0xffffff, 4);
  light.position.set(30, -10, 20);
  scene.add(light);

  const ambientLight = new THREE.AmbientLight(0x404040); // soft light
  scene.add(ambientLight);

  // 5. Set up OrbitControls to allow rotation
  const controls = new OrbitControls(camera, renderer.domElement);
  controls.enableZoom = false; // no zooming
  controls.enablePan = false; // no dragging
  controls.minPolarAngle = Math.PI / 2; // lock vertical angle
  controls.maxPolarAngle = Math.PI / 2;
  controls.enableDamping = true; // smooth movement

  // 6. Load the GLB model
  const loader = new GLTFLoader();
  loader.load(
    glbPath,
    (gltf) => {
      scene.add(gltf.scene); // Add model to the scene
    },
    (xhr) => {
      console.log(`${containerSelector}: ${(xhr.loaded / xhr.total) * 100}% loaded`);
    },
    (error) => {
      console.error(`Error loading ${glbPath}`, error);
    }
  );

  // 7. Make it responsive
  window.addEventListener("resize", () => {
    camera.aspect = container.clientWidth / container.clientHeight;
    camera.updateProjectionMatrix();
    renderer.setSize(container.clientWidth, container.clientHeight);
  });

  // 8. Animate the scene
  function animate() {
    requestAnimationFrame(animate);
    controls.update(); // updates rotation smoothly
    renderer.render(scene, camera);
  }

  animate(); // start the animation loop
}

// 9. Initialize scenes for each card (replace with your URLs)
createScene(".div", "https://yourdomain.com/models/yourmodel.glb");
createScene(".div2", "https://yourdomain.com/models/yourmodel2.glb");
createScene(".div3", "https://yourdomain.com/models/yourmodel3.glb");
This script is added via a <script type="module"> tag, either in the Webflow page settings or as an embedded code block. Each call to createScene() initializes a new card, linking it to its corresponding .glb file.
How This Works in Practice
In Webflow, create three containers with the classes .div, .div2, and .div3. Each one will act as a canvas for a different 3D scene.
Embed the JavaScript module shown above by placing it just before the closing </body> tag in your Webflow project, or by using an Embed block with <script type="module">.
Once the page loads, each container initializes its own Three.js scene and loads the corresponding GLB model. The result: flat UI cards become interactive, scrollable 3D objects — all directly inside Webflow.
This approach is lightweight, clean, and performance-conscious, while still giving you the flexibility to work with real 3D content.
Important Note for Webflow Users
This setup works in Webflow, but only if you structure it correctly.
To make it work, you’ll need to:
Host your Three.js code externally using a bundler like Vite, Parcel, or Webpack
Or bundle the JavaScript manually and embed it as a <script type="module"> in your exported site
Keep in mind: Webflow’s Designer does not support ES module imports (import) directly. Pasting the code into an Embed block won’t work unless it’s already built and hosted elsewhere.
You’ll need to export your Webflow project or host the script externally, then link it via your project settings.
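For example, once the script has been bundled and hosted somewhere you control, the embed can be as small as this single tag (the URL is a placeholder):

<script type="module" src="https://your-cdn.com/js/card-scenes.js"></script>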
Final Thoughts
Thanks for following along with this project. What started as a simple moodboard turned into a small experiment in mixing UI design with real-time 3D.
Taking flat cards and turning them into interactive scenes was a fun way to explore how much depth you can add with just a few tools: Webflow, Three.js, and GLB models.
If this gave you an idea or made you want to try something similar, that’s what matters most. Keep experimenting, keep learning, and keep building.
This project primarily serves as a technical demo and learning material. It began when I decided to start learning Blender. I followed a few tutorials, then decided to do a small project using it—so I chose to create the Canon F-1 camera!
After that, I decided to export the project to Three.js to add some cool post-processing shader effects. I wanted to create a sketch effect similar to what I had seen in some repair guides.
After spending a few hours experimenting with it, I decided to integrate it into a fully functional website featuring some cool shaders and 3D effects!
In this article, I’m going to walk through some of the key features of the site and provide a technical breakdown, assuming you already have a basic or beginner-level understanding of Three.js and shaders.
1. The Edge Detection Shader
Three.js includes a built-in edge detection shader called SobelOperatorShader. Basically, it detects edges based on color contrast—it draws a line between two areas with a strong enough difference in color.
To make my effect work the way I want, I need to assign a unique color to each area I want to highlight on my model. This way, Three.js will draw a line around those areas.
Here’s my model with all the materials applied:
This way, Three.js can accurately detect each area I want to highlight!
As you can see, the lines are not all the same intensity—some are white, while others are light gray. This is because, by default, line intensity depends on contrast: edges with lower contrast appear with lighter lines. To fix this, I manually modified the post-processing shader to make all lines fully white, regardless of contrast.
What I’m doing here is moving all the edge detection logic into the Sobel function. Then, I pass the tDiffuse texture—which is the composer’s render—to this function.
This way, I can modify the output of the edge detection shader before passing it back to the composer:
float G = sobel(t, texel);
G = G > 0.001 ? 1.0 : 0.0;
G represents the intensity of the edge detection. It’s a single value because the lines are monochrome. G ranges from 0 to 1, where 0 means full black (no edge detected) and 1 means full white (strong contrast detected).
As mentioned earlier, this value depends on the contrast. What I’m doing in the second line is forcing G to be 1 if it’s above a certain threshold (I chose 0.001, but you could pick a smaller value if you want).
This way I can get all the edges to have the same intensity.
Here’s how I’m applying the custom fragment shader to the Sobel Operator shader pass:
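The original snippet isn't reproduced here, but the idea looks roughly like this: a ShaderPass built from the built-in SobelOperatorShader, with its fragment shader swapped for the modified GLSL (customSobelFragmentShader is a placeholder name for that string):

import { EffectComposer } from 'three/addons/postprocessing/EffectComposer.js';
import { RenderPass } from 'three/addons/postprocessing/RenderPass.js';
import { ShaderPass } from 'three/addons/postprocessing/ShaderPass.js';
import { SobelOperatorShader } from 'three/addons/shaders/SobelOperatorShader.js';

const composer = new EffectComposer(renderer);
composer.addPass(new RenderPass(scene, camera));

// Start from the built-in Sobel shader, then replace its fragment shader
// with the custom one that contains the sobel() function described above.
const effectSobel = new ShaderPass(SobelOperatorShader);
effectSobel.material.fragmentShader = customSobelFragmentShader;
effectSobel.uniforms['resolution'].value.set(
  window.innerWidth * window.devicePixelRatio,
  window.innerHeight * window.devicePixelRatio
);
composer.addPass(effectSobel);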
Next, let’s take a look at the lens parts section.
This is mainly achieved using a Three.js utility called RenderTarget.
A render target is a buffer where the GPU draws pixels for a scene being rendered off-screen. It’s commonly used in effects like post-processing, where the rendered image is processed before being displayed on the screen.
Basically, this allows me to render my scene twice per frame: once with only the highlighted mesh, and once without it.
In the onSelectMesh method, I set the value of this.selectedMeshName to the name of the mesh group that contains the target mesh from the Raycaster (I’m using names to refer to groups of meshes).
This way, in my render loop, I can create two distinct renders:
One render (renderTargetA) with all the meshes except the hovered mesh
Another render (renderTargetB) with only the hovered mesh
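The original loop isn't shown here; a minimal sketch of how those two renders can be produced and handed to the Sobel pass might look like this (the uTextureA / uTextureB uniform names are assumptions):

// Inside the render loop: draw the scene twice into the two render targets
const selected = this.scene.getObjectByName(this.selectedMeshName);

// renderTargetA: everything except the hovered group
if (selected) selected.visible = false;
this.renderer.setRenderTarget(this.renderTargetA);
this.renderer.render(this.scene, this.camera);

// renderTargetB: only the hovered group
this.scene.children.forEach((child) => (child.visible = child === selected));
this.renderer.setRenderTarget(this.renderTargetB);
this.renderer.render(this.scene, this.camera);

// Restore visibility, then feed both renders to the post-processing pass
this.scene.children.forEach((child) => (child.visible = true));
this.renderer.setRenderTarget(null);
this.effectSobel.uniforms.uTextureA.value = this.renderTargetA.texture;
this.effectSobel.uniforms.uTextureB.value = this.renderTargetB.texture;
this.composer.render();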
As you can see, I’m sending both renders as texture uniforms to the effectSobel shader. The post-processing shader then “merges” these two renders into a single output.
At this point, we have two renders of the scene, and the post-processing shader needs to decide which one to display. Initially, I thought of simply combining them by adding the two textures together, but that didn’t produce the correct result:
What I needed was a way to hide the pixels of one render when they are “covered” by pixels from another render.
To achieve this, I used the distance of each vertex from the camera. This meant I had to go through all the meshes in the model and modify their materials. However, since the mesh colors are important for the edge detection effect, I couldn’t change their colors.
Instead, I used the alpha channel of each individual vertex to set the distance from the camera.
First, the luminance function is a built-in Three.js shader utility imported from the <common> module. It’s recommended to use this function with the Sobel effect to improve edge detection results.
The uColor value represents the initial color of the mesh.
The dist value calculates the distance between the vertex position (passed from the vertex shader via a varying) and the camera, using the built-in cameraPosition variable in Three.js shaders.
Finally, I pass this distance through the alpha channel. Since the alpha value can’t exceed 1, I use a normalized version of the distance.
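As a rough sketch, the fragment shader of each mesh material ends up doing something like this (the varying and uMaxDist uniform names are my own; the luminance() helper from <common> mentioned above is omitted here):

uniform vec3 uColor;
uniform float uMaxDist;        // assumed: a value used to normalize the distance
varying vec3 vWorldPosition;   // passed from the vertex shader

void main() {
  // cameraPosition is provided automatically by Three.js
  float dist = distance(vWorldPosition, cameraPosition);

  // Keep the flat color needed for edge detection,
  // and store the normalized camera distance in the alpha channel.
  gl_FragColor = vec4(uColor, clamp(dist / uMaxDist, 0.0, 1.0));
}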
And here is the updated logic for the postprocessing shader:
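The exact shader isn't included here, but conceptually the comparison looks like this (uniform names are assumptions):

vec4 colA = texture2D(uTextureA, vUv); // scene without the hovered mesh
vec4 colB = texture2D(uTextureB, vUv); // hovered mesh only

// The alpha channel stores the normalized camera distance, so the smaller value is closer;
// an alpha of 0.0 means nothing was rendered at that pixel.
vec4 sceneColor = (colB.a > 0.0 && colB.a < colA.a) ? colB : colA;

// The Sobel edge detection then runs on this merged result.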
Now that the alpha channel of both textures contains the distance to the camera, I can simply compare them and display the render whose fragments are closer to the camera.
3. The Film Roll Effect
Next is the film roll component that moves and twists on scroll.
This effect is achieved using only shaders: the component is a single plane mesh with a shader material.
All the data is sent to the shader through uniforms:
export default class Film {
  constructor() {
    //...code
  }

  createGeometry() {
    this.geometry = new THREE.PlaneGeometry(
      60,
      2,
      100,
      10
    )
  }

  createMaterial() {
    this.material = new THREE.ShaderMaterial({
      vertexShader,
      fragmentShader,
      side: THREE.DoubleSide,
      transparent: true,
      depthWrite: false,
      blending: THREE.CustomBlending,
      blendEquation: THREE.MaxEquation,
      blendSrc: THREE.SrcAlphaFactor,
      blendDst: THREE.OneMinusSrcAlphaFactor,
      uniforms: {
        uPlaneWidth: new THREE.Uniform(this.geometry.parameters.width),
        uRadius: new THREE.Uniform(2),
        uXZfreq: new THREE.Uniform(3.525),
        uYfreq: new THREE.Uniform(2.155),
        uOffset: new THREE.Uniform(0),
        uAlphaMap: new THREE.Uniform(
          window.preloader.loadTexture(
            "./alpha-map.jpg",
            "film-alpha-map",
            (texture) => {
              texture.wrapS = THREE.RepeatWrapping
              const { width, height } = texture.image
              this.material.uniforms.uAlphaMapResolution.value =
                new THREE.Vector2(width, height)
            }
          )
        ),
        //uImages: new THREE.Uniform(new THREE.Vector4()),
        uImages: new THREE.Uniform(
          window.preloader.loadTexture(
            "/film-texture.png",
            "film-image-texture",
            (tex) => {
              tex.wrapS = THREE.RepeatWrapping
            }
          )
        ),
        uRepeatFactor: new THREE.Uniform(this.repeatFactor),
        uImagesCount: new THREE.Uniform(this.images.length * this.repeatFactor),
        uAlphaMapResolution: new THREE.Uniform(new THREE.Vector2()),
        uFilmColor: new THREE.Uniform(window.colors.orange1),
      },
    })
  }

  createMesh() {
    this.mesh = new THREE.Mesh(this.geometry, this.material)
    this.scene.add(this.mesh)
  }
}
The main vertex shader uniforms are:
uRadius is the radius of the cylinder shape
uXZfreq is the frequency of the twists on the (X,Z) plane
uYfreq is a cylinder height factor
uOffset is the vertical offset of the roll when you scroll up and down
As you can see, they are used to modify the initial position attribute to give it the shape of a cylinder. The modified position's X, Y, and Z components all use uOffset in their frequency. This uniform is linked to a ScrollTrigger timeline, which creates the twist-on-scroll effect.
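Here is a rough sketch of that vertex displacement, not the exact shader from the site; the wrapping math is an assumption built around the uniforms listed above (UV and alpha-map handling are omitted):

uniform float uPlaneWidth;
uniform float uRadius;
uniform float uXZfreq;
uniform float uYfreq;
uniform float uOffset;

void main() {
  vec3 pos = position;

  // Wrap the long plane around a cylinder; uOffset shifts the angle as you scroll
  float angle = (pos.x / uPlaneWidth) * uXZfreq * 6.2831853 + uOffset;

  vec3 displaced = vec3(
    cos(angle) * uRadius,
    pos.y * uYfreq + sin(angle + uOffset), // height plus a slight vertical wave
    sin(angle) * uRadius
  );

  gl_Position = projectionMatrix * modelViewMatrix * vec4(displaced, 1.0);
}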
That’s it for the most part! Don’t feel frustrated if you don’t understand everything right away—I often got stuck for days on certain parts and didn’t know every technical detail before I started building.
I learned so much from this project, and I hope you’ll find it just as useful!
Thank you for reading, and thanks to Codrops for featuring me again!
When you need to compose the path to a folder or file location, you can rely on the Path class. It provides several static methods to create, analyze, and modify strings that represent file system paths.
Path.Join and Path.Combine look similar, yet they have some important differences that you should know to get the result you are expecting.
Path.Combine: take from the last absolute path
Path.Combine concatenates several strings into a single string that represents a file path.
However, there’s a tricky behaviour: if any argument other than the first contains an absolute path, all the previous parts are discarded, and the returned string starts with the last absolute path:
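A minimal example, run on Windows (the values match the table below):

using System;
using System.IO;

Console.WriteLine(Path.Combine("foo", "C:bar", "baz")); // C:bar\baz  ("C:bar" is rooted, so "foo" is discarded)
Console.WriteLine(Path.Join("foo", "C:bar", "baz"));    // foo\C:bar\baz (all parts are kept)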
As you can see, the behaviour is slightly different.
Let’s see a table where we call the two methods using the same input strings:
Input                                   | Path.Combine     | Path.Join
["singlestring"]                        | singlestring     | singlestring
["foo", "bar", "baz"]                   | foo\bar\baz      | foo\bar\baz
["foo", " bar ", "baz"]                 | foo\ bar \baz    | foo\ bar \baz
["C:", "users", "davide"]               | C:\users\davide  | C:\users\davide
["foo", " ", "baz"]                     | foo\ \baz        | foo\ \baz
["foo", "C:bar", "baz"]                 | C:bar\baz        | foo\C:bar\baz
["foo", "C:bar", "baz", "D:we", "ranl"] | D:we\ranl        | foo\C:bar\baz\D:we\ranl
["C:", "/users", "/davide"]             | /davide          | C:/users/davide
["C:", "users/", "/davide"]             | /davide          | C:\users//davide
["C:", "\users", "\davide"]             | \davide          | C:\users\davide
Have a look at some specific cases:
neither method handles whitespace-only or empty values: ["foo", " ", "baz"] is transformed into foo\ \baz. Similarly, ["foo", " bar ", "baz"] is combined into foo\ bar \baz, without trimming the leading and trailing whitespace. So, always remove whitespace-only and empty values!
Path.Join handles the case of a part starting with / or \ in a not-so-obvious way: if a part starts with \, it is included in the final path; if it starts with /, it is escaped as //. This behaviour depends on the path separator used by the OS: in my case, I'm running these methods on Windows 11.
Finally, always remember that the path separator depends on the Operating System that is running the code. Don’t assume that it will always be /: this assumption may be correct for one OS but wrong for another one.
We can easily change the background color of elements at even and odd indexes using the :nth-child pseudo-class with the even and odd keywords, respectively. These keywords match child elements whose index is even or odd (the index of the first child is 1). Here, we specify two different background colors for odd and even p elements.
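A minimal example (the colors are arbitrary placeholders):

p:nth-child(odd) {
  background-color: #f2f2f2;
}

p:nth-child(even) {
  background-color: #cfe3ff;
}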
In Postman, you can define scripts to be executed before the beginning of a request. Can we use them to work with endpoints using Cookie Authentication?
Nowadays, it’s rare to find services that use Cookie Authentication, yet they still exist. How can we configure Cookie Authentication with Postman? How can we centralize the definition using pre-request scripts?
I had to answer these questions when I had to integrate a third-party system that was using Cookie Authentication. Instead of generating a new token manually, I decided to centralize the Cookie creation in a single place, making it automatically available to every subsequent request.
In order to generate the token, I had to send a request to the Authentication endpoint, sending a JSON payload with data coming from Postman’s variables.
In this article, I’ll recap what I learned, teach you some basics of creating pre-request scripts with Postman, and provide a full example of how I used it to centralize the generation and usage of a cookie for a whole Postman collection.
Introducing Postman’s pre-request scripts
As you probably know, Postman allows you to create scripts that are executed before and after an HTTP call.
These scripts are written in JavaScript and can use some objects and methods that come out of the box with Postman.
You can create such scripts for a single request or the whole collection. In the second case, you write the script once so that it becomes available for all the requests stored within that collection.
The operations defined in the Scripts section of the collection are then executed before (or after) every request in the collection.
Here, you can either use standard JavaScript code—like the dear old console.log—or the pm object to reference the context in which the script will be executed.
For example, you can print the value of a Postman variable by using:
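Something like this, where tokenSecret is just a placeholder variable name:

console.log(pm.environment.get("tokenSecret"));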
How to send a POST request with JSON body in Postman pre-request scripts
How can we issue a POST request in the pre-request script, specifying a JSON body?
Among other things, Postman’s pm object exposes the sendRequest function. Its first parameter is the “description” of the request; its second parameter is the callback to execute after the request is completed.
pm.sendRequest(request, (errorResponse, successfulResponse) => {
  // do something here
})
You have to carefully craft the request, by specifying the HTTP method, the body, and the content type:
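A request definition might look like the following sketch; the URL, variable names, and payload fields are placeholders:

const request = {
  url: pm.environment.get("authUrl"),
  method: "POST",
  header: { "Content-Type": "application/json" },
  body: {
    mode: "raw",
    raw: JSON.stringify({
      username: pm.environment.get("username"),
      password: pm.environment.get("password"),
    }),
    options: {
      raw: { language: "json" },
    },
  },
};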
Pay particular attention to the options node: it tells Postman how to treat the body content and what the content type is. Because I was missing this node, I spent too many minutes trying to figure out why this call was badly formed.
options: {
  raw: {
    language: "json"
  }
}
Now, the result of the operation is used to execute the callback function. Generally, you want it to be structured like this:
pm.sendRequest(request, (err, response) => {
  if (err) {
    // handle error
  }

  if (response) {
    // handle success
  }
})
Storing Cookies in Postman (using a Jar)
You have received the response with the token, and you have parsed the response to retrieve the value. Now what?
You cannot store cookies directly as if they were simple variables. Instead, you must store them in a cookie Jar.
Postman allows you to programmatically operate on cookies only by accessing them via a Jar (yup, pun intended!), which can be initialized like this:
const jar = pm.cookies.jar()
From here, you can add, remove or retrieve cookies by working with the jar object.
To add a new cookie, you must use the set() method of the jar object, specifying the domain the cookie belongs to, its name, its value, and the callback to execute when the operation completes.
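For example (the domain matches the placeholder used below; the cookie name and value are placeholders too):

const jar = pm.cookies.jar();

jar.set("add-your-domain-here.com", "token", "some-token-value", (error) => {
  if (error) {
    console.error(`An error occurred: ${error}`);
  } else {
    console.log("Cookie saved");
  }
});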
You can try it now: execute a request, have a look at the console logs, and…
We’ve received a strange error:
An error occurred: Error: CookieStore: programmatic access to “add-your-domain-here.com” is denied
Wait, what? What does “programmatic access to X is denied” mean, and how can we solve this error?
For security reasons, you cannot handle cookies via code without letting Postman know that you explicitly want to operate on the specified domain. To overcome this limitation, you need to whitelist the domain associated with the cookie so that Postman will accept that the operation you’re trying to achieve via code is legit.
To enable a domain for cookies operations, you first have to navigate to the headers section of any request under the collection and click the Cookies button.
From here, select Domains Allowlist:
Finally, add your domain to the list of the allowed ones.
Now Postman knows that if you try to set a cookie via code, it’s because you actively want it, allowing you to add your cookies to the jar.
If you open again the Cookie section (see above), you will be able to see the current values for the cookies associated with the domain:
Further readings
Clearly, we’ve just scratched the surface of what you can do with pre-request scripts in Postman. To learn more, have a look at the official documentation:
In this article, we learned what pre-request scripts are, how to execute a POST request passing a JSON object as a body, and how to programmatically add a Cookie in Postman by operating on the Jar object.
For clarity, here’s the complete code I used in my pre-request script.
Notice that to parse the response from the authentication endpoint I used the .json() method, which allows me to access the internal values using the property name, as in jresponse["Token"].
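The original listing isn't reproduced here, but putting the pieces above together, a pre-request script along these lines does the job (endpoint, variable, cookie, and property names are placeholders; adapt them to your own API):

const authRequest = {
  url: pm.environment.get("authUrl"),
  method: "POST",
  header: { "Content-Type": "application/json" },
  body: {
    mode: "raw",
    raw: JSON.stringify({
      username: pm.environment.get("username"),
      password: pm.environment.get("password"),
    }),
    options: { raw: { language: "json" } },
  },
};

pm.sendRequest(authRequest, (err, response) => {
  if (err) {
    console.log("An error occurred: " + err);
    return;
  }

  const jresponse = response.json();
  const jar = pm.cookies.jar();

  // The domain must be present in Postman's Domains Allowlist (see above)
  jar.set("add-your-domain-here.com", "token", jresponse["Token"], (error) => {
    if (error) {
      console.log("An error occurred: " + error);
    }
  });
});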
I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛
In an era of digital banking, cloud migration, and a growing cyber threat landscape, traditional perimeter-based security models are no longer sufficient for the Banking, Financial Services, and Insurance (BFSI) sector. Enter Zero Trust Network Access (ZTNA) — a modern security framework that aligns perfectly with the BFSI industry’s need for robust, scalable, and compliant cybersecurity practices.
This blog explores the key use cases and benefits of ZTNA for BFSI organizations.
ZTNA Use Cases for BFSI
Secure Remote Access for Employees
With hybrid and remote work becoming the norm, financial institutions must ensure secure access to critical applications and data outside corporate networks. ZTNA allows secure, identity-based access without exposing internal resources to the public internet. This ensures that only authenticated and authorized users can access specific resources, reducing attack surfaces and preventing lateral movement by malicious actors.
Protect Customer Data Using Least Privileged Access
ZTNA enforces the principle of least privilege, granting users access only to the resources necessary for their roles. This granular control is vital in BFSI, where customer financial data is highly sensitive. By limiting access based on contextual parameters such as user identity, device health, and location, ZTNA drastically reduces the chances of data leakage or internal misuse.
Compliance with Regulatory Requirements
The BFSI sector is governed by stringent regulations such as RBI guidelines, PCI DSS, GDPR, and more. ZTNA provides centralized visibility, detailed audit logs, and fine-grained access control—all critical for meeting regulatory requirements. It also helps institutions demonstrate proactive data protection measures during audits and assessments.
Vendor and Third-Party Access Management
Banks and insurers frequently engage with external vendors, consultants, and partners. Traditional VPNs provide broad access once a connection is established, posing a significant security risk. ZTNA addresses this by granting secure, time-bound, and purpose-specific access to third parties—without ever bringing them inside the trusted network perimeter.
Key Benefits of ZTNA for BFSI
Reduced Risk of Data Breaches
By minimizing the attack surface and verifying every user and device before granting access, ZTNA significantly lowers the risk of unauthorized access and data breaches. Since applications are never directly exposed to the internet, ZTNA also protects against exploitation of vulnerabilities in public-facing assets.
Improved Compliance Posture
ZTNA simplifies compliance by offering audit-ready logs, consistent policy enforcement, and better visibility into user activity. BFSI firms can use these capabilities to ensure adherence to local and global regulations and quickly respond to compliance audits with accurate data.
Enhanced Customer Trust and Loyalty
Security breaches in financial institutions can erode customer trust instantly. By adopting a Zero Trust approach, organizations can demonstrate their commitment to customer data protection, thereby enhancing credibility, loyalty, and long-term customer relationships.
Cost Savings on Legacy VPNs
Legacy VPN solutions are often complex, expensive, and challenging to scale. ZTNA offers a modern alternative that is more efficient and cost-effective. It eliminates the need for dedicated hardware and reduces operational overhead by centralizing policy management in the cloud.
Scalability for Digital Transformation
As BFSI institutions embrace digital transformation—be it cloud adoption, mobile banking, or FinTech partnerships—ZTNA provides a scalable, cloud-native security model that grows with the business. It supports rapid onboarding of new users, apps, and services without compromising on security.
Final Thoughts
ZTNA is more than just a security upgrade—it’s a strategic enabler for BFSI organizations looking to build resilient, compliant, and customer-centric digital ecosystems. With its ability to secure access for employees, vendors, and partners while ensuring regulatory compliance and data privacy, ZTNA is fast becoming the cornerstone of modern cybersecurity strategies in the financial sector.
Ready to embrace Zero Trust? Identify high-risk access points and gradually implement ZTNA for your most critical systems. The transformation may be phased, but the security gains are immediate and long-lasting.
Seqrite’s Zero Trust Network Access (ZTNA) solution empowers BFSI organizations with secure, seamless, and policy-driven access control tailored for today’s hybrid and regulated environments. Partner with Seqrite to strengthen data protection, streamline compliance, and accelerate your digital transformation journey.
WireMock.NET is a popular library used to simulate network communication through HTTP. But there is no simple way to integrate the generated in-memory server with an instance of IHttpClientFactory injected via constructor. Right? Wrong!
Testing the integration with external HTTP clients can be a cumbersome task, but most of the time, it is necessary to ensure that a method is able to perform correct operations – not only sending the right information but also ensuring that we are able to read the content returned from the called API.
Instead of spinning up a real server (even if in the local environment), we can simulate a connection to a mock server. A good library for creating temporary in-memory servers is WireMock.NET.
Many articles I read online focus on creating a simple HttpClient and using WireMock.NET to drive its behaviour. In this article, we are going to go a step further: we are going to use WireMock.NET to handle HttpClient instances generated via IHttpClientFactory, with the factory itself mocked using Moq.
Explaining the dummy class used for the examples
As per every practical article, we must start with a dummy example.
For the sake of this article, I’ve created a dummy class with a single method that calls an external API to retrieve details of a book and then reads the returned content. If the call is successful, the method returns an instance of Book; otherwise, it throws a BookServiceException exception.
Just for completeness, here’s the Book class:
public class Book
{
    public int Id { get; set; }
    public string Title { get; set; }
}

public class BookService
{
    private readonly IHttpClientFactory _httpClientFactory;

    public BookService(IHttpClientFactory httpClientFactory)
    {
        _httpClientFactory = httpClientFactory;
    }

    public async Task<Book> GetBookById(int id)
    {
        string url = $"/api/books/{id}";
        HttpClient httpClient = _httpClientFactory.CreateClient("books_client");

        try
        {
            Book? book = await httpClient.GetFromJsonAsync<Book>(url);
            return book;
        }
        catch (Exception ex)
        {
            throw new BookServiceException($"There was an error while getting info about the book {id}", ex);
        }
    }
}
There are just two things to notice:
We are injecting an instance of IHttpClientFactory into the constructor.
We are generating an instance of HttpClient by passing a name to the CreateClient method of IHttpClientFactory.
Now that we have our cards on the table, we can start!
WireMock.NET, a library to simulate HTTP calls
WireMock is an open-source platform you can install locally to create a real mock server. You can even create a cloud environment to generate and test HTTP endpoints.
However, for this article we are interested in the NuGet package that takes inspiration from the WireMock project, allowing .NET developers to generate disposable in-memory servers: WireMock.NET.
To add the library, you must add the WireMock.NET NuGet package to your project, for example using dotnet add package WireMock.Net.
Once the package is ready, you can generate a test server in your Unit Tests class:
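A minimal sketch of such a test class, assuming NUnit (which is what the OneTimeSetUp/OneTimeTearDown phases mentioned below refer to):

using NUnit.Framework;
using WireMock.Server;

public class BookServiceTests
{
    private WireMockServer _server;

    [OneTimeSetUp]
    public void OneTimeSetUp() => _server = WireMockServer.Start();

    [SetUp]
    public void SetUp() => _server.Reset();

    [OneTimeTearDown]
    public void OneTimeTearDown() => _server.Stop();
}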
You can instantiate a new instance of WireMockServer in the OneTimeSetUp step, store it in a private field, and make it accessible to every test in the test class.
Before each test run, you can reset the internal status of the mock server by running the Reset() method. I’d suggest you reset the server to avoid unintentional internal status, but it all depends on what you want to do with the server instance.
Finally, remember to free up resources by calling the Stop() method in the OneTimeTearDown phase (but not during the TearDown phase: you still need the server to be on while running your tests!).
Basic configuration of HTTP requests and responses with WireMock.NET
The basic structure of the definition of a mock response using WireMock.NET is made of two parts:
Within the Given method, you define the HTTP Verb and URL path whose response is going to be mocked.
Using RespondWith you define what the mock server must return when the endpoint specified in the Given step is called.
In the next example, you can see that the _server instance (the one I instantiated in the OneTimeSetUp phase, remember?) must return a specific body (responseBody) and the 200 HTTP Status Code when the /api/books/42 endpoint is called.
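A sketch of that setup (the JSON body is a placeholder):

// using WireMock.RequestBuilders; using WireMock.ResponseBuilders;
string responseBody = "{ \"Id\": 42, \"Title\": \"My Book\" }";

_server
    .Given(Request.Create().WithPath("/api/books/42").UsingGet())
    .RespondWith(
        Response.Create()
            .WithStatusCode(200)
            .WithHeader("Content-Type", "application/json")
            .WithBody(responseBody)
    );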
All in all, both the request and the response are highly customizable: you can add HTTP Headers, delays, cookies, and much more.
Look closely; there’s one part that is missing: What is the full URL? We have declared only the path (/api/books/42) but have no info about the hostname and the port used to communicate.
How to integrate WireMock.NET with a Moq-driven IHttpClientFactory
In order to have WireMock.NET react to an HTTP call, we have to call the exact URL – even the hostname and port must match. But when we create a mocked HttpClient – like we did in this article – we don’t have a real hostname. So, how can we have WireMock.NET and HttpClient work together?
The answer is easy: since WireMockServer.Start() automatically picks a free port in your localhost, you don’t have to guess the port number, but you can reference the current instance of _server.
Once the WireMockServer is created, it internally holds a reference to one or more URLs it will use to listen for HTTP requests, intercepting the calls and replying in place of a real server. You can then use one of these URLs to configure the HttpClient generated by the HttpClientFactory.
Let’s see the code:
[Test]
public async Task GetBookById_Should_HandleBadRequests()
{
    string baseUrl = _server.Url;

    HttpClient myHttpClient = new HttpClient() { BaseAddress = new Uri(baseUrl) };

    Mock<IHttpClientFactory> mockFactory = new Mock<IHttpClientFactory>();
    mockFactory.Setup(_ => _.CreateClient("books_client")).Returns(myHttpClient);

    _server
        .Given(Request.Create().WithPath("/api/books/42").UsingGet())
        .RespondWith(
            Response.Create()
                .WithStatusCode(404)
        );

    BookService service = new BookService(mockFactory.Object);

    Assert.CatchAsync<BookServiceException>(() => service.GetBookById(42));
}
First we access the base URL used by the mock server by accessing _server.Url.
We use that URL as a base address for the newly created instance of HttpClient.
Then, we create a mock of IHttpClientFactory and configure it to return the local instance of HttpClient whenever we call the CreateClient method with the specified name.
In the meanwhile, we define how the mock server must behave when an HTTP call to the specified path is intercepted.
Finally, we can pass the instance of the mock IHttpClientFactory to the BookService.
So, the key part to remember is that you can simply access the Url property (or, if you have configured it to handle many URLs, the Urls property, which is an array of strings).
Let WireMock.NET create the HttpClient for you
As suggested by Stef in the comments to this post, there’s actually another way to generate the HttpClient with the correct URL: let WireMock.NET do it for you.
Instead of doing
string baseUrl = _server.Url;
HttpClient myHttpClient = new HttpClient() { BaseAddress = new Uri(baseUrl) };
you can simplify the process by calling the CreateClient method:
HttpClient myHttpClient = _server.CreateClient();
Of course, you will still have to pass the instance to the mock of IHttpClientFactory.
Further readings
It’s important to notice that WireMock and WireMock.NET are two totally distinct things: one is a platform, and one is a library, owned by a different group of people, that mimics some functionalities from the platform to help developers write better tests.
WireMock.NET is greatly integrated with many other libraries, such as xUnit, FluentAssertions, and .NET Aspire.
It’s important to remember that using an HttpClientFactory is generally more performant than instantiating a new HttpClient. Ever heard of socket exhaustion?
Finally, for the sake of this article I’ve used Moq. However, there’s a similar library you can use: NSubstitute. The learning curve is quite flat: in the most common scenarios, it’s just a matter of syntax usage.
In this article, we almost skipped all the basic stuff about WireMock.NET and tried to go straight to the point of integrating WireMock.NET with IHttpClientFactory.
There are lots of articles out there that explain how to use WireMock.NET – just remember that WireMock and WireMock.NET are not the same thing!
I hope you enjoyed this article! Let’s keep in touch on LinkedIn or Twitter! 🤜🤛