
  • How to Create Interactive, Droplet-like Metaballs with Three.js and GLSL



    Fragment shaders allow us to create smooth, organic visuals that are difficult to achieve with standard polygon-based rendering in WebGL. One powerful example is the metaball effect, where multiple objects blend and deform seamlessly. This can be implemented using a technique called ray marching, directly within a fragment shader.

    In this tutorial, we’ll walk you through how to create droplet-like, bubble spheres using Three.js and GLSL—an effect that responds interactively to your mouse movements. But first, take a look at the demo video below to see the final result in action.

    Overview

    Let’s take a look at the overall structure of the demo and review the steps we’ll follow to build it.

    1. Setting Up the Fullscreen Plane

    We create a fullscreen plane that covers the entire viewport.

    2. Rendering Spheres with Ray Marching

    We’ll render spheres using ray marching in the fragment shader.

    3. From Spheres to Metaballs

    We blend multiple spheres smoothly to create a metaball effect.

    4. Adding Noise for a Droplet-like Appearance

    By adding noise to the surface, we create a realistic droplet-like texture.

    5. Simulating Stretchy Droplets with Mouse Movement

    We arrange spheres along the mouse trail to create a stretchy, elastic motion.

    Let’s get started!

    1. Setup

    We render a single fullscreen plane that covers the entire viewport.

    // Output.ts
    
    const planeGeometry = new THREE.PlaneGeometry(2.0, 2.0);
    const planeMaterial = new THREE.RawShaderMaterial({
        vertexShader: base_vert,
        fragmentShader: output_frag,
        uniforms: this.uniforms,
    });
    const plane = new THREE.Mesh(planeGeometry, planeMaterial);
    this.scene.add(plane);

    We define a uniform variable named uResolution to pass the canvas size to the shader, where Common.width and Common.height represent the width and height of the canvas in pixels. This uniform will be used to normalize coordinates based on the screen resolution.

    // Output.ts
    
    this.uniforms = {
        uResolution: {
            value: new THREE.Vector2(Common.width, Common.height),
        },
    };

    When using RawShaderMaterial, you need to provide your own shaders. Therefore, we prepare both a vertex shader and a fragment shader.

    // base.vert
    
    attribute vec3 position;
    varying vec2 vTexCoord;
    
    void main() {
        vTexCoord = position.xy * 0.5 + 0.5;
        gl_Position = vec4(position, 1.0);
    }

    The vertex shader receives the position attribute.

    Since the xy components of position originally range from -1 to 1, we convert them to a range from 0 to 1 and output them as a texture coordinate called vTexCoord. This is passed to the fragment shader and used to calculate colors or effects based on the position on the screen.
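    This remap is just a scale and offset, and it is easy to sanity-check outside the shader. A minimal TypeScript sketch of the same formula (the function name is ours, not part of the demo):

```typescript
// Mirror of `vTexCoord = position.xy * 0.5 + 0.5` from the vertex shader:
// maps a clip-space value in [-1, 1] to a texture coordinate in [0, 1].
function clipToTexCoord(v: number): number {
  return v * 0.5 + 0.5;
}
```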

    // output.frag
    
    precision mediump float;
    
    uniform vec2 uResolution;
    varying vec2 vTexCoord;
    
    void main() {
        gl_FragColor = vec4(vTexCoord, 1.0, 1.0);
    }

    The fragment shader receives the interpolated texture coordinate vTexCoord and the uniform variable uResolution representing the canvas size. Here, we temporarily use vTexCoord to output color for testing.

    Now we’re all set to start drawing in the fragment shader!
    Next, let’s move on to actually rendering the spheres.

    2. Ray Marching

    2.1. What is Ray Marching?

    As mentioned at the beginning, we will use a method called ray marching to render spheres. Ray marching proceeds in the following steps:

    1. Define the scene
    2. Set the camera (viewing) direction
    3. Cast rays
    4. Evaluate the distance from the current ray position to the nearest object in the scene
    5. Move the ray forward by that distance
    6. Check for a hit

    For example, let’s consider a scene with three spheres. These spheres are expressed using SDFs (Signed Distance Functions), which will be explained in detail later.

    First, we determine the camera direction. Once the direction is set, we cast a ray in that direction.

    Next, we evaluate the distance to all objects from the current ray position, and take the minimum of these distances.

    After obtaining this distance, we move the ray forward by that amount.

    We repeat this process until either the ray gets close enough to an object—closer than a small threshold—or the maximum number of steps is reached.
    If the distance is below the threshold, we consider it a “hit” and shade the corresponding pixel.

    For example, in the figure above, a hit is detected on the 8th ray marching step.

    If the maximum number of steps were set to 7, the 7th step would not have hit anything yet. But since the limit is reached, the loop ends and no hit is detected.

    Therefore, nothing would be rendered at that position. If parts of an object appear to be missing in the final image, it may be due to an insufficient number of steps. However, be aware that increasing the step count will also increase the computational load.
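    The six steps above can also be sketched in plain code. The following TypeScript port of the marching loop handles a single sphere (the function names and step budget are our own choices, not from the demo):

```typescript
// Signed distance to a sphere of radius s centered at the origin.
function sdSphere(p: [number, number, number], s: number): number {
  return Math.hypot(p[0], p[1], p[2]) - s;
}

// March from `origin` along the unit vector `dir`; return the travelled
// distance on a hit, or null if the step budget runs out first.
function rayMarch(
  origin: [number, number, number],
  dir: [number, number, number],
  maxSteps = 64,
  eps = 1e-4
): number | null {
  let t = 0;
  for (let i = 0; i < maxSteps; i++) {
    const p: [number, number, number] = [
      origin[0] + dir[0] * t,
      origin[1] + dir[1] * t,
      origin[2] + dir[2] * t,
    ];
    const d = sdSphere(p, 0.5); // step 4: distance to the nearest object
    if (d < eps) return t;      // step 6: close enough, count it as a hit
    t += d;                     // step 5: advance the ray by that distance
  }
  return null; // step budget exhausted: nothing is rendered for this ray
}
```

    A ray aimed at the sphere converges in a handful of steps, while a ray that misses exhausts the step budget and reports no hit, which is exactly the "missing parts" situation described above.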

    To better understand this process, try running this demo to see how it works in practice.

    2.2. Signed Distance Function

    In the previous section, we briefly mentioned the SDF (Signed Distance Function).
    Let’s take a moment to understand what it is.

    An SDF is a function that returns the distance from a point to a particular shape. The key characteristic is that it returns a positive or negative value depending on whether the point is outside or inside the shape.

    For example, here is the distance function for a sphere:

    float sdSphere(vec3 p, float s)
    {
        return length(p) - s;
    }

    Here, p is a vector representing the position relative to the origin, and s is the radius of the sphere.

    This function calculates how far the point p is from the surface of a sphere centered at the origin with radius s.

    • If the result is positive, the point is outside the sphere.
    • If negative, it is inside the sphere.
    • If the result is zero, the point is on the surface—this is considered a hit point (in practice, we detect a hit when the distance is less than a small threshold).

    In this demo, we use a sphere’s distance function, but many other shapes have their own distance functions as well.

    If you’re interested, here’s a great article on distance functions.

    2.3. Rendering Spheres

    Let’s try rendering spheres.
    In this demo, we’ll render two slightly overlapping spheres.

    // output.frag
    
    precision mediump float;
    
    const float EPS = 1e-4;
    const int ITR = 16;
    
    uniform vec2 uResolution;
    
    varying vec2 vTexCoord;
    
    // Camera Params
    vec3 origin = vec3(0.0, 0.0, 1.0);
    vec3 lookAt = vec3(0.0, 0.0, 0.0);
    vec3 cDir = normalize(lookAt - origin);
    vec3 cUp = vec3(0.0, 1.0, 0.0);
    vec3 cSide = cross(cDir, cUp);
    
    vec3 translate(vec3 p, vec3 t) {
        return p - t;
    }
    
    float sdSphere(vec3 p, float s)
    {
        return length(p) - s;
    }
    
    float map(vec3 p) {
        float radius = 0.5;
        float d = 1e5;
    
        float sphere0 = sdSphere(translate(p, vec3(0.4, 0.0, 0.0)), radius);
        float sphere1 = sdSphere(translate(p, vec3(-0.4, 0.0, 0.0)), radius);
        d = min(sphere0, sphere1);
    
        return d;
    }
    
    void main() {
        vec2 p = (gl_FragCoord.xy * 2.0 - uResolution) / min(uResolution.x, uResolution.y);
    
        // Orthographic Camera
        vec3 ray = origin + cSide * p.x + cUp * p.y;
        vec3 rayDirection = cDir;
    
        float dist = 0.0;
    
        for (int i = 0; i < ITR; ++i) {
            dist = map(ray);
            ray += rayDirection * dist;
            if (dist < EPS) break;
        }
    
        vec3 color = vec3(0.0);
    
        if (dist < EPS) {
            color = vec3(1.0, 1.0, 1.0);
        }
    
        gl_FragColor = vec4(color, 1.0);
    }

    First, we normalize the screen coordinates:

    vec2 p = (gl_FragCoord.xy * 2.0 - uResolution) / min(uResolution.x, uResolution.y);
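    This normalization keeps the picture undistorted at any aspect ratio: the shorter screen axis always spans [-1, 1]. A quick TypeScript sketch of the same arithmetic (names are ours):

```typescript
// Map a pixel coordinate to a resolution-independent space where the
// shorter screen axis spans [-1, 1], matching the fragment shader formula.
function normalizeCoord(
  frag: [number, number],
  res: [number, number]
): [number, number] {
  const m = Math.min(res[0], res[1]);
  return [(frag[0] * 2 - res[0]) / m, (frag[1] * 2 - res[1]) / m];
}
```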

    Next, we set up the camera. This demo uses an orthographic camera (parallel projection):

    // Camera Params
    vec3 origin = vec3(0.0, 0.0, 1.0);
    vec3 lookAt = vec3(0.0, 0.0, 0.0);
    vec3 cDir = normalize(lookAt - origin);
    vec3 cUp = vec3(0.0, 1.0, 0.0);
    vec3 cSide = cross(cDir, cUp);
    
    // Orthographic Camera
    vec3 ray = origin + cSide * p.x + cUp * p.y;
    vec3 rayDirection = cDir;

    After that, inside the map function, two spheres are defined and their distances calculated using sdSphere. The variable d is initially set to a large value and updated with the min function to keep track of the shortest distance to the surface.

    float map(vec3 p) {
        float radius = 0.5;
        float d = 1e5;
    
        float sphere0 = sdSphere(translate(p, vec3(0.4, 0.0, 0.0)), radius);
        float sphere1 = sdSphere(translate(p, vec3(-0.4, 0.0, 0.0)), radius);
        d = min(sphere0, sphere1);
    
        return d;
    }

    Then we run a ray marching loop, which updates the ray position by computing the distance to the nearest object at each step. The loop ends either after a fixed number of iterations or when the distance becomes smaller than a threshold (dist < EPS):

    for (int i = 0; i < ITR; ++i) {
        dist = map(ray);
        ray += rayDirection * dist;
        if (dist < EPS) break;
    }

    Finally, we determine the output color. We use black as the default color (background), and render a white pixel only if a hit is detected:

    vec3 color = vec3(0.0);
    
    if (dist < EPS) {
        color = vec3(1.0);
    }

    We’ve successfully rendered two overlapping spheres using ray marching!

    2.4. Normals

    Although we successfully rendered spheres in the previous section, the scene still looks flat and lacks depth. This is because we haven’t applied any shading or visual effects that respond to surface orientation.

    While we won’t implement full shading in this demo, we’ll still compute surface normals, as they’re essential for adding surface detail and other visual effects.

    Let’s look at the code first:

    vec3 generateNormal(vec3 p) {
        return normalize(vec3(
                map(p + vec3(EPS, 0.0, 0.0)) - map(p + vec3(-EPS, 0.0, 0.0)),
                map(p + vec3(0.0, EPS, 0.0)) - map(p + vec3(0.0, -EPS, 0.0)),
                map(p + vec3(0.0, 0.0, EPS)) - map(p + vec3(0.0, 0.0, -EPS))
            ));
    }

    At first glance, this may seem hard to understand. Put simply, this computes the gradient of the distance function, which corresponds to the normal vector.

    If you’ve studied vector calculus, this might be easy to understand. For many others, though, it may seem a bit difficult.

    That’s totally fine—a full understanding of the details isn’t necessary to use the result. If you just want to move on, feel free to skip ahead to the section where we debug normals by visualizing them with color.

    However, for those who are interested in how it works, we’ll now walk through the explanation in more detail.

    The gradient of a scalar function 𝑓(𝑥,𝑦,𝑧) is simply a vector composed of its partial derivatives. It points in the direction of the greatest rate of increase of the function:

    ∇𝑓 = ( ∂𝑓/∂𝑥, ∂𝑓/∂𝑦, ∂𝑓/∂𝑧 )

    To compute this gradient numerically, we can use the central difference method. For example:

    ∂𝑓/∂𝑥 ≈ ( 𝑓(𝑥+𝜀, 𝑦, 𝑧) − 𝑓(𝑥−𝜀, 𝑦, 𝑧) ) / 2𝜀

    We apply the same idea for the 𝑦 and 𝑧 components.
    Note: The factor 2𝜀 is omitted in the code since we normalize the result using normalize().

    Next, let us consider a signed distance function 𝑓(𝑥,𝑦,𝑧), which returns the shortest distance from any point in space to the surface of an object. By definition, 𝑓(𝑥,𝑦,𝑧)=0 on the surface of the object.

    Assume that 𝑓 is smooth (i.e., differentiable) in the region of interest. When the point (𝑥,𝑦,𝑧) undergoes a small displacement Δ𝒓=(Δ𝑥,Δ𝑦,Δ𝑧), the change in the function value Δ𝑓 can be approximated using the first-order Taylor expansion:

    Δ𝑓 ≈ ∇𝑓 · Δ𝒓

    Here,∇𝑓 is the gradient vector of 𝑓, and Δ𝒓 is an arbitrary small displacement vector.

    Now, since 𝑓=0 on the surface and remains constant as we move along the surface (i.e., tangentially), the function value does not change, so Δ𝑓=0. Therefore:

    ∇𝑓 · Δ𝒓 = 0

    This means that the gradient vector is perpendicular to any tangent vector Δ𝒓 on the surface. In other words, the gradient vector ∇𝑓 points in the direction of the surface normal.

    Thus, the gradient of a signed distance function gives the surface normal direction at any point on the surface.
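    We can verify this numerically outside the shader: the central-difference gradient of a sphere SDF really does point along the surface normal. A TypeScript sketch, with names of our own choosing:

```typescript
// Signed distance to a sphere of radius 0.5 centered at the origin.
function sdSphere(p: [number, number, number], s: number): number {
  return Math.hypot(p[0], p[1], p[2]) - s;
}

// Central-difference normal, mirroring generateNormal() from the shader.
// The 2*eps divisor is dropped because the result is normalized anyway.
function normalAt(
  p: [number, number, number],
  eps = 1e-4
): [number, number, number] {
  const d = (x: number, y: number, z: number) =>
    sdSphere([p[0] + x, p[1] + y, p[2] + z], 0.5);
  const n: [number, number, number] = [
    d(eps, 0, 0) - d(-eps, 0, 0),
    d(0, eps, 0) - d(0, -eps, 0),
    d(0, 0, eps) - d(0, 0, -eps),
  ];
  const len = Math.hypot(n[0], n[1], n[2]);
  return [n[0] / len, n[1] / len, n[2] / len];
}
```

    At the point (0.5, 0, 0) on the sphere's surface, the computed normal comes out very close to (1, 0, 0), exactly the outward radial direction we expect.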

    2.5. Visualizing Normals with Color

    To verify that the surface normals are being calculated correctly, we can visualize them using color.

    if (dist < EPS) {
        vec3 normal = generateNormal(ray);
        color = normal;
    }

    Note that within the if block, ray refers to a point on the surface of the object. So by passing ray to generateNormal, we can obtain the surface normal at the point of intersection.

    When we render the scene, you’ll notice that the surface of the sphere is shaded in red, green, and blue based on the orientation of the normal vectors. This is because we’re mapping the 𝑥, 𝑦, and 𝑧 components of the normal vector to the RGB color channels respectively.

    This is a common and intuitive way to debug normal vectors visually, helping us ensure they are computed correctly.

    3. From Spheres to Metaballs

    When combining two spheres with the standard min() function, a hard edge forms where the shapes intersect, resulting in an unnatural boundary.
    To avoid this, we can use a blending function called smoothMin, which softens the transition by merging the distance values smoothly.

    // added
    float smoothMin(float d1, float d2, float k) {
        float h = exp(-k * d1) + exp(-k * d2);
        return -log(h) / k;
    }
    
    float map(vec3 p) {
        float radius = 0.5;
        float k = 7.; // added: smoothing factor for metaball effect
        float d = 1e5;
    
        float sphere0 = sdSphere(translate(p, vec3(.4, 0.0, 0.0)), radius);
        float sphere1 = sdSphere(translate(p, vec3(-.4, 0.0, 0.0)), radius);
        d = smoothMin(d, sphere0, k); // modified: blend with smoothing
        d = smoothMin(d, sphere1, k); // modified
    
        return d;
    }

    This function creates a smooth, continuous connection between shapes—producing a metaball-like effect where the forms appear to merge organically.

    The parameter k controls the smoothness of the blend. A higher k value results in a sharper transition (closer to min()), while a lower k produces smoother, more gradual merging.
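    Because the exponential smooth minimum is pure math, its two key properties are easy to check outside the shader: it never exceeds the true minimum, and it converges to min() as k grows. A TypeScript port of the same formula:

```typescript
// Exponential smooth minimum, the same formula as the shader's smoothMin().
function smoothMin(d1: number, d2: number, k: number): number {
  const h = Math.exp(-k * d1) + Math.exp(-k * d2);
  return -Math.log(h) / k;
}
```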

    For more details, please refer to the following two articles:

    1. wgld.org | GLSL: オブジェクト同士を補間して結合する
    2. Inigo Quilez :: computer graphics, mathematics, shaders, fractals, demoscene and more

    4. Adding Noise for a Droplet-like Appearance

    So far, we’ve covered how to calculate normals and how to smoothly blend objects.

    Next, let’s tune the surface appearance to make things feel more realistic.

    In this demo, we’re aiming to create droplet-like metaballs. So how can we achieve that kind of look? The key idea here is to use noise to distort the surface.

    Let’s jump right into the code:

    // output.frag
    
    uniform float uTime;
    
    // ...
    
    float rnd3D(vec3 p) {
        return fract(sin(dot(p, vec3(12.9898, 78.233, 37.719))) * 43758.5453123);
    }
    
    float noise3D(vec3 p) {
        vec3 i = floor(p);
        vec3 f = fract(p);
    
        float a000 = rnd3D(i); // (0,0,0)
        float a100 = rnd3D(i + vec3(1.0, 0.0, 0.0)); // (1,0,0)
        float a010 = rnd3D(i + vec3(0.0, 1.0, 0.0)); // (0,1,0)
        float a110 = rnd3D(i + vec3(1.0, 1.0, 0.0)); // (1,1,0)
        float a001 = rnd3D(i + vec3(0.0, 0.0, 1.0)); // (0,0,1)
        float a101 = rnd3D(i + vec3(1.0, 0.0, 1.0)); // (1,0,1)
        float a011 = rnd3D(i + vec3(0.0, 1.0, 1.0)); // (0,1,1)
        float a111 = rnd3D(i + vec3(1.0, 1.0, 1.0)); // (1,1,1)
    
        vec3 u = f * f * (3.0 - 2.0 * f);
        // vec3 u = f*f*f*(f*(f*6.0-15.0)+10.0);
    
        float k0 = a000;
        float k1 = a100 - a000;
        float k2 = a010 - a000;
        float k3 = a001 - a000;
        float k4 = a000 - a100 - a010 + a110;
        float k5 = a000 - a010 - a001 + a011;
        float k6 = a000 - a100 - a001 + a101;
        float k7 = -a000 + a100 + a010 - a110 + a001 - a101 - a011 + a111;
    
    return k0 + k1 * u.x + k2 * u.y + k3 * u.z + k4 * u.x * u.y + k5 * u.y * u.z + k6 * u.z * u.x + k7 * u.x * u.y * u.z;
    }
    
    vec3 dropletColor(vec3 normal, vec3 rayDir) {
        vec3 reflectDir = reflect(rayDir, normal);
    
        float noisePosTime = noise3D(reflectDir * 2.0 + uTime);
        float noiseNegTime = noise3D(reflectDir * 2.0 - uTime);
    
        vec3 _color0 = vec3(0.1765, 0.1255, 0.2275) * noisePosTime;
        vec3 _color1 = vec3(0.4118, 0.4118, 0.4157) * noiseNegTime;
    
        float intensity = 2.3;
        vec3 color = (_color0 + _color1) * intensity;
    
        return color;
    }
    
    // ...
    
    void main() {
        // ...
    
        if (dist < EPS) {
            vec3 normal = generateNormal(ray);
            color = dropletColor(normal, rayDirection);
        }
    
        gl_FragColor = vec4(color, 1.0);
    }

    To create the droplet-like texture, we’re using value noise. If you’re unfamiliar with this noise technique, the explanation below walks through how it is generated.

    3D value noise is generated by interpolating random values placed at the eight vertices of a cube. The process involves three stages of linear interpolation:

    1. Bottom face interpolation: First, we interpolate between the four corner values on the bottom face of the cube
    2. Top face interpolation: Similarly, we interpolate between the four corner values on the top face
    3. Final z-axis interpolation: Finally, we interpolate between the results from the bottom and top faces along the z-axis

    This triple interpolation process is called trilinear interpolation.

    The following code demonstrates the trilinear interpolation process for 3D value noise:

    float n = mix(
        mix(mix(a000, a100, u.x), mix(a010, a110, u.x), u.y),
        mix(mix(a001, a101, u.x), mix(a011, a111, u.x), u.y),
        u.z
    );

    The nested mix() functions above can be converted into an explicit polynomial form for better performance:

    float k0 = a000;
    float k1 = a100 - a000;
    float k2 = a010 - a000;
    float k3 = a001 - a000;
    float k4 = a000 - a100 - a010 + a110;
    float k5 = a000 - a010 - a001 + a011;
    float k6 = a000 - a100 - a001 + a101;
    float k7 = -a000 + a100 + a010 - a110 + a001 - a101 - a011 + a111;
    
    float n = k0 + k1 * u.x + k2 * u.y + k3 * u.z + k4 * u.x * u.y + k5 * u.y * u.z + k6 * u.z * u.x + k7 * u.x * u.y * u.z;
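    Since both forms compute the same trilinear interpolation, the equivalence can be verified numerically. A TypeScript sketch comparing the nested-mix form with the expanded polynomial (helper names are ours):

```typescript
// GLSL-style mix: linear interpolation between a and b by t.
const mix = (a: number, b: number, t: number): number => a + (b - a) * t;

// Trilinear interpolation of eight corner values, nested-mix form.
function trilinearMix(c: number[], u: [number, number, number]): number {
  const [a000, a100, a010, a110, a001, a101, a011, a111] = c;
  return mix(
    mix(mix(a000, a100, u[0]), mix(a010, a110, u[0]), u[1]),
    mix(mix(a001, a101, u[0]), mix(a011, a111, u[0]), u[1]),
    u[2]
  );
}

// The same interpolation in the expanded polynomial form used by the shader.
function trilinearPoly(c: number[], u: [number, number, number]): number {
  const [a000, a100, a010, a110, a001, a101, a011, a111] = c;
  const k0 = a000;
  const k1 = a100 - a000;
  const k2 = a010 - a000;
  const k3 = a001 - a000;
  const k4 = a000 - a100 - a010 + a110;
  const k5 = a000 - a010 - a001 + a011;
  const k6 = a000 - a100 - a001 + a101;
  const k7 = -a000 + a100 + a010 - a110 + a001 - a101 - a011 + a111;
  const [x, y, z] = u;
  return k0 + k1 * x + k2 * y + k3 * z + k4 * x * y + k5 * y * z + k6 * z * x + k7 * x * y * z;
}
```

    Evaluating both forms at the cube's corners returns the corner values themselves, and they agree at interior points as well.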

    By sampling this noise using the reflection vector as coordinates, we can create a realistic water droplet-like texture. Note that we are using the surface normal obtained earlier to compute this reflection vector. To add time-based variation, we generate noise at positions offset by uTime:

    vec3 reflectDir = reflect(rayDir, normal);
    
    float noisePosTime = noise3D(reflectDir * 2.0 + uTime);
    float noiseNegTime = noise3D(reflectDir * 2.0 - uTime);

    Finally, we blend two noise-influenced colors and scale the result:

    vec3 _color0 = vec3(0.1765, 0.1255, 0.2275) * noisePosTime;
    vec3 _color1 = vec3(0.4118, 0.4118, 0.4157) * noiseNegTime;
    
    float intensity = 2.3;
    vec3 color = (_color0 + _color1) * intensity;

    It’s starting to look quite like a water droplet! However, it still appears a bit murky.
    To improve this, let’s add the following post-processing step:

    // output.frag
    
    if (dist < EPS) {
        vec3 normal = generateNormal(ray);
        color = dropletColor(normal, rayDirection);
    }
    
    vec3 finalColor = pow(color, vec3(7.0)); // added
    
    gl_FragColor = vec4(finalColor, 1.0); // modified

    Using pow(), darker regions are suppressed, allowing the highlights to pop and creating a more glass-like, translucent surface.
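    The effect of the pow() curve is easy to see with a couple of numbers: mid-gray values collapse toward black while values near 1.0 survive. A one-line TypeScript sketch (the function name is ours):

```typescript
// Per-channel tone curve equivalent to `pow(color, vec3(7.0))`:
// darker values are crushed toward zero, near-white highlights survive.
const toneCurve = (c: number): number => Math.pow(c, 7.0);
```

    For example, an input of 0.5 drops below 0.01 while 0.98 stays above 0.8, which is why the murky mid-tones disappear and the highlights pop.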

    5. Simulating Stretchy Droplets with Mouse Movement

    Finally, let’s make the droplet stretch and follow the mouse movement, giving it a soft and elastic feel.

    We’ll achieve this by placing multiple spheres along the mouse trail.

    // Output.ts
    
    constructor() {
    	// ...
    	this.trailLength = 15;
    	this.pointerTrail = Array.from({ length: this.trailLength }, () => new THREE.Vector2(0, 0));
    	
    	this.uniforms = {
    	    uTime: { value: Common.time },
    	    uResolution: {
    	        value: new THREE.Vector2(Common.width, Common.height),
    	    },
    	    uPointerTrail: { value: this.pointerTrail },
    	};
    }
    
    // ...
    
    /**
     * # rAF update
     */
    update() {
      this.updatePointerTrail();
      this.render();
    }
    
    /**
     * # Update the pointer trail
     */
    updatePointerTrail() {
      for (let i = this.trailLength - 1; i > 0; i--) {
         this.pointerTrail[i].copy(this.pointerTrail[i - 1]);
      }
      this.pointerTrail[0].copy(Pointer.coords);
    }

    // output.frag
    
    const int TRAIL_LENGTH = 15; // added
    uniform vec2 uPointerTrail[TRAIL_LENGTH]; // added
    
    // ...
    
    // modified
    float map(vec3 p) {
        float baseRadius = 8e-3;
        float radius = baseRadius * float(TRAIL_LENGTH);
        float k = 7.;
        float d = 1e5;
    
        for (int i = 0; i < TRAIL_LENGTH; i++) {
            float fi = float(i);
            vec2 pointerTrail = uPointerTrail[i] * uResolution / min(uResolution.x, uResolution.y);
    
            float sphere = sdSphere(
                    translate(p, vec3(pointerTrail, .0)),
                    radius - baseRadius * fi
                );
    
            d = smoothMin(d, sphere, k);
        }
    
        float sphere = sdSphere(translate(p, vec3(1.0, -0.25, 0.0)), 0.55);
        d = smoothMin(d, sphere, k);
    
        return d;
    }
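    The trail logic itself is independent of Three.js, so it can be sketched with plain objects standing in for THREE.Vector2 (the type and function names here are ours):

```typescript
// A minimal stand-in for THREE.Vector2.
type Vec2 = { x: number; y: number };

// Shift the pointer trail one slot per frame: each entry inherits the
// previous entry's position, and slot 0 takes the current pointer.
function updatePointerTrail(trail: Vec2[], pointer: Vec2): void {
  for (let i = trail.length - 1; i > 0; i--) {
    trail[i].x = trail[i - 1].x;
    trail[i].y = trail[i - 1].y;
  }
  trail[0].x = pointer.x;
  trail[0].y = pointer.y;
}
```

    Each frame the newest pointer position enters at index 0 and older positions ripple toward the tail; since the sphere radius shrinks with the index i in map(), the trail tapers like a stretching droplet.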

    Conclusion

    In this tutorial, we explored how to create a dynamic, droplet-like effect using ray marching and shading techniques. Here’s what we covered:

    1. Used ray marching to render spheres in 3D space.
    2. Applied smoothMin to blend the spheres into seamless metaballs.
    3. Added surface noise to give the spheres a more organic appearance.
    4. Simulated stretchy motion by arranging spheres along the mouse trail.

    By combining these techniques, we achieved a soft, fluid visual that responds to user interaction.

    Thanks for following along—I hope you find these techniques useful in your own projects!




  • How Seqrite Endpoint Protection Blocks Bots, Scripts, and Malware


    In today’s hyper-connected digital world, the cybersecurity landscape is shifting dramatically. Gone are the days when cyberattacks primarily relied on human intervention. We’re now facing a new breed of silent, swift adversaries: non-human threats. These automated entities—bots, malicious scripts, and sophisticated malware—are designed to operate at machine speed, exploiting vulnerabilities, bypassing traditional defenses, and often remaining undetected until significant damage has occurred. So, how do you defend against something you can’t see, something that moves faster than human reaction? The answer lies in intelligent, automated endpoint security. Enter Seqrite Endpoint Protection (EPP), your robust shield against these invisible invaders. Available for both cloud-based and on-premise deployments, Seqrite EPP is engineered with cutting-edge technologies specifically designed to identify and neutralize these stealthy, non-human threats.

    Understanding the Enigma: What Exactly Are Non-Human Cyber Threats?

    When we talk about “non-human cyber threats,” we’re referring to automated programs and code snippets that launch attacks without requiring direct human interaction. These include:

    • Bots: Automated programs designed to perform repetitive tasks at scale. Think credential stuffing attacks where bots try thousands of username/password combinations, or Distributed Denial of Service (DDoS) attacks that flood a server with traffic.
    • Malicious Scripts: These are pieces of automated code, often hidden within legitimate-looking files or web pages, designed to exploit system weaknesses, exfiltrate sensitive data, or spread malware across your network.
    • Exploit Kits: These are sophisticated toolkits that automatically scan systems for unpatched vulnerabilities and then deploy exploits to gain unauthorized access or deliver payloads like ransomware.

    The key characteristic of these threats is their autonomy and speed. They operate under the radar, making traditional, reactive security measures largely ineffective. This is precisely why proactive, automated detection and prevention mechanisms are absolutely critical for modern businesses.

    Seqrite Endpoint Protection: Your Multi-Layered Defense Against Automation

    Seqrite’s EPP doesn’t just offer a single line of defense; it deploys a comprehensive, multi-layered security framework. This framework is specifically engineered to detect and block automation-driven threats using a powerful combination of intelligent rule-based systems, behavioral analysis, and advanced AI-powered capabilities.

    Let’s dive into the key features that make Seqrite EPP a formidable opponent against non-human threats:

    1. Advanced Device Control: Many non-human threats, especially scripts and certain types of malware, are delivered via external devices like USB drives. Seqrite’s Advanced Device Control enforces strict usage policies, allowing you to define what devices can connect to your endpoints and how they can be used. By controlling storage, network, and wireless interfaces, you effectively close off a major entry point for automated attacks.
    2. Application Control with Zero Trust: Imagine only allowing approved applications and scripts to run on your systems. That’s the power of Seqrite’s Application Control. By implementing a Zero Trust model, it blocks unknown or unapproved applications and scripts from executing. Through meticulous allowlisting and blocklisting, only trusted applications can operate, making it incredibly effective against stealthy automation tools that attempt to execute malicious code.
    3. Behavior-Based Detection (GoDeep.AI): This is where Seqrite truly shines. Leveraging cutting-edge AI and machine learning, GoDeep.AI continuously monitors endpoint activity to identify abnormal and suspicious behaviors that indicate a non-human threat. This includes detecting:
      • Repetitive access patterns: A hallmark of bots attempting to brute-force accounts or scan for vulnerabilities.
      • Scripted encryption behavior: Instantly flags the tell-tale signs of ransomware encrypting files.
      • Silent data exfiltration attempts: Catches automated processes trying to siphon off sensitive information. The system doesn’t just detect; it actively stops suspicious activity in its tracks before it can cause any harm.
    4. Intrusion Detection & Prevention System (IDS/IPS): Seqrite’s integrated IDS/IPS actively monitors network traffic for known exploit patterns and anomalous behavior. This robust system is crucial for blocking automation-based threats that attempt to infiltrate your network through known vulnerabilities or launch network-based attacks like port scanning.
    5. File Sandboxing: When a suspicious file or script enters your environment, Seqrite doesn’t let it run directly on your system. Instead, it’s whisked away to a secure, isolated virtual sandbox environment for deep analysis. Here, the file is allowed to execute and its behavior is meticulously observed. If it exhibits any malicious traits—like attempting to mimic user behavior, access restricted resources, or encrypt files—it’s immediately flagged and stopped, preventing any potential damage to your actual endpoints.
    6. Web Protection & Phishing Control: Many non-human threats, particularly bots and sophisticated malware, rely on communication with remote command-and-control (C2) servers. Seqrite’s Web Protection proactively blocks:
      • Access to known malicious domains.
      • Phishing sites designed to steal credentials.
      • Unauthorized web access that could lead to malware downloads.
      • Crucially, it cuts off botnet callbacks, effectively severing the communication lines between bots and their command centers, rendering them inert.

    Enhancing Your Defense: Essential Supporting Features

    Beyond its core capabilities, Seqrite Endpoint Protection is bolstered by a suite of supporting features that further strengthen your organization’s resilience against non-human threats and beyond:

    • Patch Management: Automatically identifies and fixes software vulnerabilities that bots and scripts often exploit to gain entry. Proactive patching is key to prevention.
    • Firewall: Provides a critical layer of defense by filtering unauthorized network traffic and blocking communication with known botnet IP addresses.
    • Data Loss Prevention (DLP): Prevents automated data theft by monitoring and controlling data in transit, ensuring sensitive information doesn’t leave your network without authorization.
    • Centralized Log Management: Offers a unified view of security events, allowing for rapid detection and auditing of unusual or suspicious behaviors across all endpoints.
    • Disk Encryption Management: Safeguards your data by encrypting entire disks, stopping automated decryption attempts even if data is stolen, and protecting against ransomware.

     

    The Future of Endpoint Security: Why Non-Human Threat Detection is Non-Negotiable

    As we move deeper into 2025 and beyond, cyber threats are becoming increasingly automated, sophisticated, and often, AI-driven. Relying on traditional, signature-based security solutions is no longer enough to match the speed, stealth, and evolving tactics of automation-based attacks.

    Seqrite Endpoint Protection is built for this future. It leverages intelligent automation to effectively combat automation—blocking bots, malicious scripts, advanced ransomware, and other non-human threats before they can execute and wreak havoc on your systems and data.

    Final Takeaway: Don’t Let Invisible Threats Compromise Your Business

    In a world where cyberattacks are increasingly executed by machines, your defense must be equally advanced. With its comprehensive suite of features—including cutting-edge device and application control, AI-driven behavioral detection (GoDeep.AI), robust network-level protection, and secure sandboxing—Seqrite Endpoint Protection ensures your endpoints remain locked down and secure.

    Whether your organization operates with a cloud-first strategy or relies on a traditional on-premise infrastructure, Seqrite provides the adaptable and powerful security solutions you need.

    Ready to Fortify Your Defenses?

    It’s time to upgrade your endpoint security and protect your organization from both human-initiated and the ever-growing wave of non-human cyber threats.

    Explore how Seqrite can secure your business today. Request a Free Trial or Schedule a Demo.

     




  • How to Create Responsive and SEO-friendly WebGL Text




    Rendering text in WebGL opens up a whole new world of unique visual effects and creative possibilities that are often
    impossible to achieve with traditional HTML, CSS and JavaScript alone. By the end of this tutorial, we’ll have created
    WebGL-rendered text elements that perfectly mimic the underlying HTML structure. Since we’ll be taking an HTML-first
    approach, it will be responsive, SEO-friendly and accessible. We’ll be doing every step manually so you’ll gain a
    solid understanding of the principles behind merging HTML and WebGL, and text-specific stuff like how to translate CSS
    styles into the 3D world.

    We’ll be creating the below demo:

    We’ll start off with a standard styled HTML setup. Then we’ll recreate the text elements we have inside a 3D world.
    From there, we’ll position, scale and make the text responsive with the 3D space. Next, we’ll replicate the “mask
    reveal effect” in WebGL. And finally, we’ll apply some scroll-driven post processing effects to the scene.

    Below are the core steps we’ll follow to achieve the final result:

    1. Create the text as a HTML element and style it regularly using CSS
    2. Create a 3D world and recreate the text element within it
    3. Merge the 3D and 2D world, so that we can style the 3D text by using our viewport’s dimensions
    4. Sync the key properties like position, size and font — from the HTML element to the WebGL text element
    5. Hide the original HTML element
    6. Be left with only the 3D text, fully styled and positioned in sync with the hidden HTML structure
    7. Apply animations and post-processing to enhance our 3D scene

    Necessities and Prerequisites

We’ll be using the Three.js library to create the 3D world, so you should already be familiar with its basics. For the
creation of text meshes, we’ll be using the troika-three-text library, but you don’t need any prior experience with it.
If you’ve used HTML, CSS and JavaScript and know the basics of Three.js, you’re good to go.

    Let’s get started.

    1. Creating the Regular HTML and Making it Responsive

    Before diving into the WebGL and Three.js implementation, we first need to create the HTML structure that we’ll later
    mimic in the 3D world. I’ve set up a very simple page with some quick responsive content — you can find the
    setup content
    in the demo repository under
    index.html
    and
    styles.css
    .

    HTML
    :

    <div class="content">
      <div class="container">
        <section class="section__heading">
          <h3 data-animation="webgl-text" class="text__2">THREE.JS</h3>
          <h2 data-animation="webgl-text" class="text__1">
            RESPONSIVE AND ACCESSIBLE TEXT
          </h2>
        </section>
        <section class="section__main__content">
          <p data-animation="webgl-text" class="text__2">
            THIS TEXT IS STYLED TO LOOK LIKE A TYPICAL BLOCK OF TEXT ON A STANDARD
            WEBSITE. BUT UNDER THE SURFACE, IT'S BEING RENDERED WITH WEBGL INSTEAD
            OF TRADITIONAL HTML.
          </p>
          <p data-animation="webgl-text" class="text__2">
            THIS OPENS THE DOOR TO CUSTOM SHADER EFFECTS AND INTERACTIONS THAT GO
            BEYOND WHAT'S POSSIBLE WITH TRADITIONAL HTML.
          </p>
          <p data-animation="webgl-text" class="text__2">
        WE KEEP THE UNDERLYING HTML STRUCTURE PRESENT IN THE DOM. RATHER THAN
            CREATING MESHES DIRECTLY IN THREE.JS, THE SCENE IS BUILT BY READING FROM
            THE EXISTING HTML CONTENT. THIS WAY, SCREEN READERS, SEARCH ENGINES, AND
            OTHER TOOLS CAN STILL INTERPRET THE PAGE AS EXPECTED.
          </p>
        </section>
        <section class="section__footer">
          <p data-animation="webgl-text" class="text__3">
            NOW GO CRAZY WITH THE SHADERS :)
          </p>
        </section>
      </div>
    </div>
    

    styles.css

    :root {
      --clr-text: #fdcdf9;
      --clr-selection: rgba(255, 156, 245, 0.3);
      --clr-background: #212720;
    }
    
    @font-face {
      font-family: "Humane";
      src: url("/fonts/Humane-Black.ttf") format("truetype");
      font-weight: 900;
      font-style: normal;
      font-display: swap;
    }
    
    @font-face {
      font-family: "Humane";
      src: url("/fonts/Humane-Bold.ttf") format("truetype");
      font-weight: 700;
      font-style: normal;
      font-display: swap;
    }
    
    @font-face {
      font-family: "Humane";
      src: url("/fonts/Humane-ExtraBold.ttf") format("truetype");
      font-weight: 800;
      font-style: normal;
      font-display: swap;
    }
    
    @font-face {
      font-family: "Humane";
      src: url("/fonts/Humane-ExtraLight.ttf") format("truetype");
      font-weight: 200;
      font-style: normal;
      font-display: swap;
    }
    
    @font-face {
      font-family: "Humane";
      src: url("/fonts/Humane-Light.ttf") format("truetype");
      font-weight: 300;
      font-style: normal;
      font-display: swap;
    }
    
    @font-face {
      font-family: "Humane";
      src: url("/fonts/Humane-Medium.ttf") format("truetype");
      font-weight: 500;
      font-style: normal;
      font-display: swap;
    }
    
    @font-face {
      font-family: "Humane";
      src: url("/fonts/Humane-Regular.ttf") format("truetype");
      font-weight: 400;
      font-style: normal;
      font-display: swap;
    }
    
    @font-face {
      font-family: "Humane";
      src: url("/fonts/Humane-SemiBold.ttf") format("truetype");
      font-weight: 600;
      font-style: normal;
      font-display: swap;
    }
    
    @font-face {
      font-family: "Humane";
      src: url("/fonts/Humane-Thin.ttf") format("truetype");
      font-weight: 100;
      font-style: normal;
      font-display: swap;
    }
    
    body {
      background: var(--clr-background);
    }
    
    canvas {
      position: fixed;
      top: 0;
      left: 0;
      width: 100vw;
      height: 100vh;
      pointer-events: none;
    }
    
    ::selection {
      background-color: var(--clr-selection);
      color: var(--clr-text);
    }
    
    ::-moz-selection {
      background-color: var(--clr-selection);
      color: var(--clr-text);
    }
    
    .text__1,
    .text__2,
    .text__3 {
      color: var(--clr-text);
      text-align: center;
      margin-block-start: 0;
      margin-block-end: 0;
    }
    
    .content {
      width: 100%;
      font-family: Humane;
      font-size: 0.825vw;
    
      @media (max-width: 768px) {
        font-size: 2vw;
      }
    }
    .container {
      display: flex;
      flex-direction: column;
      align-items: center;
    
      width: 70em;
      gap: 17.6em;
      padding: 6em 0;
    
      @media (max-width: 768px) {
        width: 100%;
      }
    }
    
    .container section {
      display: flex;
      flex-direction: column;
      align-items: center;
      height: auto;
    }
    
    .section__main__content {
      gap: 5.6em;
    }
    
    .text__1 {
      font-size: 19.4em;
      font-weight: 700;
      max-width: 45em;
    
      @media (max-width: 768px) {
        font-size: 13.979em;
      }
    }
    
    .text__2 {
      font-size: 4.9em;
      max-width: 7.6em;
      letter-spacing: 0.01em;
    }
    
    .text__3 {
      font-size: 13.979em;
      max-width: 2.4em;
    }
    

    A Few Key Notes about the Setup

• The <canvas> element is set to cover the entire screen, fixed in place behind the main content, so a full-screen
  canvas sits behind the page at all times.
• All text elements intended for WebGL rendering are marked with data-animation="webgl-text" for clarity and easy
  selection when we begin scripting.

The purpose of this setup is to act as the “placeholder” that we’ll mimic in our 3D implementation. So it’s important
to position and style your text at this stage to match the final sizing and positioning you want to achieve. Focus on
text formatting properties like font-size, letter-spacing and line-height, because we’ll later read these computed
styles directly from the DOM during the WebGL phase. Color is optional here, as we can handle text coloring later with
shaders inside WebGL.

    That’s it for the HTML and CSS setup! It’s all we need for the foundation to move onto our JavaScript and WebGL
    implementation.

    2. Initial 3D World Setup

    Let’s move onto the JavaScript and WebGL implementation. I’ll be using TypeScript, but you can easily follow along
    with vanilla JavaScript if you prefer. I’m assuming you’re already familiar with the basics of Three.js, so I’ll focus
    on explaining the high-level setup rather than covering every detail.

Below is the starter TypeScript and Three.js base that I’ll be using for this demo.

    // main.ts
    
    import Commons from "./classes/Commons";
    import * as THREE from "three";
    
    /**
     * Main entry-point.
     * Creates Commons and Scenes
     * Starts the update loop
     * Eventually creates Postprocessing and Texts.
      */
    class App {
      private commons!: Commons;
    
      scene!: THREE.Scene;
    
      constructor() {
        document.addEventListener("DOMContentLoaded", async () => {
          await document.fonts.ready; // Important to wait for fonts to load when animating any texts.
    
          this.commons = Commons.getInstance();
          this.commons.init();
    
          this.createScene();
          
          this.addEventListeners();
    
          this.update();
        });
      }
    
      private createScene() {
        this.scene = new THREE.Scene();
      }
    
      /**
       * The main loop handler of the App
       * The update function to be called on each frame of the browser.
       * Calls update on all other parts of the app
       */
      private update() {
        this.commons.update();
    
        this.commons.renderer.render(this.scene, this.commons.camera);
    
        window.requestAnimationFrame(this.update.bind(this));
      }
    
      private addEventListeners() {
        window.addEventListener("resize", this.onResize.bind(this));
      }
    
      private onResize() {
        this.commons.onResize();
      }
    }
    
    export default new App();
    
    // Commons.ts
    
    import { PerspectiveCamera, WebGLRenderer, Clock } from "three";
    
    import Lenis from "lenis";
    
    export interface Screen {
      width: number;
      height: number;
      aspect: number;
    }
    
    export interface Sizes {
      screen: Screen;
      pixelRatio: number
    }
    
    /**
     * Singleton class for Common stuff.
     * Camera
     * Renderer
     * Lenis
     * Time
     */
    export default class Commons {
      private constructor() {}
      
      private static instance: Commons;
    
      lenis!: Lenis;
      camera!: PerspectiveCamera;
      renderer!: WebGLRenderer;
    
      private time: Clock = new Clock();
      elapsedTime!: number;
    
      sizes: Sizes = {
        screen: {
          width: window.innerWidth,
          height: window.innerHeight,
          aspect: window.innerWidth / window.innerHeight,
        },
        pixelRatio: this.getPixelRatio(),
      };
    
      private distanceFromCamera: number = 1000;
    
      /**
       * Function to be called to either create Commons Singleton instance, or to return existing one.
       * TODO AFTER: Call instances init() function.
       * @returns Commons Singleton Instance.
       */
      static getInstance() {
        if (this.instance) return this.instance;
    
        this.instance = new Commons();
        return this.instance;
      }
    
      /**
       * Initializes all-things Commons. To be called after instance is set.
       */
      init() {
        this.createLenis();
        this.createCamera();
        this.createRenderer();
      }
    
      /**
       * Creating Lenis instance.
       * Sets autoRaf to true so we don't have to manually update Lenis on every frame.
       * Resets possible saved scroll position.
       */
      private createLenis() {
        this.lenis = new Lenis({ autoRaf: true, duration: 2 });
      }
    
      private createCamera() {
        this.camera = new PerspectiveCamera(
          70,
          this.sizes.screen.aspect,
          200,
          2000
        );
        this.camera.position.z = this.distanceFromCamera;
        this.camera.updateProjectionMatrix();
      }
    
      /**
       * createRenderer(): Creates the common WebGLRenderer to be used.
       */
      private createRenderer() {
        this.renderer = new WebGLRenderer({
          alpha: true, // Sets scene background to transparent, so our body background defines the background color
        });
    
        this.renderer.setSize(this.sizes.screen.width, this.sizes.screen.height);
    
        this.renderer.setPixelRatio(this.sizes.pixelRatio);
    
    	  // Creating canvas element and appending to body element.
        document.body.appendChild(this.renderer.domElement); 
      }
    
      /**
       * Single source of truth to get pixelRatio.
       */
      getPixelRatio() {
        return Math.min(window.devicePixelRatio, 2);
      }
    
      /**
       * Resize handler function is called from the entry-point (main.ts)
       * Updates the Common screen dimensions.
       * Updates the renderer.
       * Updates the camera.
       */
      onResize() {
        this.sizes.screen = {
          width: window.innerWidth,
          height: window.innerHeight,
          aspect: window.innerWidth / window.innerHeight,
        };
        this.sizes.pixelRatio = this.getPixelRatio();
    
        this.renderer.setSize(this.sizes.screen.width, this.sizes.screen.height);
        this.renderer.setPixelRatio(this.sizes.pixelRatio);
    
        this.onResizeCamera();
      }
    
      /**
       * Handler function that is called from onResize handler.
       * Updates the perspective camera with the new adjusted screen dimensions
       */
      private onResizeCamera() {
        this.camera.aspect = this.sizes.screen.aspect;
        this.camera.updateProjectionMatrix();
      }
    
      /**
       * Update function to be called from entry-point (main.ts)
       */
      update() {
        this.elapsedTime = this.time.getElapsedTime();
      }
    }
    

    A Note About Smooth Scroll

When syncing the HTML and WebGL worlds, you should use a custom scroll. The native browser scroll updates the scroll
position at irregular intervals and thus doesn’t guarantee frame-perfect updates with our requestAnimationFrame loop
and our WebGL world, causing jittery, unsynchronized movement.

    By integrating a custom scroll (Lenis in this case), we ensure our scroll updates perfectly match the frame updates of
    our WebGL world.

    Right now we are seeing an empty 3D world, continuously being rendered.

    We’re only missing one thing to get something visible in our 3D world: the creation of the text elements. So let’s
    move onto creating our WebGLText class next.

    3. Creating WebGLText Class and Texts Meshes

    For the creation of the text meshes, we’ll be using
    troika-three-text
    library.

    npm i troika-three-text

We’ll now create a reusable WebGLText class. This will handle turning each HTML element into a 3D text mesh, using
Troika and our Three.js scene.
    Here’s the basic setup:

    // WebGLText.ts
    
    import Commons from "./Commons";
    import * as THREE from "three";
    
    // @ts-ignore
    import { Text } from "troika-three-text";
    
    interface Props {
      scene: THREE.Scene;
      element: HTMLElement;
    }
    
    export default class WebGLText {
      commons: Commons;
    
      scene: THREE.Scene;
      element: HTMLElement;
    
      computedStyle: CSSStyleDeclaration;
      font!: string; // Path to our .ttf font file.
      bounds!: DOMRect;
      color!: THREE.Color;
      material!: THREE.ShaderMaterial;
      mesh!: Text;
    
  // We assign the correct font based on our element's font weight here
      weightToFontMap: Record<string, string> = {
        "900": "/fonts/Humane-Black.ttf",
        "800": "/fonts/Humane-ExtraBold.ttf",
        "700": "/fonts/Humane-Bold.ttf",
        "600": "/fonts/Humane-SemiBold.ttf",
        "500": "/fonts/Humane-Medium.ttf",
        "400": "/fonts/Humane-Regular.ttf",
        "300": "/fonts/Humane-Light.ttf",
        "200": "/fonts/Humane-ExtraLight.ttf",
        "100": "/fonts/Humane-Thin.ttf",
      };
      
      private y: number = 0; // Scroll-adjusted bounds.top
      
      private isVisible: boolean = false;
    
      constructor({ scene, element }: Props) {
        this.commons = Commons.getInstance();
    
        this.scene = scene;
        this.element = element;
    
        this.computedStyle = window.getComputedStyle(this.element); // Saving initial computed style.
      }
    }
    

We have access to the Text class from Troika, which allows us to create text meshes and apply styling to them using
familiar properties like fontSize, letterSpacing, and font. I’ll cover everything you need to style your text
responsively in this tutorial, but I encourage you to take a look at the full documentation and its possibilities here.

    Troika doesn’t ship with TypeScript definitions out of the box, so if you’re using TS, you can quickly get around this
    by creating a type declaration file in the root of your project. It’s not pretty, but it gets the job done while
    keeping TypeScript happy.

    // troika.d.ts
    
    declare module "troika-three-text" {
      const value: any;
      export default value;
    }

    Let’s start by creating new methods called createFont(), createColor() and createMesh().

    createFont()
    : Selects the appropriate .ttf file based on the DOM element’s font-weight. If a match isn’t found, we fall back to
    the regular weight. Adjust the mapping to match your own font files and multiple font families if needed.

    // WebGLText.ts 
    
    private createFont() {
        this.font =
          this.weightToFontMap[this.computedStyle.fontWeight] ||
          "/fonts/Humane-Regular.ttf";
    }

    createColor()
    : Converts the computed CSS color into a THREE.Color instance:

    // WebGLText.ts 
    
    private createColor() {
        this.color = new THREE.Color(this.computedStyle.color);
    }

createMesh():
Instantiates the text mesh and sets some basic properties, copies the element’s inner text onto the mesh, and adds the
mesh to our Three.js scene. We anchor the text from the left-center to match typical HTML layout expectations.

    // WebGLText.ts 
    private createMesh() {
      this.mesh = new Text();
    
      this.mesh.text = this.element.innerText; // Copying HTML content over to the mesh
      this.mesh.font = this.font;
    
      // Anchor the text to the left-center (instead of center-center)
      this.mesh.anchorX = "0%";
      this.mesh.anchorY = "50%";
    
      this.mesh.color = this.color;
    
      this.scene.add(this.mesh);
    }

    ⚠️ When copying text contents over to the Mesh, avoid using innerHTML or textContent and use innerText instead as it
    gives the most layout-accurate and consistent results.

setStaticValues()
: Let’s also create a small setStaticValues() method, which sets the critical properties of our text mesh based on the
computedStyle.

For now it sets values like font size based on the computed CSS. We’ll expand it as we sync more styles down the line.

    We want to call all these methods in the constructor like this:

    // WebGLText.ts 
     constructor({ scene, element }: Props) {
      this.commons = Commons.getInstance();
    
      this.scene = scene;
      this.element = element;
    
      this.computedStyle = window.getComputedStyle(this.element); // Saving initial computed style.
    
      this.createFont();
      this.createColor();
      this.createMesh();
      this.setStaticValues();
    }

    Instantiating Text Elements from DOM

Finally, let’s update our App class (main.ts) and hook this all up by scanning for DOM elements with a
data-animation="webgl-text" attribute, creating a WebGLText instance for each one:

    // main.ts
    
    texts!: Array<WebGLText>;
    
    // ...
    
    private createWebGLTexts() {
      const texts = document.querySelectorAll('[data-animation="webgl-text"]');
    
      if (texts) {
        this.texts = Array.from(texts).map((el) => {
          const newEl = new WebGLText({
            element: el as HTMLElement,
            scene: this.scene,
          });
    
          return newEl;
        });
      }
    }
    

    Make sure to call this method in the constructor on initialization. This will populate our scene with styled text
    meshes based on our DOM content.

    That’s all we need to have our text meshes visible, it’s not the prettiest sight to behold, but at least we got
    everything working:

    Next Challenge: Screen vs. 3D Space Mismatch

    Even though we copy the font size directly from the DOM, the scale looks different in 3D. That’s because
    WebGL units don’t map 1:1 with screen pixels
    , and they operate in different coordinate systems. This mismatch will become even more obvious if we start
    positioning and animating elements.

    To get true visual parity between our DOM elements and WebGL text, we need to bridge the gap between screen space and
    3D space. Let’s tackle that next so our text sizes and positions actually match with what we see in the browser.

    4. Syncing Dimensions

    The major problem when syncing HTML and WebGL dimensions is that things between them aren’t exactly pixel-perfect.
    This is because the DOM and WebGL don’t “speak the same units” by default.

    • Web browsers work in screen pixels.
    • WebGL uses arbitrary units

    Our goal is simple:

    💡 Make one unit in the WebGL scene equal one pixel on the screen.

To achieve this, we’ll adjust the camera’s field of view (FOV) so that the visible area through the camera exactly
matches the dimensions of the browser window in pixels.

So we’ll create a syncDimensions() function in our Commons class, which calculates the camera’s field of view such
that 1 unit in the WebGL scene corresponds to 1 pixel on the screen, at a given distance from the camera.

     // Commons.ts 
    /**
      * Helper function that is called upon creation and resize
      * Updates the camera's fov according to the new dimensions such that the window's pixels match with that of WebGL scene
      */
    private syncDimensions() {
      this.camera.fov =
        2 *
        Math.atan(this.sizes.screen.height / 2 / this.distanceFromCamera) *
        (180 / Math.PI);
    }

    This function will be called once when we create the camera, and every time that the screen is resized.

    
    //Commons.ts
    
    private createCamera() {
      this.camera = new PerspectiveCamera(
        70,
        this.sizes.screen.aspect,
        200,
        2000
      );
      this.camera.position.z = this.distanceFromCamera;
      this.syncDimensions(); // Syncing dimensions
      this.camera.updateProjectionMatrix();
    }
    
    // ...
    
    private onResizeCamera() {
      this.syncDimensions(); // Syncing dimensions
    
      this.camera.aspect = this.sizes.screen.aspect;
      this.camera.updateProjectionMatrix();
    }

    Let’s break down what’s actually going on here using the image below:

    We know:

    • The height of the screen
    • The distance from camera (Z)
    • The FOV of the camera is the vertical angle (fov y in the image)

    So our main goal is to set how wide (vertical angle) we see according to our screen height.

Because the Z distance from the camera and half of the screen height form a right triangle, we can solve for the angle
with basic trigonometry, computing half of the FOV from the inverse tangent (atan) of their ratio.

    Step-by-step Breakdown of the Formula

    this.sizes.screen.height / 2

    → This gives us half the screen’s pixel height — the opposite side of our triangle.

    this.distanceFromCamera

    → This is the adjacent side of the triangle — the distance from the camera to the 3D scene.

    Math.atan(opposite / adjacent)

    → Calculates half of the vertical FOV (in radians).

*2

→ Since we used half the screen height, atan gives only half the vertical angle, so we multiply by 2 to get the full FOV.

    * (180 / Math.PI)

    → Converts the angle from radians to degrees (Three.js expects degrees for PerspectiveCamera’s fov)

    So the final formula comes down to:

    this.camera.fov =
      2 *
      Math.atan(this.sizes.screen.height / 2 / this.distanceFromCamera) *
      (180 / Math.PI);

    That’s all we need to sync our dimensions, and this setup ensures that 1 unit in WebGL = 1 pixel on screen.
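To sanity-check the formula, here it is extracted into pure functions (a minimal sketch; the function names and sample numbers are mine, not from the demo). The round trip confirms the guarantee: with the computed FOV, the height visible at the camera distance equals the screen height exactly.

```typescript
// Same math as syncDimensions(), as a pure function.
function computeFov(screenHeightPx: number, distanceFromCamera: number): number {
  return 2 * Math.atan(screenHeightPx / 2 / distanceFromCamera) * (180 / Math.PI);
}

// Inverse check: how many world units tall is the view at a given distance?
function visibleHeightAt(fovDeg: number, distance: number): number {
  return 2 * distance * Math.tan((fovDeg * Math.PI) / 180 / 2);
}

// e.g. a 1080px-tall window at distance 1000 gives a FOV of roughly 57°,
// and visibleHeightAt() then recovers 1080 world units — one unit per pixel.
```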

    Let’s move back to the text implementation.

    5. Setting Text Properties and Positioning

    Now that we’ve synced the WebGL scene to our screen’s pixel dimensions, we can start mapping HTML styles to our 3D
    text.

    If everything’s wired up correctly, you should see that the font size of the WebGL-rendered text matches the size of
    the underlying HTML, although the positioning is still off.

    Let’s sync more styling properties and positioning.

Before we can position the 3D text, we need to get the DOM element’s position and size. We’ll create a new method in
the WebGLText class called createBounds(), using the browser’s built-in getBoundingClientRect() method:

    // WebGLText.ts
    
    private createBounds() {
      this.bounds = this.element.getBoundingClientRect();
      this.y = this.bounds.top + this.commons.lenis.actualScroll;
    }

    And call this in the constructor:

      // WebGLText.ts
    
    constructor({ scene, element }: Props) {
      this.commons = Commons.getInstance();
    
      this.scene = scene;
      this.element = element;
    
      this.computedStyle = window.getComputedStyle(this.element); // Saving initial computed style.
    
      this.createFont();
      this.createColor();
      this.createBounds(); // Creating bounds
      this.createMesh();
      this.setStaticValues();
    }

    Next, we’ll pull important typographic properties from the DOM (computed style) and pass them to the 3D mesh, so that
    it behaves like our native HTML text. (Again, you can see the full documentation and possible properties of troika
    here
    ). Below I’ve included the most important ones.

      // WebGLText.ts 
    
    private setStaticValues() {
      const { fontSize, letterSpacing, lineHeight, whiteSpace, textAlign } =
        this.computedStyle;
    
      const fontSizeNum = window.parseFloat(fontSize);
    
      this.mesh.fontSize = fontSizeNum;
    
      this.mesh.textAlign = textAlign;
    
      // Troika defines letter spacing in em's, so we convert to them
      this.mesh.letterSpacing = parseFloat(letterSpacing) / fontSizeNum;
    
      // Same with line height
      this.mesh.lineHeight = parseFloat(lineHeight) / fontSizeNum;
    
      // Important to define maxWidth for the mesh, so that our text doesn't overflow
      this.mesh.maxWidth = this.bounds.width;
    
      // Match whiteSpace behavior (e.g., 'pre', 'nowrap')
      this.mesh.whiteSpace = whiteSpace;
    }

Troika accepts some of the properties in local em units, so we have to convert pixels into ems by dividing the pixel
value by the font size.

    Also, it’s important to set a maximum width (in pixels) to constrain the mesh’s layout — this prevents text from
    overflowing and ensures proper text wrapping.
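The pixel-to-em conversion is just a division, but it’s easy to get backwards. Here it is as a tiny helper (a sketch; Troika itself just takes the resulting number for letterSpacing and lineHeight):

```typescript
// Convert a pixel-based CSS value (e.g. letter-spacing: 2px) into em units
// relative to the element's font size, which is what Troika expects.
function pxToEm(pxValue: number, fontSizePx: number): number {
  return pxValue / fontSizePx;
}

// letter-spacing: 2px at font-size: 16px → 0.125em
// line-height: 24px at font-size: 16px → 1.5em
```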

    And finally, let’s create an
    update()
    function to be called on each frame that consistently positions our mesh according to the underlying DOM position.

    This is what it looks like:

    //WebGLText.ts
    
    update() {
      this.mesh.position.y =
        -this.y +
        this.commons.lenis.animatedScroll +
        this.commons.sizes.screen.height / 2 -
        this.bounds.height / 2;
    
      this.mesh.position.x =
        this.bounds.left - this.commons.sizes.screen.width / 2;
    }

    Breakdown:

• -this.y shifts the mesh by the element’s absolute document offset (negated, because WebGL’s Y axis points up while
  DOM offsets grow downward).
    • lenis.animatedScroll
      re-applies the live animated scroll position.
    • Together, they give the current relative position inside the viewport.

    Since our WebGL coordinate system is centered in the middle of the screen (Y = 0 is center), we also:

    • Add half the screen height (to convert from DOM top-left origin to WebGL center origin)
    • Subtract half the text height to vertically center the text
    • Subtract half the screen width
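The coordinate conversion above can be isolated into a pure function for clarity. This is a sketch of the same mapping; the Bounds shape and the scrollY parameter are stand-ins for getBoundingClientRect() and Lenis’s animatedScroll, not part of the demo’s API.

```typescript
interface Bounds {
  left: number;
  top: number; // absolute (document-space) top: bounds.top + scroll at measure time
  width: number;
  height: number;
}

// Map a DOM box (top-left origin, y grows downward) to a WebGL position
// (screen-centered origin, y grows upward), given the live scroll offset.
function domToWebGL(bounds: Bounds, screenW: number, screenH: number, scrollY: number) {
  return {
    x: bounds.left - screenW / 2,
    y: -(bounds.top - scrollY) + screenH / 2 - bounds.height / 2,
  };
}
```

For a 100px-tall element flush with the top-left corner of a 1920×1080 viewport, this yields x = -960 and y = 490: half the screen height (540) minus half the element height (50).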

    Now, we call this update function for each of the text instances in our entry-file:

      // main.ts
    
    private update() {
      this.commons.update();
    
      this.commons.renderer.render(this.scene, this.commons.camera);
    
    
      if (this.texts) {
        this.texts.forEach((el) => el.update());
      }
    
      window.requestAnimationFrame(this.update.bind(this));
    }

And now, the texts will perfectly follow their DOM counterparts, even as the user scrolls.

    Let’s finalize our base text class implementation before diving into effects:

    Resizing

    We need to ensure that our WebGL text updates correctly on window resize events. This means
    recreating the computedStyle, bounds, and static values
    whenever the window size changes.

    Here’s the resize event handler:

     // WebGLText.ts
    
    onResize() {
      this.computedStyle = window.getComputedStyle(this.element);
      this.createBounds();
      this.setStaticValues();
    }

    And, call it in the entry-point for each of the text instances:

      // main.ts
    
    private onResize() {
      this.commons.onResize();
    
      // Resizing texts
      if (this.texts) {
        this.texts.forEach((el) => el.onResize());
      }
    }

    Once everything is working responsively and perfectly synced with the DOM, we can finally
    hide the original HTML text by setting it transparent
    — but we’ll keep it in place so it’s still selectable and accessible to the user.

    // WebGLText.ts
    
    this.createFont();
    this.createColor();
    this.createBounds();
    this.createMesh();
    this.setStaticValues();
    
    this.element.style.color = "transparent"; // Hide DOM element

    We should now have our perfectly responsive text meshes, and the user only sees the rendered WebGL text, while the DOM
    element remains fully intact for accessibility.

    Let’s add some effects!

    6. Adding a Custom shader and Replicating Mask Reveal Animations

    Troika also lets us use custom shader materials for meshes, giving us the flexibility to create complex effects beyond
    just setting colors.

    Let’s set up our initial custom shaders:

    Fragment Shader:

    // text.frag
    
    uniform vec3 uColor;
    varying vec2 vUv;
    
    void main() {
      gl_FragColor = vec4(uColor, 1.0); // Applying our custom color.
    }

    The fragment shader defines the color of the text using the uColor uniform.

    Vertex Shader:

    // text.vert
    varying vec2 vUv;
    
    void main() {
      vUv = uv;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }

    The vertex shader passes the texture coordinates (uv) to the fragment shader for the text rendering.

    Shader File Imports using Vite

    To handle shader files more easily, we can use the
    vite-plugin-glsl
    plugin together with Vite to directly import shader files like .frag and .vert in code:

npm i vite-plugin-glsl -D

// vite.config.ts
    
    import { defineConfig } from "vite";
    import glsl from "vite-plugin-glsl";
    
    export default defineConfig({
      plugins: [
        glsl({
          include: [
            "**/*.glsl",
            "**/*.wgsl",
            "**/*.vert",
            "**/*.frag",
            "**/*.vs",
            "**/*.fs",
          ],
          warnDuplicatedImports: true,
          defaultExtension: "glsl",
          watch: true,
          root: "/",
        }),
      ],
    });
    

    If you’re using TypeScript, you also need to declare the modules for shader files so TypeScript can understand how to
    import them:

    // shaders.d.ts
    
    declare module "*.frag" {
      const value: string;
      export default value;
    }
    
    declare module "*.vert" {
      const value: string;
      export default value;
    }
    
    declare module "*.glsl" {
      const value: string;
      export default value;
    }

    Creating Custom Shader Materials

    Let’s now create our custom ShaderMaterial and apply it to our mesh:

    // WebGLText.ts
    
    // Importing shaders
    import fragmentShader from "../../shaders/text/text.frag";
    import vertexShader from "../../shaders/text/text.vert";
    
    //...
    
    this.createFont();
    this.createColor();
    this.createBounds();
    this.createMaterial(); // Creating material
    this.createMesh();
    this.setStaticValues();
    
    //...
    
private createMaterial() {
  this.material = new THREE.ShaderMaterial({
    fragmentShader,
    vertexShader, // Note the comma here; omitting it is a syntax error
    uniforms: {
      uColor: new THREE.Uniform(this.color), // Passing our color to the shader
    },
  });
}

In the createMaterial() method, we define the ShaderMaterial using the imported shaders and pass in the uColor uniform, which allows us to dynamically control the color of the text based on our DOM-element.

    And now, instead of setting the color directly on the default mesh material, we apply our new custom material:

      // WebGLText.ts
    
    private createMesh() {
      this.mesh = new Text();
    
      this.mesh.text = this.element.innerText; // Always use innerText (not innerHTML or textContent).
      this.mesh.font = this.font;
    
      this.mesh.anchorX = "0%";
      this.mesh.anchorY = "50%";
    
      this.mesh.material = this.material; //Using custom material instead of color
    }

At this point, we are using our custom shader material, but so far, nothing in our output has changed. Let’s now set up show and hide animations using our custom shader, and replicate the mask reveal effect.

    Setting up Reveal Animations

    We’ll create an animation that uses a progress uniform (uProgress) to control the visibility and reveal progress of
    the text. The animation will be controlled using the motion library.

First, we must install motion and import its animate and inView functions in our WebGLText class.

    npm i motion
    // WebGLText.ts
    
    import { inView, animate } from "motion";

Now, let’s configure our class so that when the text steps into view, the show() function is called, and when it steps out of view, the hide() function is called. These methods also track the current visibility in this.isVisible, and animate the uProgress uniform between 0 and 1.

For this, we also must set up an addEventListeners() function:

     // WebGLText.ts
    
    /**
      * Inits visibility tracking using motion's inView function.
      * Show is called when the element steps into view, and hide is called when the element steps out of view
      */
    private addEventListeners() {
      inView(this.element, () => {
        this.show();
    
        return () => this.hide();
      });
    }
    
    show() {
      this.isVisible = true;
    
      animate(
        this.material.uniforms.uProgress,
        { value: 1 },
        { duration: 1.8, ease: [0.25, 1, 0.5, 1] }
      );
    }
    
    hide() {
      animate(
        this.material.uniforms.uProgress,
        { value: 0 },
        { duration: 1.8, onComplete: () => (this.isVisible = false) }
      );
    }

    Just make sure to call addEventListeners() in your constructor after setting up the class.

    Updating the Shader Material for Animation

    We’ll also add two additional uniform variables in our material for the animations:

• uProgress: Controls the reveal progress (from 0 to 1).
• uHeight: Used by the vertex shader to calculate the vertical position offset.

Updated createMaterial() method:

     // WebGLText.ts
    
    private createMaterial() {
      this.material = new THREE.ShaderMaterial({
        fragmentShader,
        vertexShader,
        uniforms: {
          uProgress: new THREE.Uniform(0),
          uHeight: new THREE.Uniform(this.bounds.height),
          uColor: new THREE.Uniform(this.color),
        },
      });
    }

Since uHeight depends on the bounds, we also want to update the uniform variable upon resizing:

      // WebGLText.ts
    
    onResize() {
      this.computedStyle = window.getComputedStyle(this.element);
      this.createBounds();
      this.setStaticValues();
      this.material.uniforms.uHeight.value = this.bounds.height;
    }

    We now have the text class instance automatically calling show() and hide(), and animating the uProgress according to
    the visibility of our underlying DOM-element.

    For performance, you might want to update the update() method to only calculate a new position when the mesh is
    visible:

    update() {
      if (this.isVisible) {
        this.mesh.position.y =
          -this.y +
          this.commons.lenis.animatedScroll +
          this.commons.sizes.screen.height / 2 -
          this.bounds.height / 2;
    
        this.mesh.position.x =
          this.bounds.left - this.commons.sizes.screen.width / 2;
      }
    }
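The position math in update() is just a DOM-to-WebGL coordinate conversion, so it can be sanity-checked with plain numbers. Here's a minimal JavaScript sketch; the values are hypothetical, standing in for the Commons sizes and the Lenis scroll state:

```javascript
// Recomputes the mesh position exactly as update() does, so the
// DOM-to-WebGL coordinate conversion can be verified in isolation.
function computeMeshPosition({ y, bounds, animatedScroll, screen }) {
  return {
    // Vertical: flip the DOM y-axis, add the scroll offset, then center
    // on screen and offset by half the element height (anchorY is 50%).
    y: -y + animatedScroll + screen.height / 2 - bounds.height / 2,
    // Horizontal: DOM left edge relative to the screen center.
    x: bounds.left - screen.width / 2,
  };
}

// Hypothetical values: a 100px-tall element 400px from the top of the
// document, on a 1000x2000 viewport, scrolled down 150px.
const pos = computeMeshPosition({
  y: 400,
  bounds: { height: 100, left: 120 },
  animatedScroll: 150,
  screen: { width: 1000, height: 2000 },
});

console.log(pos.y); // -400 + 150 + 1000 - 50 = 700
console.log(pos.x); // 120 - 500 = -380
```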

    Mask Reveal Theory and Shader Implementation

Creating a mask reveal effect with custom shaders in WebGL is surprisingly simple when we break it down into two separate movements: one happening in the fragment shader and the other in the vertex shader. You might’ve seen this effect in WebGL on the website of Zajno, for example.

    Instead of overcomplicating the concept with complex masks or thinking about “lifting it up” behind a window (as we do
    in traditional HTML), we can think of it as two distinct actions that work together.

1. Fragment shader: We clip the text vertically, revealing it gradually from top to bottom.
2. Vertex shader: We translate the text’s position from the bottom to the top by its height.

    Together these two movements create the illusion of the text lifting itself up from behind a mask.

    Let’s update our fragment shader code:

    //text.frag
    
    uniform float uProgress; // Our progress value between 0 and 1
    
    uniform vec3 uColor;
    varying vec2 vUv;
    
    void main() {
  // Calculate the reveal threshold (the text reveals from top to bottom)
  float reveal = 1.0 - vUv.y;
      
      // Discard fragments above the reveal threshold based on progress
      if (reveal > uProgress) discard;
    
      // Apply the color to the visible parts of the text
      gl_FragColor = vec4(uColor, 1.0);
    }
    
• When uProgress is 0, the mesh is fully clipped out and nothing is visible.
• As uProgress increases towards 1, the mesh reveals itself from top to bottom.
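The discard logic is easy to verify in isolation. Here's a quick sketch of the same predicate in plain JavaScript, assuming vUv.y runs from 0 at the bottom of the mesh to 1 at the top:

```javascript
// Mirrors the fragment shader: a fragment survives when its reveal
// value (1.0 - vUv.y) is at or below the current progress.
function isFragmentVisible(vUvY, uProgress) {
  const reveal = 1.0 - vUvY;
  return reveal <= uProgress; // the shader discards when reveal > uProgress
}

// At 50% progress, fragments near the top are visible...
console.log(isFragmentVisible(0.9, 0.5)); // true
// ...while fragments near the bottom are still clipped.
console.log(isFragmentVisible(0.1, 0.5)); // false
```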

    For the vertex shader, we can simply pass the new uniform called uHeight, which stands for the height of our
    DOM-element (this.bounds.height), and translate the output vertically according to it and uProgress.

    //text.vert
    
    uniform float uProgress;
    uniform float uHeight; // Total height of the mesh passed in from JS
    
    varying vec2 vUv;
    
    void main() {
      vUv = uv;
      
      vec3 transformedPosition = position;
    
      // Push the mesh upward as it reveals
      transformedPosition.y -= uHeight * (1.0 - uProgress);
      
      gl_Position = projectionMatrix * modelViewMatrix * vec4(transformedPosition, 1.0);
    }
• uHeight: Total height of the DOM-element (and mesh), passed in from JS.
• When uProgress is 0, the mesh is fully pushed down.
• As uProgress reaches 1, it resolves to its natural position.

    Now, we should have a beautifully on-scroll animating scene, where the texts reveal themselves as in regular HTML when
    they scroll into view.

    To spice things up, let’s add some scroll-velocity based post processing effects to our scene as the final step!

    7. Adding Post-processing

Now that we’ve built our animated WebGL text with custom shaders and scroll-triggered reveals, we can push the visuals further with post-processing.

    Post-processing allows us to apply full-screen visual effects after the scene has been rendered. This is done by
    passing the final image through a series of custom shader passes.

    So, in this final section, we’ll:

    • Set up a PostProcessing class using Three.js’s EffectComposer
    • Add a custom RGB shift and wave distortion effect
    • Drive the distortion strength dynamically using the scroll velocity from our Lenis custom scroll instance

    Creating a PostProcessing class with EffectComposer

Let’s create a PostProcessing class that will be initialized from our entry point, and which will handle everything related to post-processing using Three.js’s EffectComposer. You can read more about the EffectComposer class in the Three.js documentation. We’ll also create new fragment and vertex shaders for the post-processing class to use.

    // PostProcessing.ts
    
    import {
      EffectComposer,
      RenderPass,
      ShaderPass,
    } from "three/examples/jsm/Addons.js";
    import Commons from "./Commons";
    import * as THREE from "three";
    
    // Importing postprocessing shaders
    import fragmentShader from "../../shaders/postprocessing/postprocessing.frag";
    import vertexShader from "../../shaders/postprocessing/postprocessing.vert";
    
    interface Props {
      scene: THREE.Scene;
    }
    
    export default class PostProcessing {
      // Scene and utility references
      private commons: Commons;
      private scene: THREE.Scene;
    
      private composer!: EffectComposer;
    
      private renderPass!: RenderPass;
      private shiftPass!: ShaderPass;
    
      constructor({ scene }: Props) {
        this.commons = Commons.getInstance();
    
        this.scene = scene;
    
        this.createComposer();
        this.createPasses();
      }
    
      private createComposer() {
        this.composer = new EffectComposer(this.commons.renderer);
        this.composer.setPixelRatio(this.commons.sizes.pixelRatio);
        this.composer.setSize(
          this.commons.sizes.screen.width,
          this.commons.sizes.screen.height
        );
      }
    
      private createPasses() {
        // Creating Render Pass (final output) first.
        this.renderPass = new RenderPass(this.scene, this.commons.camera);
        this.composer.addPass(this.renderPass);
    
        // Creating Post-processing shader for wave and RGB-shift effect.
        const shiftShader = {
          uniforms: {
            tDiffuse: { value: null },      // Default input from previous pass
            uVelocity: { value: 0 },        // Scroll velocity input
            uTime: { value: 0 },            // Elapsed time for animated distortion
          },
          vertexShader,
          fragmentShader,
        };
    
        this.shiftPass = new ShaderPass(shiftShader);
        this.composer.addPass(this.shiftPass);
      }
    
      /**
       * Resize handler for EffectComposer, called from entry-point.
       */
      onResize() {
        this.composer.setPixelRatio(this.commons.sizes.pixelRatio);
        this.composer.setSize(
          this.commons.sizes.screen.width,
          this.commons.sizes.screen.height
        );
      }
    
      update() {
        this.shiftPass.uniforms.uTime.value = this.commons.elapsedTime;
    
        this.composer.render();
      }
    }
    

    Since we don’t have our postprocessing shaders created yet, make sure you create placeholder postprocessing.frag and
    postprocessing.vert shaders so the imports don’t fail.

    Example placeholders below:

    //postprocessing.frag
    
    uniform sampler2D tDiffuse;
    uniform float uVelocity;
    uniform float uTime;
    
    varying vec2 vUv;
    
    void main() {
        gl_FragColor = texture2D(tDiffuse, vUv);
    }
    
    //postprocessing.vert
    varying vec2 vUv;
    
    void main() {
        vUv = uv;
            
        gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }

    Breakdown of the PostProcessing class

Constructor:
Initializes the class by storing the provided scene, grabbing the shared Commons instance, and then calling createComposer() and createPasses().

createComposer():
Sets up the EffectComposer with the correct pixel ratio and canvas size:

    • EffectComposer wraps the WebGL renderer and allows chaining of multiple render passes.
    • Sized according to current viewport dimensions and pixel ratio

    createPasses():
    This method sets up all rendering passes applied to the scene.

• RenderPass: The first pass, which simply renders the scene with the main camera as usual.
• ShaderPass (shiftPass): A custom full-screen shader pass that we’ll create, which produces the RGB shift and wavy distortion effects.

update():
Called on every frame. Updates the uTime uniform so we can animate effects over time, and renders the final post-processed image using composer.render().

    Initializing Post-processing

    To wire the post-processing system into our existing app, we update our main.ts:

      //main.ts
    private postProcessing!: PostProcessing;
    
    //....
    
    constructor() {
      document.addEventListener("DOMContentLoaded", async () => {
        await document.fonts.ready;
    
        this.commons = Commons.getInstance();
        this.commons.init();
    
        this.createScene();
        this.createWebGLTexts();
        this.createPostProcessing(); // Creating post-processing
        this.addEventListeners();
    
        this.update();
      });
    }
    
    // ...
    
    private createPostProcessing() {
      this.postProcessing = new PostProcessing({ scene: this.scene });
    }
    
    // ...
    
    private update() {
      this.commons.update();
      
      if (this.texts) {
        this.texts.forEach((el) => el.update());
      }
      
      // Don't need line below as we're rendering everything using EffectComposer.
      // this.commons.renderer.render(this.scene, this.commons.camera);
      
      this.postProcessing.update(); // Post-processing class handles rendering of output from now on
    
      
      window.requestAnimationFrame(this.update.bind(this));
    }
    
    
    private onResize() {
      this.commons.onResize();
    
      if (this.texts) {
        this.texts.forEach((el) => el.onResize());
      }
    
      this.postProcessing.onResize(); // Resize post-processing
    }

So in the new update() function, instead of rendering directly, we hand off rendering responsibility to the PostProcessing class.

    Creating Post-processing Shader and Wiring Scroll Velocity

    We want to modify the PostProcessing class further, so that we update the postprocessing fragment shader with the
    current scroll velocity from Lenis.

For this, I’m adding two new properties, lerpedVelocity and lerpFactor, which control the smoothed-out velocity. The raw velocity values from Lenis can be spiky and sudden, especially with fast scrolling or scroll jumps. If we pass that raw value directly into a shader, it can cause a really jittery output.

    private lerpedVelocity = 0; // Smoothed scroll velocity for post-processing.
    private lerpFactor = 0.05; // Controls how quickly lerpedVelocity follows the real velocity
    
    // ...
    
    update() {
        this.shiftPass.uniforms.uTime.value = this.commons.elapsedTime;
    
  // Reading current velocity from the lenis instance.
      const targetVelocity = this.commons.lenis.velocity;
    
      // We use the lerped velocity as the actual velocity for the shader, just for a smoother experience.
      this.lerpedVelocity +=
        (targetVelocity - this.lerpedVelocity) * this.lerpFactor;
    
      this.shiftPass.uniforms.uVelocity.value = this.lerpedVelocity;
    
      this.composer.render();
    }
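To see why the lerp matters, here's a small JavaScript sketch of how the smoothed value chases a sudden velocity spike with lerpFactor = 0.05. The numbers are hypothetical, purely for illustration:

```javascript
// Exponential smoothing: each frame moves 5% of the remaining
// distance toward the target, so spikes are absorbed gradually.
function smooth(lerped, target, factor = 0.05) {
  return lerped + (target - lerped) * factor;
}

let lerpedVelocity = 0;
const targetVelocity = 100; // a sudden scroll spike

// Simulate one second at 60fps.
for (let frame = 0; frame < 60; frame++) {
  lerpedVelocity = smooth(lerpedVelocity, targetVelocity);
}

// Even after 60 frames the smoothed value is still easing in,
// which is exactly the softness we want in the shader.
console.log(lerpedVelocity.toFixed(1)); // ~95.4
```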

    Post-processing Shaders

For the vertex shader, we keep everything default and simply pass the texture coordinates to the fragment shader.

    //postprocessing.vert
    
    varying vec2 vUv;
    
    void main() {
      vUv = uv;
            
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }

    And for the fragment shader:

    //postprocessing.frag
    
    uniform sampler2D tDiffuse;
    uniform float uVelocity;
    uniform float uTime;
    
    varying vec2 vUv;
    
    void main() {
      vec2 uv = vUv;
      
      // Calculating wave distortion based on velocity
      float waveAmplitude = uVelocity * 0.0009;
      float waveFrequency = 4.0 + uVelocity * 0.01;
      
      // Applying wave distortion to the UV coordinates
      vec2 waveUv = uv;
      waveUv.x += sin(uv.y * waveFrequency + uTime) * waveAmplitude;
      waveUv.y += sin(uv.x * waveFrequency * 5. + uTime * 0.8) * waveAmplitude;
      
      // Applying the RGB shift to the wave-distorted coordinates
      float r = texture2D(tDiffuse, vec2(waveUv.x, waveUv.y + uVelocity * 0.0005)).r;
      vec2 gb = texture2D(tDiffuse, waveUv).gb;
    
  gl_FragColor = vec4(r, gb, 1.0);
    }

    Breakdown

    // Calculating wave distortion based on velocity
    float waveAmplitude = uVelocity * 0.0009;
    float waveFrequency = 4.0 + uVelocity * 0.01;

    Wave amplitude controls how strongly the wave effect distorts the screen according to our scroll velocity.

    Wave frequency controls how frequently the waves occur.

    Next, we distort the UV-coordinates using sin functions and the uTime uniform:

    // Applying wave distortion to the UV coordinates
    vec2 waveUv = uv;
    waveUv.x += sin(uv.y * waveFrequency + uTime) * waveAmplitude;
    waveUv.y += sin(uv.x * waveFrequency * 5. + uTime * 0.8) * waveAmplitude;
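Ported to plain JavaScript, the same UV distortion looks like this. This is a sketch for intuition only; in the demo the work happens per-fragment on the GPU:

```javascript
// Mirrors the fragment shader's wave distortion for a single UV pair.
function waveDistort(uv, uVelocity, uTime) {
  const waveAmplitude = uVelocity * 0.0009;
  const waveFrequency = 4.0 + uVelocity * 0.01;
  return {
    x: uv.x + Math.sin(uv.y * waveFrequency + uTime) * waveAmplitude,
    y: uv.y + Math.sin(uv.x * waveFrequency * 5.0 + uTime * 0.8) * waveAmplitude,
  };
}

// With zero velocity the amplitude collapses to 0, so the UVs pass
// through untouched and the image is undistorted at rest.
console.log(waveDistort({ x: 0.5, y: 0.5 }, 0, 1.0)); // { x: 0.5, y: 0.5 }
```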

    The red channel is offset slightly based on the velocity, creating the RGB shift effect.

    // Applying the RGB shift to the wave-distorted coordinates
    float r = texture2D(tDiffuse, vec2(waveUv.x, waveUv.y + uVelocity * 0.0005)).r;
    vec2 gb = texture2D(tDiffuse, waveUv).gb;
    
gl_FragColor = vec4(r, gb, 1.0);

    This will create a subtle color separation in the final image that shifts according to our scroll velocity.

    Finally, we combine red, green, blue, and alpha into the output color.

    8. Final Result

And there you have it! We’ve created a responsive text scene with scroll-triggered mask reveal animations and wavy, RGB-shifted post-processing.

    This setup provides a solid, modular foundation for building more advanced WebGL text effects. If you’re curious to explore further, consider adding particles, fluid simulations, audio reactivity, or more complex materials and shaders. If you’re interested in breakdowns of any of these, feel free to reach out on X.

    Thanks so much for following along 🙌




3D Cards in Webflow Using Three.js and GLB Models


    I’ve always been interested in finding simple ways to bring more depth into web interfaces, not just through visuals, but through interaction and space.

    In this demo, I explored how flat UI cards can become interactive 3D scenes using GLB models, Three.js, and Webflow. Each card starts as a basic layout but reveals a small, self-contained environment built with real-time rendering and subtle motion.

    It’s a lightweight approach to adding spatial storytelling to familiar components, using tools many designers already work with.

    Welcome to My Creative World

    I’m always drawn to visuals that mix the futuristic with the familiar — space-inspired forms, minimal layouts, and everyday elements seen from a different angle.

    Most of my projects start this way: by reimagining ordinary ideas through a more immersive or atmospheric lens.

    It All Started with a Moodboard

    This one began with a simple inspiration board:

    From that board, I picked a few of my favorite visuals and ran them through an AI tool that converts images into GLB 3D models.

    The results were surprisingly good! Abstract, textured, and full of character.

    The Concept: Flat to Deep

    When I saw the output from the AI-generated GLB models, I started thinking about how we perceive depth in UI design, not just visually, but interactively.

    That led to a simple idea: what if flat cards could reveal a hidden spatial layer? Not through animation alone, but through actual 3D geometry, lighting, and camera movement.

    I designed three UI cards, each styled with minimal HTML and CSS in Webflow. On interaction, they load a unique GLB model into a Three.js scene directly within the card container. Each model is lit, framed, and animated to create the feeling of a self-contained 3D space.

    Building the Web Experience

    The layout was built in Webflow using a simple flexbox structure with three cards inside a wrapper. Each card contains a div that serves as the mounting point for a 3D object.

    The GLB models are rendered using Three.js, which is integrated into the project with custom JavaScript. Each scene is initialized and handled separately, giving each card its own interactive 3D space while keeping the layout lightweight and modular.

    Scene Design with Blender

    Each GLB model was prepared in Blender, where I added a surrounding sphere to create a sense of depth and atmosphere. This simple shape helps simulate background contrast and encloses the object in a self-contained space.

Lighting played an important role, especially with reflective materials like glass or metal. Highlights and soft shadows were used to create that subtle, futuristic glow.

    The result is that each 3D model feels like it lives inside its own ambient environment, even when rendered in a small card.

    Bringing It Together with Three.js

    Once the models were exported from Blender as .glb files, I used Three.js to render them inside each card. Each card container acts as its own 3D scene, initialized through a custom JavaScript function.

    The setup involves creating a basic scene with a perspective camera, ambient and directional lighting, and a WebGL renderer. I used GLTFLoader to load each .glb file and OrbitControls to enable subtle rotation. Zooming and panning are disabled to keep the interaction focused and controlled.

    Each model is loaded into a separate container, making it modular and easy to manage. The camera is offset slightly for a more dynamic starting view, and the background is kept dark to help the lighting pop.

    Here’s the full JavaScript used to load and render the models:

    // Import required libraries
    import * as THREE from 'three';
    import { OrbitControls } from 'three/addons/controls/OrbitControls.js';
    import { GLTFLoader } from 'three/addons/loaders/GLTFLoader.js';
    import gsap from 'gsap';
    
    /**
     * This function initializes a Three.js scene inside a given container
     * and loads a .glb model into it.
     */
    function createScene(containerSelector, glbPath) {
      const container = document.querySelector(containerSelector);
    
      // 1. Create a scene
      const scene = new THREE.Scene();
      scene.background = new THREE.Color(0x202020); // dark background
    
      // 2. Set up the camera with perspective
      const camera = new THREE.PerspectiveCamera(
        45, // Field of view
        container.clientWidth / container.clientHeight, // Aspect ratio
        0.1, // Near clipping plane
        100  // Far clipping plane
      );
      camera.position.set(2, 0, 0); // Offset to the side for better viewing
    
      // 3. Create a renderer and append it to the container
      const renderer = new THREE.WebGLRenderer({ antialias: true });
      renderer.setSize(container.clientWidth, container.clientHeight);
      container.appendChild(renderer.domElement);
    
      // 4. Add lighting
      const light = new THREE.DirectionalLight(0xffffff, 4);
      light.position.set(30, -10, 20);
      scene.add(light);
    
      const ambientLight = new THREE.AmbientLight(0x404040); // soft light
      scene.add(ambientLight);
    
      // 5. Set up OrbitControls to allow rotation
      const controls = new OrbitControls(camera, renderer.domElement);
      controls.enableZoom = false; // no zooming
      controls.enablePan = false;  // no dragging
      controls.minPolarAngle = Math.PI / 2; // lock vertical angle
      controls.maxPolarAngle = Math.PI / 2;
      controls.enableDamping = true; // smooth movement
    
      // 6. Load the GLB model
      const loader = new GLTFLoader();
      loader.load(
        glbPath,
        (gltf) => {
          scene.add(gltf.scene); // Add model to the scene
        },
        (xhr) => {
          console.log(`${containerSelector}: ${(xhr.loaded / xhr.total) * 100}% loaded`);
        },
        (error) => {
          console.error(`Error loading ${glbPath}`, error);
        }
      );
    
      // 7. Make it responsive
      window.addEventListener("resize", () => {
        camera.aspect = container.clientWidth / container.clientHeight;
        camera.updateProjectionMatrix();
        renderer.setSize(container.clientWidth, container.clientHeight);
      });
    
      // 8. Animate the scene
      function animate() {
        requestAnimationFrame(animate);
        controls.update(); // updates rotation smoothly
        renderer.render(scene, camera);
      }
    
      animate(); // start the animation loop
    }
    
    // 9. Initialize scenes for each card (replace with your URLs)
    createScene(".div",  "https://yourdomain.com/models/yourmodel.glb");
    createScene(".div2", "https://yourdomain.com/models/yourmodel2.glb");
    createScene(".div3", "https://yourdomain.com/models/yourmodel3.glb");

    This script is added via a <script type="module"> tag, either in the Webflow page settings or as an embedded code block. Each call to createScene() initializes a new card, linking it to its corresponding .glb file.

    How This Works in Practice

    In Webflow, create three containers with the classes .div, .div2, and .div3. Each one will act as a canvas for a different 3D scene.

    Embed the JavaScript module shown above by placing it just before the closing </body> tag in your Webflow project, or by using an Embed block with <script type="module">.

Once the page loads, each container initializes its own Three.js scene and loads the corresponding GLB model. The result: flat UI cards become interactive, rotatable 3D objects, all directly inside Webflow.

    This approach is lightweight, clean, and performance-conscious, while still giving you the flexibility to work with real 3D content.

    Important Note for Webflow Users

    This setup works in Webflow, but only if you structure it correctly.

    To make it work, you’ll need to:

    • Host your Three.js code externally using a bundler like Vite, Parcel, or Webpack
    • Or bundle the JavaScript manually and embed it as a <script type="module"> in your exported site

    Keep in mind: Webflow’s Designer does not support ES module imports (import) directly. Pasting the code into an Embed block won’t work unless it’s already built and hosted elsewhere.

    You’ll need to export your Webflow project or host the script externally, then link it via your project settings.

    Final Thoughts

    Thanks for following along with this project. What started as a simple moodboard turned into a small experiment in mixing UI design with real-time 3D.

    Taking flat cards and turning them into interactive scenes was a fun way to explore how much depth you can add with just a few tools: Webflow, Three.js, and GLB models.

    If this gave you an idea or made you want to try something similar, that’s what matters most.
    Keep experimenting, keep learning, and keep building.




Deconstructing the 35mm Website: A Look at the Process and Technical Details


    The Idea Behind the Project

    This project primarily serves as a technical demo and learning material. It began when I decided to start learning Blender. I followed a few tutorials, then decided to do a small project using it—so I chose to create the Canon F-1 camera!

    After that, I decided to export the project to Three.js to add some cool post-processing shader effects. I wanted to create a sketch effect similar to what I had seen in some repair guides.

    After spending a few hours experimenting with it, I decided to integrate it into a fully functional website featuring some cool shaders and 3D effects!

    In this article, I’m going to walk through some of the key features of the site and provide a technical breakdown, assuming you already have a basic or beginner-level understanding of Three.js and shaders.

    1. The Edge Detection Shader

    Three.js includes a built-in edge detection shader called SobelOperatorShader. Basically, it detects edges based on color contrast—it draws a line between two areas with a strong enough difference in color.

    To make my effect work the way I want, I need to assign a unique color to each area I want to highlight on my model. This way, Three.js will draw a line around those areas.

    Here’s my model with all the materials applied:

    This way, Three.js can accurately detect each area I want to highlight!

    As you can see, the lines are not all the same intensity—some are white, while others are light gray. This is because, by default, line intensity depends on contrast: edges with lower contrast appear with lighter lines. To fix this, I manually modified the post-processing shader to make all lines fully white, regardless of contrast.

    The shader can be found in:

    node_modules/three/examples/jsm/shaders/SobelOperatorShader.js

    I copied the contents of the fragment shader into a separate file so I could freely modify it.

uniform sampler2D tDiffuse;
uniform vec2 resolution;
    varying vec2 vUv;
    
    float sobel(sampler2D tDiffuse,vec2 texel)
    {
        // kernel definition (in glsl matrices are filled in column-major order)
    
        const mat3 Gx = mat3( -1, -2, -1, 0, 0, 0, 1, 2, 1 ); // x direction kernel
        const mat3 Gy = mat3( -1, 0, 1, -2, 0, 2, -1, 0, 1 ); // y direction kernel
    
        // fetch the 3x3 neighbourhood of a fragment
    
        // first column
    
        float tx0y0 = texture2D( tDiffuse, vUv + texel * vec2( -1, -1 ) ).r;
        float tx0y1 = texture2D( tDiffuse, vUv + texel * vec2( -1,  0 ) ).r;
        float tx0y2 = texture2D( tDiffuse, vUv + texel * vec2( -1,  1 ) ).r;
    
        // second column
    
        float tx1y0 = texture2D( tDiffuse, vUv + texel * vec2(  0, -1 ) ).r;
        float tx1y1 = texture2D( tDiffuse, vUv + texel * vec2(  0,  0 ) ).r;
        float tx1y2 = texture2D( tDiffuse, vUv + texel * vec2(  0,  1 ) ).r;
    
        // third column
    
        float tx2y0 = texture2D( tDiffuse, vUv + texel * vec2(  1, -1 ) ).r;
        float tx2y1 = texture2D( tDiffuse, vUv + texel * vec2(  1,  0 ) ).r;
        float tx2y2 = texture2D( tDiffuse, vUv + texel * vec2(  1,  1 ) ).r;
    
        // gradient value in x direction
    
        float valueGx = Gx[0][0] * tx0y0 + Gx[1][0] * tx1y0 + Gx[2][0] * tx2y0 +
            Gx[0][1] * tx0y1 + Gx[1][1] * tx1y1 + Gx[2][1] * tx2y1 +
            Gx[0][2] * tx0y2 + Gx[1][2] * tx1y2 + Gx[2][2] * tx2y2;
    
        // gradient value in y direction
    
        float valueGy = Gy[0][0] * tx0y0 + Gy[1][0] * tx1y0 + Gy[2][0] * tx2y0 +
            Gy[0][1] * tx0y1 + Gy[1][1] * tx1y1 + Gy[2][1] * tx2y1 +
            Gy[0][2] * tx0y2 + Gy[1][2] * tx1y2 + Gy[2][2] * tx2y2;
    
        // magnitude of the total gradient
    
        float G = sqrt( ( valueGx * valueGx ) + ( valueGy * valueGy ) );
    
        return G;
    }
    
    
    void main() {
    
        vec2 texel = vec2( 1.0 / resolution.x, 1.0 / resolution.y );
        
        float G = sobel( tDiffuse, texel );
        G = G > 0.001 ? 1. : 0.;
            
        gl_FragColor = vec4(vec3(G),1.0);
    
        #include <colorspace_fragment>
    }

    What I’m doing here is moving all the edge detection logic into the Sobel function. Then, I pass the tDiffuse texture—which is the composer’s render—to this function.

    This way, I can modify the output of the edge detection shader before passing it back to the composer:

    float G = sobel(tDiffuse, texel);
    G = G > 0.001 ? 1. : 0.;

    G represents the intensity of the edge detection. It’s a single value because the lines are monochrome. G ranges from 0 to 1, where 0 means full black (no edge detected) and 1 means full white (strong contrast detected).

    As mentioned earlier, this value depends on the contrast. What I’m doing in the second line is forcing G to be 1 if it’s above a certain threshold (I chose 0.001, but you could pick a smaller value if you want).

    This way I can get all the edges to have the same intensity.
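The thresholding step is easy to check outside the shader; here is the same logic as a tiny JavaScript sketch (plain JS, not part of the Three.js code):

```javascript
// Mirror of the GLSL line G = G > 0.001 ? 1. : 0.:
// any non-negligible Sobel gradient becomes a full-white edge,
// everything below the threshold stays black.
const THRESHOLD = 0.001;

function binarizeEdge(g) {
  return g > THRESHOLD ? 1 : 0;
}

// Raw gradient magnitudes of very different contrasts...
const gradients = [0, 0.0005, 0.02, 0.4, 1];

// ...all collapse to either 0 (no edge) or 1 (edge):
console.log(gradients.map(binarizeEdge)); // [ 0, 0, 1, 1, 1 ]
```

Lowering THRESHOLD keeps fainter edges; raising it drops low-contrast lines entirely.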

    Here’s how I’m applying the custom fragment shader to the Sobel Operator shader pass:

    import { SobelOperatorShader } from "three/addons/shaders/SobelOperatorShader.js"
    import { ShaderPass } from "three/addons/postprocessing/ShaderPass.js"
    
    
    export default class CannonF1 {
        constructor() {
            //....code
        }
    
        setupPostprocessing()
        {
    
            SobelOperatorShader.fragmentShader = sobelFragment
    
            this.effectSobel = new ShaderPass(SobelOperatorShader)
        this.effectSobel.uniforms["resolution"].value.x =
            window.innerWidth * Math.min(window.devicePixelRatio, 2)
        this.effectSobel.uniforms["resolution"].value.y =
            window.innerHeight * Math.min(window.devicePixelRatio, 2)
    
            this.composer.addPass(this.effectSobel)
        }
    }

    2. The Mesh Highlight on Hover Effect

    Next, let’s take a look at the lens parts section.

    This is mainly achieved using a Three.js utility called RenderTarget.

    A render target is a buffer where the GPU draws pixels for a scene being rendered off-screen. It’s commonly used in effects like post-processing, where the rendered image is processed before being displayed on the screen.

    Basically, this allows me to render my scene twice per frame: once with only the highlighted mesh, and once without it.

    First I setup the render targets:

    /* 
      ....Code 
    */
    
    createRenderTargets() {
        const sizes = {
          width:
            window.innerWidth * Math.ceil(Math.min(2, window.devicePixelRatio)),
          height:
            window.innerHeight * Math.ceil(Math.min(2, window.devicePixelRatio)),
        }
    
        this.renderTargetA = new THREE.WebGLRenderTarget(
          sizes.width,
          sizes.height,
          rtParams
        )
    
        this.renderTargetB = new THREE.WebGLRenderTarget(
          sizes.width,
          sizes.height,
          rtParams
        )
      }
    
    /* 
      ...Code 
    */

    Then, using the Three.js Raycaster, I can retrieve the uuid of the mesh that is being hovered over:

    onMouseMove(event: MouseEvent) {
        this.mouse.x = (event.clientX / window.innerWidth) * 2 - 1
        this.mouse.y = -(event.clientY / window.innerHeight) * 2 + 1
    
        this.raycaster.setFromCamera(this.mouse, this.camera)
        const intersects = this.raycaster.intersectObjects(this.scene.children)
        const target = intersects[0]
    
        if (target && "material" in target.object) {
          const targetMesh = intersects[0].object as THREE.Mesh
          this.cannonF1?.onSelectMesh(targetMesh.uuid)
        } else {
          this.cannonF1?.onSelectMesh()
        }
      }

    In the onSelectMesh method, I set the value of this.selectedMeshName to the name of the mesh group that contains the target mesh from the Raycaster (I’m using names to refer to groups of meshes).
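The onSelectMesh method itself isn't shown in the article; based on the description, it can be sketched roughly like this (the mesheUuidToName lookup table and the constructor shape are assumptions inferred from the render loop, not the author's exact code):

```javascript
// Hypothetical sketch of onSelectMesh: it resolves the hovered mesh's
// uuid to the name of its mesh group, or clears the selection when
// called with no argument (nothing hovered).
class CannonF1 {
  constructor(mesheUuidToName) {
    // uuid -> group-name map, assumed to be built when the model loads.
    this.mesheUuidToName = mesheUuidToName;
    this.selectedMeshName = null;
  }

  onSelectMesh(uuid) {
    this.selectedMeshName = (uuid && this.mesheUuidToName[uuid]) || null;
  }
}

const cannon = new CannonF1({ "uuid-lens": "lens" });
cannon.onSelectMesh("uuid-lens");
console.log(cannon.selectedMeshName); // lens
cannon.onSelectMesh();
console.log(cannon.selectedMeshName); // null
```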

    This way, in my render loop, I can create two distinct renders:

    • One render (renderTargetA) with all the meshes except the hovered mesh
    • Another render (renderTargetB) with only the hovered mesh
    render() {
        // Render renderTargetA
        this.modelChildren.forEach((mesh) => {
          if (this.mesheUuidToName[mesh.uuid] === this.selectedMeshName) {
            mesh.visible = false
          } else {
            mesh.visible = true
          }
        })
    
        this.renderer.setRenderTarget(this.renderTargetA)
        this.renderer.render(this.scene, this.camera)
    
        // Render renderTargetB
        this.modelChildren.forEach((mesh) => {
          if (this.mesheUuidToName[mesh.uuid] === this.selectedMeshName) {
            mesh.visible = true
          } else {
            mesh.visible = false
          }
        })
        if (this.targetedMesh) {
          this.targetedMesh.children.forEach((child) => {
            child.visible = true
          })
        }
    
        this.renderer.setRenderTarget(this.renderTargetB)
        this.renderer.render(this.scene, this.camera)
    
        this.modelChildren.forEach((mesh) => {
          mesh.visible = false
        })    
    
        this.effectSobel.uniforms.tDiffuse1.value = this.renderTargetA.texture
        this.effectSobel.uniforms.tDiffuse2.value = this.renderTargetB.texture
    
        this.renderer.setRenderTarget(null)
      }

    This is what the renderTargetA render looks like:

    …and renderTargetB:

    As you can see, I’m sending both renders as texture uniforms to the effectSobel shader. The post-processing shader then “merges” these two renders into a single output.

    At this point, we have two renders of the scene, and the post-processing shader needs to decide which one to display. Initially, I thought of simply combining them by adding the two textures together, but that didn’t produce the correct result:

    What I needed was a way to hide the pixels of one render when they are “covered” by pixels from another render.

    To achieve this, I used the distance of each vertex from the camera. This meant I had to go through all the meshes in the model and modify their materials. However, since the mesh colors are important for the edge detection effect, I couldn’t change their colors.

    Instead, I used the alpha channel of each individual vertex to set the distance from the camera.

    #include <common>
    
    varying vec3 vPosition;
    uniform vec3 uColor;
    
    float normalizeRange(float value, float oldMin, float oldMax, float newMin, float newMax) {
        float normalized = (value - oldMin) / (oldMax - oldMin);
        
        return newMin + (newMax - newMin) * normalized;
    }
    
    void main()
    {
        float dist = distance(vPosition,cameraPosition);
    
        float l = luminance( uColor );
    
        gl_FragColor=vec4(vec3(l),normalizeRange(dist,0.,20.,0.,1.));
    
        #include <colorspace_fragment>
    }

    Here’s an explanation of this shader:

    • First, the luminance function is a built-in Three.js shader utility imported from the <common> module. It’s recommended to use this function with the Sobel effect to improve edge detection results.
    • The uColor value represents the initial color of the mesh.
    • The dist value calculates the distance between the vertex position (passed from the vertex shader via a varying) and the camera, using the built-in cameraPosition variable in Three.js shaders.
    • Finally, I pass this distance through the alpha channel. Since the alpha value can’t exceed 1, I use a normalized version of the distance.
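The normalizeRange helper is a plain linear remap, so it is easy to sanity-check with a JavaScript port of the GLSL function:

```javascript
// JavaScript port of the shader's normalizeRange: linearly remaps
// `value` from [oldMin, oldMax] to [newMin, newMax].
function normalizeRange(value, oldMin, oldMax, newMin, newMax) {
  const normalized = (value - oldMin) / (oldMax - oldMin);
  return newMin + (newMax - newMin) * normalized;
}

// With the shader's 0..20 distance range mapped to a 0..1 alpha:
console.log(normalizeRange(0, 0, 20, 0, 1));  // 0   (at the camera)
console.log(normalizeRange(10, 0, 20, 0, 1)); // 0.5 (halfway)
console.log(normalizeRange(20, 0, 20, 0, 1)); // 1   (at the far end)
```

Distances beyond 20 would map above 1, but on a standard 8-bit render target the written alpha is clamped, so very distant vertices all end up with alpha 1.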

    And here is the updated logic for the postprocessing shader:

    uniform sampler2D tDiffuse;
    uniform sampler2D tDiffuse1;
    uniform sampler2D tDiffuse2;
    uniform vec2 resolution;
    varying vec2 vUv;
    
    float sobel(sampler2D tDiffuse,vec2 texel)
    {
        //sobel operator
    }
    
    
    void main() {
    
        vec2 texel = vec2( 1.0 / resolution.x, 1.0 / resolution.y );
        
        vec4 t = texture2D(tDiffuse,vUv);
    
        vec4 t1 = texture2D(tDiffuse1,vUv);
        vec4 t2 = texture2D(tDiffuse2,vUv);     
    
        if(t1.a==0.)
        {
            t1.a = 1.;
        }
        if(t2.a==0.)
        {
            t2.a = 1.;
        }
    
    
        float G = sobel(tDiffuse1,texel);
        G= G > 0.001 ? 1. : 0.;
        float Gs = sobel(tDiffuse2,texel);
        Gs = Gs > 0.001 ? 1. : 0.;
        
        vec4 s1 = vec4(vec3(G),1.);
        
        vec4 s2 = vec4(vec3(Gs),1.);    
        
        vec4 sobelTexture = vec4(vec3(0.),1.);
    
    
        if(t1.a>t2.a)
        {
            sobelTexture = s2;       
        }    
        else{
            sobelTexture = s1;
        }    
    
            
        gl_FragColor = sobelTexture;
    
        #include <colorspace_fragment>
    }

    Now that the alpha channel of the textures contains the distance to the camera, I can simply compare them and display the render whose vertices are closer to the camera.

    3. The Film Roll Effect

    Next is this film roll component that moves and twists on scroll.

    This effect is achieved using only shaders; the component is a single plane with a shader material.

    All the data is sent to the shader through uniforms:

    export default class Film {  
      constructor() {
        //...code
      }
    
      createGeometry() {
        this.geometry = new THREE.PlaneGeometry(
          60,
          2,
          100,
          10
        )
      }
    
      createMaterial() {
        this.material = new THREE.ShaderMaterial({
          vertexShader,
          fragmentShader,
          side: THREE.DoubleSide,
          transparent: true,
          depthWrite: false,
          blending: THREE.CustomBlending,
          blendEquation: THREE.MaxEquation,
          blendSrc: THREE.SrcAlphaFactor,
          blendDst: THREE.OneMinusSrcAlphaFactor,
          uniforms: {
            uPlaneWidth: new THREE.Uniform(this.geometry.parameters.width),
            uRadius: new THREE.Uniform(2),
            uXZfreq: new THREE.Uniform(3.525),
            uYfreq: new THREE.Uniform(2.155),
            uOffset: new THREE.Uniform(0),
            uAlphaMap: new THREE.Uniform(
              window.preloader.loadTexture(
                "./alpha-map.jpg",
                "film-alpha-map",
                (texture) => {
                  texture.wrapS = THREE.RepeatWrapping
                  const { width, height } = texture.image
                  this.material.uniforms.uAlphaMapResolution.value =
                    new THREE.Vector2(width, height)
                }
              )
            ),
            //uImages: new THREE.Uniform(new THREE.Vector4()),
            uImages: new THREE.Uniform(
              window.preloader.loadTexture(
                "/film-texture.png",
                "film-image-texture",
                (tex) => {
                  tex.wrapS = THREE.RepeatWrapping
                }
              )
            ),
            uRepeatFactor: new THREE.Uniform(this.repeatFactor),
            uImagesCount: new THREE.Uniform(this.images.length * this.repeatFactor),
            uAlphaMapResolution: new THREE.Uniform(new THREE.Vector2()),
            uFilmColor: new THREE.Uniform(window.colors.orange1),
          },
        })
      }  
    
      createMesh() {
        this.mesh = new THREE.Mesh(this.geometry, this.material)
        this.scene.add(this.mesh)
      }
    }

    The main vertex shader uniforms are:

    • uRadius is the radius of the cylinder shape
    • uXZfreq is the frequency of the twists on the (X,Z) plane
    • uYfreq is a cylinder height factor
    • uOffset is the vertical offset of the roll when you scroll up and down

    Here is how they are used in the vertex shader:

    #define PI 3.14159265359
    
    uniform float uPlaneWidth;
    uniform float uXZfreq;
    uniform float uYfreq;
    varying vec2 vUv;
    uniform float uOffset;
    varying vec3 vPosition;
    uniform float uRadius;
    
    void main()
    {
        vec3 np = position;
        float theta = -(PI*np.x)/(uPlaneWidth*0.5);
    
    
        np.x=cos(uXZfreq*theta+uOffset)*uRadius;
        np.y+=theta*uYfreq;
        np.z=sin(uXZfreq*theta+uOffset)*uRadius;
        
        vec4 modelPosition = modelMatrix * vec4(np, 1.0);
    
        
        vec4 viewPosition = viewMatrix * modelPosition;
        vec4 projectedPosition = projectionMatrix * viewPosition;
        gl_Position = projectedPosition;    
    
    
        vUv=uv;
        vPosition=np;
    }

    As you can see, they are used to modify the initial position attribute to give the plane the shape of a cylinder. The modified X and Z components use uOffset as a phase offset, and this uniform is linked to a ScrollTrigger timeline that produces the twist-on-scroll effect.
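To make the mapping concrete, here is the same math evaluated in plain JavaScript (the default values mirror the uniforms above; this is only a sanity check, not part of the demo code):

```javascript
// JavaScript version of the vertex shader's plane -> cylinder mapping.
// For a plane of width planeWidth, each x maps to an angle theta,
// which drives the X/Z circle position and the Y height of the roll.
function rollVertex(x, y, { planeWidth = 60, radius = 2, xzFreq = 3.525, yFreq = 2.155, offset = 0 } = {}) {
  const theta = -(Math.PI * x) / (planeWidth * 0.5);
  return {
    x: Math.cos(xzFreq * theta + offset) * radius,
    y: y + theta * yFreq,
    z: Math.sin(xzFreq * theta + offset) * radius,
  };
}

// The plane's center (x = 0) sits on the cylinder at angle 0:
console.log(rollVertex(0, 0)); // { x: 2, y: 0, z: 0 }
```

At x = 0 the vertex sits at angle 0 on the circle of radius uRadius; increasing uOffset rotates every vertex around the cylinder, which is exactly the twist driven by the scroll.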

    const tl = gsap.timeline({
      scrollTrigger: {
        trigger: this.section,
        start: "top bottom",
        end: "bottom top",
        scrub: true,
        invalidateOnRefresh: true,        
      },
    })    
    
    tl.to(
      this.material.uniforms.uOffset,
      {
        value: 10,
        duration: 1,
      },
      0
    )

    Conclusion

    That’s it for the most part! Don’t feel frustrated if you don’t understand everything right away—I often got stuck for days on certain parts and didn’t know every technical detail before I started building.

    I learned so much from this project, and I hope you’ll find it just as useful!

    Thank you for reading, and thanks to Codrops for featuring me again!




  • Path.Combine and Path.Join are similar but way different. | Code4IT




    When you need to compose the path to a folder or file location, you can rely on the Path class. It provides several static methods to create, analyze, and modify strings that represent file system paths.

    Path.Join and Path.Combine look similar, yet they have some important differences that you should know to get the result you are expecting.

    Path.Combine: take from the last absolute path

    Path.Combine concatenates several strings into a single string that represents a file path.

    Path.Combine("C:", "users", "davide");
    // C:\users\davide
    

    However, there’s a tricky behaviour: if any argument other than the first contains an absolute path, all the previous parts are discarded, and the returned string starts with the last absolute path:

    Path.Combine("foo", "C:bar", "baz");
    // C:bar\baz
    
    Path.Combine("foo", "C:bar", "baz", "D:we", "ranl");
    // D:we\ranl
    

    Path.Join: take everything

    Path.Join does not try to return an absolute path; it simply joins the strings using the OS path separator:

    Path.Join("C:", "users", "davide");
    // C:\users\davide
    

    This means that if there is an absolute path in any argument position, none of the previous parts are discarded:

    Path.Join("foo", "C:bar", "baz");
    // foo\C:bar\baz
    
    Path.Join("foo", "C:bar", "baz", "D:we", "ranl");
    // foo\C:bar\baz\D:we\ranl
    

    Final comparison

    As you can see, the behaviour is slightly different.

    Let’s see a table where we call the two methods using the same input strings:

    Input                                      Path.Combine      Path.Join
    ["singlestring"]                           singlestring      singlestring
    ["foo", "bar", "baz"]                      foo\bar\baz       foo\bar\baz
    ["foo", " bar ", "baz"]                    foo\ bar \baz     foo\ bar \baz
    ["C:", "users", "davide"]                  C:\users\davide   C:\users\davide
    ["foo", " ", "baz"]                        foo\ \baz         foo\ \baz
    ["foo", "C:bar", "baz"]                    C:bar\baz         foo\C:bar\baz
    ["foo", "C:bar", "baz", "D:we", "ranl"]    D:we\ranl         foo\C:bar\baz\D:we\ranl
    ["C:", "/users", "/davide"]                /davide           C:/users/davide
    ["C:", "users/", "/davide"]                /davide           C:\users//davide
    ["C:", "\users", "\davide"]                \davide           C:\users\davide

    Have a look at some specific cases:

    • neither method trims white or empty strings: ["foo", " ", "baz"] is transformed into foo\ \baz. Similarly, ["foo", " bar ", "baz"] is combined into foo\ bar \baz, without removing the leading and trailing whitespace. So, always remove white spaces and empty values!
    • Path.Join handles paths starting with / or \ in a not-so-obvious way: if a part starts with \, it is included in the final path; if it starts with /, it is escaped as //. This behaviour depends on the path separator used by the OS: in my case, I’m running these methods on Windows 11.

    Finally, always remember that the path separator depends on the Operating System that is running the code. Don’t assume that it will always be /: this assumption may be correct for one OS but wrong for another one.

    This article first appeared on Code4IT 🐧

    Wrapping up

    As we have learned, Path.Combine and Path.Join look similar but have profound differences.

    Dealing with path building may look easy, but it hides some complexity. Always remember to:

    • validate and clean your input before using either of these methods (remove empty values, white spaces, and head or trailing path separators);
    • always write some Unit Tests to cover all the necessary cases;

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧






  • How to Style Even and Odd Div.



    We can easily change the background color of a div’s even and odd children using the :nth-child pseudo-class with the even and odd keywords, respectively. odd and even are keywords that match child elements whose index is odd or even (the index of the first child is 1). Here, we specify two different background colors for the odd and even div elements.

    Example:

    <!DOCTYPE html>
    <html>
    <head>
        <title>Even Odd Example</title>
        <style type="text/css">
        div :nth-child(even){
            background-color: yellow;
        }
        div :nth-child(odd){
            background-color: blue;
        }
        </style>
    </head>
    <body>
    <div id="main">
        <div>1</div>
        <div>2</div>
        <div>3</div>
        <div>4</div>
        <div>5</div>
        <div>6</div>
        <div>7</div>
        <div>8</div>
        <div>9</div>
        <div>10</div>
    </div>	
    
    </body>
    </html>
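One caveat: div :nth-child(even) (note the space) matches any even child anywhere inside a div. That works here because the numbered divs are the only children of #main, but on a larger page it is safer to scope the selector explicitly, for example:

```css
/* Stripe only the direct div children of #main, leaving
   unrelated descendants elsewhere in the page untouched. */
#main > div:nth-child(even) {
    background-color: yellow;
}
#main > div:nth-child(odd) {
    background-color: blue;
}
```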





  • Postman's pre-request scripts: how to perform HTTP POST requests (with JSON body) and how to set Cookie authentication.



    In Postman, you can define scripts to be executed before the beginning of a request. Can we use them to work with endpoints using Cookie Authentication?


    Nowadays, it’s rare to find services that use Cookie Authentication, yet they still exist. How can we configure Cookie Authentication with Postman? How can we centralize the definition using pre-request scripts?

    I had to answer these questions when I had to integrate a third-party system that was using Cookie Authentication. Instead of generating a new token manually, I decided to centralize the Cookie creation in a single place, making it automatically available to every subsequent request.

    In order to generate the token, I had to send a request to the Authentication endpoint, sending a JSON payload with data coming from Postman’s variables.

    In this article, I’ll recap what I learned, teach you some basics of creating pre-request scripts with Postman, and provide a full example of how I used it to centralize the generation and usage of a cookie for a whole Postman collection.

    Introducing Postman’s pre-request scripts

    As you probably know, Postman allows you to create scripts that are executed before and after an HTTP call.

    These scripts are written in JavaScript and can use some objects and methods that come out of the box with Postman.

    You can create such scripts for a single request or the whole collection. In the second case, you write the script once so that it becomes available for all the requests stored within that collection.

    Postman’s pre-request section on a Collection

    The operations defined in the Scripts section of the collection are then executed before (or after) every request in the collection.

    Here, you can either use standard JavaScript code (like the dear old console.log) or the pm object to reference the context in which the script will be executed.

    For example, you can print the value of a Postman variable by using:

    const tokenUrl = pm.variables.get("TokenUrl")
    console.log(tokenUrl)
    

    How to send a POST request with JSON body in Postman pre-request scripts

    How can we issue a POST request in the pre-request script, specifying a JSON body?

    Postman’s pm object, along with some other methods, exposes the sendRequest function. Its first parameter is the “description” of the request; its second parameter is the callback to execute after the request is completed.

    pm.sendRequest(request, (errorResponse, successfulResponse) => {
      // do something here
    })
    

    You have to carefully craft the request, by specifying the HTTP method, the body, and the content type:

    var authenticationBody = {
      UserName: username,
      Password: password,
    }
    
    const request = {
      method: "POST",
      url: tokenUrl,
      body: {
        mode: "raw",
        raw: JSON.stringify(authenticationBody),
        options: {
          raw: {
            language: "json",
          },
        },
      },
    }
    

    Pay particular attention to the options node: it tells Postman how to treat the body content and what the content type is. Because I was missing this node, I spent too many minutes trying to figure out why this call was badly formed.

    options: {
      raw: {
        language: "json"
      }
    }
    

    Now, the result of the operation is used to execute the callback function. Generally, you want it to be structured like this:

    pm.sendRequest(request, (err, response) => {
      if (err) {
        // handle error
      }
      if (response) {
        // handle success
      }
    })
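pm.sendRequest only exists inside Postman, but the error-first callback shape it uses is plain JavaScript. Here is a small stand-in to illustrate the control flow (fakeSendRequest is a hypothetical mock, not a Postman API):

```javascript
// Hypothetical stand-in for pm.sendRequest: exactly one of
// (err, response) is meaningful in the callback, as in Node-style
// error-first callbacks.
function fakeSendRequest(request, callback) {
  if (!request.url) {
    callback(new Error("missing url"), undefined);
    return;
  }
  // Simulate a successful response with a Postman-like json() helper.
  callback(undefined, { code: 200, json: () => ({ Token: "abc" }) });
}

fakeSendRequest({ method: "POST", url: "https://example.com/token" }, (err, response) => {
  if (err) {
    console.error(err.message);
  }
  if (response) {
    console.log(response.json().Token); // abc
  }
});
```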
    

    Storing Cookies in Postman (using a Jar)

    You have received the response with the token, and you have parsed the response to retrieve the value. Now what?

    You cannot store Cookies directly as if they were simple variables. Instead, you must store Cookies in a Jar.

    Postman allows you to programmatically operate with cookies only by accessing them via a Jar (yup, pun intended!), which can be initialized like this:

    const jar = pm.cookies.jar()
    

    From here, you can add, remove or retrieve cookies by working with the jar object.

    To add a new cookie, you must use the set() method of the jar object, specifying the domain the cookie belongs to, its name, its value, and the callback to execute when the operation completes.

    const jar = pm.cookies.jar()
    
    jar.set(
      "add-your-domain-here.com",
      "MyCustomCookieName",
      newToken,
      (error, cookie) => {
        if (error) {
          console.error(`An error occurred: ${error}`)
        } else {
          console.log(`Cookie saved: ${cookie}`)
        }
      }
    )
    

    You can try it now: execute a request, have a look at the console logs, and…

    CookieStore: programmatic access is denied

    We’ve received a strange error:

    An error occurred: Error: CookieStore: programmatic access to “add-your-domain-here.com” is denied

    Wait, what? What does “programmatic access to X is denied” mean, and how can we solve this error?

    For security reasons, you cannot handle cookies via code without letting Postman know that you explicitly want to operate on the specified domain. To overcome this limitation, you need to whitelist the domain associated with the cookie so that Postman will accept that the operation you’re trying to achieve via code is legit.

    To enable a domain for cookies operations, you first have to navigate to the headers section of any request under the collection and click the Cookies button.

    Headers section in a Postman request

    From here, select Domains Allowlist:

    Cookies list page

    Finally, add your domain to the list of the allowed ones.

    Allowed domains list

    Now Postman knows that if you try to set a cookie via code, it’s because you actively want it, allowing you to add your cookies to the jar.

    If you open again the Cookie section (see above), you will be able to see the current values for the cookies associated with the domain:

    Domain-related cookies in Postman

    Further readings

    Clearly, we’ve just scratched the surface of what you can do with pre-request scripts in Postman. To learn more, have a look at the official documentation:

    🔗 Write pre-request scripts to add dynamic behavior in Postman | Postman docs

    This article first appeared on Code4IT 🐧

    If you want to learn more about how to use the Jar object and what operations are available, you can have a look at the following link:

    🔗 Scripting with request cookie | Postman docs

    Wrapping up (with complete example)

    In this article, we learned what pre-request scripts are, how to execute a POST request passing a JSON object as a body, and how to programmatically add a Cookie in Postman by operating on the Jar object.

    For clarity, here’s the complete code I used in my pre-request script.

    const tokenUrl = pm.variables.get("TokenUrl")
    const username = pm.variables.get("ClientID")
    const password = pm.variables.get("ClientSecret")
    
    var authBody = {
      UserName: username,
      Password: password,
    }
    
    const getTokenRequest = {
      method: "POST",
      url: tokenUrl,
      body: {
        mode: "raw",
        raw: JSON.stringify(authBody),
        options: {
          raw: {
            language: "json",
          },
        },
      },
    }
    
    pm.sendRequest(getTokenRequest, (err, response) => {
      if (err) {
        throw new Error(err)
      }
      if (response) {
        var jresponse = response.json()
    
        var newToken = jresponse["Token"]
    
        console.log("token: ", newToken)
    
        if (newToken) {
          const jar = pm.cookies.jar()
    
          jar.set(
            "add-your-domain-here.com",
            "MyCustomCookieName",
            newToken,
            (error, cookie) => {
              if (error) {
                console.error(`An error occurred: ${error}`)
              } else {
                console.log(`Cookie saved: ${cookie}`)
              }
            }
          )
        } else {
          throw new Error("Token not available")
        }
      }
    })
    

    Notice that to parse the response from the authentication endpoint I used the .json() method, which allows me to access the internal values by property name, as in jresponse["Token"].

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧






  • ZTNA Use Cases and Benefits for BFSI Companies



    In an era of digital banking, cloud migration, and a growing cyber threat landscape, traditional perimeter-based security models are no longer sufficient for the Banking, Financial Services, and Insurance (BFSI) sector. Enter Zero Trust Network Access (ZTNA) — a modern security framework that aligns perfectly with the BFSI industry’s need for robust, scalable, and compliant cybersecurity practices.

    This blog explores the key use cases and benefits of ZTNA for BFSI organizations.

    ZTNA Use Cases for BFSI

    1. Secure Remote Access for Employees

    With hybrid and remote work becoming the norm, financial institutions must ensure secure access to critical applications and data outside corporate networks. ZTNA allows secure, identity-based access without exposing internal resources to the public internet. This ensures that only authenticated and authorized users can access specific resources, reducing attack surfaces and preventing lateral movement by malicious actors.

    2. Protect Customer Data Using Least Privileged Access

    ZTNA enforces the principle of least privilege, granting users access only to the resources necessary for their roles. This granular control is vital in BFSI, where customer financial data is highly sensitive. By limiting access based on contextual parameters such as user identity, device health, and location, ZTNA drastically reduces the chances of data leakage or internal misuse.

    3. Compliance with Regulatory Requirements

    The BFSI sector is governed by stringent regulations such as RBI guidelines, PCI DSS, GDPR, and more. ZTNA provides centralized visibility, detailed audit logs, and fine-grained access control—all critical for meeting regulatory requirements. It also helps institutions demonstrate proactive data protection measures during audits and assessments.

    4. Vendor and Third-Party Access Management

    Banks and insurers frequently engage with external vendors, consultants, and partners. Traditional VPNs provide broad access once a connection is established, posing a significant security risk. ZTNA addresses this by granting secure, time-bound, and purpose-specific access to third parties—without ever bringing them inside the trusted network perimeter.

    Key Benefits of ZTNA for BFSI

    1. Reduced Risk of Data Breaches

    By minimizing the attack surface and verifying every user and device before granting access, ZTNA significantly lowers the risk of unauthorized access and data breaches. Since applications are never directly exposed to the internet, ZTNA also protects against exploitation of vulnerabilities in public-facing assets.

    2. Improved Compliance Posture

    ZTNA simplifies compliance by offering audit-ready logs, consistent policy enforcement, and better visibility into user activity. BFSI firms can use these capabilities to ensure adherence to local and global regulations and quickly respond to compliance audits with accurate data.

    3. Enhanced Customer Trust and Loyalty

    Security breaches in financial institutions can erode customer trust instantly. By adopting a Zero Trust approach, organizations can demonstrate their commitment to customer data protection, thereby enhancing credibility, loyalty, and long-term customer relationships.

    4. Cost Savings on Legacy VPNs

    Legacy VPN solutions are often complex, expensive, and challenging to scale. ZTNA offers a modern alternative that is more efficient and cost-effective. It eliminates the need for dedicated hardware and reduces operational overhead by centralizing policy management in the cloud.

    5. Scalability for Digital Transformation

    As BFSI institutions embrace digital transformation—be it cloud adoption, mobile banking, or FinTech partnerships—ZTNA provides a scalable, cloud-native security model that grows with the business. It supports rapid onboarding of new users, apps, and services without compromising on security.

    Final Thoughts

    ZTNA is more than just a security upgrade—it’s a strategic enabler for BFSI organizations looking to build resilient, compliant, and customer-centric digital ecosystems. With its ability to secure access for employees, vendors, and partners while ensuring regulatory compliance and data privacy, ZTNA is fast becoming the cornerstone of modern cybersecurity strategies in the financial sector.

    Ready to embrace Zero Trust? Identify high-risk access points and gradually implement ZTNA for your most critical systems. The transformation may be phased, but the security gains are immediate and long-lasting.

    Seqrite’s Zero Trust Network Access (ZTNA) solution empowers BFSI organizations with secure, seamless, and policy-driven access control tailored for today’s hybrid and regulated environments. Partner with Seqrite to strengthen data protection, streamline compliance, and accelerate your digital transformation journey.




  • How to use IHttpClientFactory and WireMock.NET together using Moq

    How to use IHttpClientFactory and WireMock.NET together using Moq


    WireMock.NET is a popular library used to simulate network communication through HTTP. But there is no simple way to integrate the generated in-memory server with an instance of IHttpClientFactory injected via constructor. Right? Wrong!


    Testing the integration with external HTTP clients can be cumbersome, but most of the time it is necessary: we must ensure that a method not only sends the right information but also correctly reads the content returned by the called API.

    Instead of spinning up a real server (even if in the local environment), we can simulate a connection to a mock server. A good library for creating temporary in-memory servers is WireMock.NET.

    Many articles I read online focus on creating a simple HttpClient, using WireMock.NET to drive its behaviour. In this article, we are going to go a step further: we are going to use WireMock.NET to handle HttpClient instances generated, via Moq, by IHttpClientFactory.

    Explaining the dummy class used for the examples

    As per every practical article, we must start with a dummy example.

    For the sake of this article, I’ve created a dummy class with a single method that calls an external API to retrieve details of a book and then reads the returned content. If the call is successful, the method returns an instance of Book; otherwise, it throws a BookServiceException exception.

    Just for completeness, here’s the Book class:

    public class Book
    {
        public int Id { get; set; }
        public string Title { get; set; }
    }
    

    And here’s the BookServiceException definition:

    [Serializable]
    public class BookServiceException: Exception
    {
        public BookServiceException(string message, Exception inner) : base(message, inner) { }
        protected BookServiceException(
          System.Runtime.Serialization.SerializationInfo info,
          System.Runtime.Serialization.StreamingContext context) : base(info, context) { }
    }
    

    Finally, we have our main class:

    public class BookService
    {
        private readonly IHttpClientFactory _httpClientFactory;

        public BookService(IHttpClientFactory httpClientFactory)
        {
            _httpClientFactory = httpClientFactory;
        }

        public async Task<Book> GetBookById(int id)
        {
            string url = $"/api/books/{id}";
            HttpClient httpClient = _httpClientFactory.CreateClient("books_client");

            try
            {
                Book? book = await httpClient.GetFromJsonAsync<Book>(url);
                return book;
            }
            catch (Exception ex)
            {
                throw new BookServiceException($"There was an error while getting info about the book {id}", ex);
            }
        }
    }
    

    There are just two things to notice:

    • We are injecting an instance of IHttpClientFactory into the constructor.
    • We are generating an instance of HttpClient by passing a name to the CreateClient method of IHttpClientFactory.
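    For context, a named client like books_client is typically registered in the application's DI container. The registration below is a minimal sketch, not part of the original example; the base address is a placeholder:

    ```csharp
    // Program.cs – registering the named client consumed by BookService.
    // "https://books.example.com" is a placeholder, not a real endpoint.
    builder.Services.AddHttpClient("books_client", client =>
    {
        client.BaseAddress = new Uri("https://books.example.com");
    });

    builder.Services.AddTransient<BookService>();
    ```

    In the tests that follow, this registration is bypassed entirely: a mocked IHttpClientFactory hands out a client pointed at the in-memory server instead.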

    Now that we have our cards on the table, we can start!

    WireMock.NET, a library to simulate HTTP calls

    WireMock is an open-source platform you can install locally to create a real mock server. You can even create a cloud environment to generate and test HTTP endpoints.

    However, for this article we are interested in the NuGet package that takes inspiration from the WireMock project, allowing .NET developers to generate disposable in-memory servers: WireMock.NET.

    To use the library, add the WireMock.Net NuGet package to your project, for example with dotnet add package WireMock.Net.

    Once the package is ready, you can generate a test server in your Unit Tests class:

    public class WireMockTests
    {
        private WireMockServer _server;
    
        [OneTimeSetUp]
        public void OneTimeSetUp()
        {
            _server = WireMockServer.Start();
        }
    
        [SetUp]
        public void Setup()
        {
            _server.Reset();
        }
    
        [OneTimeTearDown]
        public void OneTimeTearDown()
        {
            _server.Stop();
        }
    }
    

    You can instantiate a new instance of WireMockServer in the OneTimeSetUp step, store it in a private field, and make it accessible to every test in the test class.

    Before each test run, you can reset the internal status of the mock server by calling the Reset() method. I'd suggest resetting the server to avoid state leaking from one test to the next, but it all depends on what you want to do with the server instance.

    Finally, remember to free up resources by calling the Stop() method in the OneTimeTearDown phase (but not during the TearDown phase: you still need the server to be on while running your tests!).

    Basic configuration of HTTP requests and responses with WireMock.NET

    The basic structure of the definition of a mock response using WireMock.NET is made of two parts:

    1. Within the Given method, you define the HTTP Verb and URL path whose response is going to be mocked.
    2. Using RespondWith you define what the mock server must return when the endpoint specified in the Given step is called.

    In the next example, you can see that the _server instance (the one I instantiated in the OneTimeSetUp phase, remember?) must return a specific body (responseBody) and the 200 HTTP Status Code when the /api/books/42 endpoint is called.

    string responseBody = @"
    {
    ""Id"": 42,
    ""Title"": ""Life, the Universe and Everything""
    }
    ";
    
    _server
        .Given(Request.Create().WithPath("/api/books/42").UsingGet())
        .RespondWith(
            Response.Create()
                .WithStatusCode(200)
                .WithBody(responseBody)
        );
    

    Similarly, you can define that an endpoint will return an error by changing its status code:

    _server
        .Given(Request.Create().WithPath("/api/books/42").UsingGet())
        .RespondWith(
            Response.Create()
                .WithStatusCode(404)
        );
    

    All in all, both the request and the response are highly customizable: you can add HTTP Headers, delays, cookies, and much more.
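    As a sketch of that flexibility (the header values and the delay below are illustrative, not taken from the original example), a stub can match on a request header and attach response headers and an artificial delay:

    ```csharp
    _server
        .Given(Request.Create()
            .WithPath("/api/books/42")
            .UsingGet()
            .WithHeader("Accept", "application/json")) // only match requests asking for JSON
        .RespondWith(Response.Create()
            .WithStatusCode(200)
            .WithHeader("Content-Type", "application/json")
            .WithBody(@"{ ""Id"": 42, ""Title"": ""Life, the Universe and Everything"" }")
            .WithDelay(TimeSpan.FromMilliseconds(200))); // simulate a slow network
    ```

    Delays are especially handy for exercising timeout and retry logic without touching a real network.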

    Look closely; there’s one part that is missing: What is the full URL? We have declared only the path (/api/books/42) but have no info about the hostname and the port used to communicate.

    How to integrate WireMock.NET with a Moq-driven IHttpClientFactory

    In order to have WireMock.NET react to an HTTP call, we have to call the exact URL – even the hostname and port must match. But when we create a mocked HttpClient – like we did in this article – we don’t have a real hostname. So, how can we have WireMock.NET and HttpClient work together?

    The answer is easy: since WireMockServer.Start() automatically picks a free port in your localhost, you don’t have to guess the port number, but you can reference the current instance of _server.

    Once the WireMockServer is created, it keeps track of one or more URLs it listens on, intercepting the calls and replying in place of a real server. You can then use one of these URLs to configure the HttpClient generated by the IHttpClientFactory.

    Let’s see the code:

    [Test]
    public async Task GetBookById_Should_HandleBadRequests()
    {
        string baseUrl = _server.Url;
    
        HttpClient myHttpClient = new HttpClient() { BaseAddress = new Uri(baseUrl) };
    
        Mock<IHttpClientFactory> mockFactory = new Mock<IHttpClientFactory>();
        mockFactory.Setup(_ => _.CreateClient("books_client")).Returns(myHttpClient);
    
        _server
            .Given(Request.Create().WithPath("/api/books/42").UsingGet())
            .RespondWith(
                Response.Create()
                .WithStatusCode(404)
            );
    
        BookService service = new BookService(mockFactory.Object);
    
        Assert.CatchAsync<BookServiceException>(() => service.GetBookById(42));
    }
    

    First we access the base URL used by the mock server by accessing _server.Url.

    We use that URL as a base address for the newly created instance of HttpClient.

    Then, we create a mock of IHttpClientFactory and configure it to return the local instance of HttpClient whenever we call the CreateClient method with the specified name.

    In the meanwhile, we define how the mock server must behave when an HTTP call to the specified path is intercepted.

    Finally, we can pass the instance of the mock IHttpClientFactory to the BookService.

    So, the key part to remember is that you can simply access the Url property (or, if you have configured the server to listen on several URLs, the Urls property, which is an array of strings).
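    As a small sketch of the multi-URL case (the port numbers here are illustrative, not from the original article), you can pass explicit URLs through WireMockServerSettings:

    ```csharp
    // Requires: using WireMock.Server; using WireMock.Settings;
    // Start the server on several explicit addresses (ports are illustrative).
    var server = WireMockServer.Start(new WireMockServerSettings
    {
        Urls = new[] { "http://localhost:9091", "http://localhost:9092" }
    });

    string[] allUrls = server.Urls; // both addresses
    string firstUrl = server.Url;   // the first of them
    ```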

    Let WireMock.NET create the HttpClient for you

    As suggested by Stef in the comments to this post, there’s actually another way to generate the HttpClient with the correct URL: let WireMock.NET do it for you.

    Instead of doing

    string baseUrl = _server.Url;
    
    HttpClient myHttpClient = new HttpClient() { BaseAddress = new Uri(baseUrl) };
    

    you can simplify the process by calling the CreateClient method:

    HttpClient myHttpClient = _server.CreateClient();
    

    Of course, you will still have to pass the instance to the mock of IHttpClientFactory.

    Further readings

    It’s important to notice that WireMock and WireMock.NET are two totally distinct things: one is a platform, and the other is a library, maintained by a different group of people, that mimics some functionalities of the platform to help developers write better tests.

    WireMock.NET is well integrated with many other libraries, such as xUnit, FluentAssertions, and .NET Aspire.

    You can find the official repository on GitHub:

    🔗 WireMock.Net | Github

    This article first appeared on Code4IT 🐧

    It’s important to remember that using an HttpClientFactory is generally more performant than instantiating a new HttpClient. Ever heard of socket exhaustion?

    🔗 Use IHttpClientFactory to generate HttpClient instances | Code4IT

    Finally, for the sake of this article I’ve used Moq. However, there’s a similar library you can use: NSubstitute. The learning curve is quite flat: in the most common scenarios, it’s just a matter of syntax usage.
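    For comparison, the Moq setup used in the test above would translate to NSubstitute roughly like this (a sketch, assuming the same myHttpClient instance pointed at the WireMock server):

    ```csharp
    // NSubstitute version of the IHttpClientFactory mock.
    IHttpClientFactory factory = Substitute.For<IHttpClientFactory>();
    factory.CreateClient("books_client").Returns(myHttpClient);

    BookService service = new BookService(factory);
    ```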

    🔗 Moq vs NSubstitute: syntax cheat sheet | Code4IT

    Wrapping up

    In this article, we almost skipped all the basic stuff about WireMock.NET and tried to go straight to the point of integrating WireMock.NET with IHttpClientFactory.

    There are lots of articles out there that explain how to use WireMock.NET – just remember that WireMock and WireMock.NET are not the same thing!

    I hope you enjoyed this article! Let’s keep in touch on LinkedIn or Twitter! 🤜🤛

    Happy coding!

    🐧




