How To Create Kinetic Image Animations with React-Three-Fiber
For the past few months, I’ve been exploring different kinetic motion designs with text and images. The style looks very intriguing, so I decided to create some really cool organic animations using images and React Three Fiber.
In this article, we’ll learn how to create the following animation using Canvas2D and React Three Fiber.
Setting Up the View & Camera
The camera's field of view (FOV) plays a huge role in this project. Let's keep it very low so it looks like an orthographic camera. You can experiment with different perspectives later. I prefer using a perspective camera over an orthographic one because we can always try different FOVs. For a more detailed implementation, check the source code.
<PerspectiveCamera makeDefault fov={7} position={[0, 0, 70]} near={0.01} far={100000} />
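To get a feel for why such a low FOV flattens the perspective, you can compute how much vertical space the camera actually sees at its distance. This is just a scratch calculation with the values above (assuming THREE is imported as in the components below), not part of the demo code:
// Visible height in world units at distance d: h = 2 * d * tan(fov / 2)
const fov = 7; // degrees, matching the camera above
const distance = 70; // the camera's z position
const visibleHeight = 2 * distance * Math.tan(THREE.MathUtils.degToRad(fov) / 2);
console.log(visibleHeight.toFixed(2)); // ≈ 8.56 world units, so perspective distortion stays tiny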
Setting Up Our 3D Shapes
First, let's create and position the 3D objects that will display our images. For this example, we need to make two components:
Billboard.tsx – This is a cylinder that will show our stack of images
'use client'; import { useRef } from 'react'; import * as THREE from 'three'; function Billboard({ radius = 5, ...props }) { const ref = useRef(null); return ( <mesh ref={ref} {...props}> <cylinderGeometry args={[radius, radius, 2, 100, 1, true]} /> <meshBasicMaterial color="red" side={THREE.DoubleSide} /> </mesh> ); } export default Billboard;
Banner.tsx – This is another cylinder that will work like a moving banner
'use client'; import * as THREE from 'three'; import { useRef } from 'react'; function Banner({ radius = 1.6, ...props }) { const ref = useRef(null); return ( <mesh ref={ref} {...props}> <cylinderGeometry args={[radius, radius, radius * 0.07, radius * 80, radius * 10, true]} /> <meshBasicMaterial color="blue" side={THREE.DoubleSide} /> </mesh> ); } export default Banner;
Once we have our components ready, we can use them on our page.
Now let’s build the whole shape:
1. Create a wrapper group – We’ll make a group that wraps all our components. This will help us rotate everything together later.
page.jsx
'use client'; import styles from './page.module.scss'; import Billboard from '@/components/webgl/Billboard/Billboard'; import Banner from '@/components/webgl/Banner/Banner'; import { View } from '@/webgl/View'; import { PerspectiveCamera } from '@react-three/drei'; export default function Home() { return ( <div className={styles.page}> <View className={styles.view} orbit={false}> <PerspectiveCamera makeDefault fov={7} position={[0, 0, 70]} near={0.01} far={100000} /> <group> </group> </View> </div> ); }
2. Render the Billboard and Banner components in a loop – Inside our group, we'll create a loop to render our Billboards and Banners multiple times.
page.jsx
'use client'; import styles from './page.module.scss'; import Billboard from '@/components/webgl/Billboard/Billboard'; import Banner from '@/components/webgl/Banner/Banner'; import { View } from '@/webgl/View'; import { PerspectiveCamera } from '@react-three/drei'; export default function Home() { return ( <div className={styles.page}> <View className={styles.view} orbit={false}> <PerspectiveCamera makeDefault fov={7} position={[0, 0, 70]} near={0.01} far={100000} /> <group> {Array.from({ length: COUNT }).map((_, index) => [ <Billboard key={`billboard-${index}`} radius={5} />, <Banner key={`banner-${index}`} radius={5} />, ])} </group> </View> </div> ); }
3. Stack them up – We’ll use the index from our loop and the y position to stack our items on top of each other. Here’s how it looks so far:
page.jsx
'use client'; import styles from './page.module.scss'; import Billboard from '@/components/webgl/Billboard/Billboard'; import Banner from '@/components/webgl/Banner/Banner'; import { View } from '@/webgl/View'; import { PerspectiveCamera } from '@react-three/drei'; const COUNT = 10; const GAP = 3.2; export default function Home() { return ( <div className={styles.page}> <View className={styles.view} orbit={false}> <PerspectiveCamera makeDefault fov={7} position={[0, 0, 70]} near={0.01} far={100000} /> <group> {Array.from({ length: COUNT }).map((_, index) => [ <Billboard key={`billboard-${index}`} radius={5} position={[0, (index - (Math.ceil(COUNT / 2) - 1)) * GAP, 0]} />, <Banner key={`banner-${index}`} radius={5} position={[0, (index - (Math.ceil(COUNT / 2) - 1)) * GAP - GAP * 0.5, 0]} />, ])} </group> </View> </div> ); }
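To see what the centering math in the position prop does, here is a small scratch loop with our values (not part of the demo code):
const COUNT = 10;
const GAP = 3.2;
const mid = Math.ceil(COUNT / 2) - 1; // 4 for COUNT = 10
for (let index = 0; index < COUNT; index++) {
  console.log(index, (index - mid) * GAP); // index 0 → -12.8, index 4 → 0, index 9 → 16
}
The stack ends up roughly centered around y = 0, which keeps the later group rotation pivoting near the middle.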
4. Add some rotation – Let’s rotate things a bit! First, I’ll hard-code the rotation of our banners to make them more curved and fit nicely with the Billboard component. We’ll also make the radius a bit bigger.
page.jsx
'use client'; import styles from './page.module.scss'; import Billboard from '@/components/webgl/Billboard/Billboard'; import Banner from '@/components/webgl/Banner/Banner'; import { View } from '@/webgl/View'; import { PerspectiveCamera } from '@react-three/drei'; const COUNT = 10; const GAP = 3.2; export default function Home() { return ( <div className={styles.page}> <View className={styles.view} orbit={false}> <PerspectiveCamera makeDefault fov={7} position={[0, 0, 70]} near={0.01} far={100000} /> <group> {Array.from({ length: COUNT }).map((_, index) => [ <Billboard key={`billboard-${index}`} radius={5} position={[0, (index - (Math.ceil(COUNT / 2) - 1)) * GAP, 0]} rotation={[0, index * Math.PI * 0.5, 0]} // <-- rotation of the billboard />, <Banner key={`banner-${index}`} radius={5} rotation={[0, 0, 0.085]} // <-- rotation of the banner position={[0, (index - (Math.ceil(COUNT / 2) - 1)) * GAP - GAP * 0.5, 0]} />, ])} </group> </View> </div> ); }
5. Tilt the whole thing – Now let’s rotate our entire group to make it look like the Leaning Tower of Pisa.
page.jsx
'use client'; import styles from './page.module.scss'; import Billboard from '@/components/webgl/Billboard/Billboard'; import Banner from '@/components/webgl/Banner/Banner'; import { View } from '@/webgl/View'; import { PerspectiveCamera } from '@react-three/drei'; const COUNT = 10; const GAP = 3.2; export default function Home() { return ( <div className={styles.page}> <View className={styles.view} orbit={false}> <PerspectiveCamera makeDefault fov={7} position={[0, 0, 70]} near={0.01} far={100000} /> <group rotation={[-0.15, 0, -0.2]}> // <-- rotate the group {Array.from({ length: COUNT }).map((_, index) => [ <Billboard key={`billboard-${index}`} radius={5} position={[0, (index - (Math.ceil(COUNT / 2) - 1)) * GAP, 0]} rotation={[0, index * Math.PI * 0.5, 0]} />, <Banner key={`banner-${index}`} radius={5} rotation={[0, 0, 0.085]} position={[0, (index - (Math.ceil(COUNT / 2) - 1)) * GAP - GAP * 0.5, 0]} />, ])} </group> </View> </div> ); }
6. Perfect! – Our 3D shapes are all set up. Now we can add our images to them.
Creating a Texture from Our Images Using Canvas
Here’s the cool part: we’ll put all our images onto a canvas, then use that canvas as a texture on our Billboard shape.
To make this easier, I created some helper functions that simplify the whole process.
getCanvasTexture.js
import * as THREE from 'three'; /** * Preloads an image and calculates its dimensions */ async function preloadImage(imageUrl, axis, canvasHeight, canvasWidth) { const img = new Image(); img.crossOrigin = 'anonymous'; await new Promise((resolve, reject) => { img.onload = () => resolve(); img.onerror = () => reject(new Error(`Failed to load image: ${imageUrl}`)); img.src = imageUrl; }); const aspectRatio = img.naturalWidth / img.naturalHeight; let calculatedWidth; let calculatedHeight; if (axis === 'x') { // Horizontal layout: scale to fit canvasHeight calculatedHeight = canvasHeight; calculatedWidth = canvasHeight * aspectRatio; } else { // Vertical layout: scale to fit canvasWidth calculatedWidth = canvasWidth; calculatedHeight = canvasWidth / aspectRatio; } return { img, width: calculatedWidth, height: calculatedHeight }; } function calculateCanvasDimensions(imageData, axis, gap, canvasHeight, canvasWidth) { if (axis === 'x') { const totalWidth = imageData.reduce( (sum, data, index) => sum + data.width + (index > 0 ? gap : 0), 0); return { totalWidth, totalHeight: canvasHeight }; } else { const totalHeight = imageData.reduce( (sum, data, index) => sum + data.height + (index > 0 ? gap : 0), 0); return { totalWidth: canvasWidth, totalHeight }; } } function setupCanvas(canvasElement, context, dimensions) { const { totalWidth, totalHeight } = dimensions; const devicePixelRatio = Math.min(window.devicePixelRatio || 1, 2); canvasElement.width = totalWidth * devicePixelRatio; canvasElement.height = totalHeight * devicePixelRatio; if (devicePixelRatio !== 1) context.scale(devicePixelRatio, devicePixelRatio); context.fillStyle = '#ffffff'; context.fillRect(0, 0, totalWidth, totalHeight); } function drawImages(context, imageData, axis, gap) { let currentX = 0; let currentY = 0; context.save(); for (const data of imageData) { context.drawImage(data.img, currentX, currentY, data.width, data.height); if (axis === 'x') currentX += data.width + gap; else currentY += data.height + gap; } context.restore(); } function createTextureResult(canvasElement, dimensions) { const texture = new THREE.CanvasTexture(canvasElement); texture.needsUpdate = true; texture.wrapS = THREE.RepeatWrapping; texture.wrapT = THREE.ClampToEdgeWrapping; texture.generateMipmaps = false; texture.minFilter = THREE.LinearFilter; texture.magFilter = THREE.LinearFilter; return { texture, dimensions: { width: dimensions.totalWidth, height: dimensions.totalHeight, aspectRatio: dimensions.totalWidth / dimensions.totalHeight, }, }; } export async function getCanvasTexture({ images, gap = 10, canvasHeight = 512, canvasWidth = 512, canvas, ctx, axis = 'x', }) { if (!images.length) throw new Error('No images'); // Create canvas and context if not provided const canvasElement = canvas || document.createElement('canvas'); const context = ctx || canvasElement.getContext('2d'); if (!context) throw new Error('No context'); // Preload all images in parallel const imageData = await Promise.all( images.map((image) => preloadImage(image.url, axis, canvasHeight, canvasWidth)) ); // Calculate total canvas dimensions const dimensions = calculateCanvasDimensions(imageData, axis, gap, canvasHeight, canvasWidth); // Setup canvas setupCanvas(canvasElement, context, dimensions); // Draw all images drawImages(context, imageData, axis, gap); // Create and return texture result return createTextureResult(canvasElement, dimensions) }
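For reference, here is how the helper might be called on its own, inside an async function. The image URLs are placeholders for this sketch, not files from the demo:
const { texture, dimensions } = await getCanvasTexture({
  images: [{ url: '/images/photo-1.jpg' }, { url: '/images/photo-2.jpg' }], // hypothetical paths
  axis: 'x', // lay the images out in a horizontal strip
  gap: 10, // pixels between images
  canvasHeight: 512,
});
// texture is a THREE.CanvasTexture; dimensions holds width, height and aspectRatio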
Then we can also create a useCollageTexture hook that we can easily use in our components.
useCollageTexture.jsx
import { useState, useEffect, useCallback } from 'react'; import { getCanvasTexture } from '@/webgl/helpers/getCanvasTexture'; export function useCollageTexture(images, options = {}) { const [textureResults, setTextureResults] = useState(null); const [isLoading, setIsLoading] = useState(true); const [error, setError] = useState(null); const { gap = 0, canvasHeight = 512, canvasWidth = 512, axis = 'x' } = options; const createTexture = useCallback(async () => { try { setIsLoading(true); setError(null); const result = await getCanvasTexture({ images, gap, canvasHeight, canvasWidth, axis, }); setTextureResults(result); } catch (err) { setError(err instanceof Error ? err : new Error('Failed to create texture')); } finally { setIsLoading(false); } }, [images, gap, canvasHeight, canvasWidth, axis]); useEffect(() => { if (images.length > 0) createTexture(); }, [images.length, createTexture]); return { texture: textureResults?.texture || null, dimensions: textureResults?.dimensions || null, isLoading, error, }; }
Adding the Canvas to Our Billboard
Now let's use our useCollageTexture hook on our page. We'll create some simple loading logic. It takes a second to fetch all the images and put them onto the canvas. Then we'll pass our texture and the canvas dimensions into the Billboard component.
page.jsx
'use client'; import styles from './page.module.scss'; import Billboard from '@/components/webgl/Billboard/Billboard'; import Banner from '@/components/webgl/Banner/Banner'; import Loader from '@/components/ui/modules/Loader/Loader'; import images from '@/data/images'; import { View } from '@/webgl/View'; import { PerspectiveCamera } from '@react-three/drei'; import { useCollageTexture } from '@/hooks/useCollageTexture'; const COUNT = 10; const GAP = 3.2; export default function Home() { const { texture, dimensions, isLoading } = useCollageTexture(images); // <-- getting the texture and dimensions from the useCollageTexture hook if (isLoading) return <Loader />; // <-- showing the loader when the texture is loading return ( <div className={styles.page}> <View className={styles.view} orbit={false}> <PerspectiveCamera makeDefault fov={7} position={[0, 0, 100]} near={0.01} far={100000} /> <group rotation={[-0.15, 0, -0.2]}> {Array.from({ length: COUNT }).map((_, index) => [ <Billboard key={`billboard-${index}`} radius={5} rotation={[0, index * Math.PI * 0.5, 0]} position={[0, (index - (Math.ceil(COUNT / 2) - 1)) * GAP, 0]} texture={texture} // <--passing the texture to the billboard dimensions={dimensions} // <--passing the dimensions to the billboard />, <Banner key={`banner-${index}`} radius={5.035} rotation={[0, 0, 0.085]} position={[ 0, (index - (Math.ceil(COUNT / 2) - 1)) * GAP - GAP * 0.5, 0, ]} />, ])} </group> </View> </div> ); }
Inside the Billboard component, we need to properly map this texture to make sure everything fits correctly. The width of our canvas will match the circumference of the cylinder, and we’ll center the y position of the texture. This way, all the images keep their resolution and don’t get squished or stretched.
Billboard.jsx
'use client'; import * as THREE from 'three'; import { useRef } from 'react'; function setupCylinderTextureMapping(texture, dimensions, radius, height) { const cylinderCircumference = 2 * Math.PI * radius; const cylinderHeight = height; const cylinderAspectRatio = cylinderCircumference / cylinderHeight; if (dimensions.aspectRatio > cylinderAspectRatio) { // Canvas is wider than cylinder proportionally texture.repeat.x = cylinderAspectRatio / dimensions.aspectRatio; texture.repeat.y = 1; texture.offset.x = (1 - texture.repeat.x) / 2; } else { // Canvas is taller than cylinder proportionally texture.repeat.x = 1; texture.repeat.y = dimensions.aspectRatio / cylinderAspectRatio; } // Center the texture texture.offset.y = (1 - texture.repeat.y) / 2; } function Billboard({ texture, dimensions, radius = 5, ...props }) { const ref = useRef(null); setupCylinderTextureMapping(texture, dimensions, radius, 2); return ( <mesh ref={ref} {...props}> <cylinderGeometry args={[radius, radius, 2, 100, 1, true]} /> <meshBasicMaterial map={texture} side={THREE.DoubleSide} /> </mesh> ); } export default Billboard;
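To make the mapping concrete, here are the numbers for our setup (radius 5, height 2). A scratch check, not demo code:
const circumference = 2 * Math.PI * 5; // ≈ 31.42 world units
const cylinderAspectRatio = circumference / 2; // ≈ 15.71
// A canvas wider than ~15.71:1 is cropped horizontally (repeat.x < 1);
// a narrower one is cropped vertically (repeat.y < 1). Nothing gets stretched either way.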
Now let's animate them using the useFrame hook. The trick to animating these images is to just move the X offset of the texture. This gives us the effect of a rotating mesh, when really we're just moving the texture offset.
Billboard.jsx
'use client'; import * as THREE from 'three'; import { useRef } from 'react'; import { useFrame } from '@react-three/fiber'; function setupCylinderTextureMapping(texture, dimensions, radius, height) { const cylinderCircumference = 2 * Math.PI * radius; const cylinderHeight = height; const cylinderAspectRatio = cylinderCircumference / cylinderHeight; if (dimensions.aspectRatio > cylinderAspectRatio) { // Canvas is wider than cylinder proportionally texture.repeat.x = cylinderAspectRatio / dimensions.aspectRatio; texture.repeat.y = 1; texture.offset.x = (1 - texture.repeat.x) / 2; } else { // Canvas is taller than cylinder proportionally texture.repeat.x = 1; texture.repeat.y = dimensions.aspectRatio / cylinderAspectRatio; } // Center the texture texture.offset.y = (1 - texture.repeat.y) / 2; } function Billboard({ texture, dimensions, radius = 5, ...props }) { const ref = useRef(null); setupCylinderTextureMapping(texture, dimensions, radius, 2); useFrame((state, delta) => { if (texture) texture.offset.x += delta * 0.001; }); return ( <mesh ref={ref} {...props}> <cylinderGeometry args={[radius, radius, 2, 100, 1, true]} /> <meshBasicMaterial map={texture} side={THREE.DoubleSide} /> </mesh> ); } export default Billboard;
I think it would look even better if we made the back of the images a little darker. To do this, I created MeshImageMaterial – it's just an extension of MeshBasicMaterial that makes our backface a bit darker.
MeshImageMaterial.js
import * as THREE from 'three'; import { extend } from '@react-three/fiber'; export class MeshImageMaterial extends THREE.MeshBasicMaterial { constructor(parameters = {}) { super(parameters); this.setValues(parameters); } onBeforeCompile = (shader) => { shader.fragmentShader = shader.fragmentShader.replace( '#include <color_fragment>', /* glsl */ `#include <color_fragment> if (!gl_FrontFacing) { vec3 blackCol = vec3(0.0); diffuseColor.rgb = mix(diffuseColor.rgb, blackCol, 0.7); } ` ); }; } extend({ MeshImageMaterial });
Billboard.jsx
'use client'; import * as THREE from 'three'; import { useRef } from 'react'; import { useFrame } from '@react-three/fiber'; import '@/webgl/materials/MeshImageMaterial'; function setupCylinderTextureMapping(texture, dimensions, radius, height) { const cylinderCircumference = 2 * Math.PI * radius; const cylinderHeight = height; const cylinderAspectRatio = cylinderCircumference / cylinderHeight; if (dimensions.aspectRatio > cylinderAspectRatio) { // Canvas is wider than cylinder proportionally texture.repeat.x = cylinderAspectRatio / dimensions.aspectRatio; texture.repeat.y = 1; texture.offset.x = (1 - texture.repeat.x) / 2; } else { // Canvas is taller than cylinder proportionally texture.repeat.x = 1; texture.repeat.y = dimensions.aspectRatio / cylinderAspectRatio; } // Center the texture texture.offset.y = (1 - texture.repeat.y) / 2; } function Billboard({ texture, dimensions, radius = 5, ...props }) { const ref = useRef(null); setupCylinderTextureMapping(texture, dimensions, radius, 2); useFrame((state, delta) => { if (texture) texture.offset.x += delta * 0.001; }); return ( <mesh ref={ref} {...props}> <cylinderGeometry args={[radius, radius, 2, 100, 1, true]} /> <meshImageMaterial map={texture} side={THREE.DoubleSide} toneMapped={false} /> </mesh> ); } export default Billboard;
And now we have our images moving around cylinders. Next, we’ll focus on banners (or marquees, whatever you prefer).
Adding Texture to the Banner
The last thing we need to fix is our Banner component. I wrapped it with this texture. Feel free to take it and edit it however you want, but remember to keep the proper dimensions of the texture.
We simply import our texture using the useTexture hook, map it onto our material, and animate the texture offset just like we did in our Billboard component.
Banner.jsx
'use client'; import * as THREE from 'three'; import bannerTexture from '@/assets/images/banner.jpg'; import { useTexture } from '@react-three/drei'; import { useFrame } from '@react-three/fiber'; import { useRef } from 'react'; function Banner({ radius = 1.6, ...props }) { const ref = useRef(null); const texture = useTexture(bannerTexture.src); texture.wrapS = texture.wrapT = THREE.RepeatWrapping; useFrame((state, delta) => { if (!ref.current) return; const material = ref.current.material; if (material.map) material.map.offset.x += delta / 30; }); return ( <mesh ref={ref} {...props}> <cylinderGeometry args={[radius, radius, radius * 0.07, radius * 80, radius * 10, true]} /> <meshBasicMaterial map={texture} map-anisotropy={16} map-repeat={[15, 1]} side={THREE.DoubleSide} toneMapped={false} backfaceRepeatX={3} /> </mesh> ); } export default Banner;
Nice! Now we have something cool, but I think it would look even cooler if we replaced the backface with something different. Maybe a gradient? For this, I created another extension of MeshBasicMaterial called MeshBannerMaterial. As you probably guessed, we just put a gradient on the backface. That's it! Let's use it in our Banner component. We replace the MeshBasicMaterial with MeshBannerMaterial and now it looks like this!
MeshBannerMaterial.js
import * as THREE from 'three'; import { extend } from '@react-three/fiber'; export class MeshBannerMaterial extends THREE.MeshBasicMaterial { constructor(parameters = {}) { super(parameters); this.setValues(parameters); this.backfaceRepeatX = 1.0; if (parameters.backfaceRepeatX !== undefined) this.backfaceRepeatX = parameters.backfaceRepeatX; } onBeforeCompile = (shader) => { shader.uniforms.repeatX = { value: this.backfaceRepeatX * 0.1 }; shader.fragmentShader = shader.fragmentShader .replace( '#include <common>', /* glsl */ `#include <common> uniform float repeatX; vec3 pal( in float t, in vec3 a, in vec3 b, in vec3 c, in vec3 d ) { return a + b*cos( 6.28318*(c*t+d) ); } ` ) .replace( '#include <color_fragment>', /* glsl */ `#include <color_fragment> if (!gl_FrontFacing) { diffuseColor.rgb = pal(vMapUv.x * repeatX, vec3(0.5,0.5,0.5),vec3(0.5,0.5,0.5),vec3(1.0,1.0,1.0),vec3(0.0,0.10,0.20) ); } ` ); }; } extend({ MeshBannerMaterial });
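The pal() function in the shader above is the classic cosine color palette (popularized by Inigo Quilez): color(t) = a + b * cos(2π(c·t + d)). With a = b = (0.5, 0.5, 0.5), c = (1, 1, 1) and d = (0.0, 0.10, 0.20), as passed in the fragment shader, it produces a smooth rainbow-like gradient across the backface, driven by the horizontal texture coordinate.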
Banner.jsx
'use client'; import * as THREE from 'three'; import bannerTexture from '@/assets/images/banner.jpg'; import { useTexture } from '@react-three/drei'; import { useFrame } from '@react-three/fiber'; import { useRef } from 'react'; import '@/webgl/materials/MeshBannerMaterial'; function Banner({ radius = 1.6, ...props }) { const ref = useRef(null); const texture = useTexture(bannerTexture.src); texture.wrapS = texture.wrapT = THREE.RepeatWrapping; useFrame((state, delta) => { if (!ref.current) return; const material = ref.current.material; if (material.map) material.map.offset.x += delta / 30; }); return ( <mesh ref={ref} {...props}> <cylinderGeometry args={[radius, radius, radius * 0.07, radius * 80, radius * 10, true]} /> <meshBannerMaterial map={texture} map-anisotropy={16} map-repeat={[15, 1]} side={THREE.DoubleSide} toneMapped={false} backfaceRepeatX={3} /> </mesh> ); } export default Banner;
And now we have it ✨
Check out the demo! You can experiment with this method in lots of ways. For example, I created two more examples with shapes I made in Blender and mapped canvas textures onto them. You can check them out here:
Final Words
Check out the final versions of all demos:
I hope you enjoyed this tutorial and learned something new!
Feel free to check out the source code for more details!
-
Top 10 Cloud Security Challenges in 2025 And How to Solve Them with Seqrite
In today’s world, organizations are rapidly embracing cloud security to safeguard their data and operations. However, as cloud adoption grows, so do the risks. In this post, we highlight the top cloud security challenges and show how Seqrite can help you tackle them with ease.
1. Misconfigurations
One of the simplest yet most dangerous mistakes is misconfiguring cloud workloads: storage buckets left public, weak IAM settings, or missing encryption. Cybercriminals actively scan for these mistakes. A small misconfiguration can lead to significant data leakage or, in the worst case, ransomware deployment. Seqrite Endpoint Protection Cloud ensures your cloud environment adheres to best-practice security settings before threats even strike.
2. Shared Responsibility Confusion
The cloud model operates on shared responsibility: providers secure infrastructure, you manage your data and configurations. Too many teams skip this second part. Inadequate control over access, authentication, and setup drives serious risks. With Seqrite’s unified dashboard for access control, IAM, and policy enforcement, you stay firmly in control without getting overwhelmed.
3. Expanded Attack Surface
More cloud services mean more code, more APIs, and more opportunities for attack. Whether it's serverless functions or public API endpoints, the number of access points grows quickly. Seqrite tackles this with integrated API scanning, vulnerability assessment, and real-time threat detection. Every service, even ephemeral ones, is continuously monitored.
4. Unauthorized Access & Account Hijacking
Attackers often gain entry via stolen credentials, especially in shared or multi-cloud environments. Once inside, they move laterally and hijack more resources. Seqrite’s multi-factor authentication, adaptive risk scoring, and real-time anomaly detection lock out illicit access and alert you instantly.
5. Insufficient Data Encryption
Unencrypted data, whether at rest or in transit, is a gold mine for attackers. Industries with sensitive or regulated information, like healthcare or finance, simply can't afford this. Seqrite ensures enterprise-grade encryption everywhere you store or transmit data and handles key management so that it's secure and hassle-free.
6. Poor Visibility and Monitoring
Without centralized visibility, security teams rely on manual cloud consoles and piecemeal logs. That slows response and leaves gaps. Seqrite solves this with a unified monitoring layer that aggregates logs and events across all your cloud environments. You get complete oversight and lightning-fast detection.
7. Regulatory Compliance Pressures
Compliance with GDPR, HIPAA, PCI-DSS, DPDPA and other regulations is mandatory—but complex in multi-cloud environments. Seqrite Data Privacy simplifies compliance with continuous audits, policy enforcement, and detailed reports, helping you reduce audit stress and regulatory risk.
8. Staffing & Skills Gap
Hiring cloud-native, security-savvy experts is tough. Many teams lack the expertise to monitor and secure dynamic cloud environments. Seqrite’s intuitive interface, automation, and policy templates remove much of the manual work, allowing lean IT teams to punch above their weight.
9. Multi-cloud Management Challenges
Working across AWS, Azure, Google Cloud and maybe even private clouds? Each has its own models and configurations. This fragmentation creates blind spots and policy drift. Seqrite consolidates everything into one seamless dashboard, ensuring consistent cloud security policies across all environments.
10. Compliance in Hybrid & Multi-cloud Setups
Hybrid cloud setups introduce additional risks: cross-environment data flows, networking complexities, and inconsistent controls. Seqrite supports consistent security policy application across on-premises, private clouds, and public clouds, no matter where a workload lives.
Bring in Seqrite to keep your cloud journey safe, compliant, and hassle-free.
-
How to extract, create, and navigate Zip Files in C# | Code4IT
Learn how to zip and unzip compressed files with C#. Beware: it’s not as obvious as it might seem!
When working with local files, you might need to open, create, or update Zip files.
In this article, we will learn how to work with Zip files in C#. We will learn how to perform basic operations such as opening, extracting, and creating a Zip file.
The main class we will use is named ZipFile, and it comes from the System.IO.Compression namespace. It's been present in C# since .NET Framework 4.5, so we can say it's pretty stable 😉 Nevertheless, there are some tricky points that you need to know before using this class. Let's learn!
Using C# to list all items in a Zip file
Once you have a Zip file, you can access the internal items without extracting the whole Zip.
You can use the ZipFile.Open method:
using ZipArchive archive = ZipFile.Open(zipFilePath, ZipArchiveMode.Read); System.Collections.ObjectModel.ReadOnlyCollection<ZipArchiveEntry> entries = archive.Entries;
Notice that I specified the ZipArchiveMode. This is an Enum whose values are Read, Create, and Update.
Using the Entries property of the ZipArchive, you can access the whole list of files stored within the Zip folder, each represented by a ZipArchiveEntry instance.
The ZipArchiveEntry object contains several fields, like the file's name and the full path from the root archive.
There are a few key points to remember about the entries listed in the ZipArchiveEntry:
- It is a ReadOnlyCollection<ZipArchiveEntry>: it means that even if you find a way to add or update the items in memory, the changes are not applied to the actual files;
- It lists all files and folders, not only those at the root level. As you can see from the image above, it lists both the files at the root level, like File.txt, and those in inner folders, such as TestZip/InnerFolder/presentation.pptx;
- Each file is characterized by two similar but different properties: Name is the actual file name (like presentation.pptx), while FullName contains the path from the root of the archive (e.g. TestZip/InnerFolder/presentation.pptx);
- It lists folders as if they were files: in the image above, you can see TestZip/InnerFolder. You can recognize them because their Name property is empty and their Length is 0 (see the sketch below);
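A minimal sketch that prints these properties for every entry, reusing the archive variable from the snippet above:
foreach (ZipArchiveEntry entry in archive.Entries)
{
    // Folders show up with an empty Name and a Length of 0
    Console.WriteLine($"Name: {entry.Name} | FullName: {entry.FullName} | Length: {entry.Length}");
}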
Lastly, remember that ZipFile.Open returns an IDisposable, so you should place the operations within a using statement.
❓❓A question for you! Why do we see an item for the TestZip/InnerFolder folder, but there is no reference to the TestZip folder? Drop a comment below 📩
Extracting a Zip folder is easy but not obvious.
We have only one way to do that: by calling the ZipFile.ExtractToDirectory method.
It accepts as mandatory parameters the path of the Zip file to be extracted and the path to the destination:
var zipPath = @"C:\Users\d.bellone\Desktop\TestZip.zip"; var destinationPath = @"C:\Users\d.bellone\Desktop\MyDestination"; ZipFile.ExtractToDirectory(zipPath, destinationPath);
Once you run it, you will see the content of the Zip copied and extracted to the MyDestination folder.
Note that this method creates the destination folder if it does not exist.
This method accepts two more parameters:
- entryNameEncoding, by which you can specify the encoding. The default value is UTF-8.
- overwriteFiles allows you to specify whether it must overwrite existing files. The default value is false. If set to false and the destination files already exist, this method throws a System.IO.IOException saying that the file already exists (see the example below).
Using C# to create a Zip from a folder
The key method here is ZipFile.CreateFromDirectory, which allows you to create Zip files in a flexible way.
The first mandatory value is, of course, the source directory path.
The second mandatory parameter is the destination of the resulting Zip file.
It can be the local path to the file:
string sourceFolderPath = @"\Desktop\myFolder"; string destinationZipPath = @"\Desktop\destinationFile.zip"; ZipFile.CreateFromDirectory(sourceFolderPath, destinationZipPath);
Or it can be a Stream that you can use later for other operations:
using (MemoryStream memStream = new MemoryStream()) { string sourceFolderPath = @"\Desktop\myFolder"; ZipFile.CreateFromDirectory(sourceFolderPath, memStream); var length = memStream.Length; // here the Stream is populated }
You can finally add some optional parameters:
- compressionLevel, whose values are Optimal, Fastest, NoCompression, and SmallestSize.
- includeBaseDirectory: a flag that defines if you have to copy only the first-level files or also the root folder (see the example below).
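For example, combining both optional parameters with the paths we used before:
ZipFile.CreateFromDirectory(sourceFolderPath, destinationZipPath, CompressionLevel.Fastest, includeBaseDirectory: true); // zips the folder itself, not just its contents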
A quick comparison of the four Compression Levels
As we just saw, we have four compression levels: Optimal, Fastest, NoCompression, and SmallestSize.
What happens if I use the different values to zip all the photos and videos of my latest trip?
The source folder’s size is 16.2 GB.
Let me zip it with the four compression levels:
private long CreateAndTrack(string sourcePath, string destinationPath, CompressionLevel compression) { Stopwatch stopwatch = Stopwatch.StartNew(); ZipFile.CreateFromDirectory( sourceDirectoryName: sourcePath, destinationArchiveFileName: destinationPath, compressionLevel: compression, includeBaseDirectory: true ); stopwatch.Stop(); return stopwatch.ElapsedMilliseconds; } // in Main... var smallestTime = CreateAndTrack(sourceFolderPath, Path.Combine(rootFolder, "Smallest.zip"), CompressionLevel.SmallestSize); var noCompressionTime = CreateAndTrack(sourceFolderPath, Path.Combine(rootFolder, "NoCompression.zip"), CompressionLevel.NoCompression); var fastestTime = CreateAndTrack(sourceFolderPath, Path.Combine(rootFolder, "Fastest.zip"), CompressionLevel.Fastest); var optimalTime = CreateAndTrack(sourceFolderPath, Path.Combine(rootFolder, "Optimal.zip"), CompressionLevel.Optimal);
By executing this operation, we have this table:
Compression Type | Execution time (ms) | Execution time (s) | Size (bytes) | Size on disk (bytes)
Optimal | 483481 | 483 | 17,340,065,594 | 17,340,067,840
Fastest | 661674 | 661 | 16,935,519,764 | 17,004,888,064
Smallest | 344756 | 344 | 17,339,881,242 | 17,339,883,520
No Compression | 42521 | 42 | 17,497,652,162 | 17,497,653,248
We can see a bunch of weird things:
- Fastest compression generates a smaller file than Smallest compression.
- Fastest compression is way slower than Smallest compression.
- Optimal lies in the middle.
This is to say: don’t trust the names; remember to benchmark the parts where you need performance, even with a test as simple as this.
Wrapping up
This was a quick article about one specific class in the .NET ecosystem.
As we saw, even though the class is simple and it’s all about three methods, there are some things you should keep in mind before using this class in your code.
I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛
Happy coding!
🐧
- It is a
-
How Seqrite Endpoint Protection Blocks Bots, Scripts, and Malware
In today's hyper-connected digital world, the cybersecurity landscape is shifting dramatically. Gone are the days when cyberattacks primarily relied on human intervention. We're now facing a new breed of silent, swift adversaries: non-human threats. These automated entities—bots, malicious scripts, and sophisticated malware—are designed to operate at machine speed, exploiting vulnerabilities, bypassing traditional defenses, and often remaining undetected until significant damage has occurred.
So, how do you defend against something you can't see, something that moves faster than human reaction? The answer lies in intelligent, automated endpoint security. Enter Seqrite Endpoint Protection (EPP), your robust shield against these invisible invaders. Available for both cloud-based and on-premise deployments, Seqrite EPP is engineered with cutting-edge technologies specifically designed to identify and neutralize these stealthy, non-human threats.
Understanding the Enigma: What Exactly Are Non-Human Cyber Threats?
When we talk about “non-human cyber threats,” we’re referring to automated programs and code snippets that launch attacks without requiring direct human interaction. These include:
- Bots: Automated programs designed to perform repetitive tasks at scale. Think credential stuffing attacks where bots try thousands of username/password combinations, or Distributed Denial of Service (DDoS) attacks that flood a server with traffic.
- Malicious Scripts: These are pieces of automated code, often hidden within legitimate-looking files or web pages, designed to exploit system weaknesses, exfiltrate sensitive data, or spread malware across your network.
- Exploit Kits: These are sophisticated toolkits that automatically scan systems for unpatched vulnerabilities and then deploy exploits to gain unauthorized access or deliver payloads like ransomware.
The key characteristic of these threats is their autonomy and speed. They operate under the radar, making traditional, reactive security measures largely ineffective. This is precisely why proactive, automated detection and prevention mechanisms are absolutely critical for modern businesses.
Seqrite Endpoint Protection: Your Multi-Layered Defense Against Automation
Seqrite’s EPP doesn’t just offer a single line of defense; it deploys a comprehensive, multi-layered security framework. This framework is specifically engineered to detect and block automation-driven threats using a powerful combination of intelligent rule-based systems, behavioral analysis, and advanced AI-powered capabilities.
Let’s dive into the key features that make Seqrite EPP a formidable opponent against non-human threats:
- Advanced Device Control: Many non-human threats, especially scripts and certain types of malware, are delivered via external devices like USB drives. Seqrite’s Advanced Device Control enforces strict usage policies, allowing you to define what devices can connect to your endpoints and how they can be used. By controlling storage, network, and wireless interfaces, you effectively close off a major entry point for automated attacks.
- Application Control with Zero Trust: Imagine only allowing approved applications and scripts to run on your systems. That’s the power of Seqrite’s Application Control. By implementing a Zero Trust model, it blocks unknown or unapproved applications and scripts from executing. Through meticulous allowlisting and blocklisting, only trusted applications can operate, making it incredibly effective against stealthy automation tools that attempt to execute malicious code.
- Behavior-Based Detection (GoDeep.AI): This is where Seqrite truly shines. Leveraging cutting-edge AI and machine learning, GoDeep.AI continuously monitors endpoint activity to identify abnormal and suspicious behaviors that indicate a non-human threat. This includes detecting:
- Repetitive access patterns: A hallmark of bots attempting to brute-force accounts or scan for vulnerabilities.
- Scripted encryption behavior: Instantly flags the tell-tale signs of ransomware encrypting files.
- Silent data exfiltration attempts: Catches automated processes trying to siphon off sensitive information.
The system doesn't just detect; it actively stops suspicious activity in its tracks before it can cause any harm.
- Intrusion Detection & Prevention System (IDS/IPS): Seqrite’s integrated IDS/IPS actively monitors network traffic for known exploit patterns and anomalous behavior. This robust system is crucial for blocking automation-based threats that attempt to infiltrate your network through known vulnerabilities or launch network-based attacks like port scanning.
- File Sandboxing: When a suspicious file or script enters your environment, Seqrite doesn’t let it run directly on your system. Instead, it’s whisked away to a secure, isolated virtual sandbox environment for deep analysis. Here, the file is allowed to execute and its behavior is meticulously observed. If it exhibits any malicious traits—like attempting to mimic user behavior, access restricted resources, or encrypt files—it’s immediately flagged and stopped, preventing any potential damage to your actual endpoints.
- Web Protection & Phishing Control: Many non-human threats, particularly bots and sophisticated malware, rely on communication with remote command-and-control (C2) servers. Seqrite’s Web Protection proactively blocks:
- Access to known malicious domains.
- Phishing sites designed to steal credentials.
- Unauthorized web access that could lead to malware downloads.
- Crucially, it cuts off botnet callbacks, effectively severing the communication lines between bots and their command centers, rendering them inert.
Enhancing Your Defense: Essential Supporting Features
Beyond its core capabilities, Seqrite Endpoint Protection is bolstered by a suite of supporting features that further strengthen your organization’s resilience against non-human threats and beyond:
Feature | Benefit
Patch Management | Automatically identifies and fixes software vulnerabilities that bots and scripts often exploit to gain entry. Proactive patching is key to prevention.
Firewall | Provides a critical layer of defense by filtering unauthorized network traffic and blocking communication with known botnet IP addresses.
Data Loss Prevention (DLP) | Prevents automated data theft by monitoring and controlling data in transit, ensuring sensitive information doesn't leave your network without authorization.
Centralized Log Management | Offers a unified view of security events, allowing for rapid detection and auditing of unusual or suspicious behaviors across all endpoints.
Disk Encryption Management | Safeguards your data by encrypting entire disks, stopping automated decryption attempts even if data is stolen, and protecting against ransomware.
The Future of Endpoint Security: Why Non-Human Threat Detection is Non-Negotiable
As we move deeper into 2025 and beyond, cyber threats are becoming increasingly automated, sophisticated, and often, AI-driven. Relying on traditional, signature-based security solutions is no longer enough to match the speed, stealth, and evolving tactics of automation-based attacks.
Seqrite Endpoint Protection is built for this future. It leverages intelligent automation to effectively combat automation—blocking bots, malicious scripts, advanced ransomware, and other non-human threats before they can execute and wreak havoc on your systems and data.
Final Takeaway: Don’t Let Invisible Threats Compromise Your Business
In a world where cyberattacks are increasingly executed by machines, your defense must be equally advanced. With its comprehensive suite of features—including cutting-edge device and application control, AI-driven behavioral detection (GoDeep.AI), robust network-level protection, and secure sandboxing—Seqrite Endpoint Protection ensures your endpoints remain locked down and secure.
Whether your organization operates with a cloud-first strategy or relies on a traditional on-premise infrastructure, Seqrite provides the adaptable and powerful security solutions you need.
Ready to Fortify Your Defenses?
It’s time to upgrade your endpoint security and protect your organization from both human-initiated and the ever-growing wave of non-human cyber threats.
Explore how Seqrite can secure your business today. Request a Free Trial or Schedule a Demo.
-
How to kill a process running on a local port in Windows | Code4IT
Now you can’t run your application because another process already uses the port. How can you find that process? How to kill it?
Sometimes, when trying to run your ASP.NET application, there's something stopping you.
Have you ever found a message like this?
Failed to bind to address https://127.0.0.1:7261: address already in use.
You can try over and over again, and you can even restart the application, but the port still appears to be used by another process.
How can you find the process that is running on a local port? How can you kill it to free up the port and, eventually, be able to run your application?
In this article, we will learn how to find the blocking port in Windows 10 and Windows 11, and then we will learn how to kill that process given its PID.
How to find the process running on a port on Windows 11 using PowerShell
Let’s see how to identify the process that is running on port 7261.
Open a PowerShell and run the netstat command:
NETSTAT is a command that shows info about the active TCP/IP network connections. It accepts several options. In this case, we will use:
- -n: displays addresses and port numbers in numerical form;
- -o: displays the owning process ID associated with each connection;
- -a: displays all connections and listening ports;
- -p: filters for a specific protocol (TCP or UDP).
Notice that the last column lists the PID (Process ID) bound to each connection.
From here, we can use the findstr command to get only the rows containing a specific string (the searched port number):
netstat -noa -p TCP | findstr 7261
Now, by looking at the last column, we can identify the Process ID: 19160.
How to kill a process given its PID on Windows or PowerShell
Now that we have the Process ID (PID), we can open the Task Manager, paste the PID value in the topmost textbox, and find the related application.
In our case, it was an instance of Visual Studio running an API application. We can now kill the process by hitting End Task.
If you prefer working with PowerShell, you can find the details of the related process by using the Get-Process command.
Then, you can use the taskkill command, specifying the PID with the /PID flag and adding the /F flag to force the killing of the process.
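With the PID we found earlier (19160 in this example), the two commands look like this:
Get-Process -Id 19160
taskkill /PID 19160 /F
We have killed the process related to the running application. Visual Studio is still working, of course.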
Further readings
Hey, what are these fancy colours on the PowerShell?
It’s a customization I added to show the current folder and the info about the associated GIT repository. It’s incredibly useful while developing and navigating the file system with PowerShell.
This article first appeared on Code4IT 🐧
Wrapping up
As you can imagine, this article exists because I often forget how to find the process that stops my development.
It’s always nice to delve into these topics to learn more about what you can do with PowerShell and which flags are available for a command.
I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛
Happy coding!
🐧
-
How to Create Responsive and SEO-friendly WebGL Text
Rendering text in WebGL opens up a whole new world of unique visual effects and creative possibilities that are often
impossible to achieve with traditional HTML, CSS and JavaScript alone. By the end of this tutorial, we’ll have created
WebGL-rendered text elements that perfectly mimic the underlying HTML structure. Since we’ll be taking an HTML-first
approach, it will be responsive, SEO-friendly and accessible. We’ll be doing every step manually so you’ll gain a
solid understanding of the principles behind merging HTML and WebGL, and text-specific stuff like how to translate CSS
styles into the 3D world.
We'll be creating the below demo:
We’ll start off with a standard styled HTML setup. Then we’ll recreate the text elements we have inside a 3D world.
From there, we’ll position, scale and make the text responsive with the 3D space. Next, we’ll replicate the “mask
reveal effect” in WebGL. And finally, we'll apply some scroll-driven post processing effects to the scene.
Below are the core steps we'll follow to achieve the final result:
- Create the text as a HTML element and style it regularly using CSS
- Create a 3D world and recreate the text element within it
- Merge the 3D and 2D world, so that we can style the 3D text by using our viewport’s dimensions
- Sync the key properties like position, size and font — from the HTML element to the WebGL text element
- Hide the original HTML element
- Be left with only the 3D text, fully styled and positioned in sync with the hidden HTML structure
- Apply animations and post-processing to enhance our 3D scene
Necessities and Prerequisites
We'll be using the Three.js library to create the 3D world, so you should already be familiar with its basics. For the creation of text meshes, we'll be using the troika-three-text library, but you don't have to be familiar with the library beforehand. If you've used HTML, CSS and JavaScript and know the basics of Three.js, you're good to go.
Let's get started.
1. Creating the Regular HTML and Making it Responsive
Before diving into the WebGL and Three.js implementation, we first need to create the HTML structure that we’ll later
mimic in the 3D world. I’ve set up a very simple page with some quick responsive content — you can find the
setup content
in the demo repository under
index.html
and
styles.css
.HTML
:<div class="content"> <div class="container"> <section class="section__heading"> <h3 data-animation="webgl-text" class="text__2">THREE.JS</h3> <h2 data-animation="webgl-text" class="text__1"> RESPONSIVE AND ACCESSIBLE TEXT </h2> </section> <section class="section__main__content"> <p data-animation="webgl-text" class="text__2"> THIS TEXT IS STYLED TO LOOK LIKE A TYPICAL BLOCK OF TEXT ON A STANDARD WEBSITE. BUT UNDER THE SURFACE, IT'S BEING RENDERED WITH WEBGL INSTEAD OF TRADITIONAL HTML. </p> <p data-animation="webgl-text" class="text__2"> THIS OPENS THE DOOR TO CUSTOM SHADER EFFECTS AND INTERACTIONS THAT GO BEYOND WHAT'S POSSIBLE WITH TRADITIONAL HTML. </p> <p data-animation="webgl-text" class="text__2"> WE KEEP THE UNDERYLING HTML STRUCTURE PRESENT IN THE DOM. RATHER THAN CREATING MESHES DIRECTLY IN THREE.JS, THE SCENE IS BUILT BY READING FROM THE EXISTING HTML CONTENT. THIS WAY, SCREEN READERS, SEARCH ENGINES, AND OTHER TOOLS CAN STILL INTERPRET THE PAGE AS EXPECTED. </p> </section> <section class="section__footer"> <p data-animation="webgl-text" class="text__3"> NOW GO CRAZY WITH THE SHADERS :) </p> </section> </div> </div>
styles.css
:root { --clr-text: #fdcdf9; --clr-selection: rgba(255, 156, 245, 0.3); --clr-background: #212720; } @font-face { font-family: "Humane"; src: url("/fonts/Humane-Black.ttf") format("truetype"); font-weight: 900; font-style: normal; font-display: swap; } @font-face { font-family: "Humane"; src: url("/fonts/Humane-Bold.ttf") format("truetype"); font-weight: 700; font-style: normal; font-display: swap; } @font-face { font-family: "Humane"; src: url("/fonts/Humane-ExtraBold.ttf") format("truetype"); font-weight: 800; font-style: normal; font-display: swap; } @font-face { font-family: "Humane"; src: url("/fonts/Humane-ExtraLight.ttf") format("truetype"); font-weight: 200; font-style: normal; font-display: swap; } @font-face { font-family: "Humane"; src: url("/fonts/Humane-Light.ttf") format("truetype"); font-weight: 300; font-style: normal; font-display: swap; } @font-face { font-family: "Humane"; src: url("/fonts/Humane-Medium.ttf") format("truetype"); font-weight: 500; font-style: normal; font-display: swap; } @font-face { font-family: "Humane"; src: url("/fonts/Humane-Regular.ttf") format("truetype"); font-weight: 400; font-style: normal; font-display: swap; } @font-face { font-family: "Humane"; src: url("/fonts/Humane-SemiBold.ttf") format("truetype"); font-weight: 600; font-style: normal; font-display: swap; } @font-face { font-family: "Humane"; src: url("/fonts/Humane-Thin.ttf") format("truetype"); font-weight: 100; font-style: normal; font-display: swap; } body { background: var(--clr-background); } canvas { position: fixed; top: 0; left: 0; width: 100vw; height: 100vh; pointer-events: none; } ::selection { background-color: var(--clr-selection); color: var(--clr-text); } ::-moz-selection { background-color: var(--clr-selection); color: var(--clr-text); } .text__1, .text__2, .text__3 { color: var(--clr-text); text-align: center; margin-block-start: 0; margin-block-end: 0; } .content { width: 100%; font-family: Humane; font-size: 0.825vw; @media (max-width: 768px) { font-size: 2vw; } } .container { display: flex; flex-direction: column; align-items: center; width: 70em; gap: 17.6em; padding: 6em 0; @media (max-width: 768px) { width: 100%; } } .container section { display: flex; flex-direction: column; align-items: center; height: auto; } .section__main__content { gap: 5.6em; } .text__1 { font-size: 19.4em; font-weight: 700; max-width: 45em; @media (max-width: 768px) { font-size: 13.979em; } } .text__2 { font-size: 4.9em; max-width: 7.6em; letter-spacing: 0.01em; } .text__3 { font-size: 13.979em; max-width: 2.4em; }
A Few Key Notes about the Setup
-
The
<canvas>
element is set to cover the entire screen, fixed in place behind the main content. We want a full screen canvas
covering the entire screen behind our main content at all times. -
All text elements intended for WebGL rendering are marked with data-animation=”webgl-text” for clarity and easy
selection when we begin scripting.
The purpose of this setup is to function as the “placeholder” that we can mimic in our 3D implementation. So, it’s
important to
position and style your text at this stage
to ensure it matches the final sizing and positioning that you want to achieve. All text formatting properties like
font-size, letter-spacing, line-height etc. are the properties you want to focus on, because we’ll later read these
computed styles directly from the DOM during the WebGL phase. Color is optional here, as we can handle text coloring
later with shaders inside WebGL.That’s it for the HTML and CSS setup! It’s all we need for the foundation to move onto our JavaScript and WebGL
implementation.2. Initial 3D World Setup
Let’s move onto the JavaScript and WebGL implementation. I’ll be using TypeScript, but you can easily follow along
with vanilla JavaScript if you prefer. I’m assuming you’re already familiar with the basics of Three.js, so I’ll focus
on explaining the high-level setup rather than covering every detail.Below is the starter TypeScript and Three.JS base that I’ll be using for this demo.
// main.ts import Commons from "./classes/Commons"; import * as THREE from "three"; /** * Main entry-point. * Creates Commons and Scenes * Starts the update loop * Eventually creates Postprocessing and Texts. */ class App { private commons!: Commons; scene!: THREE.Scene; constructor() { document.addEventListener("DOMContentLoaded", async () => { await document.fonts.ready; // Important to wait for fonts to load when animating any texts. this.commons = Commons.getInstance(); this.commons.init(); this.createScene(); this.addEventListeners(); this.update(); }); } private createScene() { this.scene = new THREE.Scene(); } /** * The main loop handler of the App * The update function to be called on each frame of the browser. * Calls update on all other parts of the app */ private update() { this.commons.update(); this.commons.renderer.render(this.scene, this.commons.camera); window.requestAnimationFrame(this.update.bind(this)); } private addEventListeners() { window.addEventListener("resize", this.onResize.bind(this)); } private onResize() { this.commons.onResize(); } } export default new App();
// Commons.ts import { PerspectiveCamera, WebGLRenderer, Clock } from "three"; import Lenis from "lenis"; export interface Screen { width: number; height: number; aspect: number; } export interface Sizes { screen: Screen; pixelRatio: number } /** * Singleton class for Common stuff. * Camera * Renderer * Lenis * Time */ export default class Commons { private constructor() {} private static instance: Commons; lenis!: Lenis; camera!: PerspectiveCamera; renderer!: WebGLRenderer; private time: Clock = new Clock(); elapsedTime!: number; sizes: Sizes = { screen: { width: window.innerWidth, height: window.innerHeight, aspect: window.innerWidth / window.innerHeight, }, pixelRatio: this.getPixelRatio(), }; private distanceFromCamera: number = 1000; /** * Function to be called to either create Commons Singleton instance, or to return existing one. * TODO AFTER: Call instances init() function. * @returns Commons Singleton Instance. */ static getInstance() { if (this.instance) return this.instance; this.instance = new Commons(); return this.instance; } /** * Initializes all-things Commons. To be called after instance is set. */ init() { this.createLenis(); this.createCamera(); this.createRenderer(); } /** * Creating Lenis instance. * Sets autoRaf to true so we don't have to manually update Lenis on every frame. * Resets possible saved scroll position. */ private createLenis() { this.lenis = new Lenis({ autoRaf: true, duration: 2 }); } private createCamera() { this.camera = new PerspectiveCamera( 70, this.sizes.screen.aspect, 200, 2000 ); this.camera.position.z = this.distanceFromCamera; this.camera.updateProjectionMatrix(); } /** * createRenderer(): Creates the common WebGLRenderer to be used. */ private createRenderer() { this.renderer = new WebGLRenderer({ alpha: true, // Sets scene background to transparent, so our body background defines the background color }); this.renderer.setSize(this.sizes.screen.width, this.sizes.screen.height); this.renderer.setPixelRatio(this.sizes.pixelRatio); // Creating canvas element and appending to body element. document.body.appendChild(this.renderer.domElement); } /** * Single source of truth to get pixelRatio. */ getPixelRatio() { return Math.min(window.devicePixelRatio, 2); } /** * Resize handler function is called from the entry-point (main.ts) * Updates the Common screen dimensions. * Updates the renderer. * Updates the camera. */ onResize() { this.sizes.screen = { width: window.innerWidth, height: window.innerHeight, aspect: window.innerWidth / window.innerHeight, }; this.sizes.pixelRatio = this.getPixelRatio(); this.renderer.setSize(this.sizes.screen.width, this.sizes.screen.height); this.renderer.setPixelRatio(this.sizes.pixelRatio); this.onResizeCamera(); } /** * Handler function that is called from onResize handler. * Updates the perspective camera with the new adjusted screen dimensions */ private onResizeCamera() { this.camera.aspect = this.sizes.screen.aspect; this.camera.updateProjectionMatrix(); } /** * Update function to be called from entry-point (main.ts) */ update() { this.elapsedTime = this.time.getElapsedTime(); } }
A Note About Smooth Scroll

When syncing the HTML and WebGL worlds, you should use a custom scroll. Native browser scrolling updates the scroll position at irregular intervals, so it can’t guarantee frame-perfect updates with our requestAnimationFrame loop and our WebGL world, which causes jittery, unsynchronized movement. By integrating a custom scroll (Lenis in this case), we ensure our scroll updates perfectly match the frame updates of our WebGL world.
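As a side note: we set autoRaf to true in Commons, so Lenis hooks into its own requestAnimationFrame internally. If you ever want to control the update order yourself, Lenis can also be driven manually. A minimal sketch using the standard Lenis API:

// A minimal sketch: driving Lenis manually instead of using autoRaf: true.
import Lenis from "lenis";

const lenis = new Lenis({ autoRaf: false, duration: 2 });

function raf(time: number) {
  lenis.raf(time); // Advance Lenis with the current timestamp
  window.requestAnimationFrame(raf);
}

window.requestAnimationFrame(raf);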
Right now we are seeing an empty 3D world, continuously being rendered. We’re only missing one thing to get something visible in our 3D world: the creation of the text elements. So let’s move on to creating our WebGLText class next.

3. Creating the WebGLText Class and Text Meshes
For the creation of the text meshes, we’ll be using the troika-three-text library.

npm i troika-three-text

We’ll now create a reusable WebGLText class. This will handle turning each HTML element into a 3D text mesh, using Troika and our Three.js scene. Here’s the basic setup:
// WebGLText.ts
import Commons from "./Commons";
import * as THREE from "three";

// @ts-ignore
import { Text } from "troika-three-text";

interface Props {
  scene: THREE.Scene;
  element: HTMLElement;
}

export default class WebGLText {
  commons: Commons;

  scene: THREE.Scene;
  element: HTMLElement;

  computedStyle: CSSStyleDeclaration;
  font!: string; // Path to our .ttf font file.
  bounds!: DOMRect;
  color!: THREE.Color;
  material!: THREE.ShaderMaterial;
  mesh!: Text;

  // We assign the correct font based on our element's font weight from here
  weightToFontMap: Record<string, string> = {
    "900": "/fonts/Humane-Black.ttf",
    "800": "/fonts/Humane-ExtraBold.ttf",
    "700": "/fonts/Humane-Bold.ttf",
    "600": "/fonts/Humane-SemiBold.ttf",
    "500": "/fonts/Humane-Medium.ttf",
    "400": "/fonts/Humane-Regular.ttf",
    "300": "/fonts/Humane-Light.ttf",
    "200": "/fonts/Humane-ExtraLight.ttf",
    "100": "/fonts/Humane-Thin.ttf",
  };

  private y: number = 0; // Scroll-adjusted bounds.top

  private isVisible: boolean = false;

  constructor({ scene, element }: Props) {
    this.commons = Commons.getInstance();

    this.scene = scene;
    this.element = element;

    this.computedStyle = window.getComputedStyle(this.element); // Saving initial computed style.
  }
}
We have access to the Text class from Troika, which allows us to create text meshes and style them using familiar properties like fontSize, letterSpacing, and font. I’ll cover everything you need to style your text responsively in this tutorial, but I encourage you to take a look at the full documentation and its possibilities here.

Troika doesn’t ship with TypeScript definitions out of the box, so if you’re using TS, you can quickly get around this by creating a type declaration file in the root of your project. It’s not pretty, but it gets the job done while keeping TypeScript happy.

// troika.d.ts
declare module "troika-three-text" {
  const value: any;
  export default value;
}
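If you’d like a bit more type safety than a plain any, a partial hand-written declaration also works. Note that this is a sketch, not official typings; it only lists the properties we use in this tutorial:

// troika.d.ts (a stricter, hand-written sketch)
declare module "troika-three-text" {
  import * as THREE from "three";

  export class Text extends THREE.Mesh {
    text: string;
    font: string;
    fontSize: number;
    anchorX: number | string;
    anchorY: number | string;
    color: THREE.Color | string | number;
    letterSpacing: number;
    lineHeight: number | string;
    maxWidth: number;
    whiteSpace: string;
    textAlign: string;
  }
}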
Let’s start by creating three new methods: createFont(), createColor(), and createMesh().

createFont(): Selects the appropriate .ttf file based on the DOM element’s font-weight. If a match isn’t found, we fall back to the regular weight. Adjust the mapping to match your own font files and families as needed.

// WebGLText.ts
private createFont() {
  this.font =
    this.weightToFontMap[this.computedStyle.fontWeight] ||
    "/fonts/Humane-Regular.ttf";
}
createColor(): Converts the computed CSS color into a THREE.Color instance:

// WebGLText.ts
private createColor() {
  this.color = new THREE.Color(this.computedStyle.color);
}
createMesh(): Instantiates the text mesh, copies the element’s inner text onto it, sets some basic properties, and adds the mesh to our Three.js scene. We anchor the text from the left-center to match typical HTML layout expectations.

// WebGLText.ts
private createMesh() {
  this.mesh = new Text();

  this.mesh.text = this.element.innerText; // Copying HTML content over to the mesh
  this.mesh.font = this.font;

  // Anchor the text to the left-center (instead of center-center)
  this.mesh.anchorX = "0%";
  this.mesh.anchorY = "50%";

  this.mesh.color = this.color;

  this.scene.add(this.mesh);
}
⚠️ When copying text contents over to the mesh, avoid innerHTML or textContent and use innerText instead, as it gives the most layout-accurate and consistent results.

setStaticValues(): Let’s also create a small setStaticValues() method, which will set the critical properties of our text mesh based on the computed style. For now it sets values like font size based on the computed CSS; we’ll expand it as we sync more styles down the line.
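A minimal first version might look like the following sketch; we’ll grow it in section 5:

// WebGLText.ts
private setStaticValues() {
  const { fontSize } = this.computedStyle;

  // For now we only sync the font size; more properties follow later.
  this.mesh.fontSize = window.parseFloat(fontSize);
}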
We want to call all these methods in the constructor like this:
// WebGLText.ts
constructor({ scene, element }: Props) {
  this.commons = Commons.getInstance();

  this.scene = scene;
  this.element = element;

  this.computedStyle = window.getComputedStyle(this.element); // Saving initial computed style.

  this.createFont();
  this.createColor();
  this.createMesh();
  this.setStaticValues();
}
Instantiating Text Elements from DOM
Finally, let’s update our App class (main.ts) and hook this all up by scanning for DOM elements with a data-animation="webgl-text" attribute, creating a WebGLText instance for each one:

// main.ts
texts!: Array<WebGLText>;

// ...

private createWebGLTexts() {
  const texts = document.querySelectorAll('[data-animation="webgl-text"]');

  if (texts) {
    this.texts = Array.from(texts).map((el) => {
      const newEl = new WebGLText({
        element: el as HTMLElement,
        scene: this.scene,
      });

      return newEl;
    });
  }
}
Make sure to call this method in the constructor on initialization. This will populate our scene with styled text meshes based on our DOM content.
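For clarity, this is roughly where the call lands in the App constructor (a sketch of the entry point as assembled so far):

// main.ts (sketch)
constructor() {
  document.addEventListener("DOMContentLoaded", async () => {
    await document.fonts.ready;

    this.commons = Commons.getInstance();
    this.commons.init();

    this.createScene();
    this.createWebGLTexts(); // Creating the text meshes from the DOM

    this.addEventListeners();

    this.update();
  });
}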
That’s all we need to have our text meshes visible. It’s not the prettiest sight to behold, but at least we got everything working:

Next Challenge: Screen vs. 3D Space Mismatch
Even though we copy the font size directly from the DOM, the scale looks different in 3D. That’s because WebGL units don’t map 1:1 to screen pixels; the two operate in different coordinate systems. The mismatch becomes even more obvious once we start positioning and animating elements.

To get true visual parity between our DOM elements and WebGL text, we need to bridge the gap between screen space and 3D space. Let’s tackle that next so our text sizes and positions actually match what we see in the browser.

4. Syncing Dimensions
The core problem when syncing HTML and WebGL dimensions is that the two aren’t pixel-perfect out of the box: the DOM and WebGL don’t “speak the same units” by default.
- Web browsers work in screen pixels.
- WebGL uses arbitrary units.
Our goal is simple:
💡 Make one unit in the WebGL scene equal one pixel on the screen.
To achieve this, we’ll adjust the camera’s field of view (FOV) so that the visible area through the camera exactly matches the dimensions of the browser window in pixels.

So, we’ll create a syncDimensions() function in our Commons class, which calculates the camera’s field of view such that 1 unit in the WebGL scene corresponds to 1 pixel on the screen, at a given distance from the camera.

// Commons.ts
/**
 * Helper function that is called upon creation and resize
 * Updates the camera's fov according to the new dimensions such that the window's pixels match those of the WebGL scene
 */
private syncDimensions() {
  this.camera.fov =
    2 *
    Math.atan(this.sizes.screen.height / 2 / this.distanceFromCamera) *
    (180 / Math.PI);
}
This function will be called once when we create the camera, and every time that the screen is resized.
//Commons.ts
private createCamera() {
  this.camera = new PerspectiveCamera(
    70,
    this.sizes.screen.aspect,
    200,
    2000
  );

  this.camera.position.z = this.distanceFromCamera;

  this.syncDimensions(); // Syncing dimensions

  this.camera.updateProjectionMatrix();
}

// ...

private onResizeCamera() {
  this.syncDimensions(); // Syncing dimensions

  this.camera.aspect = this.sizes.screen.aspect;
  this.camera.updateProjectionMatrix();
}
Let’s break down what’s actually going on here using the image below:
We know:
- The height of the screen
- The distance from camera (Z)
- The FOV of the camera is the vertical angle (fov y in the image)
So our main goal is to find the vertical angle of view that exactly covers the screen height at that distance.

Because the distance from the camera (Z) and half of the screen height form the two legs of a right triangle, we can solve for the angle with basic trigonometry: we compute half the FOV with the inverse tangent (atan) and double it.

Step-by-step Breakdown of the Formula
- this.sizes.screen.height / 2 → half the screen’s pixel height; the opposite side of our triangle.
- this.distanceFromCamera → the adjacent side of the triangle; the distance from the camera to the 3D scene.
- Math.atan(opposite / adjacent) → calculates half of the vertical FOV (in radians).
- * 2 → since atan only gives half of the angle, we multiply by 2 to get the full FOV.
- * (180 / Math.PI) → converts the angle from radians to degrees (Three.js expects degrees for a PerspectiveCamera’s fov).
So the final formula comes down to:
this.camera.fov = 2 * Math.atan(this.sizes.screen.height / 2 / this.distanceFromCamera) * (180 / Math.PI);
That’s all we need to sync our dimensions, and this setup ensures that 1 unit in WebGL = 1 pixel on screen.
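As a quick sanity check of the formula (assuming a window 1080 pixels tall and our distanceFromCamera of 1000):

// Hypothetical numbers, purely for illustration.
const height = 1080;
const distance = 1000;

const fov = 2 * Math.atan(height / 2 / distance) * (180 / Math.PI);

console.log(fov.toFixed(1)); // ≈ 56.7 (degrees)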
Let’s move back to the text implementation.
5. Setting Text Properties and Positioning
Now that we’ve synced the WebGL scene to our screen’s pixel dimensions, we can start mapping HTML styles to our 3D text.

If everything’s wired up correctly, the font size of the WebGL-rendered text should match the size of the underlying HTML, although the positioning is still off. Let’s sync more styling properties and positioning.
Before we can position the 3D text, we need the DOM element’s position and size. We’ll create a new method in the WebGLText class called createBounds(), using the browser’s built-in getBoundingClientRect() method:

// WebGLText.ts
private createBounds() {
  this.bounds = this.element.getBoundingClientRect();
  this.y = this.bounds.top + this.commons.lenis.actualScroll;
}
And call this in the constructor:
// WebGLText.ts
constructor({ scene, element }: Props) {
  this.commons = Commons.getInstance();

  this.scene = scene;
  this.element = element;

  this.computedStyle = window.getComputedStyle(this.element); // Saving initial computed style.

  this.createFont();
  this.createColor();
  this.createBounds(); // Creating bounds
  this.createMesh();
  this.setStaticValues();
}
Next, we’ll pull the important typographic properties from the computed style and pass them to the 3D mesh, so that it behaves like our native HTML text. (Again, you can see the full documentation and possible properties of Troika here.) Below I’ve included the most important ones.

// WebGLText.ts
private setStaticValues() {
  const { fontSize, letterSpacing, lineHeight, whiteSpace, textAlign } =
    this.computedStyle;

  const fontSizeNum = window.parseFloat(fontSize);

  this.mesh.fontSize = fontSizeNum;

  this.mesh.textAlign = textAlign;

  // Troika defines letter spacing in em's, so we convert to them
  this.mesh.letterSpacing = parseFloat(letterSpacing) / fontSizeNum;

  // Same with line height
  this.mesh.lineHeight = parseFloat(lineHeight) / fontSizeNum;

  // Important to define maxWidth for the mesh, so that our text doesn't overflow
  this.mesh.maxWidth = this.bounds.width;

  // Match whiteSpace behavior (e.g., 'pre', 'nowrap')
  this.mesh.whiteSpace = whiteSpace;
}
Troika accepts some of these properties in local em units, so we convert from pixels to em’s by dividing the pixel values by the font size. For example, a letterSpacing of 2px at a 16px font size becomes 2 / 16 = 0.125em.

It’s also important to set a maximum width (in pixels) to constrain the mesh’s layout; this prevents text from overflowing and ensures proper text wrapping.

And finally, let’s create an update() function, called on each frame, that consistently positions our mesh according to the underlying DOM position. This is what it looks like:
//WebGLText.ts
update() {
  this.mesh.position.y =
    -this.y +
    this.commons.lenis.animatedScroll +
    this.commons.sizes.screen.height / 2 -
    this.bounds.height / 2;

  this.mesh.position.x =
    this.bounds.left - this.commons.sizes.screen.width / 2;
}
Breakdown:
- this.y shifts the mesh upward by the element’s absolute Y offset.
- lenis.animatedScroll re-applies the live animated scroll position.
- Together, they give the element’s current position inside the viewport.

Since our WebGL coordinate system is centered in the middle of the screen (Y = 0 is the center), we also:
- Add half the screen height (to convert from the DOM’s top-left origin to WebGL’s center origin)
- Subtract half the text height to vertically center the text
- Subtract half the screen width on the X axis, for the same reason (see the worked example below)
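To make the math concrete, here is a worked example with made-up numbers (a 1920×1080 viewport, an element whose absolute top is 200px, height 50px, left 100px, and no scroll yet):

// Hypothetical values, purely for illustration:
// screen = 1920 x 1080, bounds.top (absolute) = 200,
// bounds.height = 50, bounds.left = 100, animatedScroll = 0
//
// position.y = -200 + 0 + 1080 / 2 - 50 / 2 = 315   -> 315 units above center
// position.x = 100 - 1920 / 2 = -860                -> 860 units left of center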
Now, we call this update function for each of the text instances in our entry-file:
// main.ts
private update() {
  this.commons.update();

  this.commons.renderer.render(this.scene, this.commons.camera);

  if (this.texts) {
    this.texts.forEach((el) => el.update());
  }

  window.requestAnimationFrame(this.update.bind(this));
}
And now the texts will perfectly follow their DOM counterparts, even as the user scrolls. Let’s finalize our base text class implementation before diving into effects:
Resizing
We need to ensure that our WebGL text updates correctly on window resize events. This means recreating the computedStyle, bounds, and static values whenever the window size changes. Here’s the resize event handler:

// WebGLText.ts
onResize() {
  this.computedStyle = window.getComputedStyle(this.element);
  this.createBounds();
  this.setStaticValues();
}
And, call it in the entry-point for each of the text instances:
// main.ts
private onResize() {
  this.commons.onResize();

  // Resizing texts
  if (this.texts) {
    this.texts.forEach((el) => el.onResize());
  }
}
Once everything is working responsively and perfectly synced with the DOM, we can finally hide the original HTML text by setting it transparent. We keep the element in place, though, so the text stays selectable and accessible to the user.

// WebGLText.ts
this.createFont();
this.createColor();
this.createBounds();
this.createMesh();
this.setStaticValues();

this.element.style.color = "transparent"; // Hide DOM element
We should now have perfectly responsive text meshes: the user only sees the rendered WebGL text, while the DOM element remains fully intact for accessibility. Let’s add some effects!
6. Adding a Custom Shader and Replicating Mask Reveal Animations

Troika also lets us use custom shader materials for meshes, giving us the flexibility to create complex effects beyond just setting colors. Let’s set up our initial custom shaders:
Fragment Shader:
// text.frag
uniform vec3 uColor;

varying vec2 vUv;

void main() {
  gl_FragColor = vec4(uColor, 1.0); // Applying our custom color.
}
The fragment shader defines the color of the text using the uColor uniform.
Vertex Shader:
// text.vert
varying vec2 vUv;

void main() {
  vUv = uv;

  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
The vertex shader passes the texture coordinates (uv) to the fragment shader for the text rendering.
Shader File Imports using Vite
To handle shader files more easily, we can use the vite-plugin-glsl plugin together with Vite to import shader files like .frag and .vert directly in code:

npm i vite-plugin-glsl -D
// vite.config.ts
import { defineConfig } from "vite";
import glsl from "vite-plugin-glsl";

export default defineConfig({
  plugins: [
    glsl({
      include: [
        "**/*.glsl",
        "**/*.wgsl",
        "**/*.vert",
        "**/*.frag",
        "**/*.vs",
        "**/*.fs",
      ],
      warnDuplicatedImports: true,
      defaultExtension: "glsl",
      watch: true,
      root: "/",
    }),
  ],
});
If you’re using TypeScript, you also need to declare modules for the shader files so TypeScript knows how to import them:

// shaders.d.ts
declare module "*.frag" {
  const value: string;
  export default value;
}

declare module "*.vert" {
  const value: string;
  export default value;
}

declare module "*.glsl" {
  const value: string;
  export default value;
}
Creating Custom Shader Materials
Let’s now create our custom ShaderMaterial and apply it to our mesh:
// WebGLText.ts

// Importing shaders
import fragmentShader from "../../shaders/text/text.frag";
import vertexShader from "../../shaders/text/text.vert";

//...

this.createFont();
this.createColor();
this.createBounds();
this.createMaterial(); // Creating material
this.createMesh();
this.setStaticValues();

//...

private createMaterial() {
  this.material = new THREE.ShaderMaterial({
    fragmentShader,
    vertexShader,
    uniforms: {
      uColor: new THREE.Uniform(this.color), // Passing our color to the shader
    },
  });
}
In the createMaterial() method, we define the ShaderMaterial using the imported shaders and pass in the uColor uniform, which lets us dynamically control the color of the text based on our DOM element.

And now, instead of setting the color directly on the default mesh material, we apply our new custom material:
// WebGLText.ts
private createMesh() {
  this.mesh = new Text();

  this.mesh.text = this.element.innerText; // Always use innerText (not innerHTML or textContent).
  this.mesh.font = this.font;

  this.mesh.anchorX = "0%";
  this.mesh.anchorY = "50%";

  this.mesh.material = this.material; // Using custom material instead of color
}
At this point we are using our custom shader material, but so far nothing in the output has changed. Let’s now set up show and hide animations with our custom shader and replicate the mask reveal effect.

Setting up Reveal Animations
We’ll create an animation that uses a progress uniform (uProgress) to control the visibility and reveal progress of the text. The animation will be driven by the motion library.

First, we install motion and import its animate and inView functions into our WebGLText class:

npm i motion

// WebGLText.ts
import { inView, animate } from "motion";
Now, let’s configure the class so that show() is called when the text steps into view and hide() is called when it steps out. These methods also maintain the visibility flag this.isVisible and animate the uProgress uniform between 0 and 1.

For this, we set up an addEventListeners() function:

// WebGLText.ts
/**
 * Inits visibility tracking using motion's inView function.
 * Show is called when the element steps into view, and hide is called when the element steps out of view
 */
private addEventListeners() {
  inView(this.element, () => {
    this.show();

    return () => this.hide();
  });
}

show() {
  this.isVisible = true;

  animate(
    this.material.uniforms.uProgress,
    { value: 1 },
    { duration: 1.8, ease: [0.25, 1, 0.5, 1] }
  );
}

hide() {
  animate(
    this.material.uniforms.uProgress,
    { value: 0 },
    { duration: 1.8, onComplete: () => (this.isVisible = false) }
  );
}
Just make sure to call addEventListeners() in your constructor after setting up the class.
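For reference, here’s roughly what the full constructor looks like at this point (a sketch consolidating the calls we’ve added so far):

// WebGLText.ts (sketch of the constructor at this stage)
constructor({ scene, element }: Props) {
  this.commons = Commons.getInstance();

  this.scene = scene;
  this.element = element;

  this.computedStyle = window.getComputedStyle(this.element);

  this.createFont();
  this.createColor();
  this.createBounds();
  this.createMaterial();
  this.createMesh();
  this.setStaticValues();
  this.addEventListeners();

  this.element.style.color = "transparent"; // Hide DOM element
}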
Updating the Shader Material for Animation
We’ll also add two additional uniform variables in our material for the animations:
- uProgress: Controls the reveal progress (from 0 to 1).
- uHeight: Used by the vertex shader to calculate the vertical position offset.
Updated createMaterial() method:

// WebGLText.ts
private createMaterial() {
  this.material = new THREE.ShaderMaterial({
    fragmentShader,
    vertexShader,
    uniforms: {
      uProgress: new THREE.Uniform(0),
      uHeight: new THREE.Uniform(this.bounds.height),
      uColor: new THREE.Uniform(this.color),
    },
  });
}
Since the uHeight is dependent on bounds, we also want to update the uniform variable upon resizing:
// WebGLText.ts
onResize() {
  this.computedStyle = window.getComputedStyle(this.element);
  this.createBounds();
  this.setStaticValues();
  this.material.uniforms.uHeight.value = this.bounds.height;
}
We now have the text class automatically calling show() and hide(), animating uProgress according to the visibility of the underlying DOM element.

For performance, you might want to update the update() method so it only recalculates the position while the mesh is visible:

// WebGLText.ts
update() {
  if (this.isVisible) {
    this.mesh.position.y =
      -this.y +
      this.commons.lenis.animatedScroll +
      this.commons.sizes.screen.height / 2 -
      this.bounds.height / 2;

    this.mesh.position.x =
      this.bounds.left - this.commons.sizes.screen.width / 2;
  }
}
Mask Reveal Theory and Shader Implementation
Creating a mask reveal effect with custom shaders in WebGL is surprisingly simple when we break it down into two separate movements: one happening in the fragment shader and the other in the vertex shader. You might have seen this effect in WebGL on Zajno’s page, for example.

Instead of overcomplicating the concept with complex masks or thinking about “lifting it up” behind a window (as we would in traditional HTML), we can think of it as two distinct actions that work together.
- Fragment Shader: We clip the text vertically, revealing it gradually from top to bottom.
- Vertex Shader: We translate the text’s position from the bottom to the top by its height.
Together these two movements create the illusion of the text lifting itself up from behind a mask.
Let’s update our fragment shader code:
//text.frag
uniform float uProgress; // Our progress value between 0 and 1

uniform vec3 uColor;

varying vec2 vUv;

void main() {
  // Reveal threshold: 0 at the top of the mesh, 1 at the bottom
  float reveal = 1.0 - vUv.y;

  // Discard fragments that haven't been revealed yet based on progress
  if (reveal > uProgress) discard;

  // Apply the color to the visible parts of the text
  gl_FragColor = vec4(uColor, 1.0);
}
- When uProgress is 0, the mesh is fully clipped out and nothing is visible.
- As uProgress increases towards 1, the mesh reveals itself from top to bottom.
For the vertex shader, we simply pass in the new uHeight uniform, which holds the height of our DOM element (this.bounds.height), and translate the output vertically according to it and uProgress.

//text.vert
uniform float uProgress;
uniform float uHeight; // Total height of the mesh passed in from JS

varying vec2 vUv;

void main() {
  vUv = uv;

  vec3 transformedPosition = position;

  // Push the mesh upward as it reveals
  transformedPosition.y -= uHeight * (1.0 - uProgress);

  gl_Position = projectionMatrix * modelViewMatrix * vec4(transformedPosition, 1.0);
}
- uHeight: Total height of the DOM element (and mesh), passed in from JS.
- When uProgress is 0, the mesh is fully pushed down.
- As uProgress reaches 1, it returns to its natural position.
Now we should have a beautiful scene animating on scroll, where the texts reveal themselves just like regular HTML as they come into view.

To spice things up, let’s add some scroll-velocity-based post-processing effects to our scene as the final step!
7. Adding Post-processing
Now that we’ve built our animated WebGL text with custom shaders and scroll-triggered reveals, we can push the visuals further with post-processing.

Post-processing lets us apply full-screen visual effects after the scene has been rendered, by passing the final image through a series of custom shader passes. So, in this final section, we’ll:
- Set up a PostProcessing class using Three.js’s EffectComposer
- Add a custom RGB shift and wave distortion effect
- Drive the distortion strength dynamically using the scroll velocity from our Lenis custom scroll instance
Creating a PostProcessing class with EffectComposer
Let’s create a PostProcessing class that will be initialized from our entry point and will handle everything related to post-processing, using Three.js’s EffectComposer. You can read more about the EffectComposer class here, in the Three.js documentation. We’ll also create new fragment and vertex shaders for the post-processing class to use.

// PostProcessing.ts
import {
  EffectComposer,
  RenderPass,
  ShaderPass,
} from "three/examples/jsm/Addons.js";
import Commons from "./Commons";
import * as THREE from "three";

// Importing postprocessing shaders
import fragmentShader from "../../shaders/postprocessing/postprocessing.frag";
import vertexShader from "../../shaders/postprocessing/postprocessing.vert";

interface Props {
  scene: THREE.Scene;
}

export default class PostProcessing {
  // Scene and utility references
  private commons: Commons;
  private scene: THREE.Scene;

  private composer!: EffectComposer;

  private renderPass!: RenderPass;
  private shiftPass!: ShaderPass;

  constructor({ scene }: Props) {
    this.commons = Commons.getInstance();

    this.scene = scene;

    this.createComposer();
    this.createPasses();
  }

  private createComposer() {
    this.composer = new EffectComposer(this.commons.renderer);

    this.composer.setPixelRatio(this.commons.sizes.pixelRatio);
    this.composer.setSize(
      this.commons.sizes.screen.width,
      this.commons.sizes.screen.height
    );
  }

  private createPasses() {
    // Creating Render Pass (final output) first.
    this.renderPass = new RenderPass(this.scene, this.commons.camera);
    this.composer.addPass(this.renderPass);

    // Creating Post-processing shader for wave and RGB-shift effect.
    const shiftShader = {
      uniforms: {
        tDiffuse: { value: null }, // Default input from previous pass
        uVelocity: { value: 0 }, // Scroll velocity input
        uTime: { value: 0 }, // Elapsed time for animated distortion
      },
      vertexShader,
      fragmentShader,
    };

    this.shiftPass = new ShaderPass(shiftShader);
    this.composer.addPass(this.shiftPass);
  }

  /**
   * Resize handler for EffectComposer, called from entry-point.
   */
  onResize() {
    this.composer.setPixelRatio(this.commons.sizes.pixelRatio);
    this.composer.setSize(
      this.commons.sizes.screen.width,
      this.commons.sizes.screen.height
    );
  }

  update() {
    this.shiftPass.uniforms.uTime.value = this.commons.elapsedTime;

    this.composer.render();
  }
}
Since we haven’t created the post-processing shaders yet, make sure you create placeholder postprocessing.frag and postprocessing.vert files so the imports don’t fail. Example placeholders below:

//postprocessing.frag
uniform sampler2D tDiffuse;
uniform float uVelocity;
uniform float uTime;

varying vec2 vUv;

void main() {
  gl_FragColor = texture2D(tDiffuse, vUv);
}

//postprocessing.vert
varying vec2 vUv;

void main() {
  vUv = uv;

  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
Breakdown of the PostProcessing class
Constructor: Initializes the class by storing the provided scene, grabbing the shared Commons instance, and then calling createComposer() and createPasses().

createComposer(): Sets up the EffectComposer with the correct pixel ratio and canvas size:
- EffectComposer wraps the WebGL renderer and allows chaining of multiple render passes.
- It is sized according to the current viewport dimensions and pixel ratio.

createPasses(): Sets up all rendering passes applied to the scene.
- RenderPass: The first pass, which simply renders the scene with the main camera as usual.
- ShaderPass (shiftPass): A custom full-screen shader pass that we’ll create next, producing the RGB shift and wavy distortion effects.
update(): Called on every frame. Updates the uTime uniform so we can animate effects over time, and renders the final post-processed image using composer.render().

Initializing Post-processing
To wire the post-processing system into our existing app, we update our main.ts:
//main.ts
private postProcessing!: PostProcessing;

//....

constructor() {
  document.addEventListener("DOMContentLoaded", async () => {
    await document.fonts.ready;

    this.commons = Commons.getInstance();
    this.commons.init();

    this.createScene();
    this.createWebGLTexts();
    this.createPostProcessing(); // Creating post-processing
    this.addEventListeners();

    this.update();
  });
}

// ...

private createPostProcessing() {
  this.postProcessing = new PostProcessing({ scene: this.scene });
}

// ...

private update() {
  this.commons.update();

  if (this.texts) {
    this.texts.forEach((el) => el.update());
  }

  // Don't need line below as we're rendering everything using EffectComposer.
  // this.commons.renderer.render(this.scene, this.commons.camera);

  this.postProcessing.update(); // Post-processing class handles rendering of output from now on

  window.requestAnimationFrame(this.update.bind(this));
}

private onResize() {
  this.commons.onResize();

  if (this.texts) {
    this.texts.forEach((el) => el.onResize());
  }

  this.postProcessing.onResize(); // Resize post-processing
}
So in the new update() function, instead of rendering directly, we hand rendering responsibility off to the PostProcessing class.

Creating the Post-processing Shader and Wiring Up Scroll Velocity

We now modify the PostProcessing class so that it feeds the current scroll velocity from Lenis into the post-processing fragment shader.

For this, I’m adding two new properties, lerpedVelocity and lerpFactor, which control the smoothed-out velocity. The raw velocity values from Lenis can be spiky and sudden, especially with fast scrolling or scroll jumps; passing that raw value directly into a shader can cause a really jittery output.

// PostProcessing.ts
private lerpedVelocity = 0; // Smoothed scroll velocity for post-processing.
private lerpFactor = 0.05; // Controls how quickly lerpedVelocity follows the real velocity

// ...

update() {
  this.shiftPass.uniforms.uTime.value = this.commons.elapsedTime;

  // Reading the current velocity from the lenis instance.
  const targetVelocity = this.commons.lenis.velocity;

  // We use the lerped velocity as the actual velocity for the shader, for a smoother experience.
  this.lerpedVelocity += (targetVelocity - this.lerpedVelocity) * this.lerpFactor;

  this.shiftPass.uniforms.uVelocity.value = this.lerpedVelocity;

  this.composer.render();
}
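A quick note on the design: the += line above is a standard exponential lerp. If you prefer, Three.js ships a small helper that expresses the same thing; this is an equivalent sketch, not a behavior change:

// PostProcessing.ts (equivalent alternative, using Three.js's MathUtils helper)
import { MathUtils } from "three";

// Same smoothing as: this.lerpedVelocity += (targetVelocity - this.lerpedVelocity) * this.lerpFactor;
this.lerpedVelocity = MathUtils.lerp(
  this.lerpedVelocity,
  targetVelocity,
  this.lerpFactor
);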
Post-processing Shaders
For the vertex shader, we keep things default and simply pass the texture coordinates through to the fragment shader.

//postprocessing.vert
varying vec2 vUv;

void main() {
  vUv = uv;

  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
And for the fragment shader:
//postprocessing.frag
uniform sampler2D tDiffuse;
uniform float uVelocity;
uniform float uTime;

varying vec2 vUv;

void main() {
  vec2 uv = vUv;

  // Calculating wave distortion based on velocity
  float waveAmplitude = uVelocity * 0.0009;
  float waveFrequency = 4.0 + uVelocity * 0.01;

  // Applying wave distortion to the UV coordinates
  vec2 waveUv = uv;
  waveUv.x += sin(uv.y * waveFrequency + uTime) * waveAmplitude;
  waveUv.y += sin(uv.x * waveFrequency * 5. + uTime * 0.8) * waveAmplitude;

  // Applying the RGB shift to the wave-distorted coordinates
  float r = texture2D(tDiffuse, vec2(waveUv.x, waveUv.y + uVelocity * 0.0005)).r;
  vec2 gb = texture2D(tDiffuse, waveUv).gb;

  gl_FragColor = vec4(r, gb, r);
}
Breakdown
// Calculating wave distortion based on velocity
float waveAmplitude = uVelocity * 0.0009;
float waveFrequency = 4.0 + uVelocity * 0.01;

The wave amplitude controls how strongly the wave distorts the screen, scaling with our scroll velocity; the wave frequency controls how many waves appear. For example, at a velocity of 20 the amplitude is 20 * 0.0009 = 0.018 UV units, and the frequency rises slightly from 4.0 to 4.2.
Next, we distort the UV-coordinates using sin functions and the uTime uniform:
// Applying wave distortion to the UV coordinates
vec2 waveUv = uv;
waveUv.x += sin(uv.y * waveFrequency + uTime) * waveAmplitude;
waveUv.y += sin(uv.x * waveFrequency * 5. + uTime * 0.8) * waveAmplitude;
The red channel is sampled at a slightly offset coordinate based on the velocity, creating the RGB shift effect.

// Applying the RGB shift to the wave-distorted coordinates
float r = texture2D(tDiffuse, vec2(waveUv.x, waveUv.y + uVelocity * 0.0005)).r;
vec2 gb = texture2D(tDiffuse, waveUv).gb;

gl_FragColor = vec4(r, gb, r);

This creates a subtle color separation in the final image that shifts according to our scroll velocity. Finally, we combine the shifted red channel with the green and blue samples into the output color (note that the red value is also reused as the alpha component here).
8. Final Result
And there you have it! We’ve created a responsive text scene with scroll-triggered mask-reveal animations and wavy, RGB-shifted post-processing.

This setup provides a solid, modular foundation for building more advanced WebGL text effects. If you’re curious to explore further, consider adding particles, fluid simulations, audio reactivity, or more complex materials and shaders. If you’re interested in breakdowns of any of these, feel free to reach out on X.
Thanks so much for following along 🙌
-
How to create custom snippets in Visual Studio 2022 | Code4IT
A simple way to improve efficiency is knowing your IDE shortcuts. Let’s learn how to create custom ones to generate code automatically.
One of the best tricks to boost productivity is knowing your tools.
I’m pretty sure you’ve already used some predefined snippets in Visual Studio. For example, when you type ctor and hit Tab twice, VS automatically creates an empty constructor for the current class.
In this article, we will learn how to create custom snippets: in particular, we will design a snippet that automatically creates a C# Unit Test method with some placeholders and predefined Arrange-Act-Assert blocks.
Snippet Designer: a Visual Studio 2022 extension to add a UI to your placeholders
Snippets are defined in XML-like files with the .snippet extension. But we all know that working with XML can be cumbersome, especially if you don’t have a clear idea of the expected structure.

Therefore, even if it’s not strictly necessary, I suggest installing a VS2022 extension called Snippet Designer 2022.
This extension, developed by Matthew Manela, can be found on GitHub, where you can view the source code.
This extension gives you a UI to customize the snippet instead of manually editing the XML nodes. It allows you to customize the snippet, the related metadata, and even the placeholders.
Create a basic snippet in VS2022 using a .snippet file
As we saw, snippets are defined in a simple XML.
In order to have your snippets immediately available in Visual Studio, I suggest creating those files in a specific VS2022 folder, under the path \Documents\Visual Studio 2022\Code Snippets\Visual C#\My Code Snippets\.

So, create an empty file, change its extension to .snippet, and save it to that location.

Now, open Visual Studio (you don’t need to open a project, but I’d recommend it). Then head to File > Open and open the file you saved under the My Code Snippets directory.
Thanks to Snippet Designer, you will be able to see a nice UI instead of plain XML content.
Have a look at how I filled in the several parts to create a snippet that generates a variable named x, assigns a value to it, and then calls x++.

Have a look at the main parts:
- the body, which contains the snippet to be generated;
- the top layer, where we specified:
- the Snippet name: Int100; it’s the display name of the shortcut
- the code language: C#;
- the shortcut: int100; it’s the string you’ll type in that allows you to generate the expected snippet;
- the bottom table, which contains the placeholders used in the snippet; more on this later;
- the properties tab, on the sidebar: here is where you specify some additional metadata, such as:
- Author, Description, and Help Url of the snippet, in case you want to export it;
- the kind of snippet: possible values are MethodBody, MethodDecl and TypeDecl. However, this value is supported only in Visual Basic.
Now, hit save and be ready to import it!
Just for completeness, here’s the resulting XML:
<?xml version="1.0" encoding="utf-8"?>
<CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
  <CodeSnippet Format="1.0.0">
    <Header>
      <SnippetTypes>
        <SnippetType>Expansion</SnippetType>
      </SnippetTypes>
      <Title>Int100</Title>
      <Author></Author>
      <Description></Description>
      <HelpUrl></HelpUrl>
      <Shortcut>int100</Shortcut>
    </Header>
    <Snippet>
      <Code Kind="method decl" Language="csharp" Delimiter="$"><![CDATA[int x = 100;
x++;]]></Code>
    </Snippet>
  </CodeSnippet>
</CodeSnippets>
Notice that the actual content of the snippet is defined in the CDATA block.

Import the snippet in Visual Studio
It’s time to import the snippet. Open the Tools menu item and click on Code Snippets Manager.
From here, you can import a snippet by clicking the Import… button. Given that we’ve already saved our snippet in the correct folder, we’ll find it under the My Code Snippets folder.
Now it’s ready! Open a C# class and start typing int100. You’ll see our snippet in the autocomplete list. By hitting Tab twice, you’ll see the snippet’s content being generated.
How to use placeholders when defining snippets in Visual Studio
Wouldn’t it be nice to have the possibility to define customizable parts of your snippets?
Let’s see a real example: I want to create a snippet to create the structure of a Unit Tests method with these characteristics:
- it already contains the AAA (Arrange, Act, Assert) sections;
- the method name should follow the pattern “SOMETHING should DO STUFF when CONDITION”. I want to be able to replace the different parts of the method name by using placeholders.
You can define placeholders using the $ symbol. You will then see the placeholders in the table at the bottom of the UI. In this example, the placeholders are $TestMethod$, $DoSomething$, and $Condition$. I also added a description to better explain the purpose of each placeholder.

The XML looks like this:
<?xml version="1.0" encoding="utf-8"?>
<CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
  <CodeSnippet Format="1.0.0">
    <Header>
      <SnippetTypes>
        <SnippetType>Expansion</SnippetType>
      </SnippetTypes>
      <Title>Test Sync</Title>
      <Author>Davide Bellone</Author>
      <Description>Scaffold the AAA structure for synchronous NUnit tests</Description>
      <HelpUrl></HelpUrl>
      <Shortcut>testsync</Shortcut>
    </Header>
    <Snippet>
      <Declarations>
        <Literal Editable="true">
          <ID>TestMethod</ID>
          <ToolTip>Name of the method to be tested</ToolTip>
          <Default>TestMethod</Default>
          <Function></Function>
        </Literal>
        <Literal Editable="true">
          <ID>DoSomething</ID>
          <ToolTip>Expected behavior or result</ToolTip>
          <Default>DoSomething</Default>
          <Function></Function>
        </Literal>
        <Literal Editable="true">
          <ID>Condition</ID>
          <ToolTip>Initial conditions</ToolTip>
          <Default>Condition</Default>
          <Function></Function>
        </Literal>
      </Declarations>
      <Code Language="csharp" Delimiter="$" Kind="method decl"><![CDATA[[Test]
public void $TestMethod$_Should_$DoSomething$_When_$Condition$()
{
    // Arrange

    // Act

    // Assert
}]]></Code>
    </Snippet>
  </CodeSnippet>
</CodeSnippets>
Now, import it as we already did before.
Then, head to your code and start typing testsync; you’ll see the snippet come to life. The placeholders we defined are highlighted: you can fill them in, hit Tab, and move on to the next one.

Bonus: how to view all the snippets defined in VS
If you want to learn more about your IDE and the available snippets, you can have a look at the Snippet Explorer table.
You can find it under View > Tools > Snippet Explorer.
Here, you can see all the snippets, their shortcuts, and the content of each snippet. You can also see the placeholders highlighted in green.
It’s always an excellent place to learn more about Visual Studio.
Further readings
As always, you can read more on Microsoft Docs. It’s a valuable resource, although I find it difficult to follow.
🔗 Create a code snippet in Visual Studio | Microsoft docs
I prefer working with the UI. If you want to have a look at the repo of the extension we used in this article, here’s the link:
🔗 SnippetDesigner extension | GitHub
This article first appeared on Code4IT 🐧
Wrapping up
There are some tips that may improve both the code quality and the developer productivity.
If you want to enforce some structures or rules, add such snippets in your repository; when somebody joins your team, teach them how to import those snippets.
I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛
Happy coding!
🐧