Author: post Bina

    Interactive Video Projection Mapping with Three.js



    Projection mapping has long fascinated audiences in the physical world, turning buildings, sculptures, and entire cityscapes into moving canvases. What if you could recreate that same sense of spectacle directly inside the browser?

    With WebGL and Three.js, you can project video not onto walls or monuments but onto dynamic 3D grids made of hundreds of cubes, each one carrying a fragment of the video like a digital mosaic. Many will surely recognize this effect from Rogier de Boevé’s portfolio, which gained wide attention for showcasing exactly this approach.

    In this tutorial we’ll explore how to simulate video projection mapping in a purely digital environment, from building a grid of cubes, to UV-mapping video textures, to applying masks that determine which cubes appear. The demo for this tutorial is inspired by Rogier’s work, which he breaks down beautifully in his case study for anyone interested in the concept behind it.

    The result is a mesmerizing effect that feels both sculptural and cinematic, perfect for interactive installations, portfolio showcases, or simply as a playground to push your creative coding skills further.

    What is Video Projection Mapping in the Real World?

    When describing video projection mapping, it’s easiest to think of huge buildings lit up with animations during festivals, or art installations where a moving image is “painted” onto sculptures.

    Here are some examples of real-world video projections:

    Bringing it to our 3D World

    In 3D graphics, we can do something similar: instead of shining a physical projector, we map a video texture onto objects in a scene.

    Therefore, let’s build a grid of cubes using a mask image that will determine which cubes are visible. A video texture is UV-mapped so each cube shows the exact video fragment that corresponds to its grid cell—together they reconstruct the video, but only where the mask is dark.

    Prerequisites:

    • Three.js r155+
    • A small, high-contrast mask image (e.g. a heart silhouette).
    • A video URL with CORS enabled.

    Our Boilerplate and Starting Point

    Here is a basic starter setup, i.e. the minimum amount of code and structure you need to get a scene rendering in the browser, without worrying about the specific creative content yet.

    export default class Models {
    	constructor(gl_app) {
            ...
            this.createGrid()
        }
    
        createGrid() {
            const geometry = new THREE.BoxGeometry( 1, 1, 1 );
            this.material = new THREE.MeshStandardMaterial( { color: 0xff0000 } );
            const cube = new THREE.Mesh( geometry, this.material );
            this.group.add( cube );
            this.is_ready = true
        }
        
        ...
    }

    The result is a spinning red cube:

    Creating the Grid

    A centered grid of cubes (10×10 by default). Every cube has the same size and material. The grid spacing and overall scale are configurable.

    export default class Models {
    	constructor(gl_app) {
            ...
    
    		this.gridSize = 10;
            this.spacing = 0.75;
            this.createGrid()
        }
    
        createGrid() {
            this.material = new THREE.MeshStandardMaterial( { color: 0xff0000 } );
            
            // Grid parameters
            for (let x = 0; x < this.gridSize; x++) {
                for (let y = 0; y < this.gridSize; y++) {
                    const geometry = new THREE.BoxGeometry(0.5, 0.5, 0.5);
                    const mesh = new THREE.Mesh(geometry, this.material);
                    mesh.position.x = (x - (this.gridSize - 1) / 2) * this.spacing;
                    mesh.position.y = (y - (this.gridSize - 1) / 2) * this.spacing;
                    mesh.position.z = 0;
    
                    this.group.add(mesh);
                }
            }
            this.group.scale.setScalar(0.5)
            ...
        }   
        ...
    }

    Key parameters

    spacing: world-space distance between cube centers. Increase for larger gaps, decrease to pack tighter.

    gridSize: how many cells per side. A 10×10 grid ⇒ 100 cubes.
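    The centering math used in createGrid() can be sketched as a small pure helper (the function name is ours, not part of the tutorial's code):

```javascript
// Hedged sketch: compute the world-space position of cell (x, y)
// so that the whole grid is centered on the origin.
// `cellPosition` is an illustrative name, not from the tutorial.
function cellPosition(x, y, gridSize, spacing) {
  const half = (gridSize - 1) / 2 // index of the grid's center
  return {
    x: (x - half) * spacing,
    y: (y - half) * spacing,
  }
}
```

    For a 10×10 grid with spacing 0.75, the corner cubes sit at ±3.375 and the grid is symmetric around the origin.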

    Creating the Video Texture

    This function creates a video texture in Three.js so you can use a playing HTML <video> as the texture on 3D objects.

    • Creates an HTML <video> element entirely in JavaScript (not added to the DOM).
    • We’ll feed this element to Three.js to use its frames as a texture.
    • loop = true → restarts automatically when it reaches the end.
    • muted = true → most browsers block autoplay for unmuted videos, so muting ensures it plays without user interaction.
    • .play() → starts playback.
    • ⚠️ Some browsers still need a click/touch before autoplay works — you can add a fallback listener if needed.
    export default class Models {
    	constructor(gl_app) {
            ...
            this.createGrid()
        }
    
        createVideoTexture() {
    		this.video = document.createElement('video')
    		this.video.src = 'https://commondatastorage.googleapis.com/gtv-videos-bucket/sample/BigBuckBunny.mp4'
    		this.video.crossOrigin = 'anonymous'
    		this.video.loop = true
    		this.video.muted = true
    		this.video.play()
    
    		// Create video texture
    		this.videoTexture = new THREE.VideoTexture(this.video)
    		this.videoTexture.minFilter = THREE.LinearFilter
    		this.videoTexture.magFilter = THREE.LinearFilter
    		this.videoTexture.colorSpace = THREE.SRGBColorSpace
    		this.videoTexture.wrapS = THREE.ClampToEdgeWrap
    		this.videoTexture.wrapT = THREE.ClampToEdgeWrap
    
    		// Create material with video texture
    		this.material = new THREE.MeshBasicMaterial({ 
    			map: this.videoTexture,
    			side: THREE.FrontSide
    		})
        }
    
        createGrid() {
            this.createVideoTexture()
            ...
        }
        ...
    }

    This is the video we are using: Big Buck Bunny (served with CORS enabled)

    All the meshes have the same texture applied:

    Attributing Projection to the Grid

    We will be turning the video into a texture atlas split into a gridSize × gridSize lattice.
    Each cube in the grid gets its own little UV window (sub-rectangle) of the video so, together, all cubes reconstruct the full frame.

    Why per-cube geometry? We create a new BoxGeometry for each cube because the UVs must be unique per cube: if all cubes shared one geometry, they'd also share the same UVs and show the same part of the video.

    export default class Models {
    	constructor(gl_app) {
            ...
            this.createGrid()
        }
    
        createGrid() {
            ...
    		// Grid parameters
            for (let x = 0; x < this.gridSize; x++) {
                for (let y = 0; y < this.gridSize; y++) {
                    
                    const geometry = new THREE.BoxGeometry(0.5, 0.5, 0.5);
                    
    				// Create individual geometry for each box to have unique UV mapping
    				// Calculate UV coordinates for this specific box
    				const uvX = x / this.gridSize
    				const uvY = y / this.gridSize // Remove the flip to match correct orientation
    				const uvWidth = 1 / this.gridSize
    				const uvHeight = 1 / this.gridSize
    				
    				// Get the UV attribute
    				const uvAttribute = geometry.attributes.uv
    				const uvArray = uvAttribute.array
    				
    				// Map each face of the box to show the same portion of video
    				// We'll focus on the front face (face 4) for the main projection
    				for (let i = 0; i < uvArray.length; i += 2) {
    					// Map all faces to the same UV region for consistency
    					uvArray[i] = uvX + (uvArray[i] * uvWidth)     // U coordinate
    					uvArray[i + 1] = uvY + (uvArray[i + 1] * uvHeight) // V coordinate
    				}
    				
    				// Mark the attribute as needing update
    				uvAttribute.needsUpdate = true
                    ...
                }
            }
            ...
        }
        ...
    }

    The UV window for cell (x, y)
    For a grid of size N = gridSize:

    • UV origin of this cell:
      – uvX = x / N
      – uvY = y / N
    • UV size of each cell:
      – uvWidth = 1 / N
      – uvHeight = 1 / N

    Result: every face of the box now samples the same sub-region of the video. The loop maps all faces to that region for consistency, with the front face carrying the main projection.
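    The UV window and the remapping applied in the loop can be expressed as two tiny helpers (illustrative names, assuming the same N = gridSize convention as above):

```javascript
// Hedged sketch: the UV sub-rectangle for grid cell (x, y) in an N×N lattice.
// `uvWindow` and `remapUV` are illustrative names, not from the tutorial.
function uvWindow(x, y, N) {
  return {
    uvX: x / N,       // left edge of the cell in UV space
    uvY: y / N,       // bottom edge of the cell in UV space
    uvWidth: 1 / N,   // each cell covers 1/N of the texture horizontally
    uvHeight: 1 / N,  // and 1/N vertically
  }
}

// Remap a default UV (u, v) in [0, 1] into the cell's window,
// exactly as the loop does: u' = uvX + u * uvWidth, v' = uvY + v * uvHeight
function remapUV(u, v, win) {
  return [win.uvX + u * win.uvWidth, win.uvY + v * win.uvHeight]
}
```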

    Creating the Mask

    We need to create a canvas using a mask that determines which cubes are visible in the grid.

    • Black (dark) pixels → cube is created.
    • White (light) pixels → cube is skipped.

    To do this, we need to:

    1. Load the mask image.
    2. Scale it down to match our grid size.
    3. Read its pixel color data.
    4. Pass that data into the grid-building step.
    export default class Models {
    	constructor(gl_app) {
            ...
    		this.createMask()
        }
    
    	createMask() {
            // Create a canvas to read mask pixel data
            const canvas = document.createElement('canvas')
            const ctx = canvas.getContext('2d')
    
            const maskImage = new Image()
            maskImage.crossOrigin = 'anonymous'
            maskImage.onload = () => {
                // Get original image dimensions to preserve aspect ratio
                const originalWidth = maskImage.width
                const originalHeight = maskImage.height
                const aspectRatio = originalWidth / originalHeight
    
                // Calculate grid dimensions based on aspect ratio
                if (aspectRatio > 1) {
                    // Image is wider than tall
                    this.gridWidth = this.gridSize
                    this.gridHeight = Math.round(this.gridSize / aspectRatio)
                } else {
                    // Image is taller than wide or square
                    this.gridHeight = this.gridSize
                    this.gridWidth = Math.round(this.gridSize * aspectRatio)
                }
    
                canvas.width = this.gridWidth
                canvas.height = this.gridHeight
                ctx.drawImage(maskImage, 0, 0, this.gridWidth, this.gridHeight)
    
                const imageData = ctx.getImageData(0, 0, this.gridWidth, this.gridHeight)
                this.data = imageData.data
    			this.createGrid()
    		}
    
            maskImage.src = '../images/heart.jpg'
    	}
        ...
    }

    Match mask resolution to grid

    • We don’t want to stretch the mask — this keeps it proportional to the grid.
    • gridWidth and gridHeight are how many mask pixels we’ll sample horizontally and vertically.
    • This matches the logical cube grid, so each cube can correspond to one pixel in the mask.
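    The aspect-ratio fitting done inside onload can be sketched as a pure helper (the function name is ours; the tutorial does this inline):

```javascript
// Hedged sketch: fit the mask's aspect ratio into the logical grid,
// so the mask is sampled without stretching.
// `gridDims` is an illustrative name, not from the tutorial's code.
function gridDims(originalWidth, originalHeight, gridSize) {
  const aspect = originalWidth / originalHeight
  if (aspect > 1) {
    // Image is wider than tall: use full width, shrink height
    return { gridWidth: gridSize, gridHeight: Math.round(gridSize / aspect) }
  }
  // Image is taller than wide, or square: use full height, shrink width
  return { gridWidth: Math.round(gridSize * aspect), gridHeight: gridSize }
}
```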

    Applying the Mask to the Grid

    Let’s combine mask-based filtering with custom UV mapping to decide where in the grid boxes should appear, and how each box maps to a section of the projected video.
    Here’s the concept step by step:
    Here’s the concept step by step:

    • Loops through every potential (x, y) position in a virtual grid.
    • At each grid cell, it will decide whether to place a box and, if so, how to texture it.
    • flippedY: Flips the Y-axis because image coordinates start from the top-left, while the grid’s origin starts from the bottom-left.
    • pixelIndex: Locates the pixel in the this.data array.
    • Each pixel stores 4 values: red, green, blue, alpha.
    • Extracts the R, G, and B values for that mask pixel.
    • Brightness is calculated as the average of R, G, B.
    • If the pixel is dark enough (brightness < 128), a cube will be created.
    • White pixels are ignored → those positions stay empty.
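    The per-pixel decision in the steps above can be distilled into one pure function (the name and the threshold parameter are ours, not from the tutorial):

```javascript
// Hedged sketch: decide whether grid cell (x, y) gets a cube,
// given the flat RGBA pixel array read from the mask canvas.
// `isCellVisible` is an illustrative name, not from the tutorial's code.
function isCellVisible(data, x, y, gridWidth, gridHeight, threshold = 128) {
  // Flip Y: image rows start at the top-left, the grid at the bottom-left
  const flippedY = gridHeight - 1 - y
  // Each pixel stores 4 values: R, G, B, A
  const i = (flippedY * gridWidth + x) * 4
  const brightness = (data[i] + data[i + 1] + data[i + 2]) / 3
  return brightness < threshold // dark pixel → cube is created
}
```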
    export default class Models {
    	constructor(gl_app) {
            ...
    		this.createMask()
        }
    
    	createMask() {
            ...
    	}
    
        createGrid() {
            ...
            for (let x = 0; x < this.gridSize; x++) {
                for (let y = 0; y < this.gridSize; y++) {
                    
                    const geometry = new THREE.BoxGeometry(0.5, 0.5, 0.5);
    
                    // Get pixel color from mask (sample at grid position)
                    // Flip Y coordinate to match image orientation
                    const flippedY = this.gridHeight - 1 - y
                    const pixelIndex = (flippedY * this.gridWidth + x) * 4
                    const r = this.data[pixelIndex]
                    const g = this.data[pixelIndex + 1]
                    const b = this.data[pixelIndex + 2]
    
                    // Calculate brightness (0 = black, 255 = white)
                    const brightness = (r + g + b) / 3
    
                    // Only create box if pixel is dark (black shows, white hides)
                    if (brightness < 128) { // Threshold for black vs white
    
                        // Create individual geometry for each box to have unique UV mapping
                        // Calculate UV coordinates for this specific box
                        const uvX = x / this.gridSize
                        const uvY = y / this.gridSize // Remove the flip to match correct orientation
                        const uvWidth = 1 / this.gridSize
                        const uvHeight = 1 / this.gridSize
                        
                        // Get the UV attribute
                        const uvAttribute = geometry.attributes.uv
                        const uvArray = uvAttribute.array
                        
                        // Map each face of the box to show the same portion of video
                        // We'll focus on the front face (face 4) for the main projection
                        for (let i = 0; i < uvArray.length; i += 2) {
                            // Map all faces to the same UV region for consistency
                            uvArray[i] = uvX + (uvArray[i] * uvWidth)     // U coordinate
                            uvArray[i + 1] = uvY + (uvArray[i + 1] * uvHeight) // V coordinate
                        }
                        
                        // Mark the attribute as needing update
                        uvAttribute.needsUpdate = true
                        
                        const mesh = new THREE.Mesh(geometry, this.material);
    
                        mesh.position.x = (x - (this.gridSize - 1) / 2) * this.spacing;
                        mesh.position.y = (y - (this.gridSize - 1) / 2) * this.spacing;
                        mesh.position.z = 0;
    
                        this.group.add(mesh);
                    }
                }
            }
            ...
        }
        ...
    }

    Further steps

    • UV mapping is the process of mapping 2D video pixels onto 3D geometry.
    • Each cube gets its own unique UV coordinates corresponding to its position in the grid.
    • uvWidth and uvHeight are how much of the video texture each cube covers.
    • Modifies the cube’s uv attribute so all faces display the exact same portion of the video.

    Here is the result with the mask applied:

    Adding Some Depth and Motion to the Grid

    Adding subtle motion along the Z-axis brings the otherwise static grid to life, making the projection feel more dynamic and dimensional.

    update() {
        if (this.is_ready) {
            this.group.children.forEach((model, index) => {
                model.position.z = Math.sin(Date.now() * 0.005 + index * 0.1) * 0.6
            })
        }
    }
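    The wave is just a phase-shifted sine per cube. Pulling the offset into a pure function (an illustrative name of ours, not the tutorial's) makes the parameters easy to tweak:

```javascript
// Hedged sketch of the Z-offset used in update(): a sine wave whose phase
// advances with time and shifts per cube index. `waveZ` is our name for it.
function waveZ(timeMs, index, { speed = 0.005, phaseStep = 0.1, amplitude = 0.6 } = {}) {
  return Math.sin(timeMs * speed + index * phaseStep) * amplitude
}
```

    A larger phaseStep makes neighboring cubes more out of sync; amplitude controls how far the cubes travel along Z.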

    Time for Multiple Grids

    Up until now we’ve been working with a single mask and a single video, but the real fun begins when we start layering multiple projections together. By combining different mask images with their own video sources, we can create a collection of independent grids that coexist in the same scene. Each grid can carry its own identity and motion, opening the door to richer compositions, transitions, and storytelling effects.

    1. A Playlist of Masks and Videos

    export default class Models {
    	constructor(gl_app) {
            ...
            this.grids_config = [
                {
                    id: 'heart',
                    mask: `heart.jpg`,
                    video: `fruits_trail_squared-transcode.mp4`
                },
                {
                    id: 'codrops',
                    mask: `codrops.jpg`,
                    video: `KinectCube_1350-transcode.mp4`
                },
                {
                    id: 'smile',
                    mask: `smile.jpg`,
                    video: `infinte-grid_squared-transcode.mp4`
                },
            ]
            this.grids = []
            this.grids_config.forEach((config, index) => this.createMask(config, index))
        }
    ...
    }

    Instead of one mask and one video, we now have a list of mask-video pairs.

    Each object defines:

    • id → name/id for each grid.
    • mask → the black/white image that controls which cubes appear.
    • video → the texture that will be mapped onto those cubes.

    This allows you to have multiple different projections in the same scene.

    2. Looping Over All Grids

    Once we have our playlist of mask–video pairs defined, the next step is to go through each item and prepare it for rendering.

    For every configuration in the list we call createMask(config, index), which takes care of loading the mask image, reading its pixels, and then passing the data along to build the corresponding grid.

    At the same time, we keep track of all the grids by storing them in a this.grids array, so later on we can animate them, show or hide them, and switch between them interactively.

    3. createMask(config, index)

    createMask(config, index) {
        ...
        maskImage.onload = () => {
            ...
            this.createGrid(config, index)
        }
        maskImage.src = `../images/${config.mask}`
    }
    • Loads the mask image for the current grid.
    • When the image is loaded, runs the mask pixel-reading logic (as explained before) and then calls createGrid() with the same config and index.
    • The mask determines which cubes are visible for this specific grid.

    4. createVideoTexture(config, index)

    createVideoTexture(config, index) {
        this.video = document.createElement('video')
        this.video.src = `../videos/${config.video}`
        ...
    }
    • Creates a <video> element using the specific video file for this grid.
    • The video is then converted to a THREE.VideoTexture and assigned as the material for the cubes in this grid.
    • Each grid can have its own independent video playing.

    5. createGrid(config, index)

    createGrid(config, index) {
            this.createVideoTexture(config, index)
            const grid_group = new THREE.Group()
            this.group.add(grid_group)
    
            for (let x = 0; x < this.gridSize; x++) {
                for (let y = 0; y < this.gridSize; y++) {
                        ...
                        grid_group.add(mesh);
                }
            }
            grid_group.name = config.id
            this.grids.push(grid_group);
            grid_group.position.z = - 2 * index 
            ...
        }
    • Creates a new THREE.Group for this grid so all its cubes can be moved together.
    • This keeps each mask/video projection isolated.
    • grid_group.name: Assigns config.id as the group’s name so the grid can be looked up later.
    • this.grids.push(grid_group): Stores this grid in an array so you can control it later (e.g., show/hide, animate, change videos).
    • grid_group.position.z: Offsets each grid further back in Z-space so they don’t overlap visually.

    And here is the result for the multiple grids:

    And finally: Interaction & Animations

    Let’s start by creating a simple UI with some buttons on our HTML:

    <ul class="btns">
    	<li class="btns__item">
    		<button class="active" data-id="heart">
    			...
    		</button>
    	</li>
    	<li class="btns__item">
    		<button data-id="codrops">
    			...
    		</button>
    	</li>
    	<li class="btns__item">
    		<button data-id="smile">
    			...
    		</button>
    	</li>
    </ul>

    We’ll also add a data-current="heart" attribute to our canvas element; it will be needed to change the background-color depending on which button was clicked.

    <canvas id="sketch" data-current="heart"></canvas>

    Let’s now create some colors for each grid using CSS:

    [data-current="heart"] {
    	background-color: #e19800;
    }
    
    [data-current="codrops"] {
    	background-color: #00a00b;
    }
    
    [data-current="smile"] {
    	background-color: #b90000;
    }

    Time to wire up the interactions:

    createGrid(config, index) {
        ...
        this.initInteractions()
    }

    1. this.initInteractions()

    initInteractions() {
        this.current = 'heart'
        this.old = null
        this.is_animating = false
        this.duration = 1
    
        this.DOM = {
            $btns: document.querySelectorAll('.btns__item button'),
            $canvas: document.querySelector('canvas')
        }
        this.grids.forEach(grid => {
            if(grid.name != this.current) {
                grid.children.forEach(mesh => mesh.scale.setScalar(0))
            }
        })
        this.bindEvents()
    }
    • this.current → The currently active grid ID. Starts as "heart" so the "heart" grid will be visible by default.
    • this.old → Used to store the previous grid ID when switching between grids.
    • this.is_animating → Boolean flag to prevent triggering a new transition while one is still running.
    • this.duration → How long the animation takes (in seconds).
    • $btns → Selects all the buttons inside .btns__item. Each button likely corresponds to a grid you can switch to.
    • $canvas → Selects the main <canvas> element where the Three.js scene is rendered.

    Loops through all the grids in the scene.

    • If the grid is not the current one (grid.name != this.current),
    • → It sets all of that grid’s cubes (mesh) to scale = 0 so they are invisible at the start.
    • This means only the "heart" grid will be visible when the scene first loads.
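    The start-up visibility rule can be captured in a small pure helper (an illustrative name of ours), computing the initial scale per grid:

```javascript
// Hedged sketch: only the grid whose name matches `current` starts visible.
// `initialScales` is an illustrative name, not from the tutorial's code.
function initialScales(grids, current) {
  return grids.map(grid => ({
    name: grid.name,
    scale: grid.name === current ? 1 : 0, // 0 = hidden at load
  }))
}
```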

    2. bindEvents()

    bindEvents() {
        this.DOM.$btns.forEach(($btn, index) => {
            $btn.addEventListener('click', () => {
                if (this.is_animating) return
                this.is_animating = true
                this.DOM.$btns.forEach(($btn, btnIndex) => {
                    btnIndex === index ? $btn.classList.add('active') : $btn.classList.remove('active')
                })
                this.old = this.current
                this.current = `${$btn.dataset.id}`
                this.revealGrid()
                this.hideGrid()
            })
        })
    }

    This bindEvents() method wires up the UI buttons so that clicking one will trigger switching between grids in the 3D scene.

    • For each button, attach a click event handler.
    • If an animation is already running, do nothing — this prevents starting multiple transitions at the same time.
    • Sets is_animating to true so no other clicks are processed until the current switch finishes.

    Loops through all buttons again:

    • If this is the clicked button → add the active CSS class (highlight it).
    • Otherwise → remove the active class (unhighlight).
    • this.old → keeps track of which grid was visible before the click.
    • this.current → updates to the new grid’s ID based on the button’s data-id attribute.
      • Example: if the button has data-id="heart", this.current becomes "heart".

    Calls two separate methods:

    • revealGrid() → makes the newly selected grid appear (by scaling its cubes from 0 to full size).
    • hideGrid() → hides the previous grid (by scaling its cubes back down to 0).

    3. revealGrid() & hideGrid()

    revealGrid() {
        // Filter the current grid based on this.current value
        const grid = this.grids.find(item => item.name === this.current);
        
        this.DOM.$canvas.dataset.current = `${this.current}` 
        const tl = gsap.timeline({ delay: this.duration * 0.25, defaults: { ease: 'power1.out', duration: this.duration } })
        grid.children.forEach((child, index) => {
            tl
                .to(child.scale, { x: 1, y: 1, z: 1, ease: 'power3.inOut' }, index * 0.001)
                .to(child.position, { z: 0 }, '<')
        })
    }
    
    hideGrid() {
        // Filter the current grid based on this.old value
        const grid = this.grids.find(item => item.name === this.old);
        const tl = gsap.timeline({
            defaults: { ease: 'power1.out', duration: this.duration },
            onComplete: () => { this.is_animating = false }
        })
        grid.children.forEach((child, index) => {
            tl
                .to(child.scale, { x: 0, y: 0, z: 0, ease: 'power3.inOut' }, index * 0.001)
                .to(child.position, {
                    z: 6, onComplete: () => {
                        gsap.set(child.scale, { x: 0, y: 0, z: 0 })
                        gsap.set(child.position, { z: - 6 })
                    }
                }, '<')
        })
    }

    And that is it! A fully animated and interactive video projection slider, made with hundreds of small cubes (meshes).

    ⚠️ Performance considerations

    The approach used in this tutorial is the simplest and most digestible way to apply the projection concept. However, it can create too many draw calls: 100–1,000 cubes might be fine, but tens of thousands can be slow. If you need a more detailed grid or more meshes in it, consider InstancedMesh and shaders.
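    As a rough sketch of the InstancedMesh route (not covered by the tutorial; the API calls are Three.js, but the wiring is our assumption), one mesh can carry all cubes in a single draw call:

```javascript
import * as THREE from 'three'

// Hedged sketch: one InstancedMesh instead of N² individual meshes.
// Per-cell UVs are not handled here; with instancing you would typically
// pass the cell index as an instanced attribute and offset the UVs in a shader.
const gridSize = 100 // 10,000 cells, still a single draw call
const geometry = new THREE.BoxGeometry(0.5, 0.5, 0.5)
const material = new THREE.MeshBasicMaterial()
const mesh = new THREE.InstancedMesh(geometry, material, gridSize * gridSize)

const dummy = new THREE.Object3D()
const spacing = 0.75
let i = 0
for (let x = 0; x < gridSize; x++) {
  for (let y = 0; y < gridSize; y++) {
    // Same centering math as the per-mesh version
    dummy.position.set(
      (x - (gridSize - 1) / 2) * spacing,
      (y - (gridSize - 1) / 2) * spacing,
      0
    )
    dummy.updateMatrix()
    mesh.setMatrixAt(i++, dummy.matrix)
  }
}
mesh.instanceMatrix.needsUpdate = true
```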

    Going further

    This is a fully functional and versatile concept, and it opens up many possibilities.
    It can be applied in some really cool ways: scrollable storytelling, exhibition simulations, intro animations, portfolio showcases, and more.

    Here are some links for you to get inspired:

    Final Words

    I hope you’ve enjoyed this tutorial. Give it a try in your own projects, or simply explore the possibilities by changing the grid parameters, masks, and videos.

    And speaking of the videos: the ones used in this example are screen recordings of the Creative Code lessons on my web animations platform vwlab.io, where you can learn how to create more interactions and animations like this one.

    Come join us, you will be more than welcome! ☺️❤️




    NoisyBear Targets Kazakhstan Oil & Gas


    Contents

    • Introduction
    • Key Targets
      • Industries Affected
      • Geographical Focus
    • Infection Chain
    • Initial Findings
      • Looking into the malicious email
      • Looking into the decoy-document
    • Technical Analysis
      • Stage 0 – Malicious ZIP & LNK files
      • Stage 1 – Malicious BATCH scripts
      • Stage 2 – Malicious DOWNSHELL loaders
      • Stage 3 – Malicious DLL implant
    • Infrastructure and Hunting
    • Attribution
    • Conclusion
    • Seqrite Protection
    • IOCs
    • MITRE ATT&CK

    Authors: Subhajeet Singha & Sathwik Ram Prakki

    Introduction

    Seqrite Labs APT-Team has been tracking a seemingly new threat group since April 2025, which we track by the name NoisyBear. This threat group has targeted entities in Central Asia, specifically the oil and gas (energy) sector of Kazakhstan. The campaign targets employees of KazMunaiGas (KMG), to whom the threat entity delivered a fake document attributed to the KMG IT department, mimicking official internal communication and leveraging themes such as policy updates, internal certification procedures, and salary adjustments.

    In this blog, we will explore the in-depth technical details of the campaign that we encountered during our analysis. We will examine its various stages: the infection starts with a phishing email carrying a ZIP attachment, which contains a malicious LNK downloader along with a decoy; the LNK downloads a malicious BATCH script, leading to PowerShell loaders we dubbed DOWNSHELL, which reflectively load a malicious DLL implant. We will also look into the infrastructure behind the entire campaign.

    Key Targets

    Industries Affected.

    • Energy Sector [Oil and Gas]

    Geographical Focus.

    Infection Chain

    Initial Findings

    We have been tracking this threat actor since April 2025, and we observed that it launched a campaign against KazMunaiGas employees in May 2025 using a spear-phishing-oriented method. A compromised business email was used to deliver a malicious ZIP file, which contained a decoy along with a malicious initial-infection shortcut (.LNK) file named График зарплат.lnk, which translates to Salary Schedule.lnk. The sample first surfaced on VirusTotal in the first half of May 2025.

    Now, let us look into the malicious email and decoy file.

    Looking into the malicious email.

    Looking at the email’s sender, we found that the threat actor used the compromised business email of an individual working in the Finance Department of KazMunaiGas. Using this account and the urgent subject line URGENT! Review the updated salary schedule, they emailed the employees of KMG.

    Looking at the contents of the email, it became clear that the message was crafted to look like an internal HR communication about salary decisions. It asks recipients to review updated information on work schedules, salaries, and incentive-related policies. The threat actor also instructs the KMG targets to look for a file called График.zip (Schedule.zip) and then open a file named График зарплат (Salary Schedule), which is the shortcut (LNK) file that, once executed, downloads further stagers.

    Last but not least, the email asks for the instructions to be completed by 15th May 2025, heightening the sense of urgency. Now, let us go ahead and analyze the decoy file.

    Looking into the decoy-document.

    Looking at the decoy document, we can see that it carries the official logo of the targeted entity, i.e., KazMunaiGas, along with instructions in both Russian and Kazakh. These walk the employee through a series of simple steps: open the Downloads folder in the browser, extract a ZIP archive named KazMunayGaz_Viewer.zip, and run a file called KazMunayGaz_Viewer. Although the file name differs, we believe this refers to the exact file dropped from the malicious email. The decoy also tells users to wait for a console window to appear and specifically advises them not to close or interact with it, reducing suspicion on the targets’ end. Finally, it is signed by the IT-Support team in the salutation to make it look completely legitimate.

    Technical Analysis

    We have divided the technical analysis into four parts: first the malicious ZIP containing the LNK file, then the malicious BATCH script it downloads, followed by the script-based loaders, and finally the malicious DLL.

    Stage 0 – Malicious ZIP & LNK Files.

    Looking into the ZIP file, we found three files: one is the decoy document we saw earlier, the second is a README.txt that repeats the instructions so nothing seems suspicious, and the third is the malicious LNK file.

Now, examining the malicious shortcut (.LNK) file, named График зарплат, we found that it uses the powershell.exe LOLBIN to implement downloader behavior.

It downloads a malicious batch script known as 123.bat from a remote server, hxxps[://]77[.]239[.]125[.]41[:]8443. Once downloaded, the batch script is stored under the path C:\Users\Public and then executed from that path using the Start-Process cmdlet.

Hunting for similar LNK files, we found another LNK belonging to the same campaign that looks slightly different.

This malicious LNK file uses a small operand trick to evade static signature detection: it concatenates string literals before downloading a batch script from the same remote server, saving it to the Public folder, and executing it via the same cmdlet.

In the next section, we will examine the malicious batch scripts.

    Stage 1 – Malicious BATCH Scripts.

Now, looking into the first batch script, i.e., it.bat, we can see that it downloads two PowerShell loaders, which we have dubbed DOWNSHELL, from the remote server: support.ps1 and a.ps1. Once they are downloaded, it sleeps for a total of 11 seconds.

The second batch script, i.e., the 123.bat file, does the same: it downloads the PowerShell loaders and then sleeps for 10 seconds.

In the next section, we will look at how the DOWNSHELL loaders, written in PowerShell, work.

    Stage 2 – Malicious DOWNSHELL Loaders.

In this section, we will look into the set of malicious PowerShell scripts we have dubbed DOWNSHELL. The first PowerShell file, support.ps1, is a script responsible for impairing defenses on the target machine; the second performs the loader function.

Looking into the code, we found that the script obfuscates the target namespace by building “System.Management.Automation” via string concatenation, then enumerates all loaded .NET assemblies in the current AppDomain and filters for the one whose FullName matches that namespace.

Then, using reflection, it resolves the internal type System.Management.Automation.AmsiUtils and retrieves its private static field amsiInitFailed. Flipping this flag convinces PowerShell that AMSI failed to initialize, so the other malicious script belonging to the DOWNSHELL family is not scanned and executes without any hassle or interruption. Now, let us look into the second PowerShell script.

The first part of the code looks like a copy of the well-known red-team emulation tool PowerSploit. The function LookupFunc dynamically retrieves the memory address of any exported function from a specified DLL without using traditional DllImport or Add-Type calls. It does this by locating the Microsoft.Win32.UnsafeNativeMethods type within the already-loaded System.dll assembly, then extracting and invoking the hidden .NET wrappers for GetModuleHandle and GetProcAddress. By first resolving the base address of the target module ($moduleName) and then passing it along with the target function name ($functionName), it returns a raw function pointer to that API.

In the second part of the code, the function getDelegateType creates a custom .NET delegate on the fly, entirely in memory. It takes the parameter types and a return type, builds a new delegate class with those, and gives it an Invoke method so it can be used like a normal function. This lets the script wrap the raw function pointers (from LookupFunc) into something PowerShell can call directly, making it easy to run WinAPI functions without importing them in the usual way. The script then queries the process ID of explorer.exe and stores it in a variable.

The latter part of the script contains a byte array holding Meterpreter reverse_tcp shellcode and uses the classic CreateRemoteThread injection technique (OpenProcess, VirtualAllocEx, WriteProcessMemory, and CreateRemoteThread) to inject the shellcode into the target process, explorer.exe, followed by the message Injected! Check your listener!.

An interesting part of this script is commented out: it performs reflective DLL injection into a remote process (notepad.exe in this case) using PowerSploit, hosted on the remote server, with a Meterpreter-based DLL. Another notable detail is the presence of comments in Russian. In the next section, we will examine the DLL.

    Stage 3 – Malicious DLL Implant.

We first examined the DLL implant in a PE-analysis tool and confirmed that the implant, a shellcode loader, is a 64-bit binary.

Moving on to the code, we saw that the implant uses a semaphore as a gatekeeper to ensure that only one copy of itself runs at a time; in this case, the implant uses the named object Local\doSZQmSnP12lu4Pb5FRD. When it starts, it tries to create this semaphore; if it already exists, another instance is active. To double-check, it calls WaitForSingleObject on the semaphore and then looks for a specific named event. If the event exists, it knows another instance has already completed its setup; if it doesn’t, it creates the event itself.

Depending on the result of this single-instance check, the implant then spawns a rundll32.exe process in a suspended state.

    After creating the process in a suspended state, the implant performs classic thread-context hijacking: it calls GetThreadContext on the primary thread, uses VirtualAllocEx to reserve RWX memory in the target, WriteProcessMemory to drop the shellcode, updates the thread’s RIP to point to that buffer via SetThreadContext, and finally calls ResumeThread so execution continues at the injected shellcode. In this case, the shellcode basically is a reverse shell.

    Infrastructure & Hunting.

Looking into the infrastructure the threat entity has been using, we found a few interesting details.

    Tool-Arsenal

Along with the tools we saw being used by the threat actor, we found additional open-source red-team tools hosted on the actor’s infrastructure for further usage.

    Pivoting

Using a similar fingerprint, we hunted down related infrastructure belonging to the same threat actor.

One of the most interesting findings is that both infrastructures are hosted by a sanctioned hosting firm known as Aeza Group LLC.

Another interesting finding is a number of suspicious web applications hosted there, related to wellness, fitness, and health assistance for Russian individuals.

    Attribution.

Attribution is a very important part of describing a threat entity. It involves analyzing and correlating various domains of evidence, including Tactics, Techniques and Procedures (TTPs), operational mistakes that could lead to attribution, the rotation and re-use of similar infrastructure artefacts, and much more.

In our ongoing tracking of NoisyBear, we have gathered a number of artefacts, such as the languages present inside the tooling, the use of sanctioned web-hosting services, and behavioral overlaps with Russian threat entities that have previously targeted similar Central Asian nations. Based on these, we assess that the threat actor is possibly of Russian origin.

    Conclusion.

We have found that a threat entity, dubbed NoisyBear, is targeting the Kazakh energy sector using company-specific lures while relying heavily on PowerShell and open-source post-exploitation tools such as Metasploit, hosted on a sanctioned web-hosting provider. We can also conclude that the threat actor has been active since April 2025.

    SEQRITE Protection.

    TBD.

    IOCs

    File-Type SHA-256
    Outlook 5168a1e22ee969db7cea0d3e9eb64db4a0c648eee43da8bacf4c7126f58f0386
    ZIP 021b3d53fe113d014a9700488e31a6fb5e16cb02227de5309f6f93affa4515a6
    ZIP f5e7dc5149c453b98d05b73cad7ac1c42b381f72b6f7203546c789f4e750eb26
    LNK a40e7eb0cb176d2278c4ab02c4657f9034573ac83cee4cde38096028f243119c
    LNK 26f009351f4c645ad4df3c1708f74ae2e5f8d22f3b0bbb4568347a2a72651bee
    Batch Script d48aeb6afcc5a3834b3e4ca9e0672b61f9d945dd41046c9aaf782382a6044f97
    Batch Script 1eecfc1c607be3891e955846c7da70b0109db9f9fdf01de45916d3727bff96e0
    PowerShell da98b0cbcd784879ba38503946898d747ade08ace1d4f38d0fb966703e078bbf
    PowerShell 6d6006eb2baa75712bfe867bf5e4f09288a7d860a4623a4176338993b9ddfb4b
    PowerShell fb0f7c35a58a02473f26aabea4f682e2e483db84b606db2eca36aa6c7e7d9cf8
    DLL 1bfe65acbb9e509f80efcfe04b23daf31381e8b95a98112b81c9a080bdd65a2d
    Domains/IPs
    77[.]239[.]125[.]41
    wellfitplan[.]ru
    178[.]159[.]94[.]8

    MITRE ATT&CK

Tactic Technique ID Name
Reconnaissance T1589.002 Gather Victim Identity Information: Email Addresses
Initial Access T1204.002 User Execution: Malicious File
Initial Access T1078.002 Valid Accounts: Domain Accounts
Execution T1059.001 Command and Scripting Interpreter: PowerShell
Execution T1059.00 Command and Scripting Interpreter
Defense Evasion T1562 Impair Defenses
Defense Evasion T1027.007 Obfuscated Files or Information: Dynamic API Resolution
Defense Evasion T1027.013 Obfuscated Files or Information: Encrypted/Encoded File
Defense Evasion T1055.003 Process Injection: Thread Execution Hijacking
Defense Evasion T1620 Reflective Code Loading
Defense Evasion T1218.011 System Binary Proxy Execution: Rundll32

    Command and Control T1105 Ingress Tool Transfer
    Exfiltration T1567.002 Exfiltration to Cloud Storage

     



    Source link

• Exception handling with WHEN clause | Code4IT

    Exception handling with WHEN clause | Code4IT


    Just a second! 🫷
    If you are here, it means that you are a software developer.
So, you know that storage, networking, and domain management have a cost.

    If you want to support this blog, please ensure that you have disabled the adblocker for this site.
    I configured Google AdSense to show as few ADS as possible – I don’t want to bother you with lots of ads, but I still need to add some to pay for the resources for my site.

    Thank you for your understanding.
    Davide

    From C# 6 on, you can use the when keyword to specify a condition before handling an exception.

    Consider this – pretty useless, I have to admit – type of exception:

    public class RandomException : System.Exception
    {
        public int Value { get; }
        public RandomException()
        {
            Value = (new Random()).Next();
        }
    }
    

    This exception type contains a Value property which is populated with a random value when the exception is thrown.

    What if you want to print a different message depending on whether the Value property is odd or even?

    You can do it this way:

    try
    {
        throw new RandomException();
    }
    catch (RandomException re)
    {
        if(re.Value % 2 == 0)
            Console.WriteLine("Exception with even value");
        else
            Console.WriteLine("Exception with odd value");
    }
    

    But, well, you should keep your catch blocks as simple as possible.

    That’s where the when keyword comes in handy.

    CSharp when clause

You can use it to create two distinct catch blocks, each of which handles its own case in the cleanest way possible.

    try
    {
        throw new RandomException();
    }
    catch (RandomException re) when (re.Value % 2 == 0)
    {
        Console.WriteLine("Exception with even value");
    }
    catch (RandomException re)
    {
        Console.WriteLine("Exception with odd value");
    }
    

You must use the when keyword in conjunction with a condition, which can also reference the current instance of the exception being caught. Here, the condition references the Value property of the RandomException instance.

    A real usage: HTTP response errors

    Ok, that example with the random exception is a bit… useless?

    Let’s see a real example: handling different HTTP status codes in case of failing HTTP calls.

    In the following snippet, I call an endpoint that returns a specified status code (506, in my case).

    try
    {
        var endpoint = "https://mock.codes/506";
        var httpClient = new HttpClient();
        var response = await httpClient.GetAsync(endpoint);
        response.EnsureSuccessStatusCode();
    }
    catch (HttpRequestException ex) when (ex.StatusCode == (HttpStatusCode)506)
    {
        Console.WriteLine("Handle 506: Variant also negotiates");
    }
    catch (HttpRequestException ex)
    {
        Console.WriteLine("Handle another status code");
    }
    

    If the response is not a success, the response.EnsureSuccessStatusCode() throws an exception of type HttpRequestException. The thrown exception contains some info about the returned status code, which we can use to route the exception handling to the correct catch block using when (ex.StatusCode == (HttpStatusCode)506).

    Quite interesting, uh? 😉

    This article first appeared on Code4IT

    To read more, you can head to the official documentation, even though there’s not so much.

    Happy coding!

    🐧



    Source link

  • 7 Must-Know GSAP Animation Tips for Creative Developers

    7 Must-Know GSAP Animation Tips for Creative Developers


    Today we’re going to go over some of my favorite GSAP techniques that can bring you great results with just a little code.

    Although the GSAP documentation is among the best, I find that developers often overlook some of GSAP’s greatest features or perhaps struggle with finding their practical application. 

    The techniques presented here will be helpful to GSAP beginners and seasoned pros. It is recommended that you understand the basics of loading GSAP and working with tweens, timelines and SplitText. My free beginner’s course GSAP Express will guide you through everything you need for a firm foundation.

    If you prefer a video version of this tutorial, you can watch it here:

    https://www.youtube.com/watch?v=EKjYspj9MaM

    Tip 1: SplitText Masking

    GSAP’s SplitText just went through a major overhaul. It has 14 new features and weighs in at roughly 7kb.

    SplitText allows you to split HTML text into characters, lines, and words. It has powerful features to support screen-readers, responsive layouts, nested elements, foreign characters, emoji and more.

    My favorite feature is its built-in support for masking (available in SplitText version 3.13+).

Prior to this version of SplitText, you would have to manually nest your animated text in parent divs with overflow set to hidden or clip in the CSS.

    SplitText now does this for you by creating “wrapper divs” around the elements that we apply masking to.

    Basic Implementation

    The code below will split the h1 tag into chars and also apply a mask effect, which means the characters will not be visible when they are outside their bounding box.

    const split = SplitText.create("h1", {
    	type:"chars",
    	mask:"chars"
    })

    Demo: Split Text Masking (Basic)

    See the Pen
    Codrops Tip 1: Split Text Masking – Basic by Snorkl.tv (@snorkltv)
    on CodePen.

    This simple implementation works great and is totally fine.

    However, if you inspect the DOM you will see that 2 new <div> elements are created for each character:

    • an outer div with overflow:clip
    • an inner div with text 

With 17 characters to split, this creates 34 divs, as shown in the simplified DOM structure below:

    <h1>SplitText Masking
    	<div> <!-- char wrapper with overflow:clip -->
    		<div>S</div>
    	</div>
    	<div> <!-- char wrapper with overflow:clip -->
    		<div>p</div>
    	</div>
    	<div> <!-- char wrapper with overflow:clip -->
    		<div>l</div>
    	</div>
    	<div> <!-- char wrapper with overflow:clip -->
    		<div>i</div>
    	</div>
    	<div> <!-- char wrapper with overflow:clip -->
    		<div>t</div>
    	</div>	
    	...
    </h1>

    The More Efficient Approach

    If you want to minimize the amount of DOM elements created you can split your text into characters and lines. Then you can just set the masking on the lines element like so:

    const split = SplitText.create("h1", {
    	type:"chars, lines",
    	mask:"lines"
    })

    Demo: Split Text Masking (Better with chars and lines)

    See the Pen
    Codrops Tip 1: Split Text Masking – Better with chars and lines by Snorkl.tv (@snorkltv)
    on CodePen.

Now, if you inspect the DOM, you will see:

    • 1 line wrapper div with overflow:clip
    • 1 line div
    • 1 div per character 

With 17 characters to split this creates only 19 divs in total:

    <h1>SplitText Masking
    	<div> <!-- line wrapper with overflow:clip -->
    		<div> <!-- line -->
    			<div>S</div>
    			<div>p</div>
    			<div>l</div>
    			<div>i</div>
    			<div>t</div>
    			...
    		</div> 
    	</div> 
    </h1>

    Tip 2: Setting the Stagger Direction

    From my experience 99% of stagger animations go from left to right. Perhaps that’s just because it’s the standard flow of written text.

    However, GSAP makes it super simple to add some animation pizzazz to your staggers.

To change the direction from which staggered animations start, you need to use the object syntax for the stagger value.

    Normal Stagger

    Typically the stagger value is a single number which specifies the amount of time between the start of each target element’s animation.

    gsap.to(targets, {x:100, stagger:0.2}) // 0.2 seconds between the start of each animation

    Stagger Object

    By using the stagger object we can specify multiple parameters to fine-tune our staggers such as each, amount, from, ease, grid and repeat. See the GSAP Stagger Docs for more details.
    Our focus today will be on the from property which allows us to specify from which direction our staggers should start.

gsap.to(targets, {
  x: 100,
  stagger: {
    each: 0.2,      // amount of time between the start of each animation
    from: "center"  // animate from the center of the targets array
  }
})

The from property in the stagger object can be any one of these string values:

    • “start” (default)
    • “center”
    • “end”
    • “edges”
    • “random”
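Under the hood, the from setting just changes how far each element’s delay is from a chosen origin index. Here is a rough, framework-free sketch of the idea (illustrative only; GSAP computes this internally and supports more options, such as grid-aware staggers):

```javascript
// Sketch of how a stagger origin ("start", "center", "end") maps to delays:
// an element's delay grows with its distance from the origin index.
function staggerDelays(count, each, from) {
  const origin =
    from === "center" ? (count - 1) / 2 :
    from === "end"    ? count - 1 :
    0; // "start" is the default
  return Array.from({ length: count }, (_, i) => Math.abs(i - origin) * each);
}

staggerDelays(5, 0.2, "center"); // → [0.4, 0.2, 0, 0.2, 0.4]
```

“edges” is roughly the reverse: elements nearest the edges start first, so the largest delays sit in the middle.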

    Demo: Stagger Direction Timeline

    In this demo the characters animate in from center and then out from the edges.

    See the Pen
    Codrops Tip 2: Stagger Direction Timeline by Snorkl.tv (@snorkltv)
    on CodePen.

    Demo: Stagger Direction Visualizer

    See the Pen
    Codrops Tip 2: Stagger Direction Visualizer by Snorkl.tv (@snorkltv)
    on CodePen.

    Tip 3: Wrapping Array Values

    The gsap.utils.wrap() function allows you to pull values from an array and apply them to multiple targets. This is great for allowing elements to animate in from opposite directions (like a zipper), assigning a set of colors to multiple objects and many more creative applications.

    Setting Colors From an Array

    I love using gsap.utils.wrap() with a set() to instantly manipulate a group of elements.

    // split the header
    const split = SplitText.create("h1", {
    	type:"chars"
    })
    
    //create an array of colors
    const colors = ["lime", "yellow", "pink", "skyblue"]
    
    // set each character to a color from the colors array
    gsap.set(split.chars, {color:gsap.utils.wrap(colors)})

When the last color in the array (skyblue) has been used, GSAP will wrap back to the beginning of the array and apply lime to the next element.
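Conceptually, the wrapping is just an index-modulo-array-length lookup. A minimal re-implementation of that behavior (illustrative only; use gsap.utils.wrap in real code):

```javascript
// Minimal sketch of wrap(): cycle through the array by index.
function wrap(values) {
  return (index) => values[index % values.length];
}

const colors = ["lime", "yellow", "pink", "skyblue"];
const pick = wrap(colors);
pick(3); // "skyblue", the last color
pick(4); // "lime", wrapped back to the start
```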

    Animating from Alternating Directions

    In the code below each target will animate in from alternating y values of -50 and 50. 

    Notice that you can define the array directly inside of the wrap() function.

    const tween = gsap.from(split.chars, {
    	y:gsap.utils.wrap([-50, 50]),
    	opacity:0,
    	stagger:0.1
    }) 

    Demo: Basic Wrap

    See the Pen
    Codrops Tip 3: Basic Wrap by Snorkl.tv (@snorkltv)
    on CodePen.

    Demo: Fancy Wrap

    In the demo below there is a timeline that creates a sequence of animations that combine stagger direction and wrap. Isn’t it amazing what GSAP allows you to do with just a few simple shapes and a few lines of code?

    See the Pen
    Codrops Tip 3: Fancy Wrap by Snorkl.tv (@snorkltv)
    on CodePen.

    As you watch the animation be sure to go through the GSAP code to see which tween is running each effect. 

    I strongly recommend editing the animation values and experimenting.

    Tip 4: Easy Randomization with the “random()” String Function

    GSAP has its own random utility function gsap.utils.random() that lets you tap into convenient randomization features anywhere in your JavaScript code.

    // generate a random number between 0 and 450
    const randomNumber = gsap.utils.random(0, 450)

    To randomize values in animations we can use the random string shortcut which saves us some typing.

    //animate each target to a random x value between 0 and 450
    gsap.to(targets, {x:"random(0, 450)"})
    
    //the third parameter sets the value to snap to
    gsap.to(targets, {x:"random(0, 450, 50)"}) // random number will be an increment of 50
    
//pick a random value from an array for each target
gsap.to(targets, {fill:"random([pink, yellow, orange, salmon])"})
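The snapping behavior can be pictured as rounding a uniform random value to the nearest multiple of the increment. A hypothetical sketch of those semantics (randomSnapped is not a GSAP API, just an illustration):

```javascript
// Illustrative take on "random(min, max, snap)": pick a uniform value
// in [min, max], then round it to the nearest multiple of `snap`.
function randomSnapped(min, max, snap) {
  const value = min + Math.random() * (max - min);
  return Math.round(value / snap) * snap;
}

randomSnapped(0, 450, 50); // always one of 0, 50, 100, ..., 450
```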

    Demo: Random String

    See the Pen
    Codrops Tip 4: Random String by Snorkl.tv (@snorkltv)
    on CodePen.

    TIP 5: repeatRefresh:true

    This next tip appears to be pure magic as it allows our animations to produce new results each time they repeat.

    GSAP internally stores the start and end values of an animation the first time it runs. This is a performance optimization so that each time it repeats there is no additional work to do. By default repeating tweens always produce the exact same results (which is a good thing).

    When dealing with dynamic or function-based values such as those generated with the random string syntax “random(0, 100)” we can tell GSAP to record new values on repeat by setting repeatRefresh:true

    You can set repeatRefresh:true in the config object of a single tween OR on a timeline.

    //use on a tween
gsap.to(target, {x:"random(50, 100)", repeat:10, repeatRefresh:true})
    
    //use on a timeline
    const tl = gsap.timeline({repeat:10, repeatRefresh:true})
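The caching-versus-refreshing behavior can be modeled with a tiny sketch; the names makeTween and nextRandom are purely hypothetical, chosen to mirror how GSAP records values on the first run and, with repeatRefresh:true, re-records them at the start of each repeat:

```javascript
// Contrast cached vs refreshed end values.
function makeTween(getEndValue, repeatRefresh) {
  let cached = getEndValue(); // recorded on the first run
  return {
    nextRepeatEndValue() {
      if (repeatRefresh) cached = getEndValue(); // re-record on each repeat
      return cached;
    },
  };
}

let n = 0;
const nextRandom = () => ++n; // deterministic stand-in for "random(...)"

const frozen = makeTween(nextRandom, false);   // always replays its first value
const refreshed = makeTween(nextRandom, true); // picks a new value every repeat
```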

    Demo: repeatRefresh Particles

    The demo below contains a single timeline with repeatRefresh:true.

    Each time it repeats the circles get assigned a new random scale and a new random x destination.

    Be sure to study the JS code in the demo. Feel free to fork it and modify the values.

    See the Pen
    Codrops Tip 5: repeatRefresh Particles by Snorkl.tv (@snorkltv)
    on CodePen.

    TIP 6: Tween The TimeScale() of an Animation

    GSAP animations have getter / setter values that allow you to get and set properties of an animation.

    Common Getter / Setter methods:

    • paused() gets or sets the paused state
    • duration() gets or sets the duration
    • reversed() gets or sets the reversed state
    • progress() gets or sets the progress
    • timeScale() gets or sets the timeScale

    Getter Setter Methods in Usage

    animation.paused(true) // sets the paused state to true
    console.log(animation.paused()) // gets the paused state
    console.log(!animation.paused()) // gets the inverse of the paused state

    See it in Action

    In the demo from the previous tip there is code that toggles the paused state of the particle effect.

    //click to pause
    document.addEventListener("click", function(){
    	tl.paused(!tl.paused()) 
    })

    This code means “every time the document is clicked the timeline’s paused state will change to the inverse (or opposite) of what it currently is”.

    If the animation is paused, it will become “unpaused” and vice-versa.

This works great, but I’d like to show you a trick for making it less abrupt and smoothing it out.

    Tweening Numeric Getter/Setter Values

    We can’t tween the paused() state as it is either true or false.

    Where things get interesting is that we can tween numeric getter / setter properties of animations like progress() and timeScale().

    timeScale() represents a factor of an animation’s playback speed.

• timeScale(1): playback at normal speed
• timeScale(0.5): playback at half speed
• timeScale(2): playback at double speed

    Setting timeScale()

    //create an animation with a duration of 5 seconds
    const animation = gsap.to(box, {x:500, duration:5})
    
    //playback at half-speed making it take 10 seconds to play
    animation.timeScale(0.5)

    Tweening timeScale()

    const animation = gsap.to(box, {x:500, duration:5}) // create a basic tween
    
    // Over the course of 1 second reduce the timeScale of the animation to 0.5
    gsap.to(animation, {timeScale:0.5, duration:1})

    Dynamically Tweening timeScale() for smooth pause and un-pause

Instead of abruptly changing the paused state of the animation as the particle demo above does, we are now going to tween the timeScale() for a MUCH smoother effect.

    Demo: Particles with timeScale() Tween

    See the Pen
    Codrops Tip 6: Particles with timeScale() Tween by Snorkl.tv (@snorkltv)
    on CodePen.

    Click anywhere in the demo above to see the particles smoothly slow down and speed up on each click.

    The code below basically says “if the animation is currently playing then we will slow it down or else we will speed it up”. Every time a click happens the isPlaying value toggles between true and false so that it can be updated for the next click.
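A minimal sketch of that click handler, assuming the particle timeline from the earlier demo is stored in tl (the helper name nextTimeScale is illustrative):

```javascript
// Smoothly ramp the timeline's speed instead of hard-pausing it.
let isPlaying = true;

// Pure helper: target speed for the next click (0 = stopped, 1 = full speed).
function nextTimeScale(currentlyPlaying) {
  return currentlyPlaying ? 0 : 1;
}

function onDocumentClick() {
  // Tween timeScale over 1 second rather than toggling paused() abruptly.
  gsap.to(tl, { timeScale: nextTimeScale(isPlaying), duration: 1, overwrite: true });
  isPlaying = !isPlaying; // flip the flag for the next click
}

// Wire it up in the browser:
// document.addEventListener("click", onDocumentClick);
```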

    Tip 7: GSDevTools Markers and Animation IDs

    Most of the demos in this article have used GSDevTools to help us control our animations. When building animations I just love being able to scrub at my own pace and study the sequencing of all the moving parts.

    However, there is more to this powerful tool than just scrubbing, playing and pausing.

    Markers

The in and out markers allow us to loop ANY section of an animation. As an added bonus, GSDevTools remembers the previous position of the markers so that each time we reload our animation it will start and end at the same time.

    This makes it very easy to loop a particular section and study it.

    Image from GSDevTools Docs

    Markers are a huge advantage when building animations longer than 3 seconds.

    To explore, open The Fancy Wrap() demo in a new window, move the markers and reload.

    Important: The markers are only available on screens wider than 600px. On small screens the UI is minimized to only show basic controls.

    Setting IDs for the Animation Menu

    The animation menu allows us to navigate to different sections of our animation based on an animation id. When dealing with long-form animations this feature is an absolute life saver.

Since GSAP’s syntax makes creating complex sequences a breeze, it is not uncommon to find yourself working on animations that run beyond 10, 20 or even 60 seconds!

    To set an animation id:

    const tl = gsap.timeline({id:"fancy"})
    
    //Add the animation to GSDevTools based on variable reference
    GSDevTools.create({animation:tl})
    
//OR add the animation to GSDevTools based on its id
    GSDevTools.create({animation:"fancy"})

    With the code above the name “fancy” will display in GSDevTools.

    Although you can use the id with a single timeline, this feature is most helpful when working with nested timelines as discussed below.

    Demo: GSAP for Everyone

    See the Pen
    Codrops Tip 7: Markers and Animation Menu by Snorkl.tv (@snorkltv)
    on CodePen.

    This demo is 26 seconds long and has 7 child timelines. Study the code to see how each timeline has a unique id that is displayed in the animation menu.

    Use the animation menu to navigate to and explore each section.

    Important: The animation menu is only available on screens wider than 600px.

    Hopefully you can see how useful markers and animation ids can be when working with these long-form, hand-coded animations!

    Want to Learn More About GSAP?

    I’m here to help. 

    I’ve spent nearly 5 years archiving everything I know about GSAP in video format spanning 5 courses and nearly 300 lessons at creativeCodingClub.com.

I spent many years “back in the day” using GreenSock’s ActionScript tools as a Flash developer, and this experience led to me being hired at GreenSock when they switched to JavaScript. My time at GreenSock had me creating countless demos, videos and learning resources.

    Spending years answering literally thousands of questions in the support forums has left me with a unique ability to help developers of all skill levels avoid common pitfalls and get the most out of this powerful animation library.

    It’s my mission to help developers from all over the world discover the joy of animating with code through affordable, world-class training.

    Visit Creative Coding Club to learn more.



    Source link

• injecting and testing the current time with TimeProvider and FakeTimeProvider | Code4IT

    injecting and testing the current time with TimeProvider and FakeTimeProvider | Code4IT



    Things that depend on concrete stuff are difficult to use when testing. Think of the file system: to have tests work properly, you have to ensure that the file system is structured exactly as you are expecting it to be.

    A similar issue occurs with dates: if you create tests based on the current date, they will fail the next time you run them.

    In short, you should find a way to abstract these functionalities, to make them usable in the tests.

    In this article, we are going to focus on the handling of dates: we’ll learn what the TimeProvider class is, how to use it and how to mock it.

    The old way for handling dates: a custom interface

Back in the day, the most straightforward approach to adding abstraction around date management was to manually create an interface, or an abstract class, to wrap access to the current date:

    public interface IDateTimeWrapper
    {
      DateTime GetCurrentDate();
    }
    

Then, the standard implementation satisfied the interface by returning the UTC date:

    public class DateTimeWrapper : IDateTimeWrapper
    {
      public DateTime GetCurrentDate() => DateTime.UtcNow;
    }
    

    A similar approach is to have an abstract class instead:

    public abstract class DateTimeWrapper
    {
  public virtual DateTime GetCurrentDate() => DateTime.UtcNow;
    }
    

Easy: you then add an instance of it to the DI engine, and you are good to go.

    The only problem? You have to do it for every project you are working on. Quite a waste of time!

    How to use TimeProvider in a .NET application to get the current date

Along with .NET 8, the .NET team released an abstract class named TimeProvider. Beyond providing an abstraction for the local time, this class exposes methods for working with high-precision timestamps and time zones.

    It’s important to note that dates are returned as DateTimeOffset instances, not DateTime.

    TimeProvider is available out of the box, even in a .NET Console application, accessible as a singleton:

    static void Main(string[] args)
    {
      Console.WriteLine("Hello, World!");
      
      DateTimeOffset utc = TimeProvider.System.GetUtcNow();
      Console.WriteLine(utc);
    
      DateTimeOffset local = TimeProvider.System.GetLocalNow();
      Console.WriteLine(local);
    }
    

    If instead you need Dependency Injection, for example in .NET APIs, register it as a singleton, like this:

    builder.Services.AddSingleton(TimeProvider.System);
    

    So that you can use it like this:

    public class SummerVacationCalendar
    {
      private readonly TimeProvider _timeProvider;

      public SummerVacationCalendar(TimeProvider timeProvider)
      {
        this._timeProvider = timeProvider;
      }

      public bool ItsVacationTime()
      {
        var today = _timeProvider.GetLocalNow();
        return today.Month == 8;
      }
    }
    

    How to test TimeProvider with FakeTimeProvider

    Now, how can we test the ItsVacationTime method of the SummerVacationCalendar class?

    We can use the Microsoft.Extensions.TimeProvider.Testing NuGet library, also provided by Microsoft, which offers a FakeTimeProvider class that acts as a stub for the TimeProvider abstract class:

    TimeProvider.Testing NuGet package

    By using the FakeTimeProvider class, you can set the current UTC and Local time, as well as configure the other options provided by TimeProvider.

    Here’s an example:

    [Fact]
    public void WhenItsAugust_ShouldReturnTrue()
    {
      // Arrange
      var fakeTime = new FakeTimeProvider();
      fakeTime.SetUtcNow(new DateTimeOffset(2025, 8, 14, 22, 24, 12, TimeSpan.Zero));
      var sut = new SummerVacationCalendar(fakeTime);

      // Act
      var isVacation = sut.ItsVacationTime();

      // Assert
      Assert.True(isVacation);
    }

    [Fact]
    public void WhenItsNotAugust_ShouldReturnFalse()
    {
      // Arrange
      var fakeTime = new FakeTimeProvider();
      fakeTime.SetUtcNow(new DateTimeOffset(2025, 3, 14, 22, 24, 12, TimeSpan.Zero));
      var sut = new SummerVacationCalendar(fakeTime);

      // Act
      var isVacation = sut.ItsVacationTime();

      // Assert
      Assert.False(isVacation);
    }
    

    Further readings

    TimeProvider actually offers far more functionality than just returning the UTC and local time.

    Maybe we’ll explore them in the future. But for now, do you know how the DateTimeKind enumeration impacts the way you create new DateTimes?

    🔗 C# tip: create correct DateTimes with DateTimeKind | Code4IT

    This article first appeared on Code4IT 🐧

    However, always remember to test code against static values rather than the actual time. And if for some reason you cannot add TimeProvider to your classes, there are other, less intrusive strategies you can use (which also work for other kinds of dependencies, like the file system):

    🔗 3 ways to inject DateTime and test it | Code4IT

    Wrapping up

    I hope you enjoyed this article! Let’s keep in touch on LinkedIn, Twitter or BlueSky! 🤜🤛

    Happy coding!

    🐧





    Source link

  • Design as Rhythm and Rebellion: The Work of Enrico Gisana

    Design as Rhythm and Rebellion: The Work of Enrico Gisana


    My name is Enrico Gisana, and I’m a creative director, graphic and motion designer.

    I’m the co-founder of GG—OFFICE, a small independent visual arts studio based in Modica, Sicily. I consider myself a multidisciplinary designer because I bring together different skills and visual languages. I work across analog and digital media, combining graphic design, typography, and animation, often blending these elements through experimental approaches. My design approach aims to push the boundaries of traditional graphic conventions, constantly questioning established norms to explore new visual possibilities.

    My work mainly focuses on branding, typography, and motion design, with a particular emphasis on kinetic typography.

    Between 2017 and 2025, I led numerous graphic and motion design workshops at various universities and art academies in Italy, including Abadir (Catania), Accademia di Belle Arti di Frosinone, Accademia di Belle Arti di Roma, CFP Bauer (Milan), and UNIRSM (San Marino). Since 2020, I’ve been teaching motion design at Abadir Academy in Catania, and since 2025, kinetic typography at CFP Bauer in Milan.

    Featured work

    TYPEXCEL — Variable font

    I designed an online half-day workshop for high school students on the occasion of an open day at the Academy of Design and Visual Communication Abadir, held in 2021.

    The goal of this workshop was to create a first contact with graphic design, but most of all with typography, using an Excel spreadsheet as a modular grid composed of editable and variable cells, instead of professional software which requires specific knowledge.

    The cell pattern allowed the students to create letters, icons, and glyphs. It was a stimulating exercise that helped them discover and develop their own design and creative skills.

    This project was published in Slanted Magazine N°40 “Experimental Type”.

    DEMO Festival

    DEMO Festival (Design in Motion Festival) is one of the world’s most prominent motion design festivals, founded by the renowned Dutch studio Studio Dumbar. The festival takes over the entire digital screen network of Amsterdam Central Station, transforming public space into a 24-hour exhibition of cutting-edge motion work from around the globe.

    I’ve had the honor of being selected multiple times to showcase my work at DEMO: in 2019 with EYE SEQUENCE; in 2022 with ALIEN TYPE and VERTICAL; and again in 2025 with ALIEN TRIBE, HELLOCIAOHALLOSALUTHOLA, and FREE JAZZ.

    In the 2025 edition, ALIEN TRIBE and HELLOCIAOHALLOSALUTHOLA were also selected for the Special Screens program, which extended the festival’s presence beyond the Netherlands. These works were exhibited in digital spaces across cities including Eindhoven, Rotterdam, Tilburg, Utrecht, Hamburg, and Düsseldorf, reaching a broader international audience.

    MARCO FORMENTINI

    My collaboration with Italian footwear designer Marco Formentini, based in Amsterdam, began with the creation of his visual identity and gradually expanded into other areas, including apparel experiments and the design of his personal website.

    Each phase of the project reflects his eclectic and process-driven approach to design, while also allowing me to explore form, texture, and narrative through different media.

    Below is a closer look at the three main outputs of this collaboration: logo, t-shirt, and website.

    Logo

    Designed for Italian footwear designer Marco Formentini, this logo reflects his broad, exploratory approach to design. Rather than sticking to a traditional monogram, I fused the letters “M” and “F” into a single, abstract shape, something that feels more like a symbol than a set of initials. The result is a wild, otherworldly mark that evokes movement, edge, and invention, mirroring Marco’s ability to shift across styles and scales while always keeping his own perspective.

    Website

    I conceived Marco Formentini’s website as a container, a digital portfolio without a fixed structure. It gathers images, sketches, prototypes, and renderings not through a linear narrative but through a visual flow that embraces randomness.

    The layout is split into two vertical columns, each filled with different types of visual content. By moving the cursor left or right, the columns dynamically resize, allowing the user to shift focus and explore the material in an intuitive and fluid way. This interactive system reflects Marco’s eclectic approach to footwear design, a space where experimentation and process take visual form.

    Website development by Marco Buccolo.

    Check it out: marco-formentini.com

    T—Shirt

    Shortly after working on his personal brand, I shared with Marco Formentini a few early graphic proposals for a potential t-shirt design, while he happened to be traveling through the Philippines with his friend Jo.

    Without waiting for a full release, he spontaneously had a few pieces printed at a local shop he stumbled upon during the trip, mixing one of the designs on the front with a different proposal on the back. An unexpected real-world test run for the identity, worn into the streets before even hitting the studio.

    Ditroit

    This poster was created to celebrate the 15th anniversary of Ditroit, a motion design and 3D studio based in Milan.

    At the center is an expressive “15”, a tribute to the studio’s founder, a longtime friend and former graffiti companion. The design reconnects the present with our shared creative roots and the formative energy of those early years.

    Silver on black: a color pairing rooted in our early graffiti experiments, reimagined here to celebrate fifteen years of visual exploration.

    Tightype

    A series of typographic animations I created for the launch of Habitas, the typeface designed by Tightype and released in 2021.

    The project explores type in motion, not just as a vehicle for content but as a form of visual expression in itself. Shapes bounce, rotate and multiply, revealing the personality of the font through rhythm and movement.

    Jane Machine

    SH SH SH SH is the latest LP from Jane Machine.

    The cover is defined by the central element of the lips, directly inspired by the album’s title. The lips not only mimic the movement of the “sh” sound but also evoke the noise of tearing paper. I amplified this effect through the creative process by first printing a photograph of the lips and then tearing it, introducing a tactile quality that contrasts with and complements the more electronic aesthetic of the colors and typography.

    Background

    I’m a creative director and graphic & motion designer with a strong focus on typography.

    My visual journey started around the age of 12, shaped by underground culture: I was into graffiti, hip hop, breakdancing, and skateboarding.

    As I grew up, I explored other scenes, from punk to tekno, from drum and bass to more experimental electronic music. What always drew me in, beyond the music itself, was the visual world around it: free party flyers, record sleeves, logos, and type everywhere.

    Between 2004 and 2010, I produced tekno music, an experience that deeply shaped my approach to design. That’s where I first learned about timelines, beats, and rhythm, all elements that today are at the core of how I work with motion.

    Art has also played a major role in shaping my visual culture, from the primitive signs of hieroglyphs to Cubism, Dadaism, Russian Constructivism, and the expressive intensity of Antonio Ligabue.

    The aesthetics and attitude of those worlds continue to influence everything I do and how I see things.

    In 2013, I graduated in Graphic Design from IED Milano and started working with various agencies. In 2014, I moved back to Modica, Sicily, where I’m still based today.

    Some of my animation work has been featured at DEMO Festival, the international motion design event curated by Studio Dumbar, in the 2019, 2022, and 2025 editions.

    In 2022, I was published in Slanted Magazine #40 (EXPERIMENTAL TYPE) with TYPEXCEL, Variable font, a project developed for a typography workshop aimed at high school students, entirely built inside an Excel spreadsheet.

    Since 2020, I’ve been teaching Motion Design at Abadir, Academy of Design and Visual Communication in Catania, and in 2025 I started teaching Type in Motion at Bauer in Milan.

    In 2021, together with Francesca Giampiccolo, I founded GG—OFFICE, a small independent visual studio based in Modica, Sicily.

    GG—OFFICE is a design space where branding and motion meet through a tailored and experimental approach. Every project grows from dialogue, evolves through research, and aims to shape contemporary, honest, and visually forward identities.

    In 2025, Francesca and I gave a talk on the theme of madness at Desina Festival in Naples, a wild, fun, and beautifully chaotic experience.

    Design Philosophy

    My approach to design is rooted in thought (I think a lot), as well as in research, rhythm, and an almost obsessive production of drafts.

    Every project is a unique journey where form always follows meaning, and never simply does what the client says.

    This is not about being contrary; it’s about bringing depth, intention and a point of view to the process.

    I channel the raw energy and DIY mindset of the subcultures that shaped me early on. I’m referring to those gritty, visual sound-driven scenes that pushed boundaries and blurred the line between image and sound. I’m not talking about the music itself, but about the visual culture that surrounded it. That spirit still fuels my creative engine today.

    Typography is my playground, not just a visual tool but a way to express structure, rhythm and movement.

    Sometimes I push letterforms to their limit, to the point where they lose readability and become pure visual matter.

    Whether I’m building a brand identity or animating graphics, I’m always exploring new visual languages, narrative rhythms and spatial poetry.

    Tools and Techniques

    I work across analog and digital tools, but most of my design and animation takes shape in Adobe Illustrator, After Effects, InDesign and Photoshop. And sometimes even Excel 🙂 especially when I want to break the rules and rethink typography in unconventional ways.

    I’m drawn to processes that allow for exploration and controlled chaos. I love building visual systems, breaking them apart and reconstructing them with intention.

    Typography, to me, is a living structure, modular, dynamic and often influenced by visual or musical rhythm.

    My workflow starts with in-depth research and a large amount of hand sketching.

    I then digitize the material, print it, manipulate it manually by cutting, collaging and intervening physically, then scan it again and bring it back into the digital space.

    This back-and-forth between mediums helps me achieve a material quality and a sense of imperfection that pure digital work often lacks.

    Inspiration

    Beyond the underground scenes and art movements I mentioned earlier, my inspiration comes from everything around me. I’m a keen observer and deeply analytical. Since I was a kid, I’ve been fascinated by people’s gestures, movements, and subtle expressions.

    For example, when I used to go to parties, I would often stand next to the DJ, not just to watch their technique, but to study their body language, movements, and micro-expressions. Even the smallest gesture can spark an idea.

    I believe inspiration is everywhere. It’s about being present and training your eye to notice the details most people overlook.

    Future Goals

    I don’t have a specific goal or destination. My main aim is to keep doing things well and to never lose my curiosity. For me, curiosity is the fuel that drives creativity and growth, so I want to stay open, keep exploring, and enjoy the process without forcing a fixed outcome.

    Message to Readers

    Design is not art!

    Design is method, planning, and process. However, that method can, and sometimes should, be challenged, as long as you remain fully aware of what you are doing. It is essential that what you create can be reproduced consistently and, depending on the project, works effectively across different media and formats. I always tell my students that you need to know the rules before you can break them. To do good design, you need a lot of passion and a lot of patience.

    Contact



    Source link

  • Try Cross-browser Testing! (For Free!)

    Try Cross-browser Testing! (For Free!)


    TLDR: You can cross-browser test your website in real browsers for free without installing anything by using Browserling. It runs all browsers (Chrome, Firefox, Safari, Edge, etc) on all systems so you don’t need to download them or keep your own browser stack.

    What Is Cross-browser Testing?

    Cross-browser testing means checking how a website looks and works in different browsers. Every browser, like Chrome, Firefox, Edge, or Safari, shows websites a little differently. Sometimes your site looks fine in one but breaks in another. Cross-browser testing makes sure your site works for everyone.

    Why Do I Need It?

    Because your visitors don’t all use the same browser. Some people are on Chrome, others on Safari or Firefox, and some still use Internet Explorer. If your site only works on one browser, you’ll lose visitors. Cross-browser testing helps you catch bugs before your users do.

    Can I Test Mobile Browsers Too?

    Yes, cross-browser testing tools like Browserling let you check both desktop and mobile versions. You can quickly switch between screen sizes and devices to see how your site looks on phones, tablets, and desktops.

    Do I Have to Install Different Browsers?

    Nope! That’s the best part. You don’t need to clutter your computer with ten different browsers. Instead, cross-browser testing runs them in the cloud. You just pick the browser you want and test right from your own browser window.

    Is It Safe?

    Totally. You’re not installing anything shady, and you’re not downloading random browsers from sketchy websites. Everything runs on Browserling’s secure servers.

    What If I Just Want to Test a Quick Fix?

    That’s exactly what the free version is for. Got a CSS bug? A weird layout issue? Just load up the browser you need, test your page, and see how it behaves.

    How Is This Different From Developer Tools?

    Dev tools are built into browsers and help you inspect your site, but they can’t show you how your site looks in browsers you don’t have. Cross-browser testing lets you actually run your site in those missing browsers and see the real deal.

    Is It Good for Developers and Testers?

    For sure. Developers use cross-browser testing to make websites look right across platforms. QA testers use it to make sure new releases don’t break old browsers. Even hobbyists can use it to make their personal sites look better.

    Is It Free?

    Yes, Browserling has a free plan with limited time per session. If you need more testing power, they also have paid options. But for quick checks, the free plan is usually enough.

    What Is Browserling?

    Browserling is a free cloud-based cross-browser testing service. It lets you open real browsers on real machines and test your sites instantly. The latest geo-browsing feature allows you to route your tests through 20+ countries to see how websites behave across regions or to bypass sites that try to block datacenter traffic. Plus, the latest infrastructure update added admin rights, WSL with Ubuntu/Kali, build tools, custom resolutions, and more.

    Who Uses Browserling?

    Browserling is trusted by developers, IT teams, schools, banks, and even governments. Anyone who needs websites to “just work” across browsers uses Browserling. Millions of people test their sites on it every month.

    Happy testing!



    Source link

  • A Behind-the-Scenes Look at the New Jitter Website

    A Behind-the-Scenes Look at the New Jitter Website



    If Jitter isn’t on your radar yet, it’s a motion design tool for creative teams that makes creating animated content, from social media assets and ads to product animations and interface mockups, easy and fun.

    Think of it as Figma meets After Effects: intuitive, collaborative, and built for designers who want to bring motion into their workflows without the steep learning curve of traditional tools.

    Why We Redesigned Our Website

    Our previous site had served us well, but it also remained mostly unchanged since we launched Jitter nearly two years ago. The old website focused heavily on the product’s features, but didn’t really communicate its value and use cases. In 2025, we decided it was time for a full refresh.

    The main goal? Not just to highlight what Jitter does, but articulate why it changes the game for motion design.

    We’ve had hundreds of conversations with creative professionals, from freelancers and brand designers to agencies and startups, and heard four key benefits mentioned consistently:

    1. Ease of use
    2. Creativity
    3. Speed
    4. Collaboration

    These became the pillars of the new site experience.

    We also wanted to make room for growth: a more cohesive brand, better storytelling, real-world customer examples, and educational content to help teams get the most out of Jitter.

    Another major shift was in our audience. The first version of the website was speaking to every designer, highlighting simplicity and familiarity. But as the product evolved, it became clear that Jitter shines the most when used collaboratively across teams. The new website reflects that focus.

    Shaping Our Positioning

    We didn’t define our “how, what, and why” in isolation. Throughout 2024, we spoke to dozens of creative teams, studios, and design leaders, and listened closely.

    We used this ongoing feedback to shape the way we talk about Jitter ourselves: which problems it solves, where it fits in the design workflow, and why teams love it. The new website is a direct result of that research.

    At the same time, we didn’t want Jitter to feel too serious or corporate. Even though it’s built for teams, we aimed to keep the brand light, fun, and relatable. Motion design should be exciting, not intimidating, and we wanted that to come through in the way Jitter sounds and feels.

    Designing With Jitter

    We also walked the talk, using Jitter to design all animations and prototype every interaction across the new site.

    From menu transitions to the way cards animate on scroll, all micro-interactions were designed in Jitter. It gave us speed, clarity, and a single source of truth, and eliminated a lot of the back-and-forth in the handoff process.

    Our development partners at Antinomy Studio and Ingamana used Jitter too. They prototyped transitions and UI motion directly in the tool to validate ideas and communicate back to our team. It was great to see developers using motion as a shared language, not a handoff artifact.

    Building Together with Antinomy Studio

    The development of the new site was handled in collaboration with the talented team at Antinomy Studio.

    The biggest technical challenge was the large horizontal scroll experience on the homepage. It needed to feel natural, responsive, and smooth across devices, and maintain high performance without compromising on the visuals.

    The site was built using React and GSAP for complex, timeline-based animations and transitions.

    “The large horizontal scroll was particularly complicated and required significant responsive changes. Instead of defining overly complex timelines where screen width values would change the logic of the animation in JavaScript, we used progress values as CSS variables. This allowed us to use calc() functions to translate and scale elements, while the GSAP timeline only updates values from 0 to 1. So easy to understand and maintain!”

    — Baptiste Briel, Antinomy
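    The pattern Baptiste describes can be sketched in a few lines. This is an illustrative sketch, not the site’s actual code (the function name and the `--p` variable are hypothetical): the JavaScript timeline only drives a single 0-to-1 progress value, exposed as a CSS custom property, and the stylesheet does the breakpoint-specific math with calc().

```javascript
// Hypothetical sketch: JS emits one 0..1 progress value as a CSS variable;
// CSS derives the actual transforms from it, e.g.:
//   .track { transform: translateX(calc(var(--p) * -300vw)); }
function progressToCssVars(progress) {
  const p = Math.min(1, Math.max(0, progress)); // clamp to [0, 1]
  return { '--p': p.toFixed(4) };
}
```

    In the browser, you would apply the result on each timeline update (for example via `element.style.setProperty('--p', value)`), so responsive changes live entirely in CSS while the animation logic stays a simple 0-to-1 ramp.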

    “We’ve promoted the use of CSS as much as possible for high-performance hover effects and transitions. We’ve even used the new linear() easing function to bring a bouncy feel to our CSS animations.

    There’s a great tool created by Jake Archibald on generating spring-like CSS easing functions that you can paste as CSS variables. It’s so much fun to play with, and it’s also something that the Jitter team has implemented in their software, so it was super easy to review and tweak for both design and engineering teams.

    Jitter animations were exported as Lottie files and integrated directly, making the experience dynamic and lightweight. It’s a modern stack that supports our need for speed and flexibility, both in the frontend and behind the scenes.”

    — Baptiste Briel, Antinomy
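    The spring-to-linear() idea mentioned in the quote can be sketched roughly as follows. This is a simplified illustration, not Jake Archibald’s actual generator: sample an underdamped spring at evenly spaced times and emit the samples as stops for CSS linear(). The stiffness and damping constants are illustrative, not tuned values.

```javascript
// Simplified sketch: sample a spring response settling toward 1 and
// format the samples as a CSS linear() easing function string.
function springLinearEasing(samples = 10) {
  const stops = [];
  for (let i = 0; i <= samples; i++) {
    const t = i / samples;
    // Closed-form underdamped response: overshoots 1, then settles.
    const v = 1 - Math.exp(-6 * t) * Math.cos(12 * t);
    stops.push(v.toFixed(3));
  }
  return `linear(${stops.join(', ')})`;
}
```

    The returned string can then be stored as a CSS variable (for example `--spring-ease`) and used as the `transition-timing-function`, which is what makes such generators easy to review and tweak across design and engineering.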

    What We Learned

    This redesign taught us a few valuable lessons:

    • Start with benefits, not features. Users don’t care what your product does until they understand how it can help them.
    • Design with your real audience in mind. Jitter for solo designers and Jitter for teams are two different stories. Clarifying our audience helped us craft a stronger, clearer narrative.
    • Prototyping with Jitter helped us move faster, iterate more confidently, and keep design and development in sync.

    We’ve already seen an impact: a sharper brand perception, higher engagement and conversion across all pages, and a new wave of qualified inbound leads from the best brands in the world, including Microsoft, Dropbox, Anthropic, Lyft, Workday, United Airlines, and more. And this is just the beginning.

    What’s Next?

    We see our new website as a constantly evolving platform. In the coming months, we’ll be adding more:

    • Case studies and customer stories
    • Use case pages
    • Learning resources and motion design tutorials
    • Playful experiments and interactive demos

    Our mission remains the same: to make motion design accessible, collaborative, and fun. Our website is now better equipped to carry that message forward.

    Let us know what you think, and if there’s anything you’d love to see next.

    Thanks for reading, and stay in motion 🚀

    Give Jitter a Try

    Get started with Jitter for free and explore 300+ free templates to jumpstart your next project. Once you’re ready to upgrade, get 25% off the first year of paid annual plans with JITTERCODROPS25.



    Source link

  • A Deep Dive into the UNC6040 Cyber Attack

    A Deep Dive into the UNC6040 Cyber Attack


    Executive Summary

    In early June 2025, Google’s corporate Salesforce instance (used to store contact data for small- and medium-sized business clients) was compromised through a sophisticated vishing-and-extortion campaign orchestrated by the threat groups tracked as UNC6040 and UNC6240, part of the online cybercrime collective known as “The Com” and linked to “ShinyHunters”.

    The attackers combined three core vectors:

    1. Voice phishing (vishing) – Impersonating IT staff in a convincing phone call and persuading a Google employee to approve a malicious application connected to Salesforce, followed by a rapid-reply extortion scheme demanding Bitcoin payment within 72 hours.
    2. OAuth app abuse – Deploying custom Python scripts that emulate Salesforce’s DataLoader, allowing automated bulk exports.
    3. Anonymity layers – Mullvad VPN‑initiated calls followed by TOR‑based data exfiltration, which anonymized the actors’ true location.

    Though Google confirmed that no user passwords were stolen, the breached dataset included business names, email addresses, phone numbers, and related notes. The implications reach far beyond the affected small and medium business customers, touching compliance, brand integrity, partner security, and regulatory scrutiny of SaaS risk management practices.

    Meanwhile, the Salesloft Drift attack orchestrated by UNC6395 has emerged as one of the most significant cyber incidents of late 2025. It compromised the Salesloft Drift AI chatbot/assistant through its Salesforce integration. The stolen OAuth tokens appear to have been used to run SOQL queries against Salesforce databases holding objects such as cases, accounts, users, and opportunities. The attack affected hundreds of Salesforce customers, impacting not just Salesforce users but also other third-party integrations. Salesloft said: “Initial findings have shown that the actor’s primary objective was to steal credentials, specifically focusing on sensitive information like AWS access keys, passwords and Snowflake-related access tokens”. Google explicitly warned of the breach’s extensive scope beyond its own systems.

    Primary Tactics & Attack Vectors:

    • Initial Access: Unauthorized OAuth apps installed via trial accounts (using legitimate email domains) and later via compromised accounts from unrelated orgs.
    • Vishing / Social Engineering: Voice phishing calls to employees.
    • Exfiltration: Custom Python scripts that replicate DataLoader operations.
    • Infrastructure: Initial calls routed via Mullvad VPN IPs; data transfer via TOR exit nodes.
    • Extortion: Demanding immediate Bitcoin payment.

    Threat Attribution

UNC5537, UNC6040, and UNC6240 are likely linked with “Scattered LAPSUS$ Hunters” and exhibit similar attack patterns.

A Telegram channel called “Scattered LAPSUS$ Hunters”, blending the names of the ShinyHunters, Scattered Spider, and Lapsus$ groups, emerged; researchers describe it as a chaotic hub for leaks and threats. The group focuses on exploiting the human element to gain access to company networks. The channel ran public polls where members voted on which victim’s data to fully dump, and advertised zero-day exploits and a supposed new ransomware toolkit, touting the collective’s actions.

    GOOGLE - SALESFORCE BREACH

UNC6395 shared the theme of abusing OAuth mechanisms for Salesforce access via a compromised third-party integration – an evolution of tactics against cloud ecosystems. Meanwhile, UNC6040 uses vishing and OAuth abuse to access Salesforce through social engineering. Overlapping TTPs indicate a shared focus on trusted-access applications, and the name ShinyHunters appears across these incidents. At the same time, although Google tracks this cluster separately as UNC6395, the ShinyHunters extortion group initially told BleepingComputer that it was behind the Salesloft Drift attack.

    Parallel Campaigns

Similar tactics were applied in attacks targeting Adidas, Qantas, Allianz Life, LVMH brands (Louis Vuitton, Dior, Tiffany & Co.), Chanel, AT&T, Santander, Starbucks Singapore, the Snowflake breach at Ticketmaster, Cisco, Pandora, Bouygues Telecom, Tokopedia, Homechef, Chatbooks, Portail Orange, Farmers Insurance, TransUnion, UK Legal Aid Agency, Gucci, Salesforce, Fairhaven Homes, Workday, Mazars.fr, Air France-KLM, Phantom Wallet, Neiman Marcus, Coca-Cola, ZScaler.

    • Qantas Airways: Employee credentials & sensitive flight/customer records targeted. Attack blended SIM swapping + SaaS compromise.
    • Air France-KLM: Airline loyalty accounts and CRM cloud environment probed.
    • Retailers (generalized set) → Used social engineering and SIM-swap vishing to gain access to IT/helpdesk portals.
    • Okta: Service provider breach led to downstream impact on multiple clients (identity federation exploited).
    • MGM Resorts: Social engineering of IT desk led to ransomware deployment, slot machines & hotel services down for days.
    • Caesars Entertainment: Extortion campaign where ransom was allegedly paid; loyalty program records got leaked.
    • AT&T: Call metadata (500M+ records, including phone numbers, call/SMS logs) stolen and advertised for sale.
    • Ticketmaster (Live Nation): ~560M customer records including event ticketing details, addresses, payment info leaked.
    • Advance Auto Parts: Data set of supply chain and retail customer info stolen.
    • Santander Bank: Customer financial records compromised; reported 30M records affected.
    • LendingTree: Customer PII and loan data exposed.
    • Neiman Marcus: Customer loyalty and credit program data targeted.
    • Los Angeles Unified School District (LAUSD): Student/employee data exfiltrated from Snowflake environment.
    • Pandora, Adidas, LVMH (Louis Vuitton, Chanel, Dior): Retail brand data exposed (customer PII + sales info).
• ZScaler: UNC6395 compromised its Salesforce instance through Salesloft Drift and stole customer data.


With the attack involving the compromise of the Salesloft Drift AI OAuth token, any data exfiltrated from the databases (which held information on users, accounts, cases, etc.) can be utilized by the attacker in various ways. The stolen data could either be sold to third parties or used to access email (as reported for a very small number of Google Workspace accounts) and to launch further credential-reuse attacks on other SaaS accounts.

    Indicators of Compromise:

Associated clusters: UNC6040, UNC6240, UNC6395

IP addresses:

81.17.28.95
31.133.0.210
45.138.16.69
45.90.185.109
45.141.215.19
45.90.185.115
45.90.185.107
37.114.50.27
45.90.185.118
179.43.159.201
38.135.24.30
91.199.42.164
192.159.99.74
208.68.36.90
44.215.108.109
154.41.95.2
176.65.149.100
179.43.159.198
185.130.47.58
185.207.107.130
185.220.101.133
185.220.101.143
185.220.101.164
185.220.101.167
185.220.101.169
185.220.101.180
185.220.101.185
185.220.101.33
192.42.116.179
192.42.116.20
194.15.36.117
195.47.238.178
195.47.238.83

Email addresses:

shinycorp@tuta[.]com
shinygroup@tuta[.]com
shinyhuntersgroups@tutamail[.]com

Domains:

ticket-dior[.]com
ticket-nike[.]com
ticket-audemarspiguet[.]com

User-Agent strings:

Salesforce-Multi-Org-Fetcher/1.0
Salesforce-CLI/1.0
python-requests/2.32.4
Python/3.11 aiohttp/3.12.15
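Defenders can sweep existing web or API access logs for the IOCs listed above. The following is a minimal, hedged sketch: the log-line format (source IP first, user agent as the final quoted field, as in common/combined web-log formats) is an assumption, and the IOC sets are abbreviated samples that should be populated from the full list.

```python
# Minimal sketch: sweep access-log lines for campaign IOCs.
# The parsing assumes a combined-log-style line ending in a quoted
# user-agent field; adapt the regex to your own log schema.
import re

IOC_IPS = {"81.17.28.95", "31.133.0.210", "45.138.16.69"}  # sample; extend with the full list
IOC_AGENTS = {"Salesforce-Multi-Org-Fetcher/1.0", "Salesforce-CLI/1.0"}

LOG_RE = re.compile(r'^(\S+) .* "(?P<agent>[^"]*)"$')

def scan_log_lines(lines):
    """Return (ip, user_agent) pairs matching any IOC IP or user agent."""
    hits = []
    for line in lines:
        m = LOG_RE.match(line)
        if not m:
            continue
        ip, agent = m.group(1), m.group("agent")
        if ip in IOC_IPS or agent in IOC_AGENTS:
            hits.append((ip, agent))
    return hits
```

Because several of the listed addresses are shared TOR exit nodes, a hit on an IP alone is a lead to investigate rather than proof of compromise; correlate with account activity before acting.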


In both campaigns, Google observed TOR exit nodes being used to access compromised Salesforce accounts.

• The majority of attacks orchestrated by UNC6040 and UNC6240 (ShinyHunters) could be traced to TOR exit nodes hosted in either the Netherlands or Poland, primarily at Macarne or Private Layer INC.

• Attackers blended TOR traffic with legitimate OAuth sessions to obscure their origin and make detection harder. Attacks orchestrated by UNC6395 could be traced to TOR exit nodes hosted in either Germany or the Netherlands, primarily at Stiftung Erneuerbare Freiheit.
    • Many suspicious SOQL queries (data exfiltration) and deletion of scheduled jobs were initiated from TOR IP addresses, indicating adversaries were anonymizing data theft operations.

    Similarly, Scattered Spider used TOR exit IPs as a cover for account takeovers and extortion activity.

    • Attackers combined vishing (helpdesk calls) with credential access, then routed subsequent access through Tor.
    • Tor traffic was especially noted when adversaries escalated privileges or accessed sensitive SaaS applications.
    • Europe-heavy nodes with a notable U.S. presence.

    Common Threads Across Both Campaigns

    • TOR IPs as operational cover was consistently used to hide adversary infrastructure.
    • Identity-based intrusions by both groups abused identity trust rather than exploiting zero-days.
    • Overlap with Scattered Spider tradecraft where both campaigns show attackers mixing social engineering or stolen credentials with TOR.
• The TOR exit nodes have different ASNs, but both campaigns leverage NL exit nodes; ASN 58087 (Florian Kolb, DE) overlaps across both campaigns.

    Threat Landscape

    Threat actors such as UNC6040 (ShinyHunters-affiliated), Scattered Spider (UNC3944), and UNC5537 have targeted organizations in the hospitality, retail, and education sectors in the Americas and Europe.

    Scattered Spider (UNC3944) is known for sophistication and stealth:

    • Reliably uses commercial VPN services to mask origin: Mullvad VPN, ExpressVPN, NordVPN, Ultrasurf, Easy VPN, ZenMate.
    • Employs Tools and TTPs including disabling Antivirus/EDR, lateral movement via ADRecon, credential dumping with Mimikatz/LaZagne, and persistence via RMM and cloud VMs.

    “The Com”, short for The Community, is less a formal hacking group and more a sociopathic cybercriminal subculture:

• Comprises 1,000+ members, mostly aged 11–25, operating across Canada, the U.S., and the U.K.
    • Engages in SIM swapping, cryptocurrency theft, swatting, sextortion, spear-phishing, and even extreme coercion or violence.
    • Intel471 reports that members are recruited via social media/gaming and coerced into crimes ranging from grooming to violent acts; the network has also issued a manual (“The Bible”) detailing techniques such as ATM skimming, IP grabbing, doxxing, extortion, and grooming.
    Source: DHS’s Joint Regional Intelligence Center and the Central California Intelligence Center

    UNC5537 orchestrated a large-scale breach targeting Snowflake customer environments:

• In April–June 2024, accessed over 160 organizations – including AT&T, Ticketmaster/Live Nation, Santander, Advance Auto Parts, LendingTree, Neiman Marcus, and LA Unified School District – via stolen credentials, often sourced from infostealers, exploiting the lack of MFA.
    • Data stolen included sensitive PII, event tickets, DEA numbers, and call/text metadata (500M+ records in aggregate).
    • Targets were later advertised and extorted through forums.

DataBreaches.net received screenshots of a Telegram message from ShinyHunters claiming to outpace law enforcement, mocking the capabilities of agencies like the NSA and stating: “Even the NSA can’t stop or identify us anymore. The FBI… is irrelevant and incompetent…”. In conversation, “Shiny” asserted that Scattered Spider sources voice calls and shares access, and hinted at a future “Snowflake 3.0” campaign, promising even greater operations ahead.

    Source: DataBreaches.Net

    Cross-Actor Victim Overlaps

    • Cloud SaaS as a hub: Salesforce (UNC6040), Okta (Scattered Spider), and Snowflake (UNC5537) breaches show pivot via cloud identity/data platforms.
    • Retail & hospitality: Multiple actors target customer/loyalty records
      • Scattered Spider targeted casinos.
      • UNC6040 targeted retailers.
      • UNC5537 targeted luxury brands.
    • Education: UNC6040 and UNC5537 both hit educational institutions, stealing student/faculty data.
    • Financial institutions: Santander (UNC5537) vs smaller fintech/payment targets by The Com/Scattered Spider (SIM swaps).

    Detection & Monitoring Guidance

Additional indicators and associated detection rules for these threat groups are made available through STI and SMAP.

    What we recommend

    • Monitoring Logs
  Continuously scan for LOGIN events from unfamiliar IP ranges (especially Mullvad or TOR exit nodes). Flag any API activity exhibiting an unusually high volume of requests per hour.
    • OAuth App Watch‑list
  Maintain a dynamic registry of approved apps. Trigger alerts on new or anomalous app registrations. Enforce a mandatory admin sign-off workflow. The detection rule below (KQL for Microsoft Sentinel) is an example that flags suspicious OAuth 2.0 sign-in events:
  `SigninLogs | where ResultType == "0" | where AuthenticationDetails has "OAuth:2.0" | where AppDisplayName startswith "Salesforce" | summarize count() by UserPrincipalName, AppDisplayName, IPAddress | where count_ > 5`
    • Vishing Detection
  Implement caller-ID verification, deploy voice-analytics modules that detect key phrases (e.g., “please pay”, “this is Google”), and cross-reference against known threat-intelligence feeds. Integrate with your call-center platform to surface suspicious calls in real time.
    • Network Traffic Analysis
      Inspect outbound traffic for TOR exit nodes and VPN tunnels that deviate from corporate baselines. Use DPI to spot unusually large, encrypted payloads.
    • Threat‑Intelligence Feeds
      Subscribe to the latest ATT&CK and IOC updates for UNC6040/ShinyHunters. Monitor public Telegram channels for freshly disclosed IOCs.
    • Zero‑Trust IAM to reduce credential‑compromise impact
      MFA, least‑privilege, RBAC for all Salesforce users.
    • OAuth App Governance to stop rogue app installations
      Manual approval + periodic review
    • IP‑Based Restrictions to limit exfiltration paths
      Allow only corporate VPN IPs; block TOR exits
    • Endpoint Security to stop malicious code execution
      EDR to detect custom Python scripts
    • Call‑Center Hardening to mitigate human‑facing social engineering
      Caller‑ID verification, recorded scripts, staff training
• Data Loss Prevention to detect anomalous data movements
      DLP on outbound exports (volume limits + alerts)
    • Strategic Initiative: SaaS Posture Management – continuous inventory & policy enforcement for third‑party integrations. Early rogue‑app detection is our key takeaway.
    • Revoke and rotate tokens/credentials: Immediately revoke OAuth tokens tied to Salesloft Drift and reset all exposed API keys.
    • Audit activity logs: Review SOQL queries and job deletions between Aug 8–18, 2025 for suspicious access.
    • Limit OAuth permissions: Enforce least privilege, review app scopes regularly, and tighten approval workflows.
    • Govern tokens: Ensure short-lived tokens, track their use, and revoke unused ones.
    • Secure stored credentials: Move AWS keys, Snowflake tokens, and other secrets out of Salesforce objects into vaults.
    • Enhance monitoring: Use UEBA to detect unusual SaaS behavior and consolidate logs across Salesforce, identity providers, and third-party apps.
    • Restrict integrations: Apply IP/network restrictions and remove untrusted apps until validated.
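The “IP-Based Restrictions” recommendation above (allow only corporate VPN IPs, block TOR exits) can be expressed as a simple login-gate check. This is a hedged sketch, not production code: the TOR exit set is a static sample drawn from the IOC list and would normally be refreshed from a threat-intelligence feed, and the corporate network range is hypothetical.

```python
# Sketch of an IP-based login restriction: allow corporate ranges,
# deny known TOR exit nodes, permit everything else for further checks.
import ipaddress

TOR_EXITS = {"185.220.101.133", "192.42.116.20"}          # sample from the IOC list
CORPORATE_NETS = [ipaddress.ip_network("10.10.0.0/16")]   # hypothetical VPN range

def login_allowed(ip: str) -> bool:
    """True if the source IP is corporate or not a known TOR exit."""
    addr = ipaddress.ip_address(ip)
    if any(addr in net for net in CORPORATE_NETS):
        return True          # trusted corporate/VPN origin
    return ip not in TOR_EXITS
```

In Salesforce itself this maps to profile-level Login IP Ranges and network access settings; the code above illustrates the same policy for custom gateways or SIEM enrichment.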

    Strategic Outlook

    • TTP Evolution – The ShinyHunters group hints at a potential pivot towards ransomware‑as‑a‑service (ShinySP1D3R).
    • Broader Targeting – High‑profile brands (Adidas, Qantas, Chanel, etc.) demonstrate that the same methodology can be scaled.
    • Regulatory Momentum – Expect stricter SaaS risk‑management mandates, amplifying the need for proactive controls.
    • Attribution Difficulty – Continued use of VPN/TOR & compromised third‑party accounts will heighten detection complexity; behavioral analytics will become indispensable.

    Final Note from Our Research Team

    The Google Salesforce breach is a textbook illustration of how modern threat actors blend technical supply‑chain exploitation with fast‑turnover social engineering. For organizations that rely on cloud‑native platforms, we see a critical need to:

    • Revisit SaaS integration policies – treat every third‑party app as a potential attack vector.
    • Strengthen human‑facing security – call‑center hardening and real‑time vishing detection should become a standard part of the security stack.
    • Adopt a data‑centric risk perspective – even smaller datasets can fuel large-scale phishing campaigns.
Our threat-intelligence platform continues to actively monitor the ShinyHunters/TOR-Mullvad threat chain and will update clients with emerging IOCs and risk indicators. We encourage you to integrate these insights into your defensive posture and to collaborate with our team for a tailored, intelligence-driven response.

    Conclusion

    The Google internal Salesforce breach orchestrated by UNC6040 (“ShinyHunters”) underscores critical vulnerabilities in modern SaaS environments. The attack demonstrates that even data traditionally considered “low-sensitivity” can be weaponized for targeted phishing and extortion schemes, while also posing significant regulatory, reputational, operational, and financial risks. Organizations must adopt robust Identity & Access Management controls, enforce strict OAuth governance, and integrate comprehensive monitoring to mitigate evolving threats.

The UNC6395 campaign highlights how third-party OAuth integrations can undermine SaaS security. By abusing trusted tokens, attackers bypassed MFA and exfiltrated sensitive data from hundreds of organizations. This attack reinforces that SaaS ecosystems, and not just core apps, are prime targets. Strong governance over OAuth apps, token lifecycles, and SaaS behaviors is critical to reducing risk. Proactive monitoring, least privilege, and credential hygiene are essential to defending against token-based intrusions like this.


    Authors

    Deepak Thomas Philip

    Kartikkumar Jivani

    Sathwik Ram Prakki

    Subhajeet Singha

    Rhishav Kanjilal

    Shayak Tarafdar


