
  • When Cells Collide: The Making of an Organic Particle Experiment with Rapier & Three.js




    Every project begins with a spark of curiosity. It often emerges from exploring techniques outside the web and imagining how they might translate into interactive experiences. In this case, inspiration came from a dive into particle simulations.

    The Concept

    The core idea for this project came after watching a tutorial on creating cell-like particles using the xParticles plugin for Cinema 4D. The team often draws inspiration from 3D motion design techniques, and the question frequently arises in the studio: “Wouldn’t this be cool if it were interactive?” That’s where the idea was born.

    After building our own setup in C4D based on the example, we created a general motion prototype to demonstrate the interaction. The result was a kind of repelling effect, where the cells were displaced according to the cursor’s position. To create the demo, we added a simple sphere and gave it a collider tag so that the particles would be pushed away as the sphere moved through the simulation, emulating the mouse movement. An easy way to add realistic movement is to add a vibrate tag to the collider and play around with the movement levels and frequency until it looks good.

    Art Direction

    With the base particle and interaction demo sorted, we rendered out the sequence and moved into After Effects to start playing around with the look and feel. We knew we wanted to give the particles a unique quality, one that felt more stylised as opposed to ultra-realistic or scientific. After some exploration we landed on a lo-fi, gradient-mapped look, which felt like an interesting direction to move forward with. We achieved this by layering up a few effects:

    • Effect > Generate > 4 Colour Gradient: Add this to a new shape layer. This black and white gradient will act as a mask to control the blur intensities.
    • Effect > Blur > Camera Blur: Add this to a new adjustment layer. This general blur will smooth out the particles.
    • Effect > Blur > Compound Blur: Add this to the same adjustment layer as above. Set the blur layer to use the same shape layer we applied the 4 Colour Gradient to as its mask, and make sure it is set to “Effects & Mask” mode in the dropdown.
    • Effect > Color Correction > Colorama: Add this to a new adjustment layer. This is where the fun starts! You can add custom gradients into the output cycle and play around with the phase shift to customise the look according to your preference.

    Next, we designed a simple UI to match the futuristic cell-based visual direction – a concept we felt would work well for a bio-tech company. So we created a simple brand with key messaging to fit, and voilà! That’s the concept phase complete.

    (Hot tip: If you’re building an interaction concept in 3D software like C4D, create a plane with a cursor texture on it and parent it to your main interaction component – in this case, the sphere collider. Render that out as a sequence so that it matches up perfectly with your simulation – you can then layer it over text, UI, and so on in After Effects.)

    Technical Approach and Tools

    As this was a simple one-page static site with no need for a backend, we used our in-house boilerplate built on Astro with Vite and Three.js. For the physics, we went with Rapier, as it handles collision detection efficiently and is compatible with Three.js. That was our main requirement, since we didn’t need more complex simulations or soft-body calculations.

    For the Cellular Technology project, we specifically wanted to show how you can achieve a satisfying result without overcrowding the screen with tons of features or components. Our key focus was the visuals and interactivity – to make this satisfying for the user, it needed to feel smooth and seamless. A fluid-like simulation is a good way to achieve this. At Unseen, we often implement this effect as an added interaction component. For this project, we wanted to take a slightly different approach that would still achieve a similar result.

    Based on the concept from our designers, there were a couple of implementation directions to consider. To keep the experience optimised, even at a large scale, having the GPU handle the majority of the calculations is usually the best approach. For this, we’d need the effect to live in a shader and rely on more complicated implementations such as packing algorithms and custom voronoi-like patterns. However, after testing the Rapier library, we realised that simple rigid-body collisions would suffice to re-create the concept in real time.

    Physics Implementation

    To do so, we needed to create a separate physics world alongside our 3D rendered world, as the Rapier library only handles the physics calculations; the graphics are left to the developer’s implementation of choice.
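
    For context, here is roughly what initializing that physics world can look like – a minimal sketch assuming the rapier3d-compat build, not necessarily the project’s exact setup:

    import RAPIER from '@dimforge/rapier3d-compat'
    
    // Rapier owns the simulation; Three.js only renders the results
    await RAPIER.init() // load the WASM module (compat build only)
    const physicsWorld = new RAPIER.World({ x: 0.0, y: 0.0, z: 0.0 }) // zero gravity, matching the bodies below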

    Here’s a snippet from the part where we create the rigid bodies:

    for (let i = 0; i < this.numberOfBodies; i++) {
      const x = Math.random() * this.bounds.x - this.bounds.x * 0.5
      const y = Math.random() * this.bounds.y - this.bounds.y * 0.5
      const z = Math.random() * (this.bounds.z * 0.95) - (this.bounds.z * 0.95) * 0.5
    
      const bodyDesc = RAPIER.RigidBodyDesc.dynamic().setTranslation(x, y, z)
      bodyDesc.setGravityScale(0.0) // Disable gravity
      bodyDesc.setLinearDamping(0.7)
      const body = this.physicsWorld.createRigidBody(bodyDesc)
    
      const radius = MathUtils.mapLinear(Math.random(), 0.0, 1.0, this._cellSizeRange[0], this._cellSizeRange[1])
      const colliderDesc = RAPIER.ColliderDesc.ball(radius)
      const collider = this.physicsWorld.createCollider(colliderDesc, body)
      collider.setRestitution(0.1) // bounciness 0 = no bounce, 1 = full bounce
    
      this.bodies.push(body)
      this.colliders.push(collider)
    }

    The meshes that represent the bodies are created separately, and on each tick their transforms are updated with those from the physics engine.

    // update mesh positions
    for (let i = 0; i < this.numberOfBodies; i++) {
      const body = this.bodies[i]
      const position = body.translation()
    
      const collider = this.colliders[i]
      const radius = collider.shape.radius
    
      this._dummy.position.set(position.x, position.y, position.z)
      this._dummy.scale.setScalar(radius)
      this._dummy.updateMatrix()
    
      this.mesh.setMatrixAt(i, this._dummy.matrix)
    }
    
    this.mesh.instanceMatrix.needsUpdate = true
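
    Putting the two worlds together, the per-frame update is conceptually just a physics step followed by a sync. A sketch, where syncMeshesWithBodies is a hypothetical name standing in for the loop above:

    update() {
      this.physicsWorld.step() // advance the Rapier simulation by one tick
      this.syncMeshesWithBodies() // copy body transforms onto the instanced mesh
      this.renderer.render(this.scene, this.camera)
    }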

    With performance in mind, we first decided to try the 2D version of the Rapier library; however, it soon became clear that with cells distributed in a single plane, the visual was not convincing enough. The performance impact of the additional calculations along the Z axis was justified by the improved result.

    Building the Visual with Post Processing

    Evidently, the post-processing effects play a big role in this project. By far the most important is the blur, which takes the cells from clear, simple rings to a fluid, gooey mass. We implemented a Kawase blur, which is similar to a Gaussian blur but uses box blurring instead of the Gaussian function, making it more performant at higher levels of blur. We applied it to only some parts of the screen to keep visual interest.
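
    For reference, a single Kawase blur pass can be sketched in GLSL as below. The uniform names are assumptions, and the pass is typically run several times with a growing offset (0, 1, 2, …):

    uniform sampler2D tDiffuse;
    uniform vec2 uResolution;
    uniform float uOffset; // grows with each successive pass
    
    varying vec2 vUv;
    
    void main() {
        // sample the four diagonal neighbours, shifted by half a texel plus the pass offset
        vec2 o = (vec2(uOffset) + 0.5) / uResolution;
        vec4 color = texture2D(tDiffuse, vUv + vec2( o.x,  o.y));
        color += texture2D(tDiffuse, vUv + vec2(-o.x,  o.y));
        color += texture2D(tDiffuse, vUv + vec2( o.x, -o.y));
        color += texture2D(tDiffuse, vUv + vec2(-o.x, -o.y));
        gl_FragColor = color * 0.25; // average of the four taps
    }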

    This already brought the implementation closer to the concept. Another vital part of the experience is the color-grading, where we mapped the colours to the luminosity of elements in the scene. We couldn’t resist adding our typical fluid simulation, so the colours get slightly offset based on the fluid movement. 

    // no fluid contribution unless the simulation is enabled
    vec4 fluidColor = vec4(0.0);
    float fluid = 0.0;
    
    if (uFluidEnabled) {
        fluidColor = texture2D(tFluid, screenCoords);
    
        fluid = pow(luminance(abs(fluidColor.rgb)), 1.2);
        fluid *= 0.28;
    }
    
    vec3 color1 = uColor1 - fluid * 0.08;
    vec3 color2 = uColor2 - fluid * 0.08;
    vec3 color3 = uColor3 - fluid * 0.08;
    vec3 color4 = uColor4 - fluid * 0.08;
    
    if (uEnabled) {
        // apply a color grade
        color = getColorRampColor(brightness, uStops.x, uStops.y, uStops.z, uStops.w, color1, color2, color3, color4);
    }
    
    color += color * fluid * 1.5;
    color = clamp(color, 0.0, 1.0);
    
    color += color * fluidColor.rgb * 0.09;
    
    gl_FragColor = vec4(color, 1.0);
    

    Performance Optimisation

    With the computational cost of the physics engine growing quickly with the number of bodies, we aimed to make the experience as optimised as possible. The first step was to find the minimum number of cells that wouldn’t affect the visual too much, i.e. without making the cells too sparse. To do so, we minimised the area in which the cells get created and made the cells slightly larger.

    Another important step was to make sure no calculation was redundant, meaning each calculation must be justified by a result visible on the screen. To ensure that, we limited the area in which cells get created to only just cover the screen, regardless of the screen size. This means that all cells in the scene are visible to the camera. Usually this approach involves a slightly more complex derivation of the bounding area, based on the camera’s field of view and its distance from the object; however, for this project we used an orthographic camera, which simplifies the calculations.

    // cache the dimensions of the orthographic frustum
    this.camera._width = this.camera.right - this.camera.left
    this.camera._height = this.camera.top - this.camera.bottom
    
    // .....
    
    // size the spawn area so the cells only just cover the screen
    this.bounds = {
      x: (this.camera._width / this.options.cameraZoom) * 0.5,
      y: (this.camera._height / this.options.cameraZoom) * 0.5,
      z: 0.5
    }
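
    For comparison, with a perspective camera the visible bounds would be derived from the field of view and the camera’s distance to the spawn plane. A hedged sketch, where distanceToPlane is an assumed variable:

    // hypothetical perspective-camera version of the bounds calculation
    const fovInRadians = MathUtils.degToRad(this.camera.fov)
    const visibleHeight = 2.0 * Math.tan(fovInRadians * 0.5) * distanceToPlane
    const visibleWidth = visibleHeight * this.camera.aspect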

    Check out the live demo.

    We’ve also exposed some of the settings on the live demo so you can adjust colours yourself here.

    Thanks for reading our breakdown of this experiment! If you have any questions, don’t hesitate to write to us @uns__nstudio.





    Source link

  • How to resolve dependencies in .NET APIs based on current HTTP Request


    Did you know that in .NET you can resolve specific dependencies using Factories? We’ll use them to switch between concrete classes based on the current HTTP Request.


    Say that you have an interface and that you want to specify its concrete class at runtime using the native Dependency Injection engine provided by .NET.

    For instance, imagine that you have a .NET API project and that the flag that tells the application which dependency to use is set in the HTTP Request.

    Can we do it? Of course, yes – otherwise I wouldn’t be here writing this article 😅 Let’s learn how!

    Why use different dependencies?

    But first: does all of this make sense? Is there any case when you want to inject different services at runtime?

    Let me share with you a story: once I had to create an API project which exposed just a single endpoint: Process(string ID).

    That endpoint read the item with that ID from a DB – an object composed of some data and a few hundred child IDs – and then called an external service to download an XML file for every child ID in the object; every downloaded XML file was saved on the file system of the server where the API was deployed. Finally, a TXT file with the list of the items correctly saved on the file system was generated.

    Quite an easy task: read from DB, call some APIs, store the file, store the report file. Nothing more.

    But, how to run it locally without saving hundreds of files for every HTTP call?

    I decided to add a simple Query Parameter to the HTTP path and let .NET decide whether to use the concrete class or a fake one. Let’s see how.

    Define the services on ConfigureServices

    As you may know, the dependencies are defined in the ConfigureServices method inside the Startup class.

    Here we can define our dependencies. For this example, we have an interface, IFileSystemAccess, which is implemented by two classes: FakeFileSystemAccess and RealFileSystemAccess.

    So, to define those mutable dependencies, you can follow this snippet:

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddControllers();
    
        services.AddHttpContextAccessor();
    
        services.AddTransient<FakeFileSystemAccess>();
        services.AddTransient<RealFileSystemAccess>();
    
        services.AddScoped<IFileSystemAccess>(provider =>
        {
            var context = provider.GetRequiredService<IHttpContextAccessor>();
    
            var useFakeFileSystemAccess = context.HttpContext?.Request?.Query?.ContainsKey("fake-fs") ?? false;
    
            if (useFakeFileSystemAccess)
                return provider.GetRequiredService<FakeFileSystemAccess>();
            else
                return provider.GetRequiredService<RealFileSystemAccess>();
        });
    }
    

    As usual, let’s break it down:

    Inject dependencies using a Factory

    Let’s begin with the core of the article:

    services.AddScoped<IFileSystemAccess>(provider =>
    {
        // the factory body, explained below
    });
    

    We can define our dependencies by using a factory. For instance, now we are using the AddScoped Extension Method (wanna know some interesting facts about Extension Methods?):

    //
    // Summary:
    //     Adds a scoped service of the type specified in TService with a factory specified
    //     in implementationFactory to the specified Microsoft.Extensions.DependencyInjection.IServiceCollection.
    //
    // Parameters:
    //   services:
    //     The Microsoft.Extensions.DependencyInjection.IServiceCollection to add the service
    //     to.
    //
    //   implementationFactory:
    //     The factory that creates the service.
    //
    // Type parameters:
    //   TService:
    //     The type of the service to add.
    //
    // Returns:
    //     A reference to this instance after the operation has completed.
    public static IServiceCollection AddScoped<TService>(this IServiceCollection services, Func<IServiceProvider, TService> implementationFactory) where TService : class;
    

    This Extension Method allows us to get the information about the services already injected in the current IServiceCollection instance and use them to define how to instantiate the actual dependency for the TService – in our case, IFileSystemAccess.

    Why is this a Scoped dependency? As you might remember from a previous article, in .NET we have 3 lifetimes for dependencies: Singleton, Scoped, and Transient. Scoped dependencies are the ones that get loaded once per HTTP request: therefore, those are the best choice for this specific example.

    Reading from Query String

    Since we need to read a value from the query string, we need to access the HttpRequest object.

    That’s why we have:

    var context = provider.GetRequiredService<IHttpContextAccessor>();
    var useFakeFileSystemAccess = context.HttpContext?.Request?.Query?.ContainsKey("fake-fs") ?? false;
    

    Here I’m getting the HTTP Context and checking if the fake-fs key is defined. Yes, I know, I’m not checking its actual value: I’m just checking whether the key exists or not.

    IHttpContextAccessor is the key part of this snippet: it is a service that acts as a wrapper around the HttpContext object. You can inject it everywhere in your code, but under one condition: you have to register it in the ConfigureServices method.

    How? Well, that’s simple:

    services.AddHttpContextAccessor();
    

    Injecting the dependencies based on the request

    Finally, we can define which dependency must be injected for the current HTTP Request:

    if (useFakeFileSystemAccess)
        return provider.GetRequiredService<FakeFileSystemAccess>();
    else
        return provider.GetRequiredService<RealFileSystemAccess>();
    

    Remember that we are inside a factory method: this means that, depending on the value of useFakeFileSystemAccess, we are defining the concrete class of IFileSystemAccess.

    GetRequiredService<T> returns the instance of type T injected in the DI engine. This implies that we have to inject the two different services before accessing them. That’s why you see:

    services.AddTransient<FakeFileSystemAccess>();
    services.AddTransient<RealFileSystemAccess>();
    

    Those two lines of code serve two different purposes:

    1. they make those services available to the GetRequiredService method;
    2. they let the DI engine resolve all the dependencies those services require

    Running the example

    Now that we have everything in place, it’s time to put it into practice.

    First of all, we need a Controller with the endpoint we will call:

    [ApiController]
    [Route("[controller]")]
    public class StorageController : ControllerBase
    {
        private readonly IFileSystemAccess _fileSystemAccess;
    
        public StorageController(IFileSystemAccess fileSystemAccess)
        {
            _fileSystemAccess = fileSystemAccess;
        }
    
        [HttpPost]
        public async Task<IActionResult> SaveContent([FromBody] FileInfo content)
        {
            string filename = $"file-{Guid.NewGuid()}.txt";
            var saveResult = await _fileSystemAccess.WriteOnFile(filename, content.Content);
            return Ok(saveResult);
        }
    
        public class FileInfo
        {
            public string Content { get; set; }
        }
    }
    

    Nothing fancy: this POST endpoint receives an object with some text, and calls IFileSystemAccess to store the file. Then, it returns the result of the operation.

    Then, we have the interface:

    public interface IFileSystemAccess
    {
        Task<FileSystemSaveResult> WriteOnFile(string fileName, string content);
    }
    
    public class FileSystemSaveResult
    {
        public FileSystemSaveResult(string message)
        {
            Message = message;
        }
    
        public string Message { get; set; }
    }
    

    which is implemented by the two classes:

    public class FakeFileSystemAccess : IFileSystemAccess
    {
        public Task<FileSystemSaveResult> WriteOnFile(string fileName, string content)
        {
            return Task.FromResult(new FileSystemSaveResult("Used mock File System access"));
        }
    }
    

    and

    public class RealFileSystemAccess : IFileSystemAccess
    {
        public async Task<FileSystemSaveResult> WriteOnFile(string fileName, string content)
        {
            await File.WriteAllTextAsync(fileName, content);
            return new FileSystemSaveResult("Used real File System access");
        }
    }
    

    As you might have imagined, only RealFileSystemAccess actually writes to the file system. But both of them return an object with a message that tells us which class completed the operation.

    Let’s see it in practice:

    First of all, let’s call the endpoint without anything in Query String:

    Without specifying the flag in Query String, we are using the real file system access

    And, then, let’s add the key:

    By adding the flag, we are using the mock class, so that we don’t create real files

    As expected, depending on the query string, we can see two different results.

    Of course, you can use this strategy not only with values from the Query String, but also from HTTP Headers, cookies, and whatever comes with the HTTP Request.
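
    For example, switching on an HTTP header instead would only change the lookup inside the factory. A minimal sketch, where the X-Use-Fake-FS header name is just an example:

    services.AddScoped<IFileSystemAccess>(provider =>
    {
        var context = provider.GetRequiredService<IHttpContextAccessor>();
    
        // same factory pattern, but reading an HTTP header instead of the query string
        var useFakeFileSystemAccess = context.HttpContext?.Request?.Headers?.ContainsKey("X-Use-Fake-FS") ?? false;
    
        return useFakeFileSystemAccess
            ? provider.GetRequiredService<FakeFileSystemAccess>()
            : provider.GetRequiredService<RealFileSystemAccess>();
    });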

    Further readings

    If you remember, we’ve defined the dependency on IFileSystemAccess as Scoped. Why? What are the other lifetimes native to .NET?

    🔗 Dependency Injection lifetimes in .NET | Code4IT

    Also, AddScoped is the Extension Method that we used to build our dependencies thanks to a Factory. Here’s an article about some advanced topics about Extension Methods:

    🔗 How you can create Extension Methods in C# | Code4IT

    Finally, the repository for the code used for this article:

    🔗 DependencyInjectionByHttpRequest project | GitHub

    Wrapping up

    In this article, we’ve seen that we can use a Factory to define at runtime which class will be used when resolving a Dependency.

    We’ve used a simple check on the current HTTP request, but of course, there are many other ways to achieve a similar result.

    What would you use instead? Have you ever used a similar approach? And why?

    Happy coding!

    🐧



    Source link

  • Malware Campaign Leverages SVGs, Email Attachments, and CDNs to Drop XWorm and Remcos via BAT Scripts



    Table of Contents:

    • Introduction
    • Infection Chain
    • Process Tree
    • Campaign 1:
      – Persistence
      – BATCH files
      – PowerShell script
      – Loader
      – Xworm/Remcos
    • Campaign 2
    • Conclusion
    • IOCS
    • Detections
    • MITRE ATT&CK TTPs

    Introduction:

    Recent threat campaigns have revealed an evolving use of BAT-based loaders to deliver Remote Access Trojans, including XWorm and Remcos. These campaigns often begin with a ZIP archive, typically hosted on trusted-looking platforms such as ImageKit, and are designed to appear as legitimate content to entice user interaction.

    Upon extraction, the ZIP file contains a highly obfuscated BAT script that serves as the initial stage of the infection chain. These BAT files use advanced techniques to evade static detection and are responsible for executing PowerShell-based loaders that inject the RAT payload directly into memory. This approach enables fileless execution, a growing trend in modern malware to bypass endpoint defences.

    A notable development in this campaign is the use of SVG files to distribute the malicious BAT scripts. These SVGs contain embedded JavaScript that triggers the execution chain when rendered in vulnerable environments or embedded in phishing pages. This technique highlights a shift toward using non-traditional file formats for malware delivery, exploiting their scripting capabilities to evade detection.

    Infection Chain:

    Fig: Infection Chain

    Process Tree:

    Fig: Process Tree

    Campaign 1:

    During the analysis of Campaign 1, we identified multiple BAT scripts associated with the campaign, indicating an evolving threat landscape. Some of these scripts appear to be in active development, containing partial or test-stage code, while others are fully functional and capable of executing the complete attack chain, including payload download and execution.

    The image below showcases two BAT files:

    • Method 1: Delivered directly as an attachment via EML (email file).
    • Method 2: Downloaded via a URL hosted on the ImageKit platform.

    This variation in delivery methods suggests that the threat actors are experimenting with different approaches to improve infection success rates and evade detection mechanisms.

    Fig: Attachments

    Persistence:

    The malware achieves persistence by creating a BAT file in the Windows Startup folder. This ensures that the malicious script is automatically executed whenever the user starts the system or logs into their account.

    Fig: Persistence Mechanism through startup Menu

    BATCH files:

    The figure below displays the BAT script in its obfuscated form alongside its deobfuscated version.

    Fig: Obfuscated and deobfuscated bat files

    PowerShell script:

    Below, the figure shows the PowerShell process window and the command-line arguments used during execution. This provides insight into how the malware leverages PowerShell for in-memory payload delivery. We will analyze the PowerShell script in detail in the following sections to understand its role in the deployment of XWorm.

    Fig: Process window of PowerShell with the PS script as an argument

     

    Fig: Obfuscated PowerShell script as an argument

    This PowerShell command runs the script by decoding a Base64-encoded string and executing it in memory. It uses -nop to skip loading the PowerShell profile, -w hidden to hide the window, and iex (Invoke-Expression) to run the decoded content.

    Deobfuscated Script:

    Fig: Deobfuscated script

    We divided the deobfuscated script into two parts. The first part of the PowerShell script is designed to locate and execute obfuscated code embedded within a batch file (aoc.bat) located in the current user’s profile directory. First, it retrieves the username from the environment variables to construct the full path to aoc.bat. It then reads all lines using UTF-8 encoding and iterates through each one, specifically looking for lines that begin with a triple-colon comment prefix (:::) followed by a Base64-encoded string. Upon finding such a line, it decodes the Base64 string into a byte array, then converts it into a Unicode string assigned to the variable $ttzhae. This decoded string contains an additional layer of PowerShell script, which is then executed in memory using Invoke-Expression. This allows the attacker to embed and execute complex, multi-stage malicious PowerShell logic covertly within a seemingly benign batch file comment, enabling stealthy, fileless execution.

    Fig: First part of PS script

    The script programmatically disables two key Windows security mechanisms, AMSI (Antimalware Scan Interface) and ETW (Event Tracing for Windows), to evade detection. It leverages .NET reflection and dynamic delegate creation to resolve native functions such as GetProcAddress, GetModuleHandle, VirtualProtect, and AmsiInitialize. Using these, it locates and patches the AmsiScanBuffer function in memory with stub instructions (mov eax, 0; ret), effectively bypassing AMSI scanning. Similarly, it disables event tracing by overwriting the beginning of EtwEventWrite with a return instruction. These in-memory modifications allow malicious PowerShell activity to execute stealthily, without being logged or scanned by endpoint protection solutions.

    Fig: Output of First part of PS script

    In the second part of the PowerShell script, it first retrieves the current user’s name from the environment and constructs the path to a file named aoc.bat located in the user’s profile directory. It proceeds to execute payloads embedded as encrypted and compressed .NET assemblies hidden within this batch file. The script specifically searches for a comment line prefixed with ::, which contains a Base64-encoded payload string. This string is then split into two parts using the backslash (\) as a delimiter. Each part undergoes Base64 decoding, AES decryption using a hardcoded key and IV (in CBC mode with PKCS7 padding), followed by GZIP decompression as illustrated in the accompanying figures. The result is two separate .NET assemblies, both loaded and executed directly in memory. The first assembly is invoked without any arguments, while the second is executed with ‘%*’ passed as a simulated command-line input.

    Fig: Second Part of Script_ encryption function

    The second payload plays a more critical role: it functions as a loader responsible for executing the final malware, the XWorm remote access trojan (RAT).

    Fig: Second Part of Script_ call to encryption function

    Loader

    The loader is designed to evade detection, disable event logging, and execute embedded payloads directly in memory. It achieves this by either decrypting and running .NET executables via Assembly.Load or executing decrypted shellcode using VirtualProtect and delegates.

    Fig: Loader which loads XWorm

     

    Fig: Resources and Features

     

    Fig: Encrypted Resource containing XWorm/remcos

     

    Loader Capabilities:

    • Extract and execute embedded payloads

    Here, we identified multiple loaders that utilize in-memory execution techniques to evade detection and persist stealthily. Some of these loaders contain encrypted .NET executables, which are decrypted at runtime and executed directly from memory using Assembly.Load followed by .EntryPoint.Invoke, allowing the loader to run managed code without writing the executable to disk.

    In contrast, other variants carry encrypted shellcode instead of a binary. These variants decrypt the shellcode, modify the memory protection using VirtualProtect to make it executable, and then execute it using a delegate created via Marshal.GetDelegateForFunctionPointer, as shown in the figures below.

     

    XWorm

    We reported on XWorm and Remcos earlier this year, providing an in-depth analysis of their core functionality; advanced capabilities such as keylogging, remote command execution, and data exfiltration; and their methods of persistence and evasion.

    In addition to XWorm, several variants in the same campaign also utilized Remcos, a widely known commercial Remote Access Trojan (RAT) that offers a range of capabilities, including remote desktop access, keylogging, command execution, file manipulation, screenshot capture, and data exfiltration.

    Campaign 2:

    Campaign 2 introduces a notable shift in malware delivery by leveraging SVG (Scalable Vector Graphics) files embedded with JavaScript, which are primarily used in phishing attacks. These malicious SVGs are crafted to appear as legitimate image files and are either rendered in vulnerable software environments (such as outdated image viewers or email clients) or embedded within phishing web pages designed to lure unsuspecting users. Now, the embedded JavaScript within the SVG file acts as a trigger mechanism, initiating the automatic download of a ZIP archive when the SVG is opened or previewed.

    This downloaded ZIP archive contains an obfuscated BAT script, which serves as the initial access vector for the malware. Once the BAT script is executed either manually by the user or through social engineering tactics, it initiates a multi-stage infection chain similar to that observed in Campaign 1. Specifically, the BAT script invokes PowerShell commands to decode and execute a loader executable (EXE) directly in memory. This loader is responsible for decrypting and deploying the final payload, which in this campaign is the XWorm Remote Access Trojan (RAT).

    The use of SVG as a delivery mechanism represents a noteworthy evolution in attack methodology, as image files are typically considered benign and are often excluded from deep content inspection by traditional security tools. By exploiting the scripting capabilities of SVGs, threat actors can effectively bypass perimeter defences and deliver malicious payloads in a fileless, stealthy manner.

    Conclusion:

    These campaigns highlight a growing trend in the use of obfuscated scripts, fileless malware, and non-traditional file formats like SVGs to deliver Remote Access Trojans such as XWorm and Remcos. By embedding payloads in BAT files and executing them via PowerShell, attackers effectively bypass static defences. The shift from using SVGs in phishing attacks to malware delivery further emphasizes the need for behavioural detection, content inspection, and improved user awareness to counter such evolving threats.

     

    IOCS:

    MD5 File
    EDA018A9D51F3B09C20E88A15F630DF5 BAT
    23E30938E00F89BF345C9C1E58A6CC1D JS
    1CE36351D7175E9244209AE0D42759D9 LOADER
    EC04BC20CA447556C3BDCFCBF6662C60 XWORM
    D439CB98CF44D359C6ABCDDDB6E85454 REMCOS

    Detections:

    Trojan.LoaderCiR

    Trojan.GenericFC.S29960909

    MITRE ATT&CK TTPs:

    Execution

    • T1059.001 – Command and Scripting Interpreter: PowerShell. PowerShell is used to interpret commands, decrypt data, and invoke payloads.
    • T1106 – Execution Through API. The script uses .NET APIs (e.g., Assembly.Load, Invoke) to execute payloads in memory.

    Defense Evasion

    • T1027 – Obfuscated Files or Information. Payloads are Base64-encoded, AES-encrypted, and compressed to bypass static detection.
    • T1140 – Deobfuscate/Decode Files or Information. The script decodes and decompresses payloads before execution.
    • T1055.012 – Process Injection: .NET Assembly Injection. Payloads are loaded into memory.
    • T1036 – Masquerading. The malicious content is hidden in a batch file.

    Persistence

    • T1053 – Scheduled Task/Job. Persistence is established through the Startup menu.

    Initial Access

    • T1204 – User Execution. Execution depends on a user manually running the batch file.

    Command and Control

    • T1132 – Data Encoding. Base64 and encryption are used to encode commands or payloads.
    • T1219 – Remote Access Software. XWorm provides full remote access and control over the infected host.

    Credential Access

    • T1056.001 – Input Capture: Keylogging. XWorm includes keylogging functionality to steal user input and credentials.

    Exfiltration

    • T1041 – Exfiltration Over C2 Channel. Stolen data is exfiltrated via the same C2 channel used by XWorm.



    Source link

  • The Missing Security Shield for Modern Threats


    Introduction: A Security Crisis That Keeps Leaders Awake

    Did you know that 97% of security professionals admit to losing sleep over potentially missed critical alerts? (Ponemon Institute) It’s not just paranoia—the risk is real. Security operations centers (SOCs) are flooded with tens of thousands of alerts daily, and missing even one critical incident can lead to catastrophic consequences.

    Take the Target breach of 2013: attackers exfiltrated 41 million payment card records, costing the company $18.5 million in regulatory settlements and long-term brand damage (Reuters). The painful truth? Alerts were generated—but overwhelmed analysts failed to act on time.

    Fast forward to 2025, and the situation is worse:

    • 3.5 million unfilled cybersecurity positions worldwide (ISC2 Cybersecurity Workforce Study 2023)

    • Average recruitment cycle of 150 days per role

    • 100,000+ daily alerts in large SOCs, as per Fortinet

    Clearly, traditional SecOps cannot keep pace. This is where Artificial Intelligence (AI) steps in—not as a luxury, but as the missing security shield.

    Why Traditional SecOps is Falling Short

    Alert Fatigue & Human Limits

    Manual triage overwhelms analysts. Studies show 81% of SOC teams cite manual investigation as their biggest bottleneck (TechTarget)—leading to burnout, mistakes, and missed detections.

    Signature-Based Detection Can’t Keep Up

    Conventional tools rely on known signatures. But attackers now deploy zero-days, polymorphic malware, and AI-generated phishing emails that evade these defenses. Gartner predicts 80% of modern threats bypass legacy signature-based systems by 2026 (Gartner Report).

    Longer Dwell Times = Bigger Damage

    Dwell time—the period attackers stay undetected—often stretches weeks to months. Verizon’s 2024 DBIR shows 62% of breaches go undetected for more than a month (Verizon DBIR 2024). During this time, attackers can steal data, deploy ransomware, or create persistent backdoors.

    Ransomware at Machine Speed

    Cybersecurity Ventures reports a ransomware attack every 11 seconds globally, with damages forecast to hit USD 265 billion annually by 2031 (Cybersecurity Ventures). Humans alone cannot fight threats at this velocity.


    How AI Bridges the Gap in SecOps

    AI isn’t replacing analysts—it’s augmenting them with superhuman speed, scale, and accuracy. Here’s how:

    1. Anomaly-Based Threat Detection

    AI establishes a baseline of normal behavior and flags deviations (e.g., unusual logins, abnormal data flows). Unlike static signatures, anomaly detection spots zero-days and advanced persistent threats (APTs).

    2. Real-Time Threat Intelligence

    AI ingests global threat feeds, correlates them with local telemetry, and predicts attack patterns before they hit. This allows SOCs to move from reactive defense to proactive hunting.

    3. Automated Alert Triage

    AI filters out noise and correlates alerts into coherent incident narratives. By cutting false positives by up to 60% (Tech Radar), AI frees analysts to focus on high-risk threats.

    4. Privilege Management & Insider Threats

    AI-driven Identity & Access Management (IAM) continuously checks user behavior against role requirements, preventing privilege creep and catching insider threats.

    5. Automated Threat Containment

    AI-powered orchestration platforms can:

    • Isolate compromised endpoints

    • Quarantine malicious traffic

    • Trigger network segmentation

    This shrinks containment windows from hours to minutes.

    6. Shadow IT Discovery

    Unauthorized apps and AI tools are rampant. AI maps shadow IT usage by analyzing traffic patterns, reducing blind spots and compliance risks.

    7. Phishing & Deepfake Defense

    Generative AI has supercharged phishing. Traditional keyword filters miss these attacks, but AI can detect behavioral anomalies, reply-chain inconsistencies, and deepfake audio/video scams.

    8. BYOD Endpoint Protection

    AI monitors personal devices accessing corporate networks, detecting ransomware encryption patterns and isolating infected devices instantly.


    Seqrite’s AI-Powered SecOps Advantage

    Seqrite XDR Powered by GoDeep.AI

    • Uses deep learning, behavioral analytics, and predictive intelligence.

    • Reduces breach response cycles by 108 days compared to conventional methods (Seqrite internal benchmark).

    • Correlates telemetry across endpoints, networks, cloud, and identities.

    Seqrite Intelligent Assistant (SIA)

    • A GenAI-powered virtual security analyst.

    • Allows natural language queries—no complex syntax required.

    • Automates workflows like incident summaries, risk assessments, and remediation steps.

    • Cuts analyst workload by up to 50%.

    The Unified Advantage

    Traditional SOCs struggle with tool sprawl. Seqrite provides a unified architecture with centralized management, reducing complexity and cutting TCO by up to 47% (industry benchmarks).


    The Future: Predictive & Agentic AI in SecOps

    • Predictive AI: Anticipates breaches before they occur by analyzing historical + real-time telemetry.

    • Causal AI: Maps cause-effect relationships in attacks, helping SOCs understand root causes, not just symptoms.

    • Agentic AI: Autonomous agents will investigate and remediate incidents without human intervention, allowing SOC teams to focus on strategy.

    Conclusion: AI Is No Longer Optional

    Cybercriminals are already using AI to scale attacks. Without AI in SecOps, organizations risk falling hopelessly behind.

    The benefits are clear:

    • Faster detection (minutes vs weeks)

    • Reduced false positives (by up to 60%)

    • Automated containment (minutes vs hours)

    • Continuous compliance readiness

    AI is not replacing SecOps teams—it’s the missing shield that makes them unbeatable.



    Source link

  • Reality meets Emotion: The 3D Storytelling of Célia Lopez



    Hi, my name is Célia. I’m a French 3D designer based in Paris, with a special focus on color harmony, refined details, and meticulous craftsmanship. I strive to tell stories through groundbreaking interactivity and aim to create designs that truly touch people’s hearts. I collaborate with renowned agencies and always push for exemplary quality in everything I do. I love working with people who share the same dedication and passion for their craft—because that’s when results become something we can all be truly proud of.

    Featured Projects

    Aether1

    This project was carried out with the OFF+BRAND team, with whom I’ve collaborated regularly since February 2025. They wanted to use this product showcase to demonstrate to their future clients how brilliantly they combine storytelling, WebGL, AI integration, and a highly polished, flawlessly coded UI.

    I loved working on this project not only because of the intense team effort in fine-tuning the details, but also because of the creative freedom I was given. In collaboration with Gilles Tossoukpé and Ross Anderson, we built the concept entirely from scratch, each bringing our own expertise. I’m very proud of the result.

    We have done a full case study explaining our workflow on Codrops

    aether1.ai

    My collaboration with OFF+BRAND began thanks to a recommendation from Paul Guilhem Repaux, with whom I had worked on one of the biggest projects of my career: the Dubai World Expo.

    Dubai World Expo

    We recreated over 200 pavilions from 192 countries, delivering a virtual experience for more than 2 million viewers during the inauguration of the Dubai World Expo in 2020.

    This unique experience allowed users to attend countless events, conferences, and performances without traveling to Dubai.

    To bring this to life, we worked as a team of six 3D designers and two developers, under the leadership of the project manager at DOGSTUDIO. I’m truly proud to have contributed to this website, which showcased one of the world’s most celebrated events.

    virtualexpodubai.com/

    Heidelberg CCUS

    The following website was created with Ashfall Studio, another incredible studio whose meticulous work, down to the way they present their projects, inspires me tremendously.

    Here, our mission was nothing short of magic: transforming a website with a theme that, at first glance, wasn’t exactly appealing—tar production—into an experiential site that evokes emotion! I mean, come on, we actually managed to make tar sexy!

    ccus.heidelbergmaterials.com/en/

    Jacquemus

    Do you know the law of attraction? This principle is based on the idea that we can attract what we focus our attention and emotions on. I fell in love with the Jacquemus brand—the story of Simon, its creator, resonates deeply with me because we both grew up in the same place: the beautiful South of France!

    I wanted to create a project for Jacquemus, so I first made it a personal one. I wanted to explore the bridges between reality, 3D, photography, and motion design in one cohesive whole—which you can actually see on my Instagram, where I love mixing 3D and fashion in a harmonious and colorful feed.

    I went to their boutique on Avenue Montaigne and integrated my bag into the space using virtual reality. I also created a motion piece and did a photoshoot with a photographer.

    Céramique

    Last year, a friend of mine gave me a ceramics workshop where I created plates and cups. I loved it! Then in 2025, I decided I wanted to improve my animation skills—so I needed a subject to practice on. I was inspired by that workshop and created a short animation based on the steps involved in making my cups.

    Philosophy

    Are you one of those people who dream big—sometimes too big—and, once they commit to something, push it to the extreme of what it could become? Well, I am. If I make a ceramic plate once, I want to launch my own brand. If I invest in real estate, I want to become financially independent. If I spend my life in stylish cafés or designer gyms I discover on ClassPass, I start imagining opening a coffee shop–fitness space. When I see excellence somewhere, I think: why not me? And I give myself the means to reach my goals. But of course, one has to be realistic: to be truly high-quality, you need to focus on one thing at a time. So yes, I have many future projects—but first, let’s finish the ones already in progress.

    My next steps

    I recently launched my Airbnb in Paris, for which I’ll be creating some content, building a brand identity, and promoting it as much as I can.

    I’ve also launched my lifestyle/furniture brand called LABEGE, named after the village where I grew up. For now, it’s a digital brand, but my goal is to develop it for commercialization. I have no idea how to make that happen just yet.

    Background & Career highlights


    Awwwards class

    There have been many defining moments in my career—or at least, I treat every opportunity as a potential turning point, which is why I invest so much in every project.

    But two moments, in particular, stand out for me. The first was when Awwwards invited me to create a course explaining my 3D WebGL workflow. Today, I might update it with some new insights, but at the time it was extremely valuable because there was nothing like it available online. Combined with the fact that it was one of the first four courses they launched, it gave me great visibility within our community.

    My Awwwards Class

    Spline

    Another milestone was when I joined the Spline team. Back then, the software was still unstable—it was frustrating to spend days creating only to lose all my work to a bug. But over time, the tool became incredibly powerful. The combination of Spline’s excellent social media presence and the growing strength of the software helped it grow from 5K to 75K Twitter followers in just two years, along with thousands of new users.

    Thanks to the tool’s early popularity and the small number of people who mastered it at first, I was able to build a strong reputation in the interactive 3D web field. I shared a lot about Spline on my social channels and even launched a YouTube channel dedicated to tutorials.

    It was fascinating to see how a tool is built, showcase new features to the community, and watch the enthusiasm grow. Being part of such a close-knit, human team—led by founder Alejandro, whose visionary talent inspires me—was an unforgettable experience.

    Tools & Techniques

    • Cinema 4D
    • Redshift
    • Blender
    • Figma
    • Pinterest
    • Marvelous Designer
    • Spline Tool
    • PeachWeb

    Final Thoughts

    Life is short—know your limits and your worth. Set non-negotiable boundaries with anything or anyone that drags you down: no second chances, no comebacks. Be good to people and to the world, but also be selfish in the best way—do what makes you feel alive, happy, and full of magic. Surround yourself with people who are worth your attention, who value you as much as you value them.

    Put yourself in the main role of your own life, dream big, and be grateful to be here.

    LOVE!

    Contact

    Thanks a lot for taking the time to read about me!

    Let’s connect!

    Instagram
    X (Twitter)
    LinkedIn
    Email for new inquiries: hello@celialopez.fr 💌





    Source link

  • Use a SortedSet to avoid duplicates and sort items | Code4IT



    Using the right data structure is crucial to building robust and efficient applications. So, why use a List or a HashSet to sort items (and remove duplicates) when you have a SortedSet?


    As you probably know, you can create collections of items without duplicates by using a HashSet<T> object.

    It is quite useful to remove duplicates from a list of items of the same type.

    How can we ensure that we always have sorted items? The answer is simple: SortedSet<T>!

    HashSet: a collection without duplicates

    A simple HashSet creates a collection of unordered items without duplicates.

    This example

    var hashSet = new HashSet<string>();
    hashSet.Add("Turin");
    hashSet.Add("Naples");
    hashSet.Add("Rome");
    hashSet.Add("Bari");
    hashSet.Add("Rome");
    hashSet.Add("Turin");
    
    
    var resultHashSet = string.Join(',', hashSet);
    Console.WriteLine(resultHashSet);
    

    prints this string: Turin,Naples,Rome,Bari. Here the insertion order happens to be preserved, but a HashSet makes no guarantee about the ordering of its items.

    SortedSet: a sorted collection without duplicates

    To sort those items, we have two approaches.

    You can simply sort the collection once you’ve finished adding items:

    var hashSet = new HashSet<string>();
    hashSet.Add("Turin");
    hashSet.Add("Naples");
    hashSet.Add("Rome");
    hashSet.Add("Bari");
    hashSet.Add("Rome");
    hashSet.Add("Turin");
    
    var items = hashSet.ToList<string>().OrderBy(s => s);
    
    
    var resultHashSet = string.Join(',', items);
    Console.WriteLine(resultHashSet);
    

    Or, even better, use the right data structure: a SortedSet<T>

    var sortedSet = new SortedSet<string>();
    
    sortedSet.Add("Turin");
    sortedSet.Add("Naples");
    sortedSet.Add("Rome");
    sortedSet.Add("Bari");
    sortedSet.Add("Rome");
    sortedSet.Add("Turin");
    
    
    var resultSortedSet = string.Join(',', sortedSet);
    Console.WriteLine(resultSortedSet);
    

    Both results print Bari,Naples,Rome,Turin. But the second approach does not require you to sort a whole list: it is more efficient, in terms of both time and memory.

    Use custom sorting rules

    What if we wanted to use a SortedSet with a custom object, like User?

    public class User {
        public string FirstName { get; set; }
        public string LastName { get; set; }
    
        public User(string firstName, string lastName)
        {
            FirstName = firstName;
            LastName = lastName;
        }
    }
    

    Of course, we can do that:

    var set = new SortedSet<User>();
    
    set.Add(new User("Davide", "Bellone"));
    set.Add(new User("Scott", "Hanselman"));
    set.Add(new User("Safia", "Abdalla"));
    set.Add(new User("David", "Fowler"));
    set.Add(new User("Maria", "Naggaga"));
    set.Add(new User("Davide", "Bellone"));//DUPLICATE!
    
    foreach (var user in set)
    {
        Console.WriteLine($"{user.LastName} {user.FirstName}");
    }
    

    But we will get an error: our class doesn’t know how to compare things!

    That’s why we must update our User class so that it implements the IComparable interface:

    public class User : IComparable
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
    
        public User(string firstName, string lastName)
        {
            FirstName = firstName;
            LastName = lastName;
        }
    
        public int CompareTo(object obj)
        {
            var other = (User)obj;
            var lastNameComparison = LastName.CompareTo(other.LastName);
    
            return (lastNameComparison != 0)
                ? lastNameComparison :
                (FirstName.CompareTo(other.FirstName));
        }
    }
    

    In this way, everything works as expected:

    Abdalla Safia
    Bellone Davide
    Fowler David
    Hanselman Scott
    Naggaga Maria
    

    Notice that the second Davide Bellone has disappeared since it was a duplicate.
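
    As a side note, if you can’t (or don’t want to) modify the class itself, you can pass a comparer to the SortedSet constructor instead. A quick sketch using Comparer<T>.Create:

    // alternative: keep User untouched and supply the ordering externally
    var set = new SortedSet<User>(Comparer<User>.Create((a, b) =>
    {
        var byLastName = string.Compare(a.LastName, b.LastName, StringComparison.Ordinal);
        return byLastName != 0
            ? byLastName
            : string.Compare(a.FirstName, b.FirstName, StringComparison.Ordinal);
    }));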

    This article first appeared on Code4IT

    Wrapping up

    Choosing the right data type is crucial for building robust and performant applications.

    In this article, we’ve used a SortedSet to insert items in a collection and expect them to be sorted and without duplicates.

    I’ve never used it in a project. So, how did I know that? I just explored the libraries I was using!

    From time to time, spend some minutes reading the documentation, have a glimpse of the most common libraries, and so on: you’ll find lots of stuff that you’ve never thought existed!

    Toy with your code! Explore it. Be curious.

    And have fun!

    🐧



    Source link

  • Secure Mobile Device Management for Indian Businesses


     In an increasingly mobile-first world, organizations are leveraging mobile devices for a variety of operational needs – making them indispensable tools for business productivity.  Whether it’s sales reps using tablets in the field, managers accessing dashboards from their phones, or logistics teams managing and tracking deliveries in real time — mobile devices are the backbone of modern enterprises. However, this reliance introduces a complex set of security, compliance, and management challenges.

    The Rising Threat Landscape

    According to the Verizon 2024 Mobile Security Index, 28% of all cyberattacks on corporate endpoints targeted mobile devices [1], making them the second most attacked category after IoT. India, notably, accounted for 28% of global mobile malware attacks [2], and the threat is accelerating — cyberattacks on India’s government sector organizations alone increased by 138% in four years [3].

    Common Challenges Faced by IT Teams

    If your organization is issuing mobile devices but not actively managing them, you’re leaving a wide door open for cyber threats, data breaches, and productivity loss. Without a Mobile Device Management platform, IT Admins in an organization also struggle with multiple challenges, including:

    • Lack of visibility into how and where devices are being used
    • Compliance headaches, especially in sectors like BFSI and government
    • Increased risk from data breaches and insider threats
    • Rising IT overhead from manual device provisioning and support
    • User resistance due to poor onboarding and restrictive policies
    • High IT overhead for manual updates and troubleshooting
    • Productivity losses due to device misuse
    • Hidden costs from lost, misused, or underutilized devices

    These issues not only compromise security but also hamper operational efficiency.

    Enter Seqrite Mobile Device Management (MDM): Purpose-Built for Indian Enterprises

    Seqrite Mobile Device Management (MDM) is a comprehensive solution designed to manage, secure, and optimize the use of company-owned mobile devices across industries. It empowers IT admins to streamline device management and security with ease. It simplifies device enrolment by automating provisioning and configuration, reducing manual effort and errors. With robust security features like inbuilt antivirus, password complexity enforcement, and remote wipe, organizations can ensure sensitive data remains protected. IT teams can also deploy managed applications consistently across devices, maintaining compliance and control. Furthermore, employees benefit from seamless access to corporate resources such as emails and files, driving greater productivity without compromising security.

    Seqrite MDM offers full lifecycle device deployment & management for Company Owned Devices with diverse operational modes:

    1. Dedicated Devices
      Locked-down devices for specific tasks or functions, managed in kiosk/launcher mode with only selected apps and features – reducing misuse and maximizing operational efficiency.
    2. Fully Managed Devices
      Manage all apps, settings, and usage, ensuring complete security, compliance, and a consistent user experience with full administrative control.
    3. Fully Managed Devices with Work Profile
      A hybrid model allowing personal use while keeping work data isolated in a secure Android Work Profile – manage only the work container, ensuring data separation, user privacy, and corporate compliance.

    Seqrite MDM offers the following comprehensive mobile security and anti-theft features, advanced differentiators that set it apart as a security-first MDM solution:

    • Artificial Intelligence based Anti-Virus: Best-in-class, built-in antivirus engine that keeps the devices safe from cyber threats.
    • Scheduled Scan: Remotely schedule a scan at any time and monitor the status of enrolled devices for security risks and infections.
    • Incoming Call Blacklisting/Whitelisting: Restricts incoming calls to only approved series or contacts, reducing distractions and preventing unauthorized communication.
    • Intruder Detection: Captures a photo via the front camera upon repeated failed unlock attempts, alerting users to potential unauthorized access.
    • Camera/Mic Usage Alerts: Monitors and notifies when the camera or microphone is accessed by any app, ensuring privacy and threat detection.
    • Data Breach Alerts: Integrates with public breach databases to alert if any enterprise email IDs have been exposed in known breaches.
    • App Lock for Sensitive Apps: Adds an extra layer of protection by locking selected apps behind additional authentication, safeguarding sensitive data.
    • Anti-theft: Remotely locate, lock, and wipe data on lost or stolen devices. Block or completely lock the device on SIM change.
    • Web Security: Comprehensive browsing, phishing, and web protection. Blacklist/whitelist URLs or use category/keyword-based blocking. Also restrict YouTube usage to control non-work-related content consumption during work hours.

    Seqrite MDM goes beyond the basics with advanced features designed to deliver greater control, flexibility, and efficiency for businesses. Its granular app management capability allows IT teams to control apps down to the version level, ensuring only compliant applications are installed across devices. With virtual fencing, policies can be applied based on Wi-Fi, geolocation, or time – making it especially valuable for shift-based teams or sensitive field operations. Real-time analytics provide deep visibility into device health, data usage, and compliance through intuitive dashboards and automated reports. Downtime is further minimized with remote troubleshooting, enabling IT admins to access and support devices instantly. Backed by Seqrite, a Quick Heal company, Seqrite MDM is proudly Made in India, Made for India – delivering modular pricing and unmatched local support tailored to diverse business needs. From BFSI to logistics, education to government services, Seqrite MDM is already powering secure mobility across sectors.

     

    Ready to Take Control of Your Corporate Devices?

    Empower your organization with secure, compliant, and efficient mobile operations. Discover how Seqrite Mobile Device Management can transform your mobility strategy:

    Learn more about Seqrite MDM

    Book a demo

     

    References:

    1 https://www.verizon.com/business/resources/T834/reports/2024-mobile-security-index.pdf

    2 https://www.zscaler.com/resources/industry-reports/threatlabz-mobile-iot-ot-report.pdf

    3 https://www.tribuneindia.com/news/india/138-increase-in-cyber-attacks-on-govt-bodies-in-four-years/




  • Critical SAP Vulnerability & How to Protect Your Enterprise



    Executive Summary

    CVE-2025-31324 is a critical remote code execution (RCE) vulnerability affecting the SAP NetWeaver Development Server, one of the core components used in enterprise environments for application development and integration. The vulnerability stems from improper validation of uploaded model files via the exposed metadatauploader endpoint. By exploiting this weakness, attackers can upload malicious files—typically crafted as application/octet-stream ZIP/JAR payloads—that the server mistakenly processes as trusted content.

    The risk is significant because SAP systems form the backbone of global business operations, handling finance, supply chain, human resources, and customer data. Successful exploitation enables adversaries to gain unauthenticated remote code execution, which can lead to:

    • Persistent foothold in enterprise networks
    • Theft of sensitive business data and intellectual property
    • Disruption of critical SAP-driven processes
    • Lateral movement toward other high-value assets within the organization

    Given the scale at which SAP is deployed across Fortune 500 companies and government institutions, CVE-2025-31324 poses a high-impact threat that defenders must address with urgency and precision.

    Vulnerability Overview

    • CVE ID: CVE-2025-31324
    • Type: Unauthenticated Arbitrary File Upload → Remote Code Execution (RCE)
    • CVSS Score: 10.0 (Critical) (based on vector: AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:H)
    • Criticality: High – full compromise of SAP systems possible
    • Affected Products: SAP NetWeaver Application Server (Development Server module), versions prior to September 2025 patchset
    • Exploitation: Active since March 2025, widely weaponized after August 2025 exploit release
    • Business Impact: Persistent attacker access, data theft, lateral movement, and potential disruption of mission-critical ERP operations

    Threat Landscape & Exploitation

    Active exploitation began in March–April 2025, with attackers uploading web shells like helper.jsp, cache.jsp, or randomly named .jsp files to SAP servers. On Linux systems, a stealthy backdoor named Auto-Color was deployed, enabling reverse shells, file manipulation, and evasive operation.

    In August 2025, the exploit script was publicly posted by “Scattered LAPSUS$ Hunters – ShinyHunters,” triggering a new wave of widespread automated attacks. The script includes identifiable branding and taunts – valuable signals for defenders.

    Technical Details

    Root Cause:
    The ‘metadatauploader’ endpoint fails to sanitize uploaded binary model files. It trusts client-supplied ‘Content-Type: application/octet-stream’ payloads and parses them as valid SAP model metadata.

    Trigger: An unauthenticated HTTP POST to /developmentserver/metadatauploader with a client-supplied Content-Type: application/octet-stream body carrying the crafted model file.

    Observed Payloads: Begin with PK (ZIP header), embedding .properties + compiled bytecode that triggers code execution when parsed.

    Impact: Arbitrary code execution within SAP NetWeaver server context, often leading to full system compromise.

    Exploitation in the Wild

    March–April 2025: First observed exploitation with JSP web shells.

    August 2025: Public exploit tool released by Scattered LAPSUS$ Hunters – ShinyHunters, fueling mass automated attacks.

    Reported Havoc: Over 1,200 exposed SAP NetWeaver development servers found via Shodan showed exploit attempts. Multiple intrusions were confirmed across the manufacturing, retail, and telecom sectors, with data exfiltration and reverse-shell deployment confirmed in at least 8 large enterprises.

    Exploitation

    Attack Chain:
    1. Prepare Payload – Attacker builds a ZIP/JAR containing malicious model definitions or classes.
    2. Deliver Payload – Send crafted HTTP POST to /metadatauploader with application/octet-stream.
    3. Upload Accepted – Server writes/loads the malicious file without validation.
    4. Execution – Code is executed when the model is processed by NetWeaver.

    Indicators in PCAP:
    – POST /developmentserver/metadatauploader requests
    – Content-Type: application/octet-stream with PK-prefixed binary content
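
    To make those indicators actionable, here is a minimal C# sketch (C# being the language used elsewhere in this digest) that flags matching requests in a pre-exported HTTP log. The tab-separated log format is an assumption for illustration, not the output of any standard tool – adjust the parsing to whatever your capture tooling emits.

    using System;
    using System.IO;
    
    class MetadataUploaderScan
    {
        static void Main(string[] args)
        {
            // Hypothetical export format: one request per line, tab-separated as
            // "METHOD<TAB>URI<TAB>CONTENT-TYPE<TAB>BODY-PREFIX".
            foreach (string line in File.ReadLines(args[0]))
            {
                string[] f = line.Split('\t');
                if (f.Length < 4) continue;
    
                bool uriHit = f[0] == "POST"
                    && f[1].StartsWith("/developmentserver/metadatauploader");
                bool octetStream = f[2].Contains("application/octet-stream");
                bool zipMagic = f[3].StartsWith("PK"); // ZIP/JAR magic bytes
    
                if (uriHit && octetStream && zipMagic)
                    Console.WriteLine($"Possible CVE-2025-31324 attempt: {f[1]}");
            }
        }
    }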

    Protection

    – Patch: Apply SAP September 2025 security updates immediately.
    – IPS/IDS Detection:
    • Match on POST requests to /metadatauploader containing CONTENTTYPE=MODEL.
    • Detect binary payloads beginning with PK in HTTP body.
    – EDR/XDR: Monitor SAP processes spawning unexpected child processes (cmd.exe, powershell, etc.).
    – Best Practice: Restrict development server exposure to trusted networks only.

    Indicators of Compromise (IoCs)

    Artifact (SHA-256) – Details
    1f72bd2643995fab4ecf7150b6367fa1b3fab17afd2abed30a98f075e4913087 – Helper.jsp webshell
    794cb0a92f51e1387a6b316b8b5ff83d33a51ecf9bf7cc8e88a619ecb64f1dcf – Cache.jsp webshell
    0a866f60537e9decc2d32cbdc7e4dcef9c5929b84f1b26b776d9c2a307c7e36e – rrr141.jsp webshell
    4d4f6ea7ebdc0fbf237a7e385885d51434fd2e115d6ea62baa218073729f5249 – rrxx1.jsp webshell

     

    Network:
    – URI: /developmentserver/metadatauploader?CONTENTTYPE=MODEL&CLIENT=1
    – Headers: Content-Type: application/octet-stream
    – Binary body beginning with PK

    Files:
    – Unexpected ZIP/JAR in SAP model directories
    – Modified .properties files in upload paths
    Processes:
    – SAP NetWeaver spawning system binaries

    MITRE ATT&CK Mapping

    – T1190 – Exploit Public-Facing Application
    – T1059 – Command and Scripting Interpreter
    – T1105 – Ingress Tool Transfer
    – T1071.001 – Application Layer Protocol: Web Protocols

    Patch Verification

    – Confirm SAP NetWeaver patched to September 2025 release.
    – Test with crafted metadatauploader request – patched servers reject binary payloads.
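
    As a rough illustration of that second check, the sketch below sends a benign PK-prefixed probe (not a working model file) and reports the response. The host name is a placeholder for a system you are authorized to test, and the assumption that a patched server answers with a rejection status should be confirmed against SAP’s advisory.

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;
    using System.Threading.Tasks;
    
    class PatchProbe
    {
        static async Task Main()
        {
            using var client = new HttpClient();
    
            // Benign probe body: just the ZIP magic bytes plus filler,
            // enough to exercise the binary-payload rejection path.
            var body = new ByteArrayContent(Encoding.ASCII.GetBytes("PK\u0003\u0004probe"));
            body.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
    
            // "sap-dev.example.internal" is a placeholder host.
            HttpResponseMessage response = await client.PostAsync(
                "https://sap-dev.example.internal:50000/developmentserver/metadatauploader?CONTENTTYPE=MODEL&CLIENT=1",
                body);
    
            // Assumption: a patched server rejects the payload (4xx/5xx)
            // instead of accepting and processing the upload.
            Console.WriteLine($"Status: {(int)response.StatusCode} {response.StatusCode}");
        }
    }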

    Conclusion

    CVE-2025-31324 highlights the risks of insecure upload endpoints in enterprise middleware. A single unvalidated file upload can lead to complete SAP system compromise. Given SAP’s role in core business operations, this vulnerability should be treated as high-priority with immediate patching and network monitoring for exploit attempts.

    References

    – SAP Security Advisory (September 2025) – CVE-2025-31324
    – NVD – https://nvd.nist.gov/vuln/detail/CVE-2025-31324
    – MITRE ATT&CK Framework – https://attack.mitre.org/techniques/T1190/

     

    Quick Heal Protection

    All Quick Heal customers are protected from this vulnerability by the following signatures:

    • HTTP/CVE-2025-31324!VS.49935
    • HTTP/CVE-2025-31324!SP.49639

     

    Authors:
    Satyarth Prakash
    Vineet Sarote
    Adrip Mukherjee




  • How to parse JSON Lines (JSONL) with C# | Code4IT


    JSONL is JSON’s less famous sibling: it stores JSON objects separated by new lines, one object per line. We will learn how to parse a JSONL string with C#.


    For sure, you already know JSON: it’s one of the most commonly used formats to share data as text.

    Did you know that there are different flavors of JSON? One of them is JSONL: it represents a JSON document where each item sits on its own line instead of being wrapped in an array.

    It’s quite a rare format to find, so it can be tricky to understand how it works and how to parse it. In this article, we will learn how to parse a JSONL file with C#.

    Introducing JSONL

    As explained in the JSON Lines documentation, a JSONL file is a file composed of different items separated by a \n character.

    So, instead of having

    [{ "name": "Davide" }, { "name": "Emma" }]
    

    you have a list of items without an array grouping them.

    { "name" : "Davide" }
    { "name" : "Emma" }
    

    I must admit that I’d never heard of the format until a few months ago. Or rather, I’d already used JSONL files without knowing it: JSONL is a common format for logs, where every entry is appended to the file in a continuous stream.

    JSONL has a few defining characteristics:

    • every item is a valid JSON item
    • every line is separated by a \n character (or by \r\n, but \r is ignored)
    • it is encoded using UTF-8

    So, now, it’s time to parse it!

    Parsing the file

    Say that you’re creating a videogame, and you want to read all the items found by your character:

    class Item {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Category { get; set; }
    }
    

    The items list can be stored in a JSONL file, like this:

    {  "id": 1,  "name": "dynamite",  "category": "weapon" }
    {  "id": 2,  "name": "ham",  "category": "food" }
    {  "id": 3,  "name": "nail",  "category": "tool" }
    

    Now, all we have to do is read the file and parse it.

    Assuming that we’ve read the content from a file and that we’ve stored it in a string called content, we can use Newtonsoft to parse those lines.
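
    (For completeness, reading that content can be a one-liner; the file name below is just a placeholder.)

    // Requires System.IO (implicit in modern .NET templates).
    string content = File.ReadAllText("items.jsonl"); // hypothetical file name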

    As usual, let’s see how to parse the file, and then we’ll deep dive into what’s going on. (Note: the following snippet comes from this question on Stack Overflow)

    List<Item> items = new List<Item>();
    
    var jsonReader = new JsonTextReader(new StringReader(content))
    {
        SupportMultipleContent = true // This!!!
    };
    
    var jsonSerializer = new JsonSerializer();
    while (jsonReader.Read())
    {
        Item item = jsonSerializer.Deserialize<Item>(jsonReader);
        items.Add(item);
    }
    return items;
    

    Let’s break it down:

    var jsonReader = new JsonTextReader(new StringReader(content))
    {
        SupportMultipleContent = true // This!!!
    };
    

    The first thing to do is to create an instance of JsonTextReader, a class coming from the Newtonsoft.Json namespace. The constructor accepts a TextReader instance or any derived class. So we can use a StringReader instance that represents a stream from a specified string.

    The key part of this snippet (and, somehow, of the whole article) is the SupportMultipleContent property: when set to true, it allows the JsonTextReader to keep reading multiple pieces of JSON content from the same stream.

    Its definition, in fact, says that:

    //
    // Summary:
    //     Gets or sets a value indicating whether multiple pieces of JSON content can be
    //     read from a continuous stream without erroring.
    //
    // Value:
    //     true to support reading multiple pieces of JSON content; otherwise false. The
    //     default is false.
    public bool SupportMultipleContent { get; set; }
    

    Finally, we can read the content:

    var jsonSerializer = new JsonSerializer();
    while (jsonReader.Read())
    {
        Item item = jsonSerializer.Deserialize<Item>(jsonReader);
        items.Add(item);
    }
    

    Here we create a new JsonSerializer (again, coming from Newtonsoft), and use it to read one item at a time.

    The while (jsonReader.Read()) allows us to read the stream till the end. And, to parse each item found on the stream, we use jsonSerializer.Deserialize<Item>(jsonReader);.

    The Deserialize method is smart enough to parse every item even without a , symbol separating them, because we have set SupportMultipleContent to true.

    Once we have the Item object, we can do whatever we want, like adding it to a list.

    Further readings

    As we’ve learned, there are different flavors of JSON. You can read an overview of them on Wikipedia.

    🔗 JSON Lines introduction | Wikipedia

    Of course, the best place to learn more about a format is its official documentation.

    🔗 JSON Lines documentation | Jsonlines

    This article exists thanks to Imran Qadir Baksh’s question on Stack Overflow, and, of course, to Yuval Itzchakov’s answer.

    🔗 Line delimited JSON serializing and de-serializing | Stack Overflow

    Since we’ve used Newtonsoft (aka: JSON.NET), you might want to have a look at its website.

    🔗SupportMultipleContent property | Newtonsoft

    Finally, the repository used for this article.

    🔗 JsonLinesReader repository | GitHub

    Conclusion

    You might be thinking:

    Why has Davide written an article about a comment on Stack Overflow?? I could have just read the same info there!

    Well, if you were interested only in the main snippet, you would’ve been right!

    But this article exists for two main reasons.

    First, I wanted to highlight that JSON is not always the best choice for everything: it always depends on what we need. For continuous streams of items, JSONL is a good (if not the best) choice. Don’t choose the most used format: choose what best fits your needs!

    Second, I wanted to remark that we should not be too attached to a specific library: I generally prefer native solutions, so, for reading JSON files, my first choice is System.Text.Json. But it’s not always the best choice. Yes, we could write some complex workaround (like the second answer on Stack Overflow), but… is it worth it? Sometimes it’s better to use another library, even if just for one specific task. So, you could use System.Text.Json for the whole project except for the part where you need to read a JSONL file.
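
    For comparison, here is a minimal line-splitting sketch with System.Text.Json, reusing the content string and the Item class from above: since every JSONL line is a complete JSON document, we can deserialize each non-empty line on its own. It trades the streaming behaviour of the reader-based approach for simplicity, so treat it as a sketch rather than a drop-in replacement.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text.Json;
    
    // The JSONL properties are lowercase ("id", "name", "category"),
    // so we need case-insensitive matching against the Item class.
    var options = new JsonSerializerOptions { PropertyNameCaseInsensitive = true };
    
    List<Item> items = content
        .Split('\n', StringSplitOptions.RemoveEmptyEntries) // one JSON document per line
        .Select(line => JsonSerializer.Deserialize<Item>(line.Trim(), options)!)
        .ToList();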

    Have you ever come across unusual formats? How did you deal with them?

    Happy coding!

    🐧




  • Keep the parameters in a consistent order | Code4IT




    If you have a set of related functions, always use a consistent order of parameters.

    Take this bad example:

    IEnumerable<Section> GetSections(Context context);
    
    void AddSectionToContext(Context context, Section newSection);
    
    void AddSectionsToContext(IEnumerable<Section> newSections, Context context);
    

    Notice the order of the parameters passed to AddSectionToContext and AddSectionsToContext: they are swapped!

    Quite confusing, isn’t it?

    Confusion intensifies

    For sure, the code is harder to understand, since the order of the parameters is not what the reader expects it to be.

    But, even worse, this issue may lead to hard-to-find bugs, especially when parameters are of the same type.

    Think of this example:

    IEnumerable<Item> GetPhotos(string type, string country);
    
    IEnumerable<Item> GetVideos(string country, string type);
    

    Well, what could possibly go wrong?!?

    We have two ways to prevent possible issues:

    1. use a consistent order: for instance, type is always the first parameter
    2. pass objects instead: you’ll add a bit more code, but you’ll prevent those issues (see the sketch below)
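
    A quick sketch of that second option, using a small parameter object (the record name here is hypothetical, just for illustration):

    // Hypothetical parameter object: a record groups the two values, and naming
    // the arguments at the call site makes it impossible to swap them silently.
    record MediaQuery(string Type, string Country);
    
    IEnumerable<Item> GetPhotos(MediaQuery query);
    
    IEnumerable<Item> GetVideos(MediaQuery query);
    
    // Usage: each value is named, so there is no position to get wrong.
    var photos = GetPhotos(new MediaQuery(Type: "landscape", Country: "italy"));

    Now a swapped value shows up as an obviously wrong named argument rather than a silent bug.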

    To read more about this code smell, check out this article by Maxi Contieri!

    This article first appeared on Code4IT

    Conclusion

    To recap, always pay attention to the order of the parameters!

    • always keep them in the same order
    • use an easy-to-understand order (remember the Principle of Least Surprise?)
    • use objects instead, if necessary.

    👉 Let’s discuss it on Twitter or in the comment section below!

    🐧




